text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I've been working on setting up new domain with DFS. I've read various articles on how to do this but am wondering if the following is ok and will work. I have two 2008 R2's for DFS DFS is located on d:\DFSRoot I made name spaces under here for my main shares and then created each folder under and applied permissions as needed if needed. Is it ok to do it this way? Example: d:\DFSRoot\Departments(Namespace) This has security permissions only allowing people in the department security group to access. I've read that another way is using replication which basically creates a link to a shared folder. Since i'm creating all of these fresh it didn't make since to setup the folder structure, share everything out and then duplicate it under replication. Does this make sense? 22 Replies Nov 26, 2012 at 11:01 UTC The idea of replication is to create a server for failover purposes and duplicating data is part of that process; so two or more servers have synchronised copies of the shares (all the content files and folders), they share the same DFS name so if one server goes down the clients are automatically redirected to another server hosting the shares, by still using the same DFS namespace to get to the servers. If you need or want this feature then go for it - otherwise DFS namespace on a single server is fine too, but obviously you won't have that extra protection and/or ability to minimise downtime whilst users are accessing your shares. Nov 26, 2012 at 12:21 UTC DFS provides two other important features amount others. 1) Allow you to place a replicated copy local to your users if you move more then one site. 2) Virtualization of storage spaces. a) So if you need to expand your storage or need to move it to a new server, it's transparent to the uses and the applications. For example if a user has linked excel files. When using DFS the links in the spreadsheets are still valid on a new server, without DFS they links have to be recreated. b) it just give you allot more flexibility then a normal file share. Nov 26, 2012 at 7:04 UTC When I read the original post, it reads like you are directly manipulating ACLs and contents of the D:\DFSRoot folder. If my understanding is correct, you should be letting Windows manage this folder. You can manage the target folder as you would any other, but not the DFS root folder. If you have directly modified the D:\DFSRoot folder, I'd suggest removing the DFS namespace and namespace server and re-create it from scratch. Adding to Samuel's comments, the sharename abstraction occurs only if you're using domain-hosted DFS, in which case the \\servername portion of the UNC is your domain name. Servers come and go, but the domain is (usually) forever. If you're using server-hosted DFS, you lose that advantage, as the \\servername will change with the server that hosts DFS. Nov 26, 2012 at 7:15 UTC True make sure you use FQDN in the link or you will run into problems later. Nov 26, 2012 at 10:12 UTC One more DFS gotcha, learned here the hard way: If you use a file-based database that has a database manager service on the server, such as SQLAnywhere or Pervasive.SQL, and you are hosting the data file on a share that is a domain-hosted DFS target, do NOT use the DFS namespace path when you connect to the data file. Use the physical (non-DFS) UNC path, or a drive letter mapped to the physical path. This is because both of these programs (and, I'd guess, others of their ilk) parse the server name from the UNC (or the drive mapping, if that's how you connect) in order to set up client-server communications. Since it won't find a database server named domain.local, it will fail to connect. You'll either get an error message, or, worse, the program will work, but VERY slowly and using lots of network bandwidth. Quickbooks is a commonly-used program that this applies to. Nov 27, 2012 at 12:24 UTC Thanks for all the replies. I'll clarify a little more. The DFSRoots folder I have not touched nor published. I am using domain name based. What currently is is the following. Two 2008 R2 DCs with DC1 running file services(DFS) and a non DC running file services One replication group containing DC1 and non-DC replicating DFSRoots I end up with //domain.local/departments //domain.local/Global //domain.local/staff etc. I started applying permissions at the namespace, Departments, global, staff and then further permissions within these folders. They are replicating between both servers. If I go to the replication->my replication group and click on replicated folders it shows DFSRoots not published. All drives to the locations above are mapped via group policy to //domain.local/namespace From what everyone has replied, it sounds like this will work and work correctly? Related to JRV's post, I am going to have a couple access databases I was going to have staff use by going to say mapped drive R which is mapped to //domain.local/databases . What I've seen if I go directly to the servername/share and create something, it will disappear. I'm assuming this is because DFS is overwriting or not seeing it since it is looking for items in domain.local/namespace? I hope this makes sense. Nov 27, 2012 at 12:39 UTC It sounds to me that this will work correctly. Regarding Access, Access does not have a database manager running on the server, so my message does not apply. It will work the same way with a DFS share as it does with the physical share. \\servername\share will work interchangeably with \\domain.local\databases. DFS merely provides a different path to the same location. So the disappearing objects are a mystery. Can you provide an example of what you mean by "when I...create something, it will disappear"? Nov 27, 2012 at 12:55 UTC I actually ran into this the other day when I was testing. If I go to server1/share name and make a new folder it will disappear. If I go to the same location using domain.local/share and make new folder, it will stay. Nov 27, 2012 at 1:19 UTC Here are some possibilities: - You have 1-way DFSR from the published DFS Target to the "other" server where it's replicated for fault tolerance but not published, and you're creating the folder on the "other" server. It will not be replicated to, and will therefore not be visible from, the DFS Target. - If it's 2-way replication on a schedule, and the replication event has not yet occurred. - 2-way sync and Initial sync is still in progress. - 2-way sync and replication has been disabled. But short of writing the folder to the wrong server and having it be out of reach of replication for whatever reason, the folder should immediately be visible in both locations. Another possibility--and this is very important configuration--if the folders on both servers are enabled as DFS Targets and have equal referral priority, DFS could be referring you to the "other" server. If you look at the Property sheet for the DFS target, you'll see a DFS tab has been added, and it will tell you which server is active. In the DFS console, you'll want to make sure that the "wrong" server's DFS Target is disabled, or set to lowest priority, or removed entirely. Users should use one server or the other, but not both interchangeably, because there's no file locking between DFS targets. If your goal for DFSR is fault tolerance with auto failover, then you'll want to set the "main" server as the highest priority target, and the "other" server as the lowest. Nov 27, 2012 at 1:30 UTC I'm not sure where some of these settings are. Here is what I see. DFS Management->Replication->My Replication->Print screens attached. Nov 27, 2012 at 1:42 UTC Your screenshots tell me you have 2-way replication and your schedule is "Always". The referral cost settings are on the Property sheets of each Namespace in the Namespaces node. You'll want the "main" server to be highest priority among all targets of equal cost, and the failover server to be lowest priority among all targets of equal cost. And I'm assuming, here, that both servers and all clients are on the same AD site. Nov 27, 2012 at 10:30 UTC atttached are print screens from primary(1) and secondary(2) On the referral tab I only see under ordering mether random order, lowest cost(both set on this) and exclude targets outside which both are in the same site. What about the option for clients fail back to preferred targets? Don't see highest priority. On the Advanced tab for both obtomize for consistency is selected Nov 27, 2012 at 4:01 UTC Referral priority is enabled when you upgrade your domain functionality level to (IIRC) WS2008 or higher. What is your domain functionality level? Nov 28, 2012 at 12:26 UTC 2008 Nov 28, 2012 at 12:52 UTC Sorry...I sent you to the wrong place. Select the DFS namespace in the left pane, and click on the Namespace Servers tab in the right pane. You'll find the referral ordering setting on the Advanced tab of the Property sheet of the Namespace Servers. Nov 28, 2012 at 1:27 UTC No problem. Print screens attached. I bet you are going to suggest on server 1 to check override referral ordering and choose first among targets of equal cost? and on server 2 override referral ordering and choose last among targets of equal cost?? Nov 28, 2012 at 1:39 UTC Yep. Either that, or manually disable Server2 as a target until it's needed, at which time you manually fail over by enabling Server2 as a target and disable Server1. If you have any XP SP2 clients (at this late date, unlikely, but in the interests of completeness...) you'll need to install a hotfix on them in order to enable automatic failback; without it, you have to reboot after Server1 comes back up. With XP SP3 or later and you're good to go. Nov 28, 2012 at 3:51 UTC Hi, I would prefer to have auto fail over since we are a 24 hour facility. PArt of the move to the new domain is refreshing all the machines with Win 7. I really appreciate all your help. On another note, are you familiar with Exchange 2010? I posted a question on it but they have it waiting for moderator approval. Nov 28, 2012 at 3:54 UTC One more question. I changed on server 1 as stated above. When I went to server 2 on the advanced tab it has server 1 listed with the change I made. Does server 2 automatically know it is second? Nov 28, 2012 at 4:34 UTC The short answer is that there is no need to do anything to Server2's settings. The long answer is that because you've told the DFS clients to use Server1 before all other DFS targets, they will not use Server2 targets as long as Server1 is available. If Server1 is down, DFS clients will fail over to Server2. When and if clients later discover that Server1 is online again, they'll fail back to Server1. Wasn't sure whether the comment about settings on the Server1 console matching Server2's console was an observation or a question, but what you saw is what we'd expect. Both consoles are looking at AD for the settings. If they didn't match (after DC replication, at least), that would be a problem. I do have EX2010 experience...and so do many others! But if I see your thread and have something to contribute, I'll certainly jump in. Nov 28, 2012 at 6:12 UTC Thanks!!! You've been a great help. Nov 28, 2012 at 6:26 UTC You're very welcome! (Shameless hint: Consider spicing me up and mark the best answer<g>) This discussion has been inactive for over a year. You may get a better answer to your question by starting a new discussion.
https://community.spiceworks.com/topic/277443-windows-2008-r2-dfs
CC-MAIN-2017-39
refinedweb
2,125
71.44
- NAME - VERSION - SYNOPSIS - DESCRIPTION - EXPORTS - FUNCTIONS - METHODS - Data::XHash->new( ) - $xhash->new( ) - $tiedref->{$key} - $tiedref->{\@path} - $xhash->fetch($key) - $xhash->fetch(\@path) - $tiedref->{$key} = $value - $tiedref->{\@path} = $value - $xhash->store($key, $value, %options) - $xhash->store(\@path, $value, %options) - %$tiedref = () - $xhash->clear( ) - delete $tiedref->{$key} # or \@path - $xhash->delete($key) # or \@path - $xhash->delete(\%options?, @keys) - exists $tiedref->{$key} # or \@path - $xhash->exists($key) # or \@path - $xhash->FIRSTKEY( ) - $xhash->first_key( ) - $xhash->previous_key($key) - $xhash->NEXTKEY( ) - $xhash->next_key($key) - $xhash->last_key( ) - $xhash->next_index( ) - scalar(%$tiedref) - $xhash->scalar( ) - $xhash->keys(%options) - $xhash->values(\@keys?) - $xhash->foreach(\&coderef, @more_args) - $xhash->pop( ) - $xhash->shift( ) - $xhash->push(@elements) - $xhash->pushref(\@elements, %options) - $xhash->unshift(@elements) - $xhash->unshiftref(\@elements, %options) - $xhash->merge(\%options?, @xhashes) - $xhash->as_array(%options) - $xhash->as_arrayref(%options) - $xhash->as_hash(%options) - $xhash->as_hashref(%options) - $xhash->reorder($refkey, @keys) - $xhash->remap(\%mapping) - $xhash->remap(%mapping) - $xhash->renumber(%options) - $xhash->traverse($path, %options?) - AUTHOR - BUG TRACKING - SUPPORT - SEE ALSO - LICENSE AND COPYRIGHT NAME Data::XHash - Extended, ordered hash (commonly known as an associative array or map) with key-path traversal and automatic index keys VERSION Version 0.09 SYNOPSIS use Data::XHash; use Data::XHash qw/xhash xhashref/; use Data::XHash qw/xh xhn xhr xhrn/; $tiedhref = Data::XHash->new(); # A blessed and tied hashref # Note: Don't call "tie" yourself! # Exports are shortcuts to call Data::XHash->new()->push() # or Data::XHash->new()->pushref() for you. $tiedhref = xh('auto-indexed', { key => 'value' }); $tiedhref = xhash('auto-indexed', { key => 'value' }); $tiedhref = xhashref([ 'auto-indexed', { key => 'value' } ]); $tiedhref = xhn('hello', { root => { branch => [ { leaf => 'value' }, 'world' ] } }); # (nested) $tiedhref = xhr([ 'auto-indexed', { key => 'value' } ]); $tiedhref = xhrn([ 'hello', { root => { branch => [ { leaf => 'value' }, 'world' ] } } ]); # (nested) # Note: $xhash means you can use either $tiedhref or the # underlying object at tied(%$tiedhref) ## Hash-like operations # Getting keys or paths $value = $tiedhref->{$key}; $value = $tiedhref->{\@path}; $value = $xhash->fetch($key); $value = $xhash->fetch(\@path); # Auto-vivify a Data::XHash at the end of the path $tiedhref2 = $tiedhref1->{ [ @path, {} ] }; $tiedhref->{ [ @path, {} ] }->$some_xh_method(...); $tiedhref = $xhash->fetch( [ @path, {} ] ); $xhash->fetch( [ @path, {} ] )->$some_xh_method(...); # Setting keys or paths $tiedhref->{$key} = $value; $tiedhref->{\@path} = $value; $xhash->store($key, $value, %options); $xhash->store(\@path, $value, %options); # Setting the next auto-index key $tiedhref->{[]} = $value; # Recommended syntax $tiedhref->{+undef} = $value; $tiedhref->{[ undef ]} = $value; # Any path key may be undef $xhash->store([], $value, %options); $xhash->store(undef, $value, %options); $xhash->store([ undef ], $value, %options); # Clear the xhash %$tiedhref = (); $xhash->clear(); # Delete a key and get its value $value = delete $tiedhref->{$key}; # or \@path $value = $xhash->delete($key); # or \@path $value = $xhash->delete(\%options?, @local_keys); # Does a key exist? $boolean = exists $tiedhref->{$key}; # or \@path $boolean = $xhash->exists($key); # or \@path # Keys and lists of keys @keys = keys %$tiedhref; # All keys using iterator @keys = $xhash->keys(%options); # Faster, without iterator $key = $xhash->FIRSTKEY(); # Uses iterator $key = $xhash->first_key(); $key1 = $xhash->previous_key($key2); $key = $xhash->NEXTKEY(); # Uses iterator $key2 = $xhash->next_key($key1); $key = $xhash->last_key(); $key = $xhash->next_index(); # The next auto-index key # Values @all_values = values %$tiedhref; # Uses iterator @all_values = $xhash->values(); # Faster, without iterator @some_values = @{%$tiedhref}{@keys}; # or pathrefs @some_values = $xhash->values(\@keys); # or pathrefs ($key, $value) = each(%$tiedhref); # Keys/values using iterator # Call coderef with ($xhash, $key, $value, @more_args) for # each key/value pair and then undef/undef. @results = $xhash->foreach(\&coderef, @more_args); # Does the hash contain any key/value pairs? $boolean = scalar(%$tiedhref); $boolean = $xhash->scalar(); ## Array-like operations $value = $xhash->pop(); # last value ($key, $value) = $xhash->pop(); # last key/value $value = $xhash->shift(); # first value ($key, $value) = $xhash->shift(); # first key/value # Append values or { keys => values } $xhash->push(@elements); $xhash->pushref(\@elements, %options); # Insert values or { keys => values } $xhash->unshift(@elements); $xhash->unshiftref(\@elements, %options); # Merge in other XHashes (recursively) $xhash->merge(\%options?, @xhashes); # Export in array-like fashion @list = $xhash->as_array(%options); $list = $xhash->as_arrayref(%options); # Export in hash-like fasion @list = $xhash->as_hash(%options); $list = $xhash->as_hashref(%options); # Reorder elements $xhash->reorder($reference, @keys); # [] = sorted index_only # Remap elements $xhash->remap(%mapping); # or \%mapping $xhash->renumber(%options); ## TIEHASH methods - see perltie # TIEHASH, FETCH, STORE, CLEAR, DELETE, EXISTS # FIRSTKEY, NEXTKEY, UNTIE, DESTROY information or HTTP query parameters in which order may at least sometimes be significant, for passing mixed positional and named parameters, sparse arrays, or porting PHP code. EXPORTS You may export any of the shortcut functions. None are exported by default. FUNCTIONS $tiedref = xh(@elements) $tiedref = xhash(@elements) $tiedref = xhashref(\@elements, %options) $tiedref = xhn(@elements) $tiedref = xhr(\@elements, %options) $tiedref = xhrn(\@elements, %options) These convenience functions call Data::XHash->new() and then pushref() the specified elements. The "r" and "ref" versions take an arrayref of elements; the others take a list. The "n" versions are shortcuts for the nested => 1 option of pushref(). $tiedref = xh('hello', {root=>xh({leaf=>'value'}), {list=>xh(1, 2, 3)}); $tiedref = xhn('hello', {root=>{leaf=>'value'}}, {list=>[1, 2, 3]}); METHODS Data::XHash->new( ) $xhash->new( ) These create a new Data::XHash object and tie it to a new, empty hash. They bless the hash as well and return a reference to the hash ( $tiedref). Do not use tie %some_hash, 'Data::XHash'; - it will croak! $tiedref->{$key} $tiedref->{\@path} $xhash->fetch($key) $xhash->fetch(\@path) These return the value for the specified hash key, or undef if the key does not exist. If the key parameter is reference to a non-empty array, its elements are traversed as a path through nested XHashes. If the last path element is a hashref, the path will be auto-vivified (Perl-speak for "created when referenced") and made to be an XHash if necessary (think "fetch a path to a hash"). Otherwise, any missing element along the path will cause undef to be returned. $xhash->{[]}; # undef $xhash->{[qw/some path/, {}]}->isa('Data::XHash'); # always true # Similar to native Perl: $hash->{some}{path} ||= {}; $tiedref->{$key} = $value $tiedref->{\@path} = $value $xhash->store($key, $value, %options) $xhash->store(\@path, $value, %options) These store the value for the specified key in the XHash. Any existing value for the key is overwritten. New keys are stored at the end of the XHash. If the key parameter is a reference to a non-empty array, its elements are traversed as a path through nested XHashes. Path elements will be auto-vivified as necessary and intermediate ones will be forced to XHashes. If the key is an empty path or the undef value, or any path key is the undef value, the next available non-negative integer index in the corresponding XHash is used instead. These return the XHash tiedref or object (whichever was used). Options: - nested => $boolean If this option is true, arrayref and hashref values will be converted into XHashes. %$tiedref = () $xhash->clear( ) These clear the XHash. Clear returns the XHash tiedref or object (whichever was used). delete $tiedref->{$key} # or \@path $xhash->delete($key) # or \@path $xhash->delete(\%options?, @keys) These remove the element with the specified key and return its value. They quietly return undef if the key does not exist. The method call can also delete (and return) multiple local (not path) keys at once. Options: - to => $destination If $destinationis an arrayref, hashref, or XHash, each deleted { $key => $value }is added to it and the destination is returned instead of the most recently deleted value. exists $tiedref->{$key} # or \@path $xhash->exists($key) # or \@path These return true if the key (or path) exists. $xhash->FIRSTKEY( ) This returns the first key (or undef if the XHash is empty) and resets the internal iterator. $xhash->first_key( ) This returns the first key (or undef if the XHash is empty). $xhash->previous_key($key) This returns the key before $key, or undef if $key is the first key or doesn't exist. $xhash->NEXTKEY( ) This returns the next key using the internal iterator, or undef if there are no more keys. $xhash->next_key($key) This returns the key after $key, or undef if $key is the last key or doesn't exist. Path keys are not supported. $xhash->last_key( ) This returns the last key, or undef if the XHash is empty. $xhash->next_index( ) This returns the next numeric insertion index. This is either "0" or one more than the current largest non-negative integer index. scalar(%$tiedref) $xhash->scalar( ) This returns true if the XHash is not empty. $xhash->keys(%options) This method is equivalent to keys(%$tiedref) but may be called on the object (and is much faster). Options: - index_only => $boolean If true, only the integer index keys are returned. If false, all keys are returned, - sorted => $boolean If index_only mode is true, this option determines whether index keys are returned in ascending order (true) or XHash insertion order (false). $xhash->values(\@keys?) This method is equivalent to values(%$tiedref) but may be called on the object (and, if called without specific keys, is much faster too). You may optionally pass a reference to an array of keys whose values should be returned (equivalent to the slice @{$tiedref}{@keys}). Key paths are allowed, but don't forget that the list of keys/paths must be provided as an array ref ( [ $local_key, \@path ]). $xhash->foreach(\&coderef, @more_args) This method calls the coderef as follows push(@results, &$coderef($xhash, $key, $value, @more_args)); once for each key/value pair in the XHash (if any), followed by a call with both set to undef. It returns the accumulated list of coderef's return values. Example: # The sum and product across an XHash of numeric values %results = $xhash->foreach(sub { my ($xhash, $key, $value, $calc) = @_; return %$calc unless defined($key); $calc->{sum} += $value; $calc->{product} *= $value; return (); }, { sum => 0, product => 1 }); $xhash->pop( ) $xhash->shift( ) These remove the first element (shift) or last element (pop) from the XHash and return its value (in scalar context) or its key and value (in list context). If the XHash was already empty, undef or () is returned instead. $xhash->push(@elements) $xhash->pushref(\@elements, %options) $xhash->unshift(@elements) $xhash->unshiftref(\@elements, %options) These append elements at the end of the XHash ( push() and pushref()) or insert elements at the beginning of the XHash ( unshift() and unshiftref()). Scalar elements are automatically assigned a numeric index using next_index(). Hashrefs are added as key/value pairs. References to references are dereferenced by one level before being added. (To add a hashref as a hashref rather than key/value pairs, push or unshift a reference to the hashref instead.) These return the XHash tiedref or object (whichever was used). Options: - at_key => $key This will push after $keyinstead of at the end of the XHash or unshift before $keyinstead of at the beginning of the XHash. This only applies to the first level of a nested push or unshift. This must be a local key (not a path), and the operation will croak if the key is not found. - nested => $boolean If true, values that are arrayrefs (possibly containing hashrefs) or hashrefs will be recursively converted to XHashes. $xhash->merge(\%options?, @xhashes) This recursively merges each of the XHash trees in @xhashes into the current XHash tree $xhash as follows: If a key has both existing and new values and both are XHashes, the elements in the new XHash are added to the existing XHash. Otherwise, if the new value is an XHash, the value is set to a copy of the new XHash. Otherwise the value is set to the new value. Returns the XHash tiedref or object (whichever was used). Examples: # Clone a tree of nested XHashes (preserving index keys) $clone = xh()->merge({ indexed_as => 'hash' }, $xhash); # Merge $xhash2 (with new keys) into existing XHash $xhash1 $xhash1->merge($xhash2); Options: - indexed_as => $type If $typeis array(the default), numerically-indexed items in each merged XHash are renumbered as they are added (like push($xhash->as_array())). If $typeis hash, numerically-indexed items are merged without renumbering (like push($xhash->as_hash())). $xhash->as_array(%options) $xhash->as_arrayref(%options) $xhash->as_hash(%options) $xhash->as_hashref(%options) These methods export the contents of the XHash as native Perl arrays or arrayrefs. The "array" versions return the elements in an "array-like" array or array reference; elements with numerically indexed keys are returned without their keys. The "hash" versions return the elements in an "hash-like" array or array reference; all elements, including numerically indexed ones, are returned with keys. xh( { foo => 'bar' }, 123, \{ key => 'value' } )->as_arrayref(); # [ { foo => 'bar' }, 123, \{ key => 'value'} ] xh( { foo => 'bar' }, 123, \{ key => 'value' } )->as_hash(); # ( { foo => 'bar' }, { 0 => 123 }, { 1 => { key => 'value' } } ) xh(xh({ 3 => 'three' }, { 2 => 'two' })->as_array())->as_hash(); # ( { 0 => 'three' }, { 1 => 'two' } ) xh( 'old', { key => 'old' } )->push( xh( 'new', { key => 'new' } )->as_array())->as_array(); # ( 'old', { key => 'new' }, 'new' ) xh( 'old', { key => 'old' } )->push( xh( 'new', { key => 'new' } )->as_hash())->as_hash(); # ( { 0 => 'new' }, { key => 'new' } ) Options: $xhash->reorder($refkey, @keys) This reorders elements within the XHash relative to the reference element having key $refkey, which must exist and will not be moved. If the reference key appears in @keys, the elements with keys preceding it will be moved immediately before the reference element. All other elements will be moved immediately following the reference element. Only the first occurence of any given key in @keys is considered - duplicates are ignored. If any key is an arrayref, it is replaced with a sorted list of index keys. This method returns the XHash tiedref or object (whichever was used). # Move some keys to the beginning of the XHash. $xhash->reorder($xhash->first_key(), @some_keys, $xhash->first_key()); # Move some keys to the end of the XHash. $xhash->reorder($xhash->last_key(), @some_keys); # Group numeric index keys in ascending order at the lowest one. $xhash->reorder([]); $xhash->remap(\%mapping) $xhash->remap(%mapping) This remaps element keys according to the specified mapping (a hash of $old_key => $new_key). The mapping must map old keys to new keys one-to-one. The order of elements in the XHash is unchanged. The XHash tiedref or object is returned (whichever was used). $xhash->renumber(%options) This renumbers all elements with an integer index (those returned by $xhash->keys(index_only => 1)). The order of elements is unchanged. It returns the XHash tiedref or object (whichever was used). Options: - from => $starting_index Renumber from $starting_indexinstead of the default zero. - sorted => $boolean This option is passed to $xhash->keys(). If set to true, keys will be renumbered in sorted sequence. This results in a "relative" renumbering (previously higher index keys will still be higher after renumbering, regardless of order in the XHash). If false or not set, keys will be renumbered in XHash (or "absolute") order. $xhash->traverse($path, %options?) This method traverses key paths across nested XHash trees. The path may be a simple scalar key, or it may be an array reference containing multiple keys along the path. An undef value along the path will translate to the next available integer index at that level in the path. A {} at the end of the path forces auto-vivification of an XHash at the end of the path if one does not already exist there. This method returns a reference to an hash containing the elements "container", "key", and "value". If the path does not exist, the container and key values with be undef. An empty path ( []) is equivalent to a path of undef. Options: - op This option specifies the operation for which the traversal is being performed (fetch, store, exists, or delete). - xhash This forces the path to terminate with an XHash (for "fetch" paths ending in {}). - vivify This will auto-vivify missing intermediate path elements. AUTHOR Brian Katzung, <briank at kappacs.com> BUG TRACKING Please report any bugs or feature requests to bug-data-xhash::XHash You can also look for information at: RT: CPAN's request tracker (report bugs here) AnnoCPAN: Annotated CPAN documentation CPAN Ratings Search CPAN SEE ALSO - Array::AsHash An array wrapper to manage elements as key/value pairs. - Array::Assign Allows you to assign names to array indexes. - Array::OrdHash Like Array::Assign, but with native Perl syntax. - Data::Omap An ordered map implementation, currently implementing an array of single-key hashes stored in key-sorting order. - Hash::AsObject Auto accessors and mutators for hashes and tied hashes. - Hash::Path A basic hash-of-hash traverser. Discovered by the author after writing Data::XHash. - Tie::IxHash An ordered hash implementation with a different interface and data structure and without auto-indexed keys and some of Data::XHash's other features. Tie::IxHash is probably the "standard" ordered hash module. Its simpler interface and underlying array-based implementation allow it to be almost 2.5 times faster than Data::XHash for some operations. However, its Delete, Shift, Splice, and Unshift methods degrade in performance with the size of the hash. Data::XHash uses a doubly-linked list, so its delete, shift, splice, and unshift methods are unaffected by hash size. - Tie::Hash::Array Hashes stored as arrays in key sorting-order. - Tie::LLHash A linked-list-based hash like Data::XHash, but it doesn't support the push/pop/shift/unshift array interface and it doesn't have automatic keys. - Tie::StoredOrderHash Hashes with items stored in least-recently-used order. LICENSE AND COPYRIGHT This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License. See for more information.
https://metacpan.org/pod/release/BKATZUNG/Data-XHash-0.09/lib/Data/XHash/Splice.pm
CC-MAIN-2018-13
refinedweb
2,863
52.39
Try Statements in Java A try statement is used to catch exceptions that might be thrown as your program executes. You should use a try statement whenever you use a statement that might throw an exception That way, your program won’t crash if the exception occurs. The try statement has this general form: try { statements that can throw exceptions } catch (exception-type identifier) { statements executed when exception is thrown } finally { statements that are executed whether or not exceptions occur The statements that might throw an exception within a try block. Then you catch the exception with a catch block. The finally block is used to provide statements that are executed regardless of whether any exceptions occur. Here is a simple example: int a = 5; int b = 0; // you know this won’t work try { int c = a / b; // but you try it anyway } catch (ArithmeticException e) { System.out.println("Can't do that!"); } In the preceding example, a divide-by-zero exception is thrown when the program attempts to divide a by b. This exception is intercepted by the catch block, which displays an error message on the console. Here are a few things to note about try statements: You can code more than one catch block. That way, if the statements in the try block might throw more than one type of exception, you can catch each type of exception in a separate catch block. In Java 7, you can catch more than one exception in a single catch block. The exceptions are separated with vertical bars, like this: try { // statements that might throw // FileNotFoundException // or IOException } catch (FileNotFoundException | IOException e) { System.out.println(e.getMessage()); } A try block is its own self-contained block, separate from the catch block. As a result, any variables you declare in the try block are not visible to the catch block. If you want them to be, declare them immediately before the try statement. The various exception types are defined as classes in various packages of the Java API. If you use an exception class that isn’t defined in the standard java.lang package that’s always available, you need to provide an import statement for the package that defines the exception class. For example: import java.io.*; If you want to ignore the exception, you can catch the exception in the catch block that contains no statements, like this: try { // Statements that might throw // FileNotFoundException } catch (FileNotFoundException e) { } This technique is called swallowing the exception, and is considered a dangerous programming practice because program errors may go undetected.
http://www.dummies.com/how-to/content/try-statements-in-java.html
CC-MAIN-2016-26
refinedweb
429
60.04
. You may also want to grab the Zelle documentation. Create a file named physics_objects.py. Your first task in the lab is to create a class Ball transform a position in "physics"s;" and "set" without modification.. def getMass(self): # Returns the mass of the object as a scalar value def setMass(self, m): # m is the new mass of the object time step, dt that indicates how much time to model. The algorithm is as follows. def update(self, dt): # update the x position by adding the x velocity times dt to its old value # update the y position by adding the y velocity times dt to its old begin a while loop while win.checkMouse is equal to None. Inside the loop it should call the ball's update() method with a time constant of 0.1. Then it should get the position of the ball. If the y position of the ball is less than 0, it should set the ball's velocity back to zero, set its x position to some random value between 0 and 50, and set its y position to some random value between 40 and 50. The penultimate step in the loop should be to call win.update() so that any changes in the ball's position will be reflected in the window. The final step in the loop should be to call the time.sleep() function with 0.075 as the argument. You can adjust that value to modify the speed of your simulation. After the loop, you can put a win.close() call. Test your code and make sure you have a ball falling down, then re-spawning near the top of the window. When you are done with the lab exercises, you may start on the rest of the project. © 2018 Caitrin Eaton.
http://cs.colby.edu/courses/F17/cs152-labs/lab08.php
CC-MAIN-2018-47
refinedweb
304
82.14
This AnalysisHandler books histograms of the weak boson's pt in Drell-Yan processes. More... #include <DrellYanPT.h> This AnalysisHandler books histograms of the weak boson's pt in Drell-Yan processes. An Interface switch allows you to choose between output of the histograms as TopDraw or plain ASCII files, which may be processed further, e.g. by gnuplot. Definition at line 30 of file DrellYanPT.h. Analyze a given Event. Note that a fully generated event may be presented several times, if it has been manipulated in between. The default version of this function will call transform to make a lorentz transformation of the whole event, then extract all final state particles and call analyze(tPVector) of this analysis object and those of all associated analysis objects. The default version will not, however, do anything on events which have not been fully generated, or have been manipulated in any way. Reimplemented from ThePEG::AnalysisHandler. Make a simple clone of this object. Definition at line 79 of file DrellYanPT.h. Finalize this object. Called in the run phase just after a run has ended. Used eg. to write out statistics. Reimplemented from ThePEG::InterfacedBase. Make a clone of this object, possibly modifying the cloned object to make it sane. Definition at line 85 of file DrellYanPT. The static object used to initialize the description of this class. Indicates that this is a concrete class with persistent data. Definition at line 121 of file DrellYanPT.h.
https://herwig.hepforge.org/doxygen/classHerwig_1_1DrellYanPT.html
CC-MAIN-2019-04
refinedweb
245
59.6
Created on 2011-04-07 15:57 by tebeka, last changed 2013-12-24 02:44 by r.david.murray. This issue is now closed. The following code is not changed by 2to3:: import os reload(os) reload has moved to the imp module. This should get fixed, but I'm *really* curious about what kind of code actually needs to do this ;-) Find a fixer for this attached. I really just did sed 's/intern/reload' fix_intern.py >fix_reload.py, but it seems to work. I didn't write any tests (I couldn't seem to find any for any other fixers). Raymond: Sometimes I store configuration in Python files and would like to reload the configuration. Miki: That's a really great use case. Thanks. The other use case I see is to reload a module during debugging after changing the code. This is especially useful for big GUI applications. File looks good, although I’m not sure about the “Copyright 2006 Georg Brandl” line. I also don’t know if stable branches can get this fix. I sure didn't have anything to do with that file :) Ah, that's my fault. As I mentioned, I simply replaced sys with imp and intern with reload from fix_intern.py. Seeing as the vast majority of the file was not modified, I didn't bother to change the copyright notices. Please find attached a new file with the copyright notice removed. Would someone like to review it please. FixIntern → FixReload More importantly, tests would be great. Here's a patch that adds tests and updates the documentation. Thanks for the patch. Could you try to share could with fix_intern? Maybe by moving some things to fixer_utils. > Could you try to share could with fix_intern? Maybe by moving some > things to fixer_utils. Thanks for the suggestion. Here's a new patch. I'm not sure the name of the helper is correct. New changeset 3576c0c6f860 by Benjamin Peterson in branch 'default': add fixer for reload() -> imp.reload() (closes #11797)\n\nPatch by Laurie Clark-Michalek and Berker Peksag Since this patch was applied, imp.reload has been deprecated in favor of importlib.reload. I don't know how we handle differences between python3 versions...is there anything that should be done here, or do we just use imp.reload even though it is deprecated in 3.4?
https://bugs.python.org/issue11797
CC-MAIN-2017-43
refinedweb
397
77.64
16 June 2011 07:59 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> Its PE plants include a 350,000 tonne/year high density PE (HDPE) unit and a 264,000 tonne/year linear low density PE (LLDPE) unit. The HDPE unit was restarted four days after it was shut on 27 May following a fire, which resulted in the shutdown of the Formosa group’s No 1 700,000 tonne/year cracker at the same site. FPC’s LLDPE unit, which was also shut on 27 May, was restarted on 1 June. The operating rates at both units were initially kept at around 80% capacity after they were restarted, he said. The rates are currently at 90-95%, he added. “We have been running the PE units at [these] high rates for two weeks to build up our stock levels, [as another] cracker may be possibly shut after the shutdown of other downstream units from 20 June,” the source said. Nan Ya Plastics, which is a subsidiary of the FPC's offers for end-June/early-July shipments were at $1,420/tonne (€1,008/tonne) CFR (cost & freight) However, these offers were dismissed by most buyers as the prevailing price level was about $100/tonne lower. “We are not in a hurry to sell and in order to keep some margins [as a naphtha-based maker], we can’t afford to lower our offers further,” he said. Imported LLDPE film cargoes were discussed at $1,260-1,300/tonne CFR China, while HDPE film was discussed at $1,300-1,340/tonne CFR China on Thursday. ($1 = €0.71) For more on HDPE and LL.
http://www.icis.com/Articles/2011/06/16/9469982/taiwans+fpc+runs+hdpe+lldpe+units+at+90-95.html
CC-MAIN-2013-20
refinedweb
276
74.32
Proposal to enhance existing XML functionality in Derby in the following ways: - Add support for setting/getting XML values from a JDBC 3.0 app (without requiring explicit XMLPARSE and XMLSERIALIZE operators). This includes returning meaningful metadata and defining the proper getXXX/setXXX support. - Slight modifications to existing XML operators to comply with SQL/XML. - New XMLQUERY operator for retrieving XML query results. Specification initially posted to DERBY-688. Relevant discussion is buried in different threads, found here: Please continue any discussion on the derby-dev mailing list; this page will be updated to summarize the discussion. Current Discussions Discussion: JDBC 4.0 While JDBC 4.0 defines support for a "java.sql.SQLXML" class (see SQLXML), implementation of that class and associated APIs will not be part of this proposal. I will try to avoid making changes/decisions for this proposal that might interfere with future JDBC 4.0 support for SQLXML, but I will not be making changes that are specific to JDBC 4.0; any such changes will have to be part of a separate effort. Discussion: JDBC metadata and getXXX/setXXX methods After reading through the relevant parts of the above-mentioned thread, I plan to code the following behavior for XML in JDBC 3.0: The JDBC 3.0 type for XML will be java.sql.Types.OTHER, since Types.SQLXML isn't defined until JDBC 4.0. In other words, a call to ResultSetMetaData.getColumnType() on an XML column will return Types.OTHER. The Java API defines Types.OTHER as "the constant in the Java programming language that indicates that the SQL type is database-specific and gets mapped to a Java object that can be accessed via the methods getObject and setObject". HOWEVER, the get/setObject() methods will not actually be allowed on a Derby XML value for JDBC 3.0. The reason is that there is no standard object type to return (java.sql.SQLXML isn't defined until JDBC 4.0), and use of another type (such as java.sql.Clob) would lead to incompatibility issues down the road, when JDBC 4.0 support is added. Thus, attempts to use getObject on an XML value will result in Derby error 22005: "An attempt was made to get a data value of type 'java.lang.Object' from a data value of type 'XML'"; a corresponding error will be thrown for setObject(), as well. As a corollary to this, calls to ResultSetMetaData.getColumnClassName() will also return an error (when used on an XML column). NOTE: That said, other relevant getXXX/setXXX methods will be allowed on XML values, namely: get/setString(), get/setAsciiStream(), get/setCharacterStream(), and get/setClob(). The SQL type name that will be returned from metadata (ex. ResultSetMetaData.getTypeName()) will be "XML". Discussion: Xerces and Xalan dependencies As part of my work for these XML enhancements, I'm removing the explicit dependency on Xerces that exists in 10.1. Instead, the XML parser will be picked up from the JVM, based on existing JDBC 3.0 API calls (JAXP) in the javax.xml.parsers package. If no such parser is found, we will generate an error message saying that a valid parser is required for use of XML. My reason for removing this hard-coded dependency on Xerces is that different JVMs have different XML parsers and we don't want to force users of Derby to download a specific XML parser if we can avoid it. Instead, we can use the JAXP API to pick up the parser that comes with the JVM. An example of why this is useful can be seen with Sun JVM verses IBM JVM: the former comes with Crimson, the latter with Xerces; by changing Derby to use the JAXP API, we ensure that XML will work with both JVMs without requiring a specific parser to be in the user's classpath. As for Xalan, I plan to retain Derby's dependency on Xalan for four reasons: - the JDBC 3.0 API for evaluation of XPath/XQuery expressions is limited to the javax.xml.transform.* packages, which (in my experience) makes it difficult to process the full results of an XPath/XQuery expression, and is also a bit slow; - Xalan provides a lower-level API for evaluating XPath expressions that, based on some very simple tests I've run, provides much better evaluation performance than the transform API, and allows easier manipulation of the results; - both Sun and IBM JVMs come with Xalan embedded, which means two of the most commonly used JVMs will work with Xalan-dependent code as they are, requiring no additional downloads. Xalan is an XPath processor, and it is my hope that at some point in the near future we will be able to find a full XQuery processor to use for (and hopefully embed within?) Derby. Thus, our dependency on Xalan is hopefully not a permanent one. For JVMs that are earlier than JDBC 3.0 and/or for any JVMs that do not include Xalan (ex. do j9 and Apple JVMs include Xalan?), users who wish to use Derby XML will have to put the missing jar files (JAXP parser implementation and/or Xalan) in their classpaths. That done, everything else should work as normal.
https://wiki.apache.org/db-derby/XmlEnhancements
CC-MAIN-2017-09
refinedweb
877
64.41
I received a ZuneHD the day after release. All of the Zunes are user programmable via the XNA framework. So in the “Because I Can” category, I decided to make a level program. After making the program, I found that some others wanted the source code and know how to get the XNA framework on their computers. I already had some text that I had written on programming the previous Zunes, so I’ve taken that text, added some information on the accelerometer and multi-touch panel to it, and the result is this document. You can see a video of the level in action on YouTube here. To make use of this article, you should already have an understanding of .NET and C# foundational concepts. You should also have experience with Windows Forms programming. While the code presented here doesn’t make use of the Windows Forms namespace (it’s not supported on the Zune), some of the concepts that you may have encountered while developing Windows Forms applications (such as isolated storage) are used here. To get started on programming for a Zune, there are a few things that you will need: At the time of this writing, there are three Zune generations available. The first generation Zune (also known as the Zune 30) has a 5 way directional pad, a play button, and a back button. The second generation Zunes are available in a number of different capacities from 4 gigs to 120 gigs. The 80 and 120 gig models use hard drives while the others use flash memory. On these Zunes, the 4-way directional pad has been replaced by what I can best describe as being a dual-mode mouse pad. One can either slide a finger over the pad or press on it to use it as a 5-way directional pad. The differences in the directional pads aren’t major, but you should be aware of them when designing user interactions; you will need to account for both hardware variations. The latest addition to the Zune family has only three buttons (back, power, and a media control button). It lacks the directional pad, but has an accelerometer and multi-touch panel for input. The screen supports up to 4 simultaneous touches. It’s going to happen eventually; you’ll write some code that gets the Zune stuck in an infinite loop and you have no way to exit your application. If you press and hold the back button for two seconds, the application will exit. If for some reason you are unable to exit, you can also reboot the Zune. Press and hold the back button while pressing up on the directional pad if your Zune has a touch pad, or hold down the power and back button if you have a ZuneHD. After a few seconds, the Zune will reboot. When you create a new XNA project in Visual Studio, you can target the XBox 360, the PC, or the Zune. For this article, we will only be concerned with the Zune project types, and will ignore the others. The Zune project types include “Zune Game (3.1)” and the “Zune Game Library (3.1)”. (While it’s clear that Microsoft intends for these project types to be used for games, keep in mind that you aren’t limited to using them for that purpose. If you can think of some other productivity tasks that you want your Zune to do that’s not a game, by all means go ahead and write it!) Think of the “Zune Game (3.1)” project type being like making an application, while the “Zune Game Library (3.1)” is like a DLL. Go ahead and create a new project in Visual Studio for a Zune Game (3.1) project called “Hello World.” The project will contain a number of stub methods automatically. Don’t worry about what they do, we will examine that later. This program does nothing more than render a cornflower blue background (Microsoft’s favourite colour!). Connect your Zune to your computer and run the project. You’ll get an error. No devices are available to deploy the project ‘HelloWorld’. Register a device using the XNA Game Studio Device Center. If the Zune client software is running, close it. Only one program can sync to the Zune at a time, and if it is running, you’ll encounter failure in the following steps. You’ll find the “XNA Game Studio Device Center” under “Microsoft XNA Game Studio (3.1)” in the programs group. Start the program and select the option to “Add Device”. You’ll be asked whether you want to add a Zune or an XBox 360. Select “Zune”. Your Zune’s name will be displayed and you can select it. After you select “Finish”, go back to Visual Studio and try to run your project again. If this is your first time running an XNA game on your Zune, then the .NET Framework will deploy followed by your application. When the application runs, you’ll see the screen turn cornflower blue (which is all this program does). To exit the program, press down the back button. Congratulations! You’ve compiled and run your first Zune application. Our first program didn’t do much. We’ll add functionality to it, but before doing so, let’s take apart. Having an understanding of the structure of an XNA application will help in creating new applications. Microsoft.Xna.Framework.Game is the base class for an XNA application. You create your application by deriving from this class and then overriding its methods. The application class generated for us by Visual Studio overrides five methods for us: Microsoft.Xna.Framework.Game Initialize LoadContent Update Draw Unload The Update and Draw methods are called cyclically until the application is terminating. A graphical representation of these method calls looks like the following: The Draw/Update cycle will occur up to 60 times per second on the first generation Zune, and up to 30 times per second on the second generation Zune (for more information, see the Hardware Variation section). The XNA Framework was designed with the XBox controller in mind. The XBox controller (and XNA) are usable on both the XBox 360 and the PC. The buttons on the Zune are mapped to some of the XBox controller buttons so that you can interact with it in the same way that you would interact with an XBox controller. The back button on the XBox 360 controller is mapped to the back button on the Zune. On the Zune HD, this is the button directly under the screen. On the other two Zune models, this is the button to the left of the directional pad. None of the other buttons are mapped to the ZuneHD. The A button on the XBox 360 controller is mapped to the Action button in the center of the directional pad. The B button is mapped to the Play button on the Zune. For the original Zune, the directional pad is mapped to be treated like the 4-way directional pad on the XBox 360 controller. On the second generation Zune, if the user presses down on the edges of the Zune pad, it is treated as a directional controller. But, if the user moves a finger over the surface of the Zune pad, it will be treated the same as the left joy stick. Now that I’ve covered the button mappings, I’ll talk about how to actually read the controller. This information is true for the Zune, a PC with an XBox 360 controller, and of course, for XNA on the XBox 360. The main class used for interacting with the controller is the GamePad class. It’s a static class, and almost all of the methods require an enumerated variable of type PlayerIndex. The PlayerIndex enumeration has the values One, Two, Three, and Four. This number is used to identify which controller one wishes to interact with. For Zune only, PlayerIndex.One is valid. GamePad PlayerIndex One Two Three Four PlayerIndex.One Like many of the XNA classes, the GamePad class has a method named GetCapabilities that gives you information on the capabilities of the controller. Had I not mentioned the button mappings above, the GamePadCapabilities object returned by this method would have informed you which button mappings were present. The GamePad's other method of interest is the GetState method. It will return the state of all of the buttons and directional inputs on the controller in a GamePadState object. The Buttons member of the GamePadState object has a group of ButtonState members whose value can be Pressed or Released (if a button is not present, then its state is always Released). The ThumbSticks member contains two Vector2 members named Left and Right. The Right member will never have anything of interest on the Zune, but the Left member will inform you of the position on the ZunePad that the user is touching. GetCapabilities GamePadCapabilities GetState GamePadState Buttons ButtonState Pressed Released ThumbSticks Vector2 Left Right The only button on the Zune that I use in the example program is the Back button for exiting the program. You’ll need to designate your Zune as a device that XNA can target. Start by connecting your Zune to the computer. In order to deploy programs to your Zune, Visual Studio (or Visual Studio Express) will need to be able to communicate with the Zune through the XNA Game Studio Connector. The Zune software must be installed to ensure the proper drivers are present for the Zune. However, Visual Studio cannot communicate with the Zune if the Zune software is running on the PC. Make sure that it isn’t open. Start the XNA Game Studio Connector, and select the option to Add Device. You’ll be given an option to add an XBox 360 or a Zune. Select Zune and click Next. If your Zune is connected to the computer, then it will be listed as a device you can target. Select it and click Next. Another screen will ask you if you would like to set this Zune to be the default Zune. You can safely leave this option with its default value, and click Next to complete the process. Repeat the same process for any other Zune you would like to add. After you’ve completed the process for one or more Zunes, the XNA Game Studio Device Center will display all of your Zunes. Oddly enough, the ZuneHD is displayed with the same icon as the Zune80/120. The Zune30 and Zune4 display with the correct icons. Before version 3 of XNA Game Studio for Zune, if you wanted to draw text on the screen, you had to load a set of images of the letters and selectively draw sections from your font image to the screen. As of the 3.1 version, some of this process has been automated by the SpriteMap class. We are going to update our HelloWorld program so that it will display the time. To display the time, we first need to import a font into our application. Then, we must use that font to render text. SpriteMap To import a font, do the following: SpriteFont MyFont.spritemap <FontName> CharacterRegion Start End Now, to use the font : MyFont MyFont = Content.Load<SpriteFont>("Myfont"); GraphicsDevice.Clear(Color.CornflowerBlue); Vector2 TextPosition = new Vector2(); TextPosition.Y=100; TextPosition.X=20; spriteBatch.Begin(); spriteBatch.DrawString(MyFont, DateTime.Now.ToString(),TextPosition,Color.Red); spriteBatch.End(); When the program runs, it will display the current date and time up to the second. You can exit the program at any time by pressing the Back button. A few words of warning: Fonts, like software, are intellectual property, and you may not be able to use certain fonts because of potential copyright infringement. Also note that fonts, like any other resource, take up space. Whenever possible, strictly limit the character range of fonts included only to those that your program will use. There are a number of different file formats that can be displayed using XNA on the Zune. I would suggest using PNG in most cases. Here is a non-exhaustive list of the image formats that you can use: The current Zunes all have a resolution of 240x320 or 272x480. Keep that in mind when sizing resources. Since we are just getting started, I want to keep my operations with images simple. I want to take the program that I already have and load an image into the background. I’ll be using a Texture object to handle my image. Using any paint editor, create an image of size 240x320. Add the image to your project’s Content folder. The XNA Game Studio will automatically make it a content resource of type texture. I named my image “Background.png”. Once added to the project, the following code changes should be applied: Texture MyTexture MyTexture = Content.Load<Texture>(“Background”); Draw() Clear() spriteBatch.Begin(); spriteBatch.Draw(MyTexture, imagePosition, Color.White); spriteBatch.DrawString(MyFont, DateTime.Now.ToString(), TextPosition, Color.Red); spriteBatch.End(); Now, when you run the program, you will see your background behind the time. It is possible to share the same source code between the Zune, XBox 360, and PC. However, there are aspects of XNA that are specific to each of these devices. Conditional compiling can be used to keep a single code base while not using features on platforms that don’t support them. For Zune projects, the ZUNE symbol is defined. #if ZUNE Like programming for many other mobile devices, for the Zune, you must remember that you are not programming with the same resources that you would have when developing for a desktop. In an XNA program, you will be limited to 16 megabytes of RAM, and the screen size will be 272x480 on the ZuneHD, or 240x320 on the other Zunes. Also, the colour depth on the Zunes is 16-bit. Since the Zune has no keyboard or mouse, if you try to query the states of either of these devices on the Zune, you will find that they always report that no keys are being pressed and the X and Y values for the mouse are always zero. Currently, ZuneHD is the only XNA platform that supports the accelerometer. If you are writing your code generically (so that it will run on a PC, XBox 360, and variations of the Zune), then you’ll want to be able to detect whether or not you are running on a device that has an accelerometer. You can get all the information that you want to know about the accelerometer from the appropriately named Accelerometer class. The Accelerometer class is static so you can immediately begin using it. The GetCapabilities method will let you know whether or not an accelerometer is present through the IsConnected member of the AccelerometerCapabilities instance that it returns. The AccelerometerCapabilities class also has three members (HasXAxis, HasYAxis, HasZAxis) that suggest at some point in the future there could exist devices that don’t detect accelerations along all three planes. I could see a steering wheel type interface being implemented in that manner. The limits of the device’s acceleration detection can be queried through the AccelerationResolution and the MaximumAcceleration and MinimumAcceleration values. Accelerometer IsConnected AccelerometerCapabilities HasXAxis HasYAxis HasZAxis AccelerationResolution MaximumAcceleration MinimumAcceleration To read the state of the accelerometer, use the GetState method of the Accelerometer class. It returns an AccelerometerState object. The instance contains a Boolean property named IsConnected that lets you know whether or not an accelerometer is present, and a Vector3 instance that tells you the readings of the accelerometer along the X-axis, Y-axis, and Z-axis. AccelerometerState Vector3 The touch panel is a minor component of this program. If you press the icon of the back button on the screen, then it will exit. The touch panel is not used in any other way in this program. But using the touch panel is similar to using the game pad or the accelerometer. You have a static class with a GetCapabilities method (for detecting whether or not the hardware is available), and a GetState method for reading the actual state of the hardware. You may notice that I keep referring to the surface as a touch panel instead of a touch screen. In theory, the touch surface does not have to be on the screen. Think of a mouse pad or a pen tablet, both of which are touch surfaces of a type but are not on the screen. The term “Touch Panel” is more encompassing of other types of touch surfaces that could be added to XNA support in the future. The static class is named TouchPanel. The TouchCapabilities object returned by its GetCapabilities method has a member named IsConnected to inform you whether or not a touch panel is available, and MaxTouchCount to inform you how many touches the panel can detect. The TouchPanel’s GetState method returns a TouchCollection object. The TouchCollection contains a list of TouchLocations that, among other things, contains the coordinates (as a Vector2) of the areas on the screen being touched. The touch panel on the ZuneHD can detect up to four touches, so this member will never have more than four members in it. TouchPanel TouchCapabilities MaxTouchCount TouchCollection TouchLocation For the attached program, I only check to see if any of the couch points are intersecting the exit button. Touch is not used in any other way here. It seems that for any mobile device with an accelerometer out there, you can find a bubble level program. It’s the unofficial “Hello World” app of devices with accelerometer capabilities. So in sticking with tradition, I’ve written a bubble level program for my ZuneHD. The creation of the program started off on a sheet of graph paper. I marked off a rectangle that was to scale for the ZuneHD’s 272x480 screen, and then used it to decide how the visual elements would be laid out. I wanted to have a horizontal level, a vertical level, and a circular level. This is the layout that I came out with: Now that I knew the desired layout, I could begin creating my graphical assets. Since the sketch was drawn to scale, I could use it to decide on the exact sizes for each one of my assets. Since I had only planned to invest no more than a couple of hours into this program, I wanted to put the graphics together quickly. But at the same time, I didn’t want them to look bad. So, I used Expressions Design to quickly put together the graphics I needed. I already had a brushed metal image saved from a personal project from years ago, so I reused it. The resulting image assets follow: The graphics were added as content to the program. I also added a font to the content so that I could render text. With all of the needed assets present, the next step was to write the code. But before writing the code, some math needs to be understood. An accelerometer measures the acceleration that a device experiences relative to freefall. In other words, it measures a device’s acceleration minus the force of gravity that the object is experiencing. The Accelerometer class returns a Vector3 object that contains an X, Y, and Z component that can be used to describe the orientation of the device. For a stationary device, the vector described by these three numbers will have a length close to one. If you remember the Pythagorean theorem (which is usually written in the form of ), then you’ll also realize that this means that x*x+y*y+z*z will be equal to about 1. If I place the ZuneHD face-up on a level table, the accelerometer reading would be {X , Y , Z:-1}. If I place the unit face down, the accelerometer reading would be {X , Y , Z:1}. If I stand the Zune up with its left side on the floor and the right side on the ceiling, then the vector gets close to {X:-1, Y , Z }. If I have the ZuneHD flat on a table and begin to tilt it to the left or right, then then the reading along the Z-axis will begin to decrease in magnitude as the reading on the X-axis increases in magnitude. This may be difficult to visualize. So, I encourage you to run the program “AccelerometerTest” that is included with this article to get a feel for the accelerometer’s response to movement. So now that we have the accelerometer’s reading, we need to be able to change it to an angle. If you remember your trigonometry, then you may recall that the arctangent function can be used to get an angle from an X and a Y coordinate. But before we start going down this trail, we need to keep in mind that we have three coordinates, not two. Also, the arctangent function can’t distinguish between the angles of some positions. There is a second form of the arctangent function, usually labeled arctan2, that has the functionality that we need. arctan2 Vector3 accelReading = accelState.Acceleration; tiltDirection = (float)Math.Atan2(accelReading.Y, accelReading.X); tiltMagnitude = (float)Math.Sqrt(accelReading.X * accelReading.X + accelReading.Y * accelReading.Y ); Now that I have the tilt direction and the magnitude, I need to be able to translate that into X and Y displacements for the bubble indicators. Easily done using sine and cosine. Math.Sin(tiltAngle) will convert my angle to a horizontal displacement between 1 and -1. -Math.Cos(tiltAngle) will convert the angle to a displacement between 1 and -1 also. I can take the results of these functions and scale them up to fit into the maximum displacement that I want to allow for the bubble indicators. Math.Sin(tiltAngle) -Math.Cos(tiltAngle) Rectangle horizontalBubbleRectangle = new Rectangle((int)(118 + 102 * tiltMagnitude * - Math.Cos(tiltDirection)), 22, 36, 36); Rectangle verticalBubbleRectangle = new Rectangle(22, (int)(80 + 140 + 124 * tiltMagnitude * Math.Sin(tiltDirection )), 36, 36); Rectangle smallBubbleArea = new Rectangle(156 + (int)(48*tiltMagnitude * -Math.Cos(tiltDirection )), 156 + (int)(48 *tiltMagnitude* Math.Sin(tiltDirection )), 8, 8); That is all the computation that is needed. Once that is done, all that is left is placing the objects on the screen. spriteBatch.Begin(); spriteBatch.Draw(ProgramBackground, backgroundTextureArea, Color.White); spriteBatch.Draw(HorizontalLevelTexture, horizontalLevelTextureArea, Color.White); spriteBatch.Draw(VerticalLevelTexture, verticalLevelTextureArea, Color.White); spriteBatch.Draw(RoundLevelTexture, roundLevelTextureArea, Color.White); spriteBatch.Draw(BubbleTexture, horizontalBubbleRectangle, Color.White); spriteBatch.Draw(BubbleTexture, verticalBubbleRectangle, Color.White); spriteBatch.Draw(SmallBubbleTexture, smallBubbleArea, Color.White); spriteBatch.Draw(BackArrowTexture, BackArrowPosition, Color.White); spriteBatch.DrawString(MainFont, "J2i.net", siteAddressPosition, Color.Blue); spriteBatch.End(); The one thing I skipped over is how to define an icon for the game. When you create the project, it adds a file named GameThumbnail.png to the project. Edit this file to change the icon. If you'd like to look into the XNA program further, then there are plenty of resources available online. But the two that I would highly suggest are the Zune Programming Reference and the XNA Creator's Club Forums. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Hello_World { /// <summary> /// This is the main type for your game /// </summary> public class Game1 : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; SpriteFont MyFont; Texture MyTexture; public Game1() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; // Frame rate is 30 fps by default for Zune. TargetElapsedTime = TimeSpan.FromSeconds(1 / 30.0); } /// . MyFont = Content.Load<SpriteFont>("Myfont"); MyTexture = Content.Load<Texture>("Background"); //) { Vector2 TextPosition = new Vector2(); TextPosition.Y = 100; TextPosition.X = 20; spriteBatch.Begin(); spriteBatch.Draw(MyTexture, imagePosition, Color.White); spriteBatch.DrawString(MyFont, DateTime.Now.ToString(), TextPosition, Color.Red); spriteBatch.End(); // TODO: Add your drawing code here base.Draw(gameTime); } } } Member 7668198 wrote:I know this is old, but I am new to this, and this is the site that I came to. I am a total noob when it comes to c++, and so I am trying to follow along. Member 7668198 wrote:Why does everyone who write a guide, always assume that everyone knows what is being said? Member 7668198 wrote: you lost me on the part that said to add a font type a second time, and then when I added the line of code: MyFont = Content.Load("Myfont"); to the "Load Content" part of the file, I keep getting an error that says: Member 7668198 wrote: "Method must have a return type" What does that mean, and why is it happening? General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/script/Articles/View.aspx?aid=42647
CC-MAIN-2016-30
refinedweb
4,154
64.1
Hi all, Can the system.tag.browse function be used on a expression for example to bind a tag to a label or value display? Hi all, Can the system.tag.browse function be used on a expression for example to bind a tag to a label or value display? system.tag.browse is a script language function. These are different to expression language functions and can’t be mixed. In addition, you don’t “bind a tag to a label” - you bind one of the label’s properties (usually the text) to a tag. What is the problem you’re trying to solve? You can bind script functions to a component either using the runScript expression function or using a script transform in Perspective. But binding to the tag.browse function seems like a bad idea. As @Transistor asked, perhaps let us know what you’re trying to solve instead and we can offer alternative solutions to the issue Hey guys, here is why Philip and I want to use System.Tag.Browse in an expression: I need to modify the instances property of a Flex Repeater component based on the results of a System Tag Browse. The purpose of this whole thing is to automate the output of a Flex Repeater component. The core data needed to automate the flex repeater lives within the tag path data. This is why I’m trying to execute a System.Tag.Browse function within an expression connected to the flex repeater. If you know of a better way to do this, then I’m open to suggestions. Much appreciated. Try this: Ok, so you have different sets of tags in these different folders and therefore need to query the folder to know what to display in the flex repeater? In that case a browse is really your only option unless you have that info available somewhere else like a dataset tag or sql table. It sounds like you’ll have a trigger (eg the tagpath to browse) that should trigger the browse and hence update your repeater items, so I would use a property binding using this tagpath and attach a script transform to do your browse. Then create your flex instances list of dicts using the results @Transistor - Yeah, this was basically my plan. Create a view level parameter and then store the specific tag path data into this parameter. Then have a script on the instance property of the flex repeater to build the instance property data based on the list of data in this view parameter. However, I’m still stuck at figuring out how to run the System.Tag.Browse script and pass that data to a parameter. @nminchin - I don’t have the information anywhere else. I thought about storing the data in SQL, but I realized the complex folder (device) hierarchy information I need is already part of the tag paths so this is why I’m trying to figure out how to run the System.Tag.Browse from expression. @Transistor & @nminchin - Is the only way to do this is to write a project level script function that contains the System.Tag.Browse and then reference this script using a “RunScript” function? If so, I attempted to do this and still was unsuccessful, but maybe I had a syntax problem. I could try again and post my script here to have you look at it. Nope. Bind the flex repeater’s props.instances to your tag path property (the property that changes when you want to browse a different folder and hence produce different flex repeater instances) and add a script transform to it which would do the browse and the creation of the list of dicts that props.instances expects. Something like: tagpath = value tags = system.tag.browse(tagpath, filter={'tagType'='AtomicTag'}) tagPaths = [tag['fullPath'] for tag in tags] instances = [{'tagPath': tagPath} for tagPath in tagPaths] return instances Note that an absolute performance cliff lies down this road. Browsing is a fairly expensive operation. You’re browsing your folder paths N * M times, where N is the number of Perspective sessions you have and M is the number of these components you have (since this binding will also run on initialization). If this is something you don’t expect to change often, I would strongly encourage you to look into “pre-caching” this information on the gateway somewhere - you can still use browse, but maybe once an hour in a gateway scheduled script or something, rather than constantly per session. I think the most important question here is : What will make the data in the repeater change ? Is it user triggered (dropdown, button, text field…) or does it depend on a general condition (a tag’s value, the result of a scheduled script…). As @PGriffith said, depending on what the trigger is and how often you expect it to change, there might be different ways to do this that would avoid doing a full browse too often. @PGriffith and @nminchin - Thanks for the advice. I’m doing as you suggested and implementing the system.tag.browse in a Gateway Events script to minimize performance impact. I would ultimately run this script once a day and pass the data to a mySQL database. However, I’m struggling to pass the tag.browse results to the mySQL database. I have a couple questions that I’m hoping you have answers to. When I do a system.tag.browse the results all have a ‘u’ in front of tags. Here’s example: u’hasChildren’: True, u’name’: u’DCI01’, u’tagType’:Folder}, —> Why is this ‘u’ being inserted into the JSON? When I try to SET this data in the mySQL database into a JSON datatype column I keep getting a log error saying “You have an error in your SQL syntax”. below is the result of my expression string. system.db.runUpdateQuery(UPDATE sitedetails SET jsonTags = ‘: [{u’fullPath’: [Ignition_… If I replace the 'jsonTags = with something simple like 'jsonTags = ‘I am awesome’ then the script executes fine. So this tells me that the mySQL string for some reason doesn’t like whatever the tag.browse function returns. Any help here is appreciated. system.tag.browse returns something that looks like a Python list of Python dictionaries, which itself looks like a JSON structure, but neither are exactly equivalent. (The u prefix on strings is a giveaway that it’s not actually JSON, but in fact the string representation of Python data). The immediate answer is to use system.util.jsonEncode to translate the Python data structure into a plain JSON string, which you should then be able to store in your DB. However, you’ll probably want to ‘massage’ the data a bit to extract only the fields you particularly care about before encoding it; the direct output of jsonEncode will probably have lots of details and nested information you don’t really need for this use case. @Trent_Derr I imagine all you want to put into SQL is a list of paths to the tags: tags = system.tag.browse(...) tagpaths = [str(tag['fullPath']) for tag in tags] # this will produce a list of tagpaths # you may want to convert the list into a CSV for SQL tagpaths_csv = ','.join(tagpaths) ‘u’ for Unicode, as far as I’m aware. Thanks. With everyone’s help I was able to get the first part of my automation done. Here is the script near final script for collecting the tag paths. import json #Set the tag browse path and filter parameters filter = {‘tagType’:‘Folder’,‘recursive’:True} #TODO - Instead of hardcoding the siteID and basePath, fetch the list of siteIDs from the database and iterate through each site siteID = ‘H4683’ basePath = “[Ignition_HOST01_MQTT_Engine]Edge Nodes/…” tags = system.tag.browse(path = basePath, filter = filter) tagpaths = [str(tag[‘fullPath’]).replace(basePath,‘’) for tag in tags] tagpaths_csv = ‘,’.join(tagpaths) #system.file.writeFile(r"C:\myExports\myExport.csv", tagpaths_csv) rowsChanged = system.db.runUpdateQuery(“UPDATE sitesdetails SET tagFolders = ‘%s’ WHERE siteID = ‘%s’” %(tagpaths_csv, siteID),‘db’) My next task is to create the change script on the flex repeater that will search through these tags to gather specific folder parameters for the flex repeater instances. My plan is iterate through with regex searches, but not sure if that’s possible.
https://forum.inductiveautomation.com/t/system-tag-browse/61488
CC-MAIN-2022-40
refinedweb
1,383
62.58
I have a short question regarding model constants and persistent state. Let's say I have something like class MyModule(nn.Module): def __init__(self, n=2): self.n = n What's the best way to make n part of the persistent state (i.e. the state_dict)? Should I make it a buffer? But then I would need to convert it into a tensor, which seems a bit of a hassle. Is there another more elegant way? n state_dict I think you'd better make it a buffer by self.register_buffer('n', n) self.register_buffer('n', n) def state_dict(self, destination=None, prefix=''): """Returns a dictionary containing a whole state of the module. Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Example: >>> module.state_dict().keys() ['bias', 'weight'] """ if destination is None: destination = OrderedDict() for name, param in self._parameters.items(): if param is not None: destination[prefix + name] = param.data for name, buf in self._buffers.items(): if buf is not None: destination[prefix + name] = buf for name, module in self._modules.items(): if module is not None: module.state_dict(destination, prefix + name + '.') return destinatio
https://discuss.pytorch.org/t/add-constants-to-persistent-state/2080
CC-MAIN-2017-39
refinedweb
195
54.18
Add missing folder "glext" and "lib3ds" in the include folder created by windows installer. New Features : * COFF file format support. * Basic Shader support (GLSL). * Use Vertex Buffer Object (VBO) instead of display list. If VBO is not supported use Vertex Array. * Manage transparency by materials. Enhancements and Bug Fix : * Reduce memory consumption. * Reduce 3ds loading time. * Use selection based on color. Up to 10x faster. New Features : * Include lib3ds 1.3 source. * 3D Studio file format (.3DS) support. * STL file format. (.STL) support. * OFF file format (.OFF) support. * Add a set of geometry tools for non convex polygon partition into triangles. Enhancements and Bug Fix : * GLC_Mesh2 vertexs, normals and texture coordinate are now stored as float instead of double. * OBJ with non convex polygon are now correctly rendered. * Introduce distinction between points and vectors. * Introduce glc namespace for function tools and constants. * Introduce C++ Include conventions. * Windows manual installation : Add intall target to simplify installation process.... read more * Mac OS X (intel) Support. * Add first version of scene graph : GLC_World, GLC_Product GLC_Part and GLC_Node. * Add the possibility to handle transparent GLC_Geometry in GLC_Collection. * Add the possibility to hide GLC_Instance from GLC_Collection. * Add the possibility to create a GLC_World class with GLC_ObjToWorld class. * More Obj and mtl file type support. * GLC_Mesh2 internal material are now implicity shared.... read more New Features : * Instance support (last step before scene graph) * Add the possibility to built rotation matrix with Euler angle. Enhancements and Bug Fix : * Fix memory leak in GLC_ObjToMesh2 class. * Code optimisation in GLC_ObjToMesh2 class. * When a problem occur while loading a OBJ file, add line number in exception description. * Refactoring class's method.. * Fix some bug in GLC_ImagePlane class. * Improve selection feedback performance for GLC_Mesh2 selection.... read more Bug fix in GLC_Texture copy constructor. This bug cause a crash in GLCviewer when the properties of an object with texture was edited. Package Updated : GLC_lib_0.9.7-setup.exe GLC_lib_src_0_9_7.zip New Features : * Load OBJ file without normal added. * Reverse mesh normal. * Selection feedback. * Add Exceptions. Enhancements and Bug Fix : * Compilation OK with GCC 4.xx * GLC_ObjToMesh2 completely rewrite. * GLC_Texture can be creates before OpenGL rendering context. * GLC_Factory use OpenGL rendering context instead of QGLWidget. GLC Viewer new features: * Create GLC_lib primitive (GLC_Point, GLC_Box, GLC_Circle and GLC_Cylinder). * Edit scene's object. * Remove a scene object. * Edit object's material. * Change background color or texture. * View/edit camera parameters. * Reframe on scene or selected Object. * OBJ file loading in another thread. * Exception handler when load OBJ file. Before the end of summmer, a new release off GLC will be coming out. New Feature and improvement of GLC_lib 0.9.7 : - Introduction of exceptions. - GLC_ObjToMesh2 entirely rewrote - Bugs fix to compile GLC_lib with GCC 4.x - Rename off all method with an lower case char - ... New Feature in GLC_Viewer : - Add Progression bar while loding a file - Creation off GLC_Lib primitive - Modifying Backrgound (Color or Picture) - ...... read more SVN migration of the CVS is now complete. CVS no longer needed. CVS to SVN migration in progress. See for more informations. Windows 32 bits Executable installer is now available in the download section. New GLC_Mesh2 class : - which support multiple material and texture. New GLC_ObjToMesh2 Class : - Support "obj" "mtl" file and texture. New Mesh2 feature available from CVS. GLC_lib 0.9.5 will coming as soon as possible.
https://sourceforge.net/p/glc-lib/news/?source=navbar
CC-MAIN-2017-51
refinedweb
550
55
Scroll down to the script below, click on any sentence (including terminal blocks!) to jump to that spot in the video! gstreamer0.10-ffmpeg gstreamer0.10-plugins-goodpackages. Yo friends! It's Webpack time! Yeeeeeeah! Well, maybe not super "yeeeeeeah!" if you are the person responsible for installing and configuring Webpack... Cause, woh, yea, that can be tough! Unless... you're using Webpack Encore! More about that in a few minutes. But first, I want you to know why you should care about Webpack... like super please-let-me-start-using-webpack-right-now-and-never-stop-using-it kind of care. Sure, technically, Webpack is just a tool to build & compile your JavaScript and CSS files. But, it will revolutionize the way you write JavaScript. The reason is right on their homepage! In PHP, we organize our code into small files that work together. But then, traditionally, in JavaScript, we just smash all our code into one gigantic file. Or, if we do split them up, it's still a pain because we then need to remember to go add a new script tag to every page that needs it... and those script tags need to be in just the right order. If they're not, kaboom! And even if you do have a build system like Gulp, you still need to manage keeping all of the files listed there and in the right order. How can our code be so nicely organized in PHP, but such a disaster in JavaScript? Webpack changes this. Suppose we have an index.js file but we want to organize a function into a different, bar.js file. Thanks to Webpack, you can "export" that function as a value from bar.js and then import and use it in index.js. Yes, we can organize our code into small pieces! Webpack's job is to read index.js, parse through all of the import statements it finds, and output one JavaScript file that has everything inside of it. Webpack is a huge over-achiever. So let's get to it! To import or... Webpack the maximum amount of knowledge from this tutorial, download the course code from this page and code along with me. After you unzip the file, you'll find a start/ directory that has the same code I have here: a Symfony 4 app. Open up the README.md file for all the setup details. The last step will be to open a terminal, move into the project and start a web server. I'm going to use the Symfony local web server, which you can get from. Run it with: symfony serve Then, swing back over to your browser and open up to see... The Space Bar! An app we've been working on throughout our Symfony series. And, we did write some JavaScript and CSS in that series... but we kept it super traditional: the JavaScript is pretty boring, and there are multiple files but each has its own script tag in my templates. This is not the way I really code. So, let's do this correctly. So even though both Webpack and Encore are Node libraries, if you're using Symfony, you'll install Encore via composer... well... sort of. Open a new terminal tab and run: composer require encore This downloads a small bundle called WebpackEncoreBundle. Actually, Encore itself can be used with any framework or any language! But, it works super well with Symfony, and this thin bundle is part of the reason. This bundle also has a Flex recipe - oooooOOOOooo - which gives us some files to get started! If you want to use Webpack from outside of a Symfony app, you would just need these files in your app. Back in the editor, check out package.json: This is the composer.json file of the Node world. It requires Encore itself plus two optional packages that we'll use: To actually download these, go back to your terminal and run: yarn Or... yarn install if you're less lazy than me - it's the same thing. Node has two package managers - "Yarn" and "npm" - I know, kinda weird - but you can install and use whichever you want. Anyways, this is downloading our 3 libraries and their dependencies into Node's version of the vendor/ directory: node_modules/. And... done! Congrats! You now have a gigantic node_modules/ directory... because JavaScript has tons of dependencies. Oh, the recipe also updated our .gitignore file to ignore node_modules/: Just like with Composer, there is no reason to commit this stuff. This also ignores public/build/, which is where Webpack will put our final, built files. In fact, I'll show you why. At the root of your app, the recipe added the most important file of all webpack.config.js: This is the configuration file that Encore reads. Actually, if you use Webpack by itself, you would have this exact same file! Encore is basically a configuration generator: you tell it how you want Webpack to behave and then, all the way at the bottom, say: Please give me the standard Webpack config that will give me that behavior. Encore makes things easy, but it's still true Webpack under-the-hood. Most of the stuff in this file is for configuring some optional features that we'll talk about along the way - so ignore it all for now. The three super important things that we need to talk about are output path, public path and this addEntry() thing: Let's do that next, build our first Webpack'ed files and include them on the page.
https://symfonycasts.com/screencast/webpack-encore/encore-install
CC-MAIN-2020-29
refinedweb
938
75.5
Make sure you didn't miss anything with this list of the Best of the Week in the HTML5 Zone (December 05 - December 12). Here they are, in order of popularity: 1. Why You Don't Commit Code on the First Day How many of you have committed code on your first day of work? As you might know, it’s not a common practice. In fact, when you do commit code to production on your first day of work, you’re a bit of a hero! Why is it so hard to make a meaningful impact during your first day of work? 2. Autocomplete Phing Targets on the Command Line Shortly after writing my last post, I thought it could autocomplete the targets in my Phing build file on the command line. What we do is create a list of words for the -W option to compgen. Once done, compgen creates the wordlist used by complete when you press tab after the word phing on the command line. 3. Learning AngularJS One Step at a Time I have a new in-depth article published online today called, “AngularJS: One Step at a Time”. See below for more information and a link to the online article. 4. A Look Into HTML6 - What Is It, and What Does it Have to Offer? HTML5, the current revision of HTML, is considered to be one of the most sought-after revisions, compared to all the previous HTML versions. So what could HTML6 have to offer? Here I'll give you a rundown of new HTML6 features, like its new namespaces with XML-like structures, and some HTML6 API samples. 5. Taming WordPress Taxonomies I have created a plugin which will give users of WordPress greater control over the taxonomies that are available to them. I have called this plugin the Taxonomy Toolkit {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/best-week-dec-5-12-web-dev
CC-MAIN-2017-30
refinedweb
321
70.13
FAQs Search Recent Topics Flagged Topics Hot Topics Best Topics Register / Login Win a copy of Bad Programming Practices 101 (e-book) this week in the Beginning Java forum! todd runstein Ranch Hand 64 9 Threads 0 Cows since Feb 15, 2005 Cows and Likes Cows Total received 0 In last 30 days 0 Total given 0 Likes Total received 0 Received in last 30 days 0 Total given 0 Given in last 30 days 0 Forums and Threads Scavenger Hunt Ranch Hand Scavenger Hunt Number Posts (64/100) Number Threads Started (9 (64/10) Number Threads Started (9/10) Number Likes Received (0/3) Number Likes Granted (0/3) Set bumper stickers in profile Set signature in profile Set a watch on a thread Save thread as a bookmark Create a post with an image (0/1) Recent posts by todd runstein Migrate4j - a. This version"). show more 10 years ago Blatant Advertising Getting the generic type of a Collection Thanks, I looked at using Field.getGenericType().toString(), but because I need the java.lang.Number class, that didn't get me very far. I couldn't see a convenient (or reliable) way of getting the correct class from the getGenericType().toString() output. Todd show more 10 years ago Java in General Getting the generic type of a Collection I'm using reflection to inspect a collection and want to get the Class of the objects that can be put inside of it. Is this seriously the only way to get the class: (Class)((ParameterizedType)field.getGenericType()).getActualTypeArguments()[0] show more 10 years ago Java in General ResourceBundle not using Locale I solved it. I had 2 properties files: Text_Labels.properties and Text_Labels_fr_CA.properties. Simply adding a file named Text_Labels_en.properties (which matches Text_Labels.properties) fixes the problem! show more 10 years ago Java in General ResourceBundle not using Locale Not sure what I'm doing wrong - this seems very straightforward. I have simple internationalization code and I'm trying to override the Locale being used. However, using the PropertyResourceBundle.getBundle(String, Locale), the Locale seems to be ignored. Here's the test class: import java.util.Locale; import junit.framework.TestCase; import static TextLabel.*; public class TextLabelTest extends TestCase { TextLabel textLabel; @Override protected void setUp() throws Exception { super.setUp(); textLabel = new TextLabel(); } public void testEnglish_WithLocale() { Locale.setDefault(new Locale("fr", "CA")); assertEquals("Oui", textLabel.getString(YES)); assertEquals("Non", textLabel.getString(NO)); Locale.setDefault(new Locale("en", "US")); assertEquals("Yes", textLabel.getString(YES)); assertEquals("No", textLabel.getString(NO)); //Make sure the system is using the wrong locale Locale.setDefault(new Locale("fr", "CA")); assertEquals("Yes", textLabel.getString(YES, LANG_ENGLISH, COUNTRY_US)); assertEquals("No", textLabel.getString(NO, LANG_ENGLISH, COUNTRY_US)); } } Here's the TextLabel class: import java.util.Locale; import java.util.PropertyResourceBundle; import java.util.ResourceBundle; public class TextLabel { public static final String BASE_FILENAME = "Text_Labels"; public static final String LANG_ENGLISH = "en"; public static final String LANG_FRENCH = "fr"; public static final String COUNTRY_US = "US"; public static final String COUNTRY_CANADA = "CA"; public static final String YES = "Yes"; public static final String NO = "No"; public String getString(String key) { return getString(key, null, null); } public String getString(String key, String language, String country) { Locale locale; if (language == null || country == null) { locale = Locale.getDefault(); } else { locale = new Locale(language, country); } System.out.println("Locale = " + locale); ResourceBundle bundle = PropertyResourceBundle.getBundle(BASE_FILENAME, locale); System.out.println("Bundle using " + bundle.getLocale()); return bundle.getString(key); } } Pretty straightforward. However, the output generated is: Locale = en_US Bundle using Locale = en_US Bundle using Locale = en_US Bundle using fr_CA I would expect the last one to say "Bundle using en_US". I'm running the test inside Eclipse 3.3.0 using Sun's 1.5.0.13 JVM. Any ideas why the Locale is being ignored? Thanks in advance! show more 10 years ago Java in General Running application as default without losing Manager Not sure what happened - reinstalled Tomcat and now it's looking good. Sorry for the false alarm. show more 10 years ago Tomcat Running application as default without losing Manager I'm trying to deploy an application as the default application. I'm able to do this by naming my war ROOT.war and copying it the webapps directory. Unfortunately, this makes the admin and manager apps disappear. I'm able to reinstall the admin application (by downloading it from the Apache site) but haven't figured out how to get the Manager app to continue working. Any suggestions? show more 10 years ago Tomcat Error unregistering mbean Last night we upgraded our JVM from 1.4 to 1.5. After the upgrade, the server still works, however we're getting an exception thrown repeatedly: - Error unregistering mbean javax.management.RuntimeOperationsException: Object name cannot be null at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.isRegistered( DefaultMBeanServerInterceptor.java:545) at com.sun.jmx.mbeanserver.JmxMBeanServer.isRegistered( JmxMBeanServer.java:619) at org.apache.commons.modeler.Registry.unregisterComponent( Registry.java:642) at org.apache.jk.common.ChannelSocket.processConnection( ChannelSocket.java:706) at org.apache.jk.common.SocketConnection.runIt(ChannelSocket.java :866) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run( ThreadPool.java:684) at java.lang.Thread.run(Thread.java:595) Caused by: java.lang.IllegalArgumentException: Object name cannot be null ... 7 more We're running Tomcat 5.0, Apache 2.0 with jk_mod against a sybase db. The only change was the JVM and our webapp compiled in 1.5. Any ideas where to start looking??? Todd show more 10 years ago Tomcat DB Vs Mq Any chance you can provide a bit more info? Message queues are really good for exchanging messages. Databases are really good at storing and retrieving data. ;) show more 10 years ago JDBC and Relational Databases Storing Arabic language Is your database configured for UTF-8? Some databases use non-Unicode by default (for example, MySql before version 4.1). show more 10 years ago Servlets Class.forName(ClassName) Muni, There are only a few patterns where doing dynamic loading would be your first choice. If you know what class you want to use, call one of it's constructors with "new". However, if you don't know what class you're going to be using, you could do dynamic loading. For example, maybe you have a "RuleSet" interface. Perhaps you have several classes that implement this and you routinely add more. Perhaps you want to store information in a database (maybe about a Procedure, ManufacturingStep, etc) - you can store the fully qualified classname of the RuleSet that belongs to this record and load the class dynamically using this value. There are, of course, other ways to solve this problem, and other problems that lend themselves to dynamic class loading. Hopefully that gives you a better idea of where and when you'd use dynamic loading. Assuming you're going through this for learning, here's an idea of how you could use dynamic classloading to call an "apply" method (totally off the top of my head - untested): -- File rule/set/RuleSet.java public interface RuleSet { public void apply(); } -- File rule/set/BasicRuleSet.java public class BasicRuleSet implements RuleSet { public void applySet() { System.out.println("Applying Rule Set"); } } -- In some other class ... public static void applyProceduresRuleSet(Procedure p) { try { Class c = Class.forName(p.getRuleSetName); if (!RuleSet.class.isAssignableFrom(c)) { //throw exception, log error, whatever } Object obj = c.newInstance(); RuleSet ruleSet = (RuleSet)obj; ruleSet.apply(); } catch (Exception e) { //throw exception, log error, whatever } } ... Hope that helps! show more 10 years ago Beginning Java comparing dates Perhaps I'm missing something here . . . could you just create a static method that takes two String objects, tries to convert them to dates, and returns the difference (in milliseconds, or whatever you want). That way you don't have to change the rest of your application, but you can do the comparison. Again, I may be missing something. It seems that my answer is pretty simplistic, so perhaps there more to the problem than I'm reading in your post. show more 11 years ago Java in General Database Connectivity issues Looks like you don't have the proper permissions set up. What account are you using to connect? Does that account have permissions to the DB? Are you using the correct password? show more 11 years ago JDBC and Relational Databases Query problem with MySQL 5 So, for game 3, would something like this work? select player_id, max(points) as points from result where game_id = 3 group by player_id order by points; My "order by" statement may not work in your db as is, but does the general idea work? This will list all the players who played game 3, and put them in order from most points to least. Is that closer to what you're looking for? show more 11 years ago JDBC and Relational Databases Which OpenSource Java Installer? While it requires a bit of investment (to learn), the motivation is that you will end up using it again and again. It will allow doing the XML config file for this task, and if you wanted, it could be used to do the Java install (no more bat/zip file - it's all in a professional and easy to use .exe setup file). If you plan on doing other windoze installers in the future, NSIS will be able to handle that too. If you're looking to solve just this problem, there are other options - if you want a tool that will solve this, and any other installer situation (as long as it's targetted for win), NSIS is well worth trying. show more 11 years ago Other JSE/JEE APIs
https://coderanch.com/u/92764/todd-runstein
CC-MAIN-2018-34
refinedweb
1,604
50.33
The Teensy and Teensy++ boards have a LED connected to the input pin. If the input signal is unable to drive the LED, a buffer must be used. The 74HC14 is a good choice. An amplifier may be needed if the input signal is a sine wave or small AC signal which can not directly drive a TTL logic level input. Begin frequency counting. GateInterval is the time in milliseconds for each measurement. Using 1000 provides direct frequency output without mulitplying or dividing by a scale factor. Returns true when a new measurement is available. Only a single measurement is buffered, so it must be read before the next gating interval. Returns the most recent measurement, an unsigned long containing the number of rising edges seen within the gating interval. There is no delay between gating intervals, so for example a 1000.5 Hz input (and perfectly accurate crystal osciallator) will alternate between 1000 and 1001 when used with a 1 second gate interval. Stop frequency counting. PWM (analogWrite) functionality may be used again. #include <FreqCount.h> void setup() { Serial.begin(57600); FreqCount.begin(1000); } void loop() { if (FreqCount.available()) { unsigned long count = FreqCount.read(); Serial.println(count); } } During USB startup on Teensy, activity from the PC can cause interrupts which interfere with FreqCount's gate interval. Other libraries which disable interrupts for long times, such as NewSoftSerial, can cause trouble.. Martin Nawrath's FreqCounter and FreqPeriod are similar to FreqCount and FreqMeasure. I originally ported FreqCounter to Teensy, but could not get it to work reliably. It did work, but had trouble at certain frequencies, and requires a "compensation" factor for accurate results. Ultimately I wrote a FreqCount from scratch, using a thin hardware abstraction layer for easy porting to different boards, and accurate results without a compensation factor. I also used a more Arduino-like API (begin, available, read) and designed for continuously repeating measurements. I did not try to use FreqPeriod.
https://www.pjrc.com/teensy/td_libs_FreqCount.html
CC-MAIN-2017-17
refinedweb
324
50.43
Asked by: Constructor return type Hi all, I was wondering if anyone can tell me what actually does a constructor return, is it a pointer or some generic object type. public class MyClass { public MyClass() { Console.WriteLine("my class"); } public static void main() { MyClass obj= new MyCLass(); } } I know obj is an object of MyClass but what value does it actually have, some address or a pointer to an address? Shivi Gupta General discussion All replies Reference to the instance of the MyClass. I think it's just something that references the instance of class and you should not think about them addresses or pointer I have read from somewhere possibly from Eric Lipperts blog. Not sure whether return is correct technical term or it that the reference is set to the instance of MyClass. - Yes Sir, But that something must have a name. I wanted to know so that I could use some of the return type value or that something, so that it doesn't go to waste. Because at every instance created fo MyClass , the constructor will be called and it will return something. Shivi Gupta Contructor creates new instance of the class and sets the reference. I don't know what you are after, but if you don't want to create so many instance then share the reference instead of creating new one; just be sure that the class instance can be shared. MyClass obj = new MyClass(); MyClass obj2 = obj; // now obj2 references the same instance as obj - A constructor doesn't return anything. Internally, its return type is void. When you use the 'new' operator, a new instance is created and the constructor is called on that instance (like any instance method). The 'new' operator then returns the instance. Hi, - A class can have any number of constructors. - A constructor doesn’t have any return type even void. Check the below links in detail about Constructor Constructors (C# Programming Guide) Constructor types with example programs in C#.NET An Intro to Constructors in C# PS.Shakeer Hussain A class can have any number of constructors. I didn't say otherwise. A constructor doesn’t have any return type even void. You cannot declare a return type. It doesn't mean there is none. It's something you find a lot in C#: with very few exceptions, when something can have only one value, you cannot declare it. For example, interfaces members are public: you cannot declare their access modifiers, even 'public'. Note also that I said "internally, its return type is void". Look at a compiled class: any method which doesn't return anything has a return type defined as void, including constructors. As noted by others technically it's not the constructor that sets the reference to the instance it is the new operator and the constructor itselft returns just void. In your code the reference of the obj is set to instance of MyClass. So new operator sets or returns, what ever you want to call it, reference to the instance of created class and does much more. If you are interested about the technical details you can check C# specification section 7.6.10 The new operator. new: creates a new Instance. Cares for it to get the memory it needs. set's all fields whose starting values were asigned outside of constructor calls the constructor on the new isntance whose signature matches the parameters you gave. Wich may call the constructors of the base class or other functions. Assigns the values given via Object Initialisers. And finally returns you a Referece to the created object that you store in a Reference Varriable. You aren't required to save it at all. You could write (parathesis not needed): (new Form1()).show();And it compiles and works perfectly. You never have to store or process a return value. However in most cases it makes sense to do so. Especially if you want/have to do more more then just one action with the object. - Edited by Christopher84 Wednesday, January 30, 2013 9:47 AM
https://social.msdn.microsoft.com/Forums/vstudio/en-US/9ff96aba-cbc5-439b-bf85-77b8822d0be5/constructor-return-type?forum=csharpgeneral
CC-MAIN-2015-40
refinedweb
682
65.42
- 16 Nov, 2020 1 commit - 15 Nov, 2020 - 01 Nov, 2020 1 commit It only has a bunch of '.gitkeep' files to ensure its subdirectories are present, and is a major source of status pollution, being used for data files. --HG-- branch : heptapod-stable - 30 Aug, 2020 1 commit The current heptapod branch will use a new testing image with new versions of many things, notably of the upstream base image, so we need separate images for our two development branches. --HG-- branch : heptapod-stable - 26 Oct, 2020 6 commits Closes #354 --HG-- branch : heptapod-stable This test was failing, using a wider margin makes it pass for me. Is it because we're close to a day's end? Didn't notice about 1.5 hours ago, so it's a small corroboration. This should be taken upstream. --HG-- branch : heptapod-stable This concludes #361: the previous implementation was only checking if there's at least a group where the user has Developer role, whereas it actually depends on the group configuration. It's possible that this is partly redundant with `groups_with_developer_maintainer_project_access`, but it's hard to be sure, and we're careful to stop as soon as we have at least a match. --HG-- branch : heptapod-stable This is the non-UX part of #361: we're now enforcing at the service level that `User#can_create_project_in_personal_namespace?` is called, where it could have been bypassed. It would be arguably better to just add a similar check in `user.can?(:create_project, namespace)`, but that would be harder and perhaps more dangerous, especially on the stable branch. At least, we now have RSpec tests to back us for this. --HG-- branch : heptapod-stable Arguably, this is masking the real issue, which is #360, but we the benefits of having these tests in CI outweigh the drawback of not fixing the issue. Also, it's arguably a good thing to validate that the README init option works in Heptapod for Git projects. --HG-- branch : heptapod-stable - 24 Sep, 2020 4 commits This needs a JavaScript change, that we're performing trying to minimize impact (still storing the select on $this) while avoiding code duplication. We're trying to get duplication in the HAML template low, but there's only so much that can be done there. --HG-- branch : heptapod-stable This is the inner part of #138: we have to lift the restrictions against the change of that column precisely, expose it in the API, and do the post-treatment (system note, cancellation of automatic merge). The diff invalidation in app/models/repository.rb was already taking this case into account. --HG-- branch : heptapod-stable - 10 Sep, 2020 1 commit The `CommitReferenceFilter` class is the one responsible for the rendering. We're *adding* resolution of truncated Mercurial Node ids for the case of the `hg_git` VCS type, relying on the existing map of full Mercurial SHA (Node id) to Git SHA. This is very inefficient, and proper calls with low startup overhead to a persistent Mercurial nodemap would do much better above a certain repository size (persistent nodemap is provided by Rust native extensions, and low overhead would be achieved only by a fastpath without Python or a long-running process, such as HGitaly) While the `Git::Commit.batch_by_oid` method doesn't seem to be called except from the `#commits_by` method of the `Repository` model, the latter is used in several places of the application, with risks of breakage and/or severe performance degradation. For `hg_git` VCS type, most callers don't need the Mercurial resolution. That's why we're executing the new lookup only if the new `hg_full_resolution` argument is set to `true`. Preliminary performance analysis (non scientific, on my workstation) shows that in the 100k changesets range (pypy repo) this naive lookup takes about 30ms, whereas `hg log -T` is in the 100-200ms ballpark. Around 500k changesets (mozilla-central repo), times have the same order of magnitude (around 100-200ms again). To insist, this is with the hg startup overhead and without the Rust persistent nodemap (would take less than 1ms). We fully expect the current hg<->git maps to be an unbereable performance problem in Heptapod anyway around 500k to 1M changesets anyway, only solved by HGitaly. All in all, the performance question seems to be acceptable in the current inefficient context *for Note rendering*. HGitaly would provide an efficient RPC call anyway. --HG-- branch : heptapod-stable - 27 Sep, 2020 1 commit This one can be seen in Merge Request commits page. The clipboard button was doing the right thing, but the accompanying label (most of what users see) was the Git SHA. --HG-- branch : heptapod-stable - 26 Sep, 2020 8 commits Closes #349 --HG-- branch : heptapod-stable This is now the same as for other elements, maybe it's changed upstream since we modified it, but that doesn't matter. --HG-- branch : heptapod-stable Using the new hg archival metadata parsing facility, or a direct call to Mercurial if appropriate, this tells if the current running revision bears the tag corresponding to `HEPTAPOD_VERSION`. --HG-- branch : heptapod-stable From now on HEPTAPOD_REVISION is ignored, see #349. --HG-- branch : heptapod-stable These will from now be launched by CI. The previously split methods made this easier, although we needed one more, to have a constant `Pathname` instance to mock. --HG-- branch : heptapod-stable These will be useful for other inspection of the currently version or revision. --HG-- branch : heptapod-stable - 22 Sep, 2020 1 commit Technically, this is a merge from the heptapod branch, right after it's been merged into the heptapod-0-16 branch to cut the Heptapod 0.16.0 release. --HG-- branch : heptapod-stable - 21 Sep, 2020 4 commits These are served directly by workhorse (or nginx if workhorse is itself not available) --HG-- branch : heptapod It is not completely obvious to me what is derived from that one. --HG-- branch : heptapod At least on a HDK modified to run in production mode, this puts a Heptapod favicon instead of the GitLab one. --HG-- branch : heptapod Closes #345. The selective pull we're adding should be enough in most cases with our current branching strategy. Since we can't really test it without polluting the tags, and it's not a really big problem if it doesn't work, we're allowing the pull to fail. This can be made stricter in a follow-up. --HG-- branch : heptapod - 10 Sep, 2020 4 commits This fixes the rendering of all links to Mercurial changesets in Markdown (#342), because the changed method is the one called from `lib/banzai/filter/abstract_reference_filter`. In particular, the "system note" displaying additional commits in MRs are actually rendered server side from Markdown produced by `SystemNotes::CommitService#commits_list`. This doesn't change the lookup, i.e., the SHA prefix to render is still the Git one. Also, the rendering of `Note` is persisted in the database, in the `note_html` column, so this change doesn't apply to existing Notes. We could have a background migration for this, or simply a mass invalidation if that is suitable. --HG-- branch : heptapod --HG-- branch : heptapod Of course the revisions of that file that matter are those from the release branches, but sometimes we want to work on it ahead, and it looked like an error to have nothing after 0.13 --HG-- branch : heptapod See #344 This is a reviving of acc9c69295de, whose JavaScript part was unintentionally reverted in the big churn for the jump to GitLab 12.10 (05c3e70b82f8). It wouldn't have worked anyway before #338, except for source installations. --HG-- branch : heptapod - 09 Sep, 2020 4 commits This version just got released, with at least a fix for a Python 3 problem (bug6390) that was first seen in Heptapod context --HG-- branch : heptapod We'd rather not use Detached Merge Request pipelines as in earlier GitLab versions than to launch these heavy jobs twice. As explained in the comment, we really want the `rspec` job to run for pushes not MR related. --HG-- branch : heptapod This was spotted by the development setup seeding snippets. Probably the novelty in GitLab 13.3 is to go straight through this instead of `Gitlab::Shell`. The point is that a Repository is not necessarily tied to a Project any more, it has a container, which can itself have a Project, or not. Snippets can exist independently of any Project. --HG-- branch : heptapod This bugfix version was released after we started the Heptapod 0.16 development cycle, with a bugfix in the HTTP subsystem: --HG-- branch : heptapod - 03 Sep, 2020 1 commit --HG-- branch : heptapod
https://foss.heptapod.net/heptapod/heptapod/-/commits/28b2ac628485b0bad6a28900c8b0162f1be82399
CC-MAIN-2022-40
refinedweb
1,439
59.33
Opened 9 years ago Closed 9 years ago Last modified 5 years ago #3804 closed (invalid) Filter for translation instead of block Description I suggest using filters for translation of strings instead of special blocks. This is much more flexible, so instead of {% filter title %}{% trans "add message" %}{% endfilter %} you can get {{ "add message"|trans|title }} The filter to do this is really simple: def trans(value): return translation.gettext(value) register.filter('trans', trans) The make-messages tool needs to be updated to handle this though. Change History (11) comment:1 Changed 9 years ago by Simon G. <dev@…> - Needs documentation unset - Needs tests unset - Patch needs improvement unset - Triage Stage changed from Unreviewed to Design decision needed comment:2 Changed 9 years ago by mtredinnick - Resolution set to wontfix - Status changed from new to closed comment:3 Changed 9 years ago by mtredinnick - Resolution wontfix deleted - Status changed from closed to reopened Whoops... I jumped the gun here. My brain read "blocktrans" everywhere. Moving back to "design decision needed" because I want to think about it some more. comment:4 Changed 9 years ago by boxed@… Well, I'm gonna comment anyhow :P The problem I keep getting is that I get link texts and titles for pages that are essentially the same string, but with different cases, and it doesn't map cleanly across languages either.. in english there's title case, which doesn't exist in swedish so there "Add Message" and "Add message" becomes the same string. It makes more sense in these cases to use my suggestion and user |capfirst for links and |title for titles, and then have the actual handling of these differences in the i18n layer. comment:5 Changed 9 years ago by mtredinnick We can't have l10n-aware versions of filters like "title" -- it would be ridiculously large amounts of work and not even necessarily possible to get correct. That is why I want to think about this, because I don't want us to encourage false expectations. comment:6 Changed 9 years ago by boxed@… Well the point of my thinking is to be able to have some kind of declarative process, because there are many cases where the one single string or word is the same in English, but needs to be two different strings in another language for example. comment:7 Changed 9 years ago by boxed@… - Component changed from Internationalization to Documentation Well it turns out that there is support for {{ _("some string") }} which does exactly what I mean AND is supported by make-messages.py. To reflect this I have changed the component of this issue to "documentation", since all that was really needed was for this _() syntax to be said to be supported in the templates. comment:8 Changed 9 years ago by adrian - Component changed from Documentation to Internationalization comment:9 Changed 9 years ago by anonymous kjk comment:10 Changed 9 years ago by mtredinnick - Resolution set to invalid - Status changed from reopened to closed Looks like this is a non-issue, since the _() support in templates is already in the i18n docs. comment:11 Changed 5 years ago by mark0978 I'd like to add this link to some documentation that covers how to use this with other filters This is only a space-saver in the relatively rare case when you want to translate a single variable's content. Translation strings should normally be extracted in blocks of text so that translators have some context to work with and because translations are not made on a word-by-word basis. Working on a block basis for internationalising content is the right thing to do here. I don't see any value in also including this filter. If you have a more realistic example that requires this that doesn't make things any harder for translators, please bring it up on the django-developers list where we can discuss it. We're not going to include it as "another way to do something", though -- it would have to fix a problem that isn't already solved.
https://code.djangoproject.com/ticket/3804
CC-MAIN-2016-07
refinedweb
689
51.31
A common pattern to name the transactions would be to use the current URL ( window.location.href). However, it creates too many unique transactions (blog titles, query strings, etc.) and would be less useful when visualizing the traces in Kibana APM UI. To overcome this problem, the agent groups the page load transactions based on the current URL. Let’s look at the below example // Blog Posts - '/blog/:id' // Documentation - '/guide/en/*' The page load transaction names for the above URL’s would be inferred automatically and categorized as /blog/:id and /guide/en/* by the agent. The grouping logic in the agent works by recursively traversing the URL path tree until the depth of 2 and converting them to wildcard or slugged matches based on the number of digits, special characters, the mix of upper and lowercase characters in the path. The algorithm uses heuristics that are derived from common patterns in URL’s and therefore, it might not correctly identify matches in some cases. If the inferred transaction names are not helpful, please set pageLoadTransactionName configuration to something meaningful that groups transactions under the same categories (blog, guide, etc.) and avoid using the full URL at all costs. import {apm} from '@elastic/apm-rum' apm.init({ serviceName: "service-name", pageLoadTransactionName: '/homepage' })
https://www.elastic.co/guide/en/apm/agent/rum-js/5.x/custom-transaction-name.html
CC-MAIN-2021-49
refinedweb
213
53
Subject: Re: [OMPI devel] 1.5rc5 has been posted From: Larry Baker (baker_at_[hidden]) Date: 2010-08-23 20:29:22 The PGI C compiler complains (issues a warning) for the redefinition of the assert macro in opal/mca/memory/ptmalloc2/malloc.c: Making all in mca/memory/ptmalloc2 make[2]: Entering directory `/home/baker/openmpi-1.5rc5/opal/mca/ memory/ptmalloc2' CC opal_ptmalloc2_component.lo CC opal_ptmalloc2_munmap.lo CC malloc.lo PGC-W-0221-Redefinition of symbol assert (/usr/include/assert.h: 51) PGC-W-0258-Argument 1 in macro assert is not identical to previous definition (/usr/include/assert.h: 51) FYI. assert.h is an unusual include file -- it does not use an ifdef guard macro in the usual way, but undef's assert if the guard macro is defined (NOT if assert is defined, which is the root cause of this warning), define's the guard macro, then (re)define's assert() based on the current value of NDEBUG. opal/mca/memory/ptmalloc2/malloc.c did not change from OpenMPI 1.4.2. malloc.c include's opal/mca/memory/ptmalloc2/hooks.c, which did change in OpenMPI 1.5rc5. hooks.c indirectly include's <assert.h> through opal/mca/base/mca_base_param.h. This is where the warning occurs. malloc.c define's its own assert macro in lines 364-369: 364 #if MALLOC_DEBUG 365 #include <assert.h> 366 #else 367 #undef assert 368 #define assert(x) ((void)0) 369 #endif The warning occurs because the definition of assert in line 368 is not the same as the definition in <assert.h>: # define assert(expr) (__ASSERT_VOID_CAST (0)) However, there is no reason to define assert here -- the only code in malloc.c that needs assert is already inside an #if ! MALLOC_DEBUG conditional at line 2450. The fix is to delete lines 364-396 in opal/mca/memory/ptmalloc2/ malloc.c and move the #include <assert.h> to be inside the conditional between lines 2459 and 2461: 2459 #else #include <alloc.h> 2461 #define check_chunk(A,P) do_check_chunk(A,P) Larry Baker US Geological Survey 650-329-5608 baker_at_[hidden] On Aug 17, 2010, at 2:18 PM, Jeff Squyres wrote: > We still have one known possible regression: > > > > But we posted rc5 anyway (there's a bunch of stuff that has been > pending for a while that is now in). Please test! > > > > -- > Jeff Squyres > jsquyres_at_[hidden] > For corporate legal information go to: > > > > _______________________________________________ > devel mailing list > devel_at_[hidden] >
https://www.open-mpi.org/community/lists/devel/2010/08/8311.php
CC-MAIN-2016-30
refinedweb
410
50.53
Discover three ways to work out the binary gap for a number. Examples are both in C# and VB.NET. Latest Mathematics Articles Shannon Entropy and .NET How to Use Enums in C# Become more proficient in using enums in your C# programming. Palindrome Checking in C# "Taco cat" is a Palindrome; it reads the same forward or backward. Here's a simple program to allow you to check for Palindromes in your coding. Trigonometry and .NET Follow along and learn to program trigonometric functions in both C# and VB.NET. Creating Complex Math in .NET Learn to work with Complex Numbers in VB.NET and in C# with the use of Operator overriding and built-in Numerics namespaces..
https://www.codeguru.com/csharp/csharp/cs_misc/mathematics/
CC-MAIN-2020-16
refinedweb
119
68.36
11 July 2011 07:40 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The prices of C14 myristic acid have been stable at $3,450-3,550/tonne (€2,415-2,485/tonne) FOB (free on board) SE (southeast) Asia since 8 June, according to ICIS data. “C14 prices are not following CPKO prices as compared with other acid groups as steady demand is helping to support current prices,” a Malaysian fatty acids producer said. “Moreover, downstream isopropyl myristic (IPM) ester demand is stable as cosmetics production is at a high this quarter,” a Japanese fatty acids trader added. The third quarter of the year is the traditional peak demand period for the production of cosmetics. Prices of CPKO fell from $1,895/tonne FOB (.
http://www.icis.com/Articles/2011/07/11/9476336/asia-c14-myristic-acid-prices-stable-despite-low-feedstock-prices.html
CC-MAIN-2013-20
refinedweb
124
61.16
. Patroni is an open source tool that can be used for creating and managing your own customized, high-availability cluster for PostgreSQL with Python. It can be used to handle tasks such as replication, backups and restorations. Patroni also provides an HAProxy configuration, giving your application a single endpoint for connecting to the cluster's leader. If you are looking to quickly deploy HA PostgreSQL cluster in the data center, then Patroni is definitely worth considering. In this tutorial, we will be configuring a highly available PostgreSQL cluster using Patroni on Alibaba Cloud Elastic Compute Service (ECS) with Ubuntu 16.04. We will need four ECS instances; we will use Instance1 as a master and Instance2 as a slave, configure replication from master to slaves, and configure automatically failover to the slave if the master goes down. In this tutorial, we will be using the following setup: First, you will need to install PostgreSQL on Instance1 and Instance2. By default, PostgreSQL is available in the Ubuntu 16.04 repository. You can install it by just running the following command: apt-get install postgresql -y Once the installation is completed, verify the status of PostgreSQL using the following command: systemctl status postgresql Output: ● postgresql.service - PostgreSQL RDBMS Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled) Active: active (exited) since Fri 2018-09-21 20:03:04 IST; 1min 7s ago Main PID: 3994 (code=exited, status=0/SUCCESS) CGroup: /system.slice/postgresql.service Sep 21 20:03:04 Node1 systemd[1]: Starting PostgreSQL RDBMS... Sep 21 20:03:04 Node1 systemd[1]: Started PostgreSQL RDBMS. Sep 21 20:03:24 Node1 systemd[1]: Started PostgreSQL RDBMS. Next, stop the PostgreSQL service so that Patroni can manage it: systemctl stop postgresql Next, you will need to create a symlinks from /usr/lib/postgresql/9.5/bin. Because, Patroni uses some tools that comes with PostgreSQL. You can create symlink with the following command: ln -s /usr/lib/postgresql/9.5/bin/* /usr/sbin/ You will need to install Patroni on Instance1 and Instance2. Before installing Patroni, you will need to install Python and Python-pip to your server. You can install them with the following command: apt-get install python3-pip python3-dev libpq-dev sudo -H pip3 install --upgrade pip Next, install Patroni using the pip command: pip install patroni pip install python-etcd Etcd is a distributed key value store that provides a reliable way to store data across a cluster of machines. Here, we will use Etcd to store the state of the Postgres cluster. So, both Postgres nodes make use of etcd to keep the Postgres cluster up and running. You can install Etcd on Instance3 with the following command: apt-get install etcd -y HAProxy is free, open source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications that spreads requests across multiple servers. Here, we will use HAProxy to forwards the connection Master or Slave node which is currently the master and online. You can install HAProxy on Instance4 with the following command: apt-get install haproxy -y Etcd default configuration file is located at /etc/default directory. You will need to make some changes in etcd file. nano /etc/default/etcd Add the following lines: ETCD_LISTEN_PEER_URLS="" ETCD_LISTEN_CLIENT_URLS="," ETCD_INITIAL_ADVERTISE_PEER_URLS="" ETCD_INITIAL_CLUSTER="etcd0=," ETCD_ADVERTISE_CLIENT_URLS="" ETCD_INITIAL_CLUSTER_TOKEN="cluster1" ETCD_INITIAL_CLUSTER_STATE="new" Save and close the file, then restart Etcd service with the following command: systemctl restart etcd You can now check the status of Etcd with the following command: systemctl status etcd Output: ● etcd.service - etcd - highly-available key value store Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2018-09-21 22:27:47 IST; 5s ago Docs: man:etcd Main PID: 4504 (etcd) CGroup: /system.slice/etcd.service └─4504 /usr/bin/etcd Sep 21 22:27:47 Node2 etcd[4504]: starting server... [version: 2.2.5, cluster version: to_be_decided] Sep 21 22:27:47 Node2 systemd[1]: Started etcd - highly-available key value store. Sep 21 22:27:47 Node2 etcd[4504]: added local member ce2a822cea30bfca [] to cluster 7e27652122e8b2ae Sep 21 22:27:47 Node2 etcd[4504]: set the initial cluster version to 2.2 Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca is starting a new election at term 5 Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca became candidate at term 6 Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca received vote from ce2a822cea30bfca at term 6 Sep 21 22:27:48 Node2 etcd[4504]: ce2a822cea30bfca became leader at term 6 Sep 21 22:27:48 Node2 etcd[4504]: raft.node: ce2a822cea30bfca elected leader ce2a822cea30bfca at term 6 Sep 21 22:27:48 Node2 etcd[4504]: published {Name:hostname ClientURLs:[]} to cluster 7e27652122e8b2ae Patroni uses YAML to store their configuration. So, you will need to create a configuration file for Patroni on Instance1 and Instance2: nano /etc/patroni.yml Add the following lines: scope: postgres namespace: /db/ name: postgresql0 restapi: listen: 192.168.0.105:8008 connect_address: 192.168.0.105:8008 etcd: host: 192.168.0.103 192.168.0.105/0 md5 - host replication replicator 192.168.0.104/0 md5 - host all all 0.0.0.0/0 md5 users: admin: password: admin options: - createrole - createdb postgresql: listen: 192.168.0.105:5432 connect_address: 192.168.0.105:5432 data_dir: /data/patroni pgpass: /tmp/pgpass authentication: replication: username: replicator password: password superuser: username: postgres password: password parameters: unix_socket_directories: '.' tags: nofailover: false noloadbalance: false clonefrom: false nosync: false Save and close the file, when you are finished. Next, create a data directory for Patroni on Instance1 and Instance2: mkdir -p /data/patroni Next, change the ownership and permissions of the data directory: chown postgres:postgres /data/patroni chmod 700 /data/patroni Next, you will need to create a startup script for Patroni to manage and monitor Patroni. You can do this with the following command: nano /etc/systemd/system/patroni.service Add the following lines: [Unit] Description=Runners to orchestrate a high-availability PostgreSQL After=syslog.target network.target [Service] Type=simple User=postgres Group=postgres ExecStart=/usr/local/bin/patroni /etc/patroni.yml KillMode=process TimeoutSec=30 Restart=no [Install] WantedBy=multi-user.targ Save and close the file. Then start PostgreSQL and Patroni service on both Instance1 and Instance2 with the following command: systemctl start patroni systemctl start postgresql Next, check the status of Patroni using the following command: systemctl status patroni Output: ● patroni.service - Runners to orchestrate a high-availability PostgreSQL Loaded: loaded (/etc/systemd/system/patroni.service; disabled; vendor preset: enabled) Active: active (running) since Fri 2018-09-21 22:22:22 IST; 3min 17s ago Main PID: 3286 (patroni) CGroup: /system.slice/patroni.service ├─3286 /usr/bin/python3 /usr/local/bin/patroni /etc/patroni.yml ├─3305 postgres -D /data/patroni --config-file=/data/patroni/postgresql.conf --max_worker_processes=8 --max_locks_per_transaction=64 ├─3308 postgres: postgres: checkpointer process ├─3309 postgres: postgres: writer process ├─3310 postgres: postgres: stats collector process ├─3315 postgres: postgres: postgres postgres 192.168.0.105(54472) idle ├─3320 postgres: postgres: wal writer process └─3321 postgres: postgres: autovacuum launcher process Sep 21 22:24:52 Node1 patroni[3286]: 2018-09-21 22:24:52,329 INFO: Lock owner: postgresql0; I am postgresql0 Sep 21 22:24:52 Node1 patroni[3286]: 2018-09-21 22:24:52,391 INFO: no action. i am the leader with the lock Note: Repeat all the above steps on both Instance1 and Instance2. PostgreSQL cluster is now up and running. It's time to configure HAProxy to forward connection receive from PostgreSQL client to the Master node. You can configure HAProxy by editing /etc/haproxy/haproxy.cfg file: nano /etc/haproxy/haproxy.cfg Add the following lines: global maxconn 100_192.168.0.105_5432 192.0.2.11:5432 maxconn 100 check port 8008 server postgresql_192.168.0.104_5432 192.0.2.12:5432 maxconn 100 check port 8008 Save and close the file. Then, restart HAProxy with the following command: systemctl restart haproxy Now, open your web browser and type the URL (HAProxy Instance4 IP address). You will be redirected to the HAProxy dashboard as shown below: In the above image, the postgresql_192.168.0.105_5432 row is highlighted in green. That means 192.168.0.105 is currently acting as the master. If you shut down the primary node (Instance1), 192.168.0.104 (Instance2) should be acting as the master. When you restart the Instance1, it will rejoin the cluster as a slave and sync up with the master. How to Install Mail-in-a-Box on Ubuntu 16.04 How China Is Different (Part 1) – Consumer Behavior 2,630 posts | 693 followersFollow Alibaba Clouder - March 15, 2019 Alibaba Clouder - July 6, 2018 Alibaba Clouder - June 4, 2019 Alibaba Clouder - May 22, 2019 Alibaba Clouder - June 10, 2019 Alibaba Clouder - May 22, 2019 Patroni uses DCS (Distributed Configuration Store) to attain consensus. Only the node that holds the leader lock can be the master and the leader lock is obtained via DCS. If the master node doesn’t hold the leader lock, then it will be demoted immediately by Patroni to run as a standby. This way, at any point in time, there can only be one master running in the system. 2,630 posts | 693 followersFollow An online computing service that offers elastic and secure virtual cloud servers to cater all your cloud hosting needs.Learn More An on-demand database hosting service for PostgreSQL with automated monitoring, backup and disaster recovery capabilitiesLearn More Marketplace is an online market for users to search and quickly use the software as image for Alibaba Cloud products.Learn More 5848363536543818 July 19, 2019 at 11:43 am How does the two postgresql instances sync with each other?
https://www.alibabacloud.com/blog/how-to-set-up-a-highly-available-postgresql-cluster-using-patroni-on-ubuntu-16-04_594477
CC-MAIN-2022-40
refinedweb
1,633
54.22
Download presentation Presentation is loading. Please wait. Published byCornelius Hart Modified over 5 years ago 2 Pointers Typedef Pointer Arithmetic Pointers and Arrays 3 on completion of this topic, students should be able to: Correctly use pointers in a C++ program * Take the address of a variable * Dereference a pointer * Do pointer arithmetic Explain the relationship between an array and a pointer * Use pointers to access array elements 4 Pointers are one of the most powerful but most difficult concepts to master in C++. 5 Assume that we have declared an integer variable v1, and that the compiler has assigned the memory address of 000032 for that integer value. 000032 if integer variables take up 4 bytes of memory on this computer, then the address of the first byte of the integer is 000032. v1 is the name of the variable. We can access the variable by simply using its name. v1 = 34; 6 000032 The address of the variable is 000032 and its name is V1. We can store the address of a variable in a special data type called a pointer. 7 int v1; int* v1Ptr; v1Ptr = &v1; this statement declares the integer variable v1. Let us assume that the address of v1 is 000032. this statement stores the address of the variable v1 in the variable v1Ptr. The & is called the address-of operator. this statement declares v1Ptr to be a variable of type integer pointer. That is, v1Ptr can contain a pointer to (the address of) an integer. Note that pointers are typed. This pointer points to an integer. To declare a pointer variable, place an asterisk in front of the variable name. 8 000032 v1 000036 v1Ptr 0 0 0 0 3 2 v1Ptr is said to point to v1. 9 int v1; int* v1Ptr; v1Ptr = &v1; *v1Ptr = 45; cout << v1; the asterisk used as shown here is called the de-referencing operator. This statement says store the value 45 in the variable pointed to by v1Ptr. 10 000032 v1 000036 v1Ptr 0 0 0 0 3 2 4 5 11 Be careful when declaring pointers! int *aptr, bptr; aptr is a pointer but bptr is an int! 12 you must declare them this way! int *aptr, *bptr; int pointer 13 int v1, v2; int *v1Ptr, *v2Ptr; v1Ptr = &v1; *v1Ptr = 45; v2Ptr = v1Ptr; *v2Ptr = *v1Ptr; v1 v2 000032 000036 v1Ptr v2Ptr 000040 000044 000032 45 000032 14 char* pc;// a pointer to a character char** pc;// a pointer to a pointer to a character char* pc [10];// an array of pointers to characters 15 Recall that pointers are typed. int x = 3; int* xPtr; float* yPtr; xPtr = &x; yPtr = xPtr; this assignment won’t work because the pointer types are not the same type! 16 The void pointer is a generic pointer type. That is, it can hold the address of any data type. So, we could use a void pointer to write int x = 3; int* xPtr; void* yPtr; xPtr = &x; yPtr = xPtr; Now this statement will compile and execute with no errors because yPtr is a generic pointer. 17 int x = 3; int* xPtr; void* yPtr; xPtr = &x; yPtr = xPtr; cout << *yPtr; However, this statement will not work. Because a void pointer is generic, it does not keep track of what kind of data it points to. Therefore, the computer cannot do the conversion necessary to print out the data pointed to by the void pointer. 18 int x = 3; int* xPtr; void* yPtr; xPtr = &x; yPtr = xPtr; cout (yPtr) ); cout << *((int*)yPtr); If we know what the data type is that the pointer points to, we can cast the pointer to the correct type to make the statement work. The type we are casting to, in this case an int* appears in parenthesis in front of the variable name. 19 In C++, you can assign a name to a type definition and than later use that name to declare variables. typedef int* IntPtr; IntPtr xPtr, yPtr; the typedef statement should be placed outside of any function so that it is global. later, when we declare xPtr and yPtr, the name IntPtr is an alias for int*. Thus, xPtr and yPtr are declared as int pointers. typedefs provide no additional function. They are strictly for style and readability. 20 Using a pointer as a call-by-value parameter can be troublesome. Consider the following example … 21 typedef int* IntPointer; void sneaky (IntPointer temp) { *temp = 99; cout << “Inside sneaky! *temp = “ << *temp << endl; } Since this is a pass by value, you might not expect any side effects. 22 int main ( ) { IntPointer p; p = new int; *p = 77; cout << “Before calling sneaky! *p = “ << *p << endl; sneaky ( p ); cout << “After calling sneaky! *p = “ << *p << endl; return 0; } 23 int main ( ) { IntPointer p; p = new int; *p = 77; cout << “Before calling sneaky! *p = “ << *p << endl; sneaky ( p ); cout << “After calling sneaky! *p = “ << *p << endl; return 0; } p 77 24 p void sneaky (IntPointer temp) { *temp = 99; cout << “Inside sneaky! *temp = “ << *temp << endl; } temp 99 77 25 int main ( ) { IntPointer p; p = new int; *p = 77; cout << “Before calling sneaky! *p = “ << *p << endl; sneaky ( p ); cout << “After calling sneaky! *p = “ << *p << endl; return 0; } p 99 26 The name of an array is really a const pointer to the first element in the array! int myArray [4]; * myArray is an integer pointer. * myArray is a const pointer ( so that the address of myArray is not accidentally lost!) * myArray can be assigned to a pointer variable. myArray 27 #include using namespace std; int main ( ) { int intArray[ ] = {1, 3, 5, 7, 9}; cout << "\nintArray = " << intArray; cout << endl; cin.get( ); return 0; } what will this statement display? 28 #include using namespace std; int main ( ) { int intArray[ ] = {1, 3, 5, 7, 9}; cout << "\nintArray = " << intArray; cout << “\n*intArray = “ << *intArray; cout << endl; cin.get( ); return 0; } what will this statement display? 29 Since a pointer is just a normal variable we can do arithmetic on it like we can for any other variable. For example int *myPtr; myPtr++;// increment the pointer myPtr--;// decrement the pointer myPtr += 4;// add four to myPtr 30 int x[2]; int *aPtr; aPtr = x; *aPtr = 5; aPtr++; *aPtr = 8; X[0] X[1] 000016 000020 aPtr 000024 address 000016 5 Note: When we increment a pointer, we increment it by the size of the data type it points to! 000020 8 Note however, that in C++, pointer arrithmetic is only legal within an array. 31 Note the following interesting use of pointers and arrays. int myArray [4]; int* arrayPtr; arrayPtr = myArray; myArray [3] = 125; *(arrayPtr + 3) = 125; *(myArray + 3) = 125; arrayPtr [3] = 125; the assignment works because myArray is an int pointer. use pointer arithmetic to move the pointer to the 4 th element in the array (3 times size of int) aha … you can also use pointer arithmetic on the array name. and index notation on the pointer. these are all equivalent. 32 Great care should be taken when using pointer arithmetic to be sure that the pointer always points to valid data. “If NOT you will be punished!!” 33 Recall that C-style character strings are just arrays of characters (terminated by a null). Therefore we can use pointer arithmetic to move through a C-style character string as shown … char * colorPtr = “blue”; … colorPtr++; … 34 A function cannot return an entire array, that is, the following is illegal: int [ ] someFunction ( ); To create a function that achieves the desired result, write the function so that it returns a pointer of the base type of the array, for example: int* someFunction ( ); Warning: don’t return a pointer to data declared locally inside of the function! Why? 36 int x; int y; int *p = &x; int *q = &y; *p = 35; *q = 98; x y p q 1000 1008 1004 1012 variable name value in memory address 37 int x; int y; int *p = &x; int *q = &y; x = 35; y = 46; p = q; *p = 78; x y p q 1000 1008 1004 1012 variable name value in memory address 38 Given the definitions double values[ ] = {2, 3, 5, 17, 13}; double *p = values + 3; Explain the meaning of: values[1] values + 1 *(values + 1) p[1] p + 1 p - values 39 Suppose that you had the char array T h i s i s g o o d. if you had a pointer, cPtr, that pointed here and you put a null terminating character here then the statement cout << cPtr; would display the word is 40 Review: Consider the following code: int valueOne = 10; int someFunction( ) { int valueTwo = 5;... } int main ( ) {... this is a global variable it is stored in the data segment. this is a local variable it is stored on the stack. 41 Review: Consider the following code: int valueOne = 10; int someFunction( ) { int valueTwo = 5;... } int main ( ) {... valueOne exists for the entire life of the program. valueTwo comes into existence when it is declared, and disappears when the function ends. 42 What if you want to control when a variable comes into existence, and when it goes away! 43 int* a = new int; CoinBank* myBank = new CoinBank(5,3); This is a pointer. This variable is stored on the heap. Storage from the heap is allocated dynamically as your program executes, These variable come into existence when they are declared. 44 Dynamically allocated variables will stay around until you explicitly delete them. Thus, they are completely under programmer control. To delete a dynamically allocated variable you would write delete a; where a was a pointer to the variable. 45 Recall that when we talked about arrays, we noted that the array size given in the array declaration had to be a constant value. That is, the array size had to be fixed at compile time. This presents some difficulties: * either we guess too low and the array is not big enough to hold all of the data required, or * we guess to high and we waste space because elements of the array are not used. 46 One approach to solving this problem is to allocate the storage required for the array at run-time. int size; cout << “How big is the array?”; cin >> size; int *myArray; myArray = new int [size]; since we are allocating storage at run-time, we are allowed to use a variable as the array size. 47 Now, remembering the relationship between an array name and a pointer, we can use the dynamic array just like a normal array… myArray [5] = 15; cout << myArray[n]; 48 Whenever you use [ ] with new to allocate an array dynamically, you must use the corresponding form of the delete, delete [ ]. That is, if you write int *myArray; myArray = new int [10]; You must write delete [ ] myArray; to delete the array! 49 When we dynamically storage for an object or a struct, we no longer use the dot operator to access its data members. Instead we use the -> (pointer) operator. piggyBankPtr = new PiggyBank; piggyBankPtr->moneyInBank = 12.45; 50 Every object that you create has an implicit data member that carries the address of the object. This pointer is called the this pointer. The this pointer can be used by member functions that need the address of the calling object. void CoinBank::displayMoney( ) { cout moneyInBank; } note that we seldom use the this pointer this way. It is simpler to write cout << moneyInBank; 51 int *p; int *q; p = new int; q = p; *p = 46; *q = 39; delete p; cout << *p << “ “ << *q; p q 1008 1012 variable name value in memory address 52 int *p; int *q; p = new int[5]; *p = 2; for (int j = 1; j < 5; j++) p[j] = j + 3; q = p; delete [] p; for (int j = 0; j < 5; j++) cout << q[j] << “ “; p q 1008 1012 variable name value in memory address 53 Find the mistakes… int *p = new int; p = 5; *p = *p + 5; Employee e1 = new Employee(“Hacker”, “Harry”, 34000); Employee e2; e2->setSalary(38000); delete e2; Time *pnow = new Time( ); Time *t1 = new Time(2, 0, 0); delete *t1; cout getSeconds( ); Similar presentations © 2021 SlidePlayer.com Inc.
http://slideplayer.com/slide/4271342/
CC-MAIN-2021-10
refinedweb
2,015
68.1
I need to convert Python C4D Script to Plug-In On 15/04/2017 at 11:52, xxxxxxxx wrote: Hi, everyone! Recently I build a script that create one null with one layer assigned for the c4d and now I'm trying convert this in plugin. The script runs perfect in my script manager but I have some problems when I try to convert it in plug in. You can view the plug in code here: ---------------- Code: import os import sys import c4d from c4d import * Plugin IDs 1000001-1000010 are reserved for development PLUGIN_ID = 1000001 class AddNull(c4d.plugins.CommandData) : def \__init\_\_(self) : color_Layer=c4d.Vector(1,1,1) # Layer Color def add_divider(self,name, color) : root = doc.GetLayerObjectRoot() LayersList = root.GetChildren() names=[] layers=[] for l in LayersList: n=l.GetName() names.append(n) layers.append((n,l)) if not name in names: c4d.CallCommand(100004738) # New Layer LayersList = root.GetChildren() layer=LayersList[-1] layer.SetName(name) layer[c4d.ID_LAYER_COLOR] =color else: for n, l in layers: if n ==name: layer=l break Null = c4d.BaseObject(5140) Null[c4d.ID_BASELIST_NAME] = "Null_01" #Name of null Null[c4d.ID_LAYER_LINK] = layer Null[c4d.NULLOBJECT_DISPLAY] = 14 doc.InsertObject(Null) c4d.EventAdd() def Execute(self, doc) : dialog = None if self.dialog is None: self.dialog = add_divider("_Layer01_", color_Layer) self.add_divider("_Layer01_", color_Layer) if __name__ == "__main__": icon = c4d.bitmaps.BaseBitmap() icon.InitWith(os.path.join(os.path.dirname(__file__), "res", "Icon.tif")) c4d.plugins.RegisterCommandPlugin(PLUGIN_ID, "Add_Null", 0, icon, "Add_Null", AddNull()) ---------------- You can download the script and the plug in here: I've been seeing some examples and the python SDK but I don't understand where is the issue because I saw different codes structures in examples and work well. It's necessary the "execute" function? Are there other ways to execute the function? What I'm doing wrong? I hope you can help me and thanks in advance!. Cheers. On 17/04/2017 at 08:43, xxxxxxxx wrote: Hi and Welcome to the Plugin Cafe! I tried the plugin and there are several minor errors related to the implementation of the script's code inside the context of a class. Here are the issues: - Execute() must return a boolean value i.e. True or False - add_divider() should be passed doc variable from Execute() - add_divider() call in Execute() missing self prefix - color_Layer variables missing self prefix - self.dialog really really useful? On 03/05/2017 at 11:40, xxxxxxxx wrote: You may have figured this out already, but if not, here you go. I commented out your old stuff and then added corrected code. I want to expand on what @yannick mentioned with his issues he saw. You were not actually registering the plugin either. Hope this helps you figure out where you went wrong. files: here is the code, it's also in that zip file packaged as a plugin import os import sys import c4d from c4d import * # Plugin IDs 1000001-1000010 are reserved for development PLUGIN_ID = 1000001 class AddNull(c4d.plugins.CommandData) : def __init__(self) : self.color_Layer=c4d.Vector(1,1,1) # Layer Color def add_divider(self, doc, name, color) : root = doc.GetLayerObjectRoot() LayersList = root.GetChildren() names=[] layers=[] for l in LayersList: n = l.GetName() names.append(n) layers.append((n,l)) if not name in names: # ***************** # don't use CallCommand # c4d.CallCommand(100004738) # New Layer # ***************** # make a new layer layer = c4d.documents.LayerObject() # ***************** # set the layer name layer.SetName(name) # ***************** # set the layers properties here if needed # layer_settings = {'solo': False, 'view': False, 'render': False, 'manager': False, 'locked': False, 'generators': False, 'deformers': False, 'expressions': False, 'animation': False} # layer.SetLayerData(doc, layer_settings) layer[c4d.ID_LAYER_COLOR] = color #insert the new layer as a child of the layer root layer.InsertUnder(root) else: for n, l in layers: if n == name: layer = l # user lower case for variables # you should avoid using the actual ID and use the object type instead # # Null = c4d.BaseObject(5140) null = c4d.BaseObject(c4d.Onull) null[c4d.ID_BASELIST_NAME] = "Null_01" #Name of null null[c4d.ID_LAYER_LINK] = layer null[c4d.NULLOBJECT_DISPLAY] = 14 # insert the null object doc.InsertObject(null) c4d.EventAdd() return True def Execute(self, doc) : # you are not using a dialog so you do not need this; it's doing absolutely nothing. """ dialog = None if self.dialog is None: self.dialog = add_divider("_Layer01_", color_Layer) """ # you have to return a True or False result. This result will come from the add_divider function name = "_Layer01_" return self.add_divider(name=name, doc=doc, color=self.color_Layer) if __name__ == '__main__': bmp = bitmaps.BaseBitmap() dir, file = os.path.split(__file__) fn = os.path.join(dir, "res", "Icon.tif") bmp.InitWith(fn) # you were not actually registering the plugin # result = plugins.RegisterCommandPlugin(PLUGIN_ID, "Add Null", 0, bmp, "adds a null to a new layer", AddNull()) On 23/05/2017 at 10:22, xxxxxxxx wrote: Thanks a lot guys! Sorry for my delay to answer. I'm starting to learn Python and I'm making a bunch of bugs yet. I'll learn a lot to how to do this correctly with your files @charlesR. I want to learn the correct structure for python classes and c4d plugins. Thanks again, cheers!
https://plugincafe.maxon.net/topic/10066/13538_i-need-to-convert-python-c4d-script-to-plugin
CC-MAIN-2021-17
refinedweb
857
52.56
9679/can-cryptocurrency-work-without-blockchain-technology Yes, it should work. The authentication practice ...READ MORE The pre-requisites section of hyperledger composer explains ...READ MORE First of all, when someone creates a ...READ MORE You could use a recent feature known ...READ MORE Summary: Both should provide similar reliability of ...READ MORE This will solve your problem import org.apache.commons.codec.binary.Hex; Transaction txn ...READ MORE To read and add data you can ...READ MORE You can do this by developing a ...READ MORE The smart contract is just the business ...READ MORE OR Already have an account? Sign in.
https://www.edureka.co/community/9679/can-cryptocurrency-work-without-blockchain-technology
CC-MAIN-2021-10
refinedweb
103
55.3
Hello – - this great script finds all the text after a tab and before a return. Works like a dream in my paragraph style, thank you… (?<=\t).*?(?=\r) What I woudl love to find out is how to tune the script to find all the text between TWO consecutive tabs and a return… Would be incredibly useful; trying to assign a character style to numbers in an index… thanks for your time and assistance! Original script found here: t-page-2#comment-550882 Try (?<=\t{2}).*?(?=\r) or (?<=\t\t).*?(?=\r) That finds text after two tabs with nothing between them, which is how I interpret two consecutive tabs. If that's not waht you meant, you'll need to provide a clearer explanation. Hi, and thanks for the reply... These dont seem to work unfortunately, perhaps it helps if you look at the structure of the text below... my index is currently formatted tab #1 / text string / tab #2 / string of page numbers and commas / return I am trying to assign a character style JUST to the text after the second tab... It's not a "script" but a GREP query. If your post tells us all of the conditions, then you don't need GREP at all. A nested character style will work just fine: If there's no forced line break, it'll continue applying the character style through the end of the paragraph. If you need it to continue through a forced line break, use an End Nested Style character instead. I think it's still worthwhile to figure out the GREP query that will work, but I always like to get the job done first, and then have fun stretching my brain to write the GREP query. Because I find writing GREP queries fun the same way that some people like to do crossword puzzles. One script that I'd suggest that you download and install, if you like stretching your cranium by writing GREP queries, is WhatTheGREP written by estimable forums regular [Jongware]. It does a fine job of expanding GREP queries into something approaching plain English. (?<=\t).*?(?=\r) Anyhow, what you have here is this query .*? (which means "find any group of zero or more characters, grab the shortest match") sandwiched in between a lookahead group and a lookbehind group. The first looks for a tab (that is \t) and the second looks for a hard return (that is \r). The advantage of lookahead and lookbehind is that you can look for stuff that is next to a tab without actually looking for the tab. So, all you have to do to turn your query into what you were originally asking for is to change the trailing \r into a \t (?<=\t).*?(?=\t) which means "find the smallest amount of text possible between two tabs in a single paragraph." * * * * * So, which is better, a nested style or a GREP query, or a GREP style? The reason I suggested a nested style is that I use 'em all the time when I have a project that has an index. A GREP find/replace query makes a permanent change to the index, but if I regenerate the index, I have to remember to re-run the find/replace query. I'd rather fob off the job of remembering stuff to a robot, and save my brain for more exciting tasks, like talking to my coworkers about social science research or Dwarf Fortress. So, it's going to be some kind of automatically applied style. Nested styles take less computing power to apply than do GREP styles. So I chronically advise people to use nested styles in this case, as it is the least taxing solution for your brain and for your computer.
http://forums.adobe.com/thread/1160216
CC-MAIN-2014-10
refinedweb
626
76.45
Where to best put your M4L files / dependencies? May 20 2013 | 8:24 pm. - Aug 26 2013 | 2:59 pmHello. - Mar 14 2015 | 11:53 pmI am currently struggling with this as well. I am amazed that I cannot find decent documentation for how to deal with my M4L object and dependencies, and right now it's a confusing mess split between the bowels of my "Ableton" folder and the "Max 7" folder. - Mar 16 2015 | 9:49 amEvery. - Mar 17 2015 | 10:21 amI think this problem arises if you are not using Max projects and have your sources elsewhere (at least it does for me)So, eg., if have all my source in myfolders/lee and this is separately pathed ini :) - Feb 01 2016 | 10:19 pmLee, Even if you delete the files in the Max for Live Devices folder, they will be re-created, and it seems the frozen device will always use its internal files even if you edit the ones in the Max for Live Devices folder. The only way to edit abstractions and such then becomes by opening them from the device itself. That's my understanding anyway; I'm slowly becoming less confused, but I can't say I understand it completely. - Feb 01 2016 | 10:30 pmIt may be that if you have your device open and unfrozen, you can edit the files in the Max for Live Devices folder and it will find them when you refreeze it. Just a hypothesis that I haven't tested. I'm having trouble understanding what the point of the Max for Live Devices folder even is. - Feb 02 2016 | 10:16 am@TO_THE_SUN Maybe it helps if you think about think about frozen M4L devices similar as a complied program. It is the state where everything that is needed by the device is inside one file, to make sure it runs without outside dependencies in any Live set on every computer.Like with compiled code: When you use an external library (in Max that would be an external or abstraction) a compiled program will always use the version it was compiled with.But compiled programs are not editable. That's why when you unfreeze a M4L device an 'un-complied' version of the code/patch is created in the MaxForLive Devices folder.To my understanding freezing is the last step in development. - basically when development is done. During dev i always work with unfrozen devices and use the global versions of external and non project-specific abstractions.To avoid confusions between abstractions that I use globally in different projects and devices and those that are project specific I "namespace" project abstractions by prefixing them.In general I think it is a good practice to keep all you M4L stuff inside the MaxForLive Devices folder and don't build them from outside locations.Jan - Feb 02 2016 | 6:51 pmRight, normally I would never freeze a device until it's ready for distribution. Recently I started using the amxd~ object however which forces you to freeze your device continually and therefore seems to be more trouble than it's worth for anything that's not in a completely finished state.However, if you try to edit an abstraction saved in the Max for Live Devices folder, it will just be overwritten next time you unfreeze the device. What about if the device is already unfrozen and open? Then are the files in the folder the ones it will reference when it freezes again? If so, maybe you could edit them, but if the device is already open and everything you might as well just get to the abstraction in question by opening it through the device itself.Anyway, I never had issues with this folder until I started using the amxd~ object. For one thing, why would freezing a device create text files there of the contents of coll objects (not set to Save Data with Patcher)? - Feb 03 2016 | 5:32 pm@TO_THE_SUNyes, the overwriting without prompting when unfreezing is a real bummer. I wrote a feature request / suggestion a few month ago to cycling74. I think there should be either a modal dialog or overwritten files should be moved to the _DeletedItems folder inside the M4L project, so we could at least recover them. - Jul 19 2019 | 3:36 amMax 8 / Live 9 - my personal process for getting around this ridiculous issue this feels performing surgery. absolute insanity. totally not the kind of issue I would expect from proprietary software and a company with full-time employees, but, whatever. it works now. just a huge bummer cause it prevented me from sharing my code for a while. - work on your device out of whatever directory - add the folder "<device name> Project" to "Documents/Max 8/Max for Live Devices" (if it doesn't exist already - for me usually it does not) - add a subfolder depending on the type of file. this will either be: /code (.js or other?), /externals , /media (images), /patchers (abstractions) - freeze. I think at this point the .amxd will take on the project folder as it's own. you can check by deleting the folder, freezing and unfreezing. it seems to just work. - Sep 03 2020 | 8:44 amHere is how we deal with freezing at Showsync. For every new version of Videosync and Beam we have to distribute potentially updated versions of our Max for Live devices, obviously in a frozen state.On our development machines though (in git repositories, in our case) we have only a single, non-frozen, version of any .amxd and only a single version of any dependency. We prefer not to have these source folders in a default Max search path but rather to selectively add the folders containing dependencies to the search path using Options / File Preferences.If there is ever a reason to unfreeze a distributed version of a plugin, perhaps for debugging, afterwards we quickly remove the folder with copies of dependencies that is created in Documents/Max/Max for Live Devices.These are the reasons that we try to avoid unfreezing devices as much as possible: Finally, though this may go beyond the scope of most, for some time now, frozen versions of our devices actually never have to exist on our development machines at all since our devices are frozen automatically by our hosted continuous integration system, using a script we developed specifically for this purpose.Hope that helps! - Having copies of dependencies creates a risk of unintentionally editing the copies instead of the originals, losing work if the copies are subsequently deleted. - Multiple devices, unfrozen or never frozen, may have different dependencies with the same names. One device can use a filterlow.maxpat abstraction which is a lowpass dsp filter and another device can use a filterlow.maxpat which passes only values above a threshold. If both abstractions are in the Max search path, it is unclear for any of the two devices which version of the dependency will be used at which point. Both devices may well end up loading a different dependency than the one they started out with. Having untracked copies of dependencies in the Max search path increases this risk. - After saving a Live set with a newly unfrozen device, the Live set now depends on files in the Documents folder without the user getting notified about it. After doing Collect all and Save (which I generally also avoid btw), when moving the Live set to another machine, the device will be broken. Depending on the context, the dependencies may be gone and it may not be possible to recover the device.
https://cycling74.com/forums/where-to-best-put-your-m4l-files-dependencies/
CC-MAIN-2021-49
refinedweb
1,278
60.85
The 5 Classification Evaluation Metrics Every Data Scientist Must Know This post is about various evaluation metrics and how and when to use? What if we are predicting the number of asteroids that will hit the earth. Just say zero all the time. And you will be 99% accurate. My model can be reasonably accurate, but not at all valuable. What should we do in such cases? Designing a Data Science project is much more important than the modeling itself. This post is about various evaluation metrics and how and when to use them. 1. Accuracy, Precision, and Recall: A. Accuracy Accuracy is the quintessential classification metric. It is pretty easy to understand. And easily suited for binary as well as a multiclass classification problem. Accuracy = (TP+TN)/(TP+FP+FN+TN) Accuracy is the proportion of true results among the total number of cases examined. When to use? Accuracy is a valid choice of evaluation for classification problems which are well balanced and not skewed or No class imbalance. Caveats Let us say that our target class is very sparse. Do we want accuracy as a metric of our model performance? What if we are predicting if an asteroid will hit the earth? Just say No all the time. And you will be 99% accurate. My model can be reasonably accurate, but not at all valuable. B. Precision Let’s start with precision, which answers the following question: what proportion of predicted Positives is truly Positive? Precision = (TP)/(TP+FP) In the asteroid prediction problem, we never predicted a true positive. And thus precision = 0 When to use? Precision is a valid choice of evaluation metric when we want to be very sure of our prediction. For example: If we are building a system to predict if we should decrease the credit limit on a particular account, we want to be very sure about our prediction or it may result in customer dissatisfaction. Caveats Being very precise means our model will leave a lot of credit defaulters untouched and hence lose money. C. Recall Another very useful measure is recall, which answers a different question: what proportion of actual Positives is correctly classified? Recall = (TP)/(TP+FN) In the asteroid prediction problem, we never predicted a true positive. And thus recall is also equal to 0. When to use? Recall is a valid choice of evaluation metric when we want to capture as many positives as possible. For example: If we are building a system to predict if a person has cancer or not, we want to capture the disease even if we are not very sure. Caveats Recall is 1 if we predict 1 for all examples. And thus comes the idea of utilizing tradeoff of precision vs. recall — F1 Score. 2. F1 Score: This is my favorite evaluation metric and I tend to use this a lot in my classification projects. The F1 score is a number between 0 and 1 and is the harmonic mean of precision and recall. Let us start with a binary prediction problem. We are predicting if an asteroid will hit the earth or not. So if we say “No” for the whole training set. Our precision here is 0. What is the recall of our positive class? It is zero. What is the accuracy? It is more than 99%. And hence the F1 score is also 0. And thus we get to know that the classifier that has an accuracy of 99% is basically worthless for our case. And hence it solves our problem. When to use? We want to have a model with both good precision and recall. Simply stated the F1 score sort of maintains a balance between the precision and recall for your classifier. If your precision is low, the F1 is low and if the recall is low again your F1 score is low. If you are a police inspector and you want to catch criminals, you want to be sure that the person you catch is a criminal (Precision) and you also want to capture as many criminals (Recall) as possible. The F1 score manages this tradeoff. How to Use? You can calculate the F1 score for binary prediction problems using: from sklearn.metrics import f1_score y_true = [0, 1, 1, 0, 1, 1] y_pred = [0, 0, 1, 0, 0, 1] f1_score(y_true, y_pred) This is one of my functions which I use to get the best threshold for maximizing F1 score for binary predictions. The below function iterates through possible threshold values to find the one that gives the best F1 score. # y_pred is an array of predictions def bestThresshold(y_true,y_pred): best_thresh = None best_score = 0 for thresh in np.arange(0.1, 0.501, 0.01): score = f1_score(y_true, np.array(y_pred)>thresh) if score > best_score: best_thresh = thresh best_score = score return best_score , best_thresh Caveats The main problem with the F1 score is that it gives equal weight to precision and recall. We might sometimes need to include domain knowledge in our evaluation where we want to have more recall or more precision. To solve this, we can do this by creating a weighted F1 metric as below where beta manages the tradeoff between precision and recall. Here we give β times as much importance to recall as precision. from sklearn.metrics import fbeta_score y_true = [0, 1, 1, 0, 1, 1] y_pred = [0, 0, 1, 0, 0, 1] fbeta_score(y_true, y_pred,beta=0.5) F1 Score can also be used for Multiclass problems. See this awesome blog post by Boaz Shmueli for details. 3. Log Loss/Binary Cross-entropy Log loss is a pretty good evaluation metric for binary classifiers and it is sometimes the optimization objective as well in case of Logistic regression and Neural Networks. Binary Log loss for an example is given by the below formula where p is the probability of predicting 1. As you can see the log loss decreases as we are fairly certain in our prediction of 1 and the true label is 1. When to Use? When the output of a classifier is prediction probabilities. Log Loss takes into account the uncertainty of your prediction based on how much it varies from the actual label. This gives us a more nuanced view of the performance of our model. In general, minimizing Log Loss gives greater accuracy for the classifier. How to Use? from sklearn.metrics import log_loss # where y_pred are probabilities and y_true are binary class labels log_loss(y_true, y_pred, eps=1e-15) Caveats It is susceptible in case of imbalanced datasets. You might have to introduce class weights to penalize minority errors more or you may use this after balancing your dataset. 4. Categorical Cross-entropy The log loss also generalizes to the multiclass problem. The classifier in a multiclass setting must assign a probability to each class for all examples. If there are N samples belonging to M classes, then the Categorical Cross-entropy is the summation of - ylogp values: y_ij is 1 if the sample i belongs to class j else 0 p_ij is the probability our classifier predicts of sample i belonging to class j. When to Use? When the output of a classifier is multiclass prediction probabilities. We generally use Categorical Crossentropy in case of Neural Nets. In general, minimizing Categorical cross-entropy gives greater accuracy for the classifier. How to Use? from sklearn.metrics import log_loss # Where y_pred is a matrix of probabilities # with shape =(n_samples, n_classes) and y_true is an array of class labels log_loss(y_true, y_pred, eps=1e-15) Caveats: It is susceptible in case of imbalanced datasets. 5. AUC AUC is the area under the ROC curve. AUC ROC indicates how well the probabilities from the positive classes are separated from the negative classes What is the ROC curve? We have got the probabilities from our classifier. We can use various threshold values to plot our sensitivity(TPR) and (1-specificity)(FPR) on the cure and we will have a ROC curve. Where True positive rate or TPR is just the proportion of trues we are capturing using our algorithm. Sensitivty = TPR(True Positive Rate)= Recall = TP/(TP+FN) and False positive rate or FPR is just the proportion of false we are capturing using our algorithm. 1- Specificity = FPR(False Positive Rate)= FP/(TN+FP) Here we can use the ROC curves to decide on a Threshold value. The choice of threshold value will also depend on how the classifier is intended to be used. If it is a cancer classification application you don’t want your threshold to be as big as 0.5. Even if a patient has a 0.3 probability of having cancer you would classify him to be 1. Otherwise, in an application for reducing the limits on the credit card, you don’t want your threshold to be as less as 0.5. You are here a little worried about the negative effect of decreasing limits on customer satisfaction. When to Use? AUC is scale-invariant. It measures how well predictions are ranked, rather than their absolute values. So, for example, if you as a marketer want to find a list of users who will respond to a marketing campaign. AUC is a good metric to use since the predictions ranked by probability is the order in which you will create a list of users to send the marketing campaign. Another benefit of using AUC is that it is classification-threshold-invariant like log loss. It measures the quality of the model’s predictions irrespective of what classification threshold is chosen, unlike F1 score or accuracy which depend on the choice of threshold. How to Use? import numpy as np from sklearn.metrics import roc_auc_score y_true = np.array([0, 0, 1, 1]) y_scores = np.array([0.1, 0.4, 0.35, 0.8]) print(roc_auc_score(y_true, y_scores)) Caveats Sometimes we will need well-calibrated probability outputs from our models and AUC doesn’t help with that. Conclusion An important step while creating our machine learning pipeline is evaluating our different models against each other. A bad choice of an evaluation metric could wreak havoc to your whole system. So, always be watchful of what you are predicting and how the choice of evaluation metric might affect/alter your final predictions. Also, the choice of an evaluation metric should be well aligned with the business objective and hence it is a bit subjective. And you can come up with your own evaluation metric as well. Continue Learning If you want to learn more about how to structure a Machine Learning project and the best practices, I would like to call out his awesome third course named Structuring Machine learning projects in the Coursera Deep Learning Specialization. Do check it out. It talks about the pitfalls and a lot of basic ideas to improve your models.. Bio: Rahul Agarwal is Senior Statistical Analyst at WalmartLabs. Follow him on Twitter @mlwhiz. Original. Reposted with permission. Related: - The 5 Graph Algorithms That Data Scientists Should Know - The Hitchhiker’s Guide to Feature Extraction - 6 bits of advice for Data Scientists
https://www.kdnuggets.com/2019/10/5-classification-evaluation-metrics-every-data-scientist-must-know.html
CC-MAIN-2022-21
refinedweb
1,854
65.52
In this article, we will learn how to create and bind Dropdown List using ViewBag in ASP.NET Core MVC. Let's learn step by step by creating an ASP.NET Core application. Step 1: Create ASP.NET Core MVC Application - Start then All Programs and select "Microsoft Visual Studio". -. The preceding steps will create the ASP.NET Core MVC application. If you are new to the ASP.NET core and know how to create an ASP.NET Core application in depth, then refer to my following article. Step 2: Add controller class Delete the existing controller for ease of understanding, then add a new controller by right clicking on the Controller folder in the created ASP.NET Core MVC application, give the Controller name Home or as you wish as in the following. Now open the HomeController.cs file and write the following code into the Home controller class as in the following. using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.Rendering; using System.Collections.Generic; namespace BindDropDownList.Controllers { public class HomeController : Controller { public IActionResult Index() { //Creating the List of SelectListItem, this list you can bind from the database. List<SelectListItem> cities = new() { new SelectListItem { Value = "1", Text = "Latur" }, new SelectListItem { Value = "2", Text = "Solapur" }, new SelectListItem { Value = "3", Text = "Nanded" }, new SelectListItem { Value = "4", Text = "Nashik" }, new SelectListItem { Value = "5", Text = "Nagpur" }, new SelectListItem { Value = "6", Text = "Kolhapur" }, new SelectListItem { Value = "7", Text = "Pune" }, new SelectListItem { Value = "8", Text = "Mumbai" }, new SelectListItem { Value = "9", Text = "Delhi" }, new SelectListItem { Value = "10", Text = "Noida" } }; //assigning SelectListItem to view Bag ViewBag.cities = cities; return View(); } } } Step 3: Add View Add an empty view named Index.cshtml and write the following code. @{ ViewData["Title"] = "Home Page"; } <hr /> <div class="row"> <div class="col-md-4"> <div class="form-group"> <label class="control-label"> Select City</label> <select name="products" class="form-control" asp-</select> </div> </div> </div> Step 4: Run the Application. Now press Keyboard F5 or Visual Studio run button to run the application. After running the application, the output will be shown on the browser, as in the following screenshot. I hope, from all the above examples, you have learned how to bind the DropDownList using ViewBag in ASP.NET Core MVC.
https://www.compilemode.com/2022/08/dropdownlist-in-asp-net-core.html
CC-MAIN-2022-40
refinedweb
372
58.79
0 Members and 1 Guest are viewing this topic. re: compact 68K codeI. before you get to your push multiple, first the core has read the vector table and fetch the first instruction of the ISR (prolog) done automatically it can often be done in parallel QuoteMicrochip was defining those symbols at link timeThat's true. Although it's not a very good idea. Microchip was defining those symbols at link time I like the idea of split register sets in the 68K. wonder what a highly tuned 68K or PDP-11 ... could be like. I.Quote... Cortex M3, M4, M7 all have 12 cycle interrupt latency (M0 has 16). It's sitting there writing those eight registers out at one per clock cycle, exactly the same as you could do yourself in software. Motorola had junked so many processor architectures in the 2000s that it's not even funny. 88K was one, and there's also mCore. By the look of it, it should have been competitive, but when even their own phone division wouldn't use it, that's the end. ...It's that if you tie your company to them then you have a huge risk of being orphaned within a decade.This, more than any technical superiority, is one of the things that makes RISC-V so attractive. The 68K had ISA features like double indirect addressing which made it even worse than x86 when scaled up. The separate address and data registers was one of those features although I do not remember why now. Motorola had junked so many processor architectures in the 2000s that it's not even funny. 88K was one The 88000 appeared too late on the marketplace, later than MIPS and SPARC, and since it was not compatible with 68K it was not competitive at all: Amiga/classic? 68k! Atari ST? 68k! Macintosh/classic? 68k! In short, Motorola was not happy because they had problems at selling the chip. As far as I have understood, IBM was working on S/370 since a long while, and their researching was on the IBM 801 chip, which was the first POWER chip If 2015 is not too late (or 2012 for Arm64) then 1990 was certainly not too late. The IBM 801 Quote from: brucehoult on December 17, 2018, 12:07:33 pmThe IBM 801801 was a proof of concept, made in 1974. But POWER and PowerPC are derived from the evolution of this POC. My IBM Red, Green, and Blu books point to this article. Probably to underline that one of their men, mr.Cocke, received the Turing Award in 1987, the US National Medal of Science in 1994, and the US National Medal of Technology in 1991 See that one of our prestigious men received an award for having invented the RISC before any H&P book started talking about it .”Joel BirnbaumFORMER DIRECTOR OF COMPUTER SCIENCES AT IBM“Computer Chronicles: RISC Computers (1986),”October 2, 1986 I compiled them the way they came. 000001b5 <main>: 1b5: e8 20 00 00 00 call 1da <__x86.get_pc_thunk.ax> 1ba: 05 3a 1e 00 00 add $0x1e3a,%eax 1bf: 8b 80 0c 00 00 00 mov 0xc(%eax),%eax 1c5: 81 88 94 00 00 00 80 orl $0x80,0x94(%eax) 1cc: 00 00 00 1cf: 81 88 c8 00 00 00 00 orl $0x1000,0xc8(%eax) 1d6: 10 00 00 1d9: c3 ret 000001da <__x86.get_pc_thunk.ax>: 1da: 8b 04 24 mov (%esp),%eax 1dd: c3 ret 00002000 <PORT>: 2000: 00 f0 add %dh,%al 08048450 <main>: 8048450: a1 c0 95 04 08 mov 0x80495c0,%eax 8048455: 83 48 30 20 orl $0x20,0x30(%eax) 8048459: 83 48 40 20 orl $0x20,0x40(%eax) 804845d: c3 ret #include <stdio.h>#include <stdint.h>#define PORT_PINCFG_DRVSTR (1<<5)struct { struct { struct { uint32_t reg; } PINCFG[16]; struct { uint32_t reg; } DIRSET; } Group[10];} *PORT = (void*)0xdecaf000;void main(){ PORT->Group[0].PINCFG[12].reg |= PORT_PINCFG_DRVSTR; PORT->Group[0].DIRSET.reg |= 1<<5;} gcc a.c -o c -save-temps -O1 -fomit-frame-pointer -masm=intel .file "a.c" .intel_syntax noprefix .text.globl main .type main, @functionmain: mov eax, DWORD PTR PORT or DWORD PTR [eax+48], 32 or DWORD PTR [eax+64], 32 ret .size main, .-main.globl PORT .data .align 4 .type PORT, @object .size PORT, 4PORT: .long -557125632 .ident "GCC: (GNU) 4.5.0" .section .note.GNU-stack,"",@progbits Quote from: brucehoult on December 17, 2018, 12:25:56 amI compiled them the way they came.It doesn't matter if you deliberately tweaked the compiler options and offsets to make RISC-V look good, or they magically came out this way. The problem is that your tests do not reflect reality, but rather a blunder of inconsequential side effects.If you tweak the offsets a different way [CM0 and limitations on offsets/constants, making assembly unpleasant] Oh come on. You not change only change the data structure (which I freely admit I made up at random, as westfw didn't provide it) to be less than 128 bytes to suit your favourite ISA, you *ALSO* change the bit offsets in the constants to be less than 8 so the masks fit in a byte. If you hadn't done *both* of those then your code would have 32 bit literals for both offset and bit mask, the same as mine, not 8 bit. You also changed the code compilation and linking model from that used by all the other ISAs, which would all benefit pretty much equally from the same change. And you accuse me of bad faith? Quote[CM0 and limitations on offsets/constants, making assembly unpleasant](I did specifically choose offsets and bitvalues to be "beyond" what CM0 allows.)As another example, I *think* that the assembly for my CM0 example (the actual data structure is from Atmel SAMD21, but it's scattered across several files) can be improved by accessing the port as 8bit registers instead of 32bit. All I have to do is look really carefully at the datasheet (and test!) to see if that actually works, rewrite or obfuscate the standard definitions in ways that would confuse everyone and perhaps not be CMSIS-compatible, and remember to make sure that it remains legal if I move to a slightly different chip.Perhaps I have a high bar for what makes a pleasant assembly language. The $1M question is. How is my tweaking is any worse than yours? You on the other hand worked backwards from a processor to make code that suited it. Quote from: brucehoult on December 18, 2018, 04:35:16 amYou on the other hand worked backwards from a processor to make code that suited it.Haven't you? Isn't this the way it should be. When you compile for a CPU you select the settings which maximize performance for this particular CPU instead of using settings which produce the bloat. As, by your own admission, you did for Motorola.If you haven't done this for RISC-V, why don't you tweak it so that it produces better code? Go ahead, try to beat my 14 bytes, or even get remotely close.
https://www.eevblog.com/forum/microcontrollers/risc-v-assembly-language-programming-tutorial-on-youtube/msg2046397/
CC-MAIN-2020-29
refinedweb
1,201
69.41
but vim 7 is still in development, isn't it? which spellchecker will it use or can you choose? It uses it's own engine. based on OOo I think btw.: Is there a reason to set nocompatible? When a ".vimrc" file is found while Vim is starting up, this option is switched off, [...]. Never knew that, thanks.]]> but vim 7 is still in development, isn't it? which spellchecker will it use or can you choose? btw.: Is there a reason to set nocompatible? 'compatible' 'cp' boolean (default on, off when a .vimrc file is found) global {not in Vi} This option has the effect of making Vim either more Vi-compatible, or make Vim behave in a more useful way. [...] When a ".vimrc" file is found while Vim is starting up, this option is switched off, [...]. vim 7.0 will have spell checking internally. set spelllang=en_US spell on syntax on set nomodeline set ww=<,>,[,] filetype on filetype indent on filetype plugin on set tabstop=4 set softtabstop=4 set shiftwidth=4 set expandtab set autoindent map ,s :w !~/.vim/pipe.sh bash <CR> <CR> map ,p :w !~/.vim/pipe.sh python <CR> <CR> nmap > :bn<CR> nmap < :bp<CR> au FileType python source ~/.vim/python.vim :command -nargs=+ Pyhelp :call ShowPydoc("<args>") function ShowPydoc(module, ...) let " . fPath :execute ":sp ".fPath endfunction let spell_auto_type = "html,tex,txt,none" let spell_executable = "aspell" let spell_insert_mode = 0 let spell_language_list = "de_DE,en_US" let psc_fontface = "plain" colorscheme ps_color I still have to figure out, how to make vimspell auto check the file types mentioned. I still have to type :SpellAutoEnable at the beginning of each session.]]> ah, ok - it seems "ttyfast" is automatically set depending on $TERM - it's on for xterm/rxvt/etc]]> I haven't really noticed a difference. I'm still experimenting - just going through the reference manual and trying out random new things to see if they help.]]> hmmm does ttyfast make a difference? I don't have it set... never seen it before.]]> Thanks for that - very useful]]> (mez): I've always wanted to mess with line number formatting, but can't seem to ever change it... also, if you're a python coder (looking at your vimrc), you should check out python_calltips - great script for python]]> Some very cool ideas here; seeing this thread inspired me to overhaul my vimrc. Here's what I've got so far: " my ~/.vimrc file, 27.04.05 """""""""" general options """""""""" set nocompatible " turn off vi quirks filetype plugin indent on " filetype dependent indenting and plugins syntax on " turn on syntax highlighting set backspace=indent,eol,start " proper backspace behaviour set nobackup " don't create annoying backups set nostartofline " keep cursor in same column when moving set number " turn line numbers on set showmode " show whether in insert, visual mode etc set showmatch " indicate matching parentheses, braces etc set tabstop=4 " sets up 4 space tabs set shiftwidth=4 " use 4 spaces when text is indented set expandtab " insert spaces instead of tabs set softtabstop=4 " do the right thing when deleting indents set autoindent " indents line to the line above it set guifont=Monospace 12 " font to use in gvim set history=100 " remember this many commands set hlsearch " highlight search results set incsearch " incremental search while typing set mouse=a " enable mouse in all modes set ruler " show line and column in status line set showcmd " show partial command in status line set ignorecase " ignore case in search patterns set smartcase " override ignorecase if search has uppercase set whichwrap=<,>,[,] " cursor keys can wrap to next/previous line set textwidth=79 " 80 column page for ease of reading set ttyfast " for fast terminals - smoother (apparently) set hidden " don't have to save when switching buffers set guioptions-=T " no toolbar colors nedit " modified so bg is only slightly off-white """""""""" autocommand stuff """""""""" if has("autocmd") " return to last known cursor position when opening file autocmd BufReadPost * if line("'"") > 0 && line("'"") <= line("$") | exe "normal g`"" | endif endif """""""""" abbreviations and remaps """""""""" :abbreviate #! #!/usr/bin/env python """""""""" other stuff """""""""" " vim.org tip 867: get help on python in vim, eg :Pyhelp os :command -nargs=+ Pyhelp :call ShowPydoc("<args>") function ShowPydoc(module, ...) let " . fPath :execute ":sp ".fPath endfunction A couple of things: Does anyone know how to increase the space between the line numbers and the start of the line? I surely can't be the only person who wants this, but I've searched and I can't find any mention of it. There are loads of new colour schemes available in one zip file here: … ipt_id=625]]> another goody - recent vim tip: "windows style - ctrl+shift enters visual mode nmap <c-s-left> vbge<space> nmap <c-s-right> vew<bs> nmap <c-s-down> v<down> nmap <c-s-up> v<up> imap <c-s-left> _<esc>mz"_xv`z<bs>obge<space> imap <c-s-right> _<esc>my"_xi<s-right><c-o><bs>_<esc>mz"_xv`yo`z imap <c-s-down> _<esc>mz"_xv`zo`z<down><right><bs><bs> imap <c-s-up> _<esc>mz"_xv`z<up>o`z<bs>o vmap <c-s-left> bge<space> vmap <c-s-right> ew<bs> vmap <c-s-down> <down> vmap <c-s-up> <up> this is because using "behave=mswin" forces shift+<direction> to enter "select" mode (can't perform operations as in visual mode)]]> oh, a new goodie: vnoremap <BS> d allows backspace to delete a selection of text in visual mode....]]> thanks... I'll try it out]]> two things: a) you can use "<cr>" instead of "^M" to insert a carraige return... it's slightly more portable... b) with your mappings, you can add <esc> in front of them, so they work in insert mode as well ok let me see... played with this a bit last night .vimrc colorscheme elflord filetype plugin on set shellslash set grepprg=grep -nH $* filetype indent on set sw=2 set iskeyword+=: map <F2> :w<C-M> map <F11> :wq<C-M> map <F12> :q!<C-M> :abbreviate sig Martin Lefebvre^Memail: dadexter@gmail.com^Mweb: source /home/dadexter/.viabbrv .viabbrv :ab cc #include <stdio.h>^M^Mint main(int argc, char **argv) {^M^M return 0;^M} :ab php_sqlq $query = "SELECT * FROM ";^M$results = mysql_query($query) or die(mysql_error());^M :ab pkgbuild pkgname=^Mpkgver=^Mpkgrel=1^Mpkgdesc=""^Murl=""^Mlicense=""^Mdepends=()^Mmakedepends=()^Mconflicts=()^Mreplaces=()^Mbackup=()^Minstall=^Msource=($pkgname-$pkgver.tar.gz)^Mmd5sums=()^M^Mbuild() {^Mcd $startdir/src/$pkgname-$pkgver^M./configure --prefix=/usr^Mmake^Mmake DESTDIR=$startdir/pkg install^M}^M I just had to make one for a PKGBUILD template
https://bbs.archlinux.org/extern.php?action=feed&tid=10908&type=atom
CC-MAIN-2016-26
refinedweb
1,106
60.65
Arduino I2C Picaxe While working on my first Arduino project I found out that servo library is not compatible with software serial library which is very bad for my project. So, I thought: What about making my own servo controller of say... Picaxe 28x1 chip??? Which will communicate over serial...Doh!... servo command won't do... hmmm... I2C!!! Quick search trough manual, and.... TARAM!!! hi2csetup command just gives you all you need!!! : - Ability to configure Picaxe as I2C slave (for 20x2, 28x1, 28x2, 40x1 and 40x2 chips only) - I2C functions absolutely automatic (i.e. should not be any interference with servo command for example) - Giving all 128 (X1, 20X2) or 256 (X2) bytes of scratchpad area for memory transfer - hi2cflag flag set when master writes to our chip Sounds very promising to me! So lets get started. My setup: Picaxe 28x1 on breadboard + Arduino Duemilanove. To connect 2 mCUs you need just 2 wires: Picaxe 28x1 pin 14 (i2c clock) to Arduino analog input 5 and Picaxe pin 15 (i2c data) to Arduino analog input 4. Arduino has pullup resistors for I2C built-in, so you do not need to mess around with resistors. My Picaxe I2C slave code for use of byte 0 of scratchpad to control servo on outpin 0. You can add more controls by just adding case branches for other servos or whatever you desire! 1 'Servo controller for IMP21 2 '6-jun-2010 by Isotope 3 'v1.0 4 ' 5 6 #no_data 7 #no_table 8 9 'addr | control 10 '-------------------- 11 '0 | servo @ pin 0 12 ' 13 ' 14 15 hi2csetup i2cslave, 160 '80 on arduino 16 17 put 0, 150 'initial position 18 servo 0, 150 19 20 main: 21 22 if hi2cflag = 0 then 23 goto main ' poll flag, else loop 24 else 25 26 hi2cflag = 0 ' reset flag 27 get hi2clast,b1 ' get last byte written 28 29 select case hi2clast 'what address in scratchpad it was written to 30 case 0 'ok, address was 0 31 servopos 0, b1 'which is our servo 0 slot. Move it. 32 endselect 33 34 goto main 35 endif Ok, little bit of explanation here: line 15: setup Picaxe as I2C slave with adress 80. We have 160 there as bit0 of address is read/write flag which is ignored in our case, so to shift 80 one bit to the left we have to muliply it by 2, that gives us 160. line 17-18: initialize scratchpad byte 0 with value of 150 (servo center position) and move servo into that pos. line 22: We continiously poll the hi2cflag to see if master wrote something to us. when it becomes 1, first we have to clear the flag by ourselves (line 26) then we can get last written byte to b1 (line 27) and deside what to do with it upon address it was written to (lines 29-32). Thats it! My servo controller code for one servo is ready. Now lets move to Master (Arduino) code: #include "Wire.h" //We need this library to operate I2C byte pos; //Position of servo void setup(){ Wire.begin(); //Initialize I2C delay(300); } void loop(){ for(pos = 100; pos <= 200; pos++){ Wire.beginTransmission(80); //Init transmission to address 80 Wire.send(0); //scratchpad address we want write into Wire.send(pos); //data we write - position of servo Wire.send(0); //Just add this Wire.endTransmission(); //finalize transmission delay(200); } } What we do here is pretty much explained in comments to code. Just one thing to mention is format of messages: First byte to transmit is the scratchpad address we want to write to, second is data byte. I figured out by trial and error that communication won't take place if you specify just address and data byte, you need to write 3rd byte of whatever (I did write 0) for all of this to work. Now we have our very own servo controller!!! Just add more servos, LEDs, motors, whatever and controll it all over I2C from Your Arduino board! :) I have tested it with just one servo and it worked perfectly :) Enjoy! :D Great tip. Here is a link to other I2C examples for Arduino. I have also struggled with the servo library not working with software serial library. The strange thing is that it works in build 0016 but not in 0017 or 0018 ? In my current project I’m reading/writing to an SD card, reading a GPS and writing to an LCD while controlling a servo. So I’m stuck with the Arduino 0016 IDE for the time being. When i use a GPS with a When i use a GPS with a Servo i use the new Arduino IDE but i copy the servo library from Arduino 0016 but you can only operate a servo on pin 9 and 10 in that case so this is very helpful. But for me, all i want is a ESC and servo so 2 is enough in my case. :). How to connect 2 AXE020 project boards via i2C How to hook up Picaxe to Picaxe: Wow! I am getting a 18M2 PICAXE chip and want to use it as a co-processor with a Aruino (Uno or Duemilanove) to make my "Phoenix" a very smart/capable robot. (I was hoping it would work with 0022....). Great details, great stuff (and needless to say, great website). My plans also might include an XBee or some form of computer-robot connection (maybe with even more processing happening at the PC side, like even a remote webcam.) Oooooh, this is sending warm tingles down this old mans back! :-) Keep me "in the loop" and keep going! Awesome! BTW fritsl, maybe if I had seen your first "try this one first" robot sooner I would have saved me building a fairly expensive POP-BOT (168) that blew up (smoked h-bridge) and has been rebuilt into my "Phoenix". Consider that a great compliment.
http://letsmakerobots.com/node/20787
CC-MAIN-2015-48
refinedweb
993
78.48
I am studying now C with "C Programming Absolute Beginner's Guide" (3rd Edition) and there was written that all character arrays should have a size equal to the string length + 1 (which is string-termination zero length). But this code: #include <stdio.h> main() { char name[4] = "Givi"; printf("%s\n",name); return 0; } outputs Givi and not Giv. Array size is 4 and in that case it should output Giv, because 4 (string length) + 1 (string-termination zero character length) = 5, and the character array size is only 4. Why does my code output Givi and not Giv? I am using MinGW 4.9.2 SEH for compilation.
https://www.howtobuildsoftware.com/index.php/how-do/chK/c-c-strings-c-character-array-and-its-length
CC-MAIN-2020-29
refinedweb
110
73.88
On the Subject of Ximian and Eazel 193 Isldeur writes: "Dennis Powell has a very interesting article on GNOME, Eazel, and the control thereof. However, while it is very thought provoking, it might inspire some heat. Still, I think these things are manifestly important to the ideal of Free Software to figure out!" A very well written article that says a lot of truth. I tend to think that some points are over beaten (lack of binaries for example. So what? Anyone can compile and distribute their own). Especially interesting is the point about Eazel and Paypal, and the comparison to OS/2. The difference, of course, is that this is Free Software in the speech sense, so it's a little more important than OS/2 IMHO. But there's some spicy words in here, and it's worth thought for those with objective minds. Re:Flamebait but... (Score:2) Fifth,the reasoning by which the FSF gets dragged into this is pretty shaky. It's got fuck all to do with his "argument" (what there is of it). Dennis Powell is just as vindictive as Stallman - and exactly like Stallman he finds a way to include his personal hobby horse into every argument. However annoying and frustrating it is for the poor bastard(s) subjected to it. The difference is that Stallman is a very clever and dedicated chap, whereas Powell is a shithawk with a big mouth and nothing of any value to contribute. Re:OS/2, Linux, and Teams (Score:2) And BTW, OS/2's new "open" support at has learned the lesson of TOS2 and is a membership-driven org that actually collects money. And it's growing. It seems that the OS/2 support orgs that prosper are those that collect money. Maybe Eazel has a point there: getting beyond critical mass may require a financial commitment, not just a philosophical one. Visit for more "Warped Perspectives". Re:Flamebait but... (Score:2) What happened in the last few years is that the venture firms were flush with cash from new investors (in the venture firms), and absent enough real ideas to invest in, they tried a Pump-N-Dump strategy. As in pump money in a company, IPO quickly, get the hell out. Try it on 10 companies, and if 2 go public, you've made a hellava lot of money. VC was just rolling the dice on these guys and Linux hype. No matter that their business model was essentially tacked on at the last minute and would never work -- $13M was chump change crap shooting on the VC's part. They missed the window (unlike the guys behind VA), and they're out $13M, and you guys have a Fucked Company. Re:If you want financial information about the FSF (Score:2) Computer Equipment (original cost $279,114 -- is this the HP 9000 box that RMS seranades?) Used Remote Linux Machine Pentium computer DEC Alpha workstation HP IIISI Laser Printer 2 Terminals 3 486 Computer Systems 5 keyboards (original cost $695) 3 400M(hz?) Pentium systems Scam (Score:2) ... and by linking to it, Slashdot validates this style of gutter journalism through the only metric which matters to the publishers - page impressions. It really is a case of "ignore it and it will go away." Tote Bag (Score:2) How dare the author say that the black tote bags Eazel gave out at Linux World were a bad idea! That bag was the most thoughtful item given out at the whole show, and it definitly made carrying the tons of flyers and pamphlets a hell of a lot easier. Especially after 1 or 8 too many beers. Re:Drivel (Score:4) Drivel (Score:4) I stopped reading here. I see where this is going. Trollsville USA! Re:Provide Binaries (Score:2) -adnans Re:Provide Binaries (Score:2) Ie when you double-click the "install this" it runs the compiler as necessary. I don't think anybody cares how long it takes to install. They just want it to be simple. I see no reason why the interface to compiling cannot be as simple as the interface can be made for rpm or Winshield batch files or any other install program. Re:Provide Binaries (Score:2) Also, Ximian _did_ provide binaries. So what if you had to wait a month. Who really cares? Did your life improve _that_ much after getting GNOME 1.4 that the previous month seemed wasted? I doubt it. Why wait for the RHs to provide binaries? Well, we have to wait for someone, or we'd be building from source, wouldn't we? Re:Flamebait but... (Score:2) Re:Unfortunately... (Score:2) Re:Flamebait but... (Score:2) * Previewing of text files in the manager (the icons for text files include text from the file itself) * Use of emblems to mark files and to search for files * Iconic representation of file permissions * New configuration mechanism using drag-n-drop * Use of user modes - Beginning/Intermediate/Advanced And that's just from casual use. There's probably more. Re:Flamebait but... (Score:2) Re:Flamebait but... (Score:2) -or- 2) Eazel's UI expertise & usability testing wasn't worth a hill of beans because they ended up copying most of MS Explorer. Actually, you missed an option. Windows, by virtue of being what everybody uses, is now the standard for usability, because that's what everyone already knows. Eazel, when they started, tried do rethink a lot of assumptions, but found that they couldn't because everyone was already used to the way MS did it. They found out that if you write a filemanager, you have to copy MS, or everyone will be confused. Re:Flamebait but... (Score:2) Re:How are these companies going to survive? (Score:3) ADA Core Technologies makes money. I'm guessing Mandrake makes money. IBM has made a butload of money. Cobalt made money. Tivo made money. Who's not making money? VA - because they can't focus, and were overly optimistic. Corel - because they haven't made money in a while even before Linux. Eazel - because their business plan was pretty stupid anyway. Ximian may come out of this all right, if they play their cards well enough. Add all of the consultants to that, and you've got a picture of whose making money. Re:13 MIL? (Score:3) Re:Flamebait but... (Score:5) $2000-$3000/month on Internet access - for one year that's $36,000 $200,000 for their infrastructure - backups, routers, gateways, plus licenses (this could actually have been more. You can really spend up to $2 million easily to make a scalable infrastructure - like if you use Oracle Apps to manage all your stuff). Let's say they had 10 programmers (I don't know how many they had) on Nautilus - for good programmers, that's about a million per year. Let's say they had another 10 programmers working on Eazel services, including their packaging and online disk storage, we've got another million there. Then you have to pay the execs. I'm not going to guess at a figure. Then you've got another twenty to thirty people doing all sorts of marketing/reception/etc. On top of this, you have office space. If they went for their own building, this could be a few million. Then you have computers for everyone, and that can get expensive real fast. So, as you see, $13 million can go pretty fast, especially if you're trying to start-up fast. Most of the dot-coms failed trying to start-up fast. Most companies do. Venture capital makes you think you can do anything because you have all that money, but then you end up wasting it buying the high-end of everything. The thing is that with $13 million, if the investors were willing to wait a little while, _could_ have been spread out over a decade, with the programmers all sitting in a basement, a 28.8 line to the 'net, and not bothered to even hire the marketing guys until the product was out the door and at revision #2. However, most VC places probably don't like that idea, so they try to get a full company in swing before a product is released, which, as you can see, really drains money. So, of course a file manager doesn't cost that much money, but a company does. The problem is that they formed the company before it was ready, and thus the company drained them of their money. However, they probably wouldn't have gotten VC money doing that. The whole company infrastructure is a bigger drain than any or all projects put together. Provide Binaries (Score:5) I tend to think that some points are over beaten (lack of binaries for example. So what? Anyone can compile and distribute their own). Remember that one of the points of Ximian Gnome is to make Linux less frightening to our mothers. I don't know about you, but telling my mother that she just needs to "uncompress the tarball, configure, make, and make install" won't really get us very far. OTOH, if I can e-mail her a single command (ie, rpm -Uvh) Why wait for the Red Hats of the world to provide binaries? Instead of stopping the simplification process after the UI design, they should follow through, IMHO. -Waldo Re:Provide Binaries (Score:2) Re:13 MIL? (Score:2) With the number of shows that Eazel went to, that should be a cool $1M right there. Then consider their big bandwidth costs, rent for their office, computers, and all that jazz....that's big money. Marketting: it costs a hell of a lot to advertise in magazines, especially when you want to be in all of them. then there's web ads too. a years worth of hard advertising would be $1M or $2M. Remember, they're trying their hardest to seem like a big, huge, successful company...there's a lot of image to sustain. Staff: how many employees do they have? 20? 20 @ $50k/year each would be $1M/year. most of these dot-com companies are always trying to be bigger, better, faster. they're competing to see who can burn money the fastest. -Doviende "The value of a man resides in what he gives, and not in what he is capable of receiving." Re:the reason that it's generating heat... (Score:2) (In case the sarcasm wasn't evident above, I hold RMS in the highest regard for his principles and the actions he's taken in support of them. I was just calling him a nut-job to point out that most people who disagree with him start with name-calling and never really rise above that level of argument (like the author of this article did).) Caution: contents may be quarrelsome and meticulous! the reason that it's generating heat... (Score:4) ...is because it's flamebait, of course. Some choice examples: Yes, let's gloss right over the very real licensing issues, shall we? Because we all know RMS is a total nut-job with no basis in reality, right? That's funny, since I just built Gnome myself last night. I don't recall asking either company's permission. Of course, one is a publicly-traded company! A private foundation is just that, private. It's not surprising when the mainstream press gets confused and makes the jump from "free software" to "all information wants to be free", but it's surprising to see a Linux publication making such a leap, especially since that's never been the FSF's position. If they didn't believe in privacy, they wouldn't distribute GPG Wow, I thought we were past this kind of juvenile name-calling years ago. In case you hadn't noticed, Gnome does have apps, and in fact you can even use KDE apps on Gnome without any problems. Have you ever used Linux? I'll be the first to admit that Ximian and Eazel, along with a zillion other .com companies, made some very poor financial decisions (or at least made decisions which didn't produce good results when coupled with the .com collapse). I'm not sure if I would have given them any money if I were a VC, and I probably won't send them money via PayPal. If those were the points the author wanted to make, then I would have no problems agreeing with him. But these baseless accusations against the FSF and the Gnome organization, combined with the total disregard for the facts and his old-style "KDE r0x, Gnome sux!" attitude (I mean, come on - is this guy still in elementary school or something?) make it impossible for me to really get to the point of the article. If this were a post, it would have been "-1, Flamebait" for sure. obFullDisclosure: I use Gnome with mostly Gnome but some KDE apps at home, mostly because my KDE1.1->2.0 upgrade didn't go so well. Also I've submitted reasonable patches for both desktops' apps (in all cases including an explanation of what the patch would fix), and the Gnome folk have accepted them while the KDE folks have not (and have not provided a good explanation why not, either). So when it comes down to it, I'm more likely to use a desktop that is willing to accept my input, because I can identify with it considerably more. But that doesn't mean that KDE doesn't look nice, have solid code, and some nifty apps as well. Caution: contents may be quarrelsome and meticulous! Re:Provide Binaries (Score:2) The point is that source compatability goes way, way farther then binary compatability. Really, source compatability is the only reasonable way to get software between different software environments. I could grab any random source tarball and probably complie cleanly on RedHat from 5.0-7.1, Caldera, Mandrake, SuSE, Solaris, AIX, HP-UX, FooBix, etc. I could not get binary compatability between say RH and Caldera and definitely not between SPARC and x86 archs. I don't have the perfect analogy but I think you get the point. Re:Provide Binaries (Score:2) I absolutely agree. While binary packages could always be trojaned to "rm -rf /" in the %postinit all packages from major distributors are GPG signed. It is reletively simple to verify the package's signature against the keyfile that comes on your distributions CDROM. It is not easy or even possible to verify that the go-gnome.com site isn't a trojan. Maybe if they used https and made you check off on the key you could have some assurance, and using "set -x" in the script so you could see what it does. In any event, encouraging people to pipe data from random websites into a root shell is a universally bad idea. This article is a troll (Score:3) Unjustified insults against the FSF and Richard Stallman make this article contain neither truth nor much worth thinking about. Expecting records from the FSF about all the people who have contributed money and the sums of money thus contributed demonstrate a complete lack of understanding of the (mis-used here) phrase 'information wants to be free'. Has this guy never heard of the word privacy, and would he like all companies with which he has transacted to give out details of all those transactions? No, and no-one else has even suggested that such information should even be available to anyone. Also, why pick on Eazel for spending $13 million of investment capital? This is just a result of the The only serious points that are made are about the uneasy competition between Ximian and Eazel, which is exactly what you'd expect from two companies competing in the same sector. As for the rest of the allegations he makes, from the reason for RMS starting the Free Software Foundation to the reason for it supporting a desktop that has been fully GPLed all along (without reliance on a private company) and many others are, in a word, garbage, that only a little historical investigation would disprove. Re:Timing (Score:2) -- Re:Timing (Score:2) -- Re:Timing (Score:2) Myself, I always choose "Custom", even if I end up selecting everything. I like to see exactly what is being installed, rather than a vague description. And I like to cut out the stuff I won't use up front. I guess the Windows installers with their "Custom" choices have me paranoid. They tend to bite me in the ass when I use them to install/uninstall bits and pieces after the original install, so that has influenced my habits as well. -- Timing (Score:4) To sum up, the installer was nice and easy to navigate through, but it was draconian in limiting me to the categories that Ximian felt I needed. Tying this together, I mention Microsoft installs for a reason. Windows 95, 98, NT4, NT5, and Office 95, 97 and 2000 have given the option of a "Custom" install, letting me pull out many things I don't need. Ximian seems to be much more controlling than Microsoft, and Ximian is supposed to be Free. Yes, I know I could do the manual install of the packages, and not use go-gnome/Red Carpet/Helix/whatever the offical name is, but isn't that the main focus of Ximian/Helix, to make it easy to get what you want and need from Gnome installed, without the manual install? -- Holy Trollsville! (Score:4) Re:13 MIL? (Score:4) The article was polemical and poorly argued (Score:5) For example, here is a quote about the FSF: He [RMS]. This kind of unsupported pop psychoanalysis could be levelled against any group or organization. In this case, the evidence weighs heavily against it: whatever RMS' faults, he almost certainly believes in what he preaches. I doubt very much that RMS started the FSF to acquire needy followers, and I doubt very much that people join for a sense of belongingness. Writing code in your basement for a compiler with other people you've never met is not a sure a path to belongingness. Anyone looking for a sense of beloning could far more easily find it in a church. The other claims are similarly weak: Gnome is controlled -- c'mon, don't kid yourself -- by two companies The parenthetical clause ("c'mon, don't kid yourself") is the only support offered for this statement. The statement implies that RMS is a corporate lackey, which I seriously doubt. It's tragic that this kind of talk-show commentary has eclipsed real argumentation. There's more than just one business model (Score:3) Other models include such things as packaging and selling a configuration (most Linux distributors), producing a combination of both for-profit and free software products (The Kompany), gathering support from larger companies who will benefit from using the free software (Samba, Apache), and larger companies who feel that producing open source software will ultimately benefit their for-profit product lines (IBM, Sun). I wouldn't give up all hope just because the Ximian/Eazel service-based business model is faltering. Some of the other open source projects/business models seem to be meeting with more success. Re:Provide Binaries (Score:5) I'm always afraid someone will hack the go-gnome.com server and replace index.html with: rm -rf Re:Provide Binaries (Score:2) --Ben Re:Timing (Score:2) The other option that you didn't mention, though (which is what I did) is to do a minimal install, then to run red-carpet once you have the minimal install to install aditional packages. --Ben Re:Kind of funny... (Score:2) Also, the people who started Eazel love building GUIs. I'm sure that in a way this project was just an excuse to experiment. --Ben Re:Flamebait but... (Score:2) Now THAT'S insightful - I actually hadn't thought of it from this angle before, but when comparing the amount of usable (quality, reliable, reasonably performing) code per dollar, KDE wins by a mile. Of course, given how badly Konqueror kicks Mozilla around the field, it might be appropriate to handicap the Gnome folks for choosing to use that turkey. Oh, and CORBA, and, uh... It really makes you wonder what the KDE guys could have done with that kind of backing. Re:Provide Binaries (Score:4) And, no, for the record, this post is not a troll - think about it: is is really reasonable to willingly grant full root-level shell access to *any* site out on the net? Especially without even the most basic encryotion or security against spoofing? I've really been amazed at the double standard of the community. If you doubt for an instant there's a double standard, just think about what would happen if Microsoft tried this. (Oh, that's right, Windows Update does do that, but Microsoft takes some steps to provide security, unlike Gnome...) Re:Scam (Score:2) Ugh, slow down people!!! (Score:2) Re:Boring and Pointless (Score:2) "In the case of the former, it had to do with the lengthy wait users had endured before gaining access to the binary version of Ximian-brand Gnome." How is this a troll? Gnome was supposed to make life easier, you think compiling binary is making life easier? . " Now now, don't fucking deny this, this is dame true and why Gnome exists, because of QT. Don't even trip! You are the idiot, you need to learn how to comprehend what you read! Re:Provide Binaries (Score:2) Done. Re:You're missing the point (Score:2) Yep, that's EXACTLY what I was complaining about. I think KDE and Gnome are both cool, but the zealots on both sides that seem hell-bent on denying the power and usefulness of the other side are starting to make me long for X10 with uwm.... Factual errors abound (Score:3) <RANT> First, the "someone, somewhere" comment about paying for Gnome gets a two-word answer: "Sun, HP". Next, on KDE. I don't give a rat's left kidney about KDE, and why the heck does every 2-bit reporter with a browser have to compare Gnome and KDE? Don't get me wrong, I wish the KDE folks a lot of luck, just not my cup of tea. We're almost mature enough to stop mentioning Linux every time we talk about BSD (and visa versa), hopefuly we can drop the Gnome/KDE thing soon. Now, as for "KDE has actual applications". See my coments about about KDE, but for Gnome, we have: 1. AbiWord (word processing) 2. Gnumeric (spreadsheet) 3. Evolution (groupware; under development) 4. Gnomecal (caldendar) 5. Gnucash (finance) 6. Glade (GUI development) 7. Dia (vector layout) 8. GnomeICU (instant messaging) 9. LOTS more that I don't have time to type. On the Gtk front (non-gnome, just using the same toolkit) there's Gimp (photo-editing), Mozilla (web browsing, HTML editing etc), and again a good many others. Can we drop the "there aren't any applications" thing. <RANT> Re:For those who don't like "corporate" GNOME (Score:3) Also, I find it interesting that Ximian is considered some kind of corporate raider. These guys are free software hackers who decided to make it their day-job. I work just down the street from them, and have stopped in their office before. Let me assure you that they are not the evil capitalist pig-dogs trying to take over Gnome.... Before someone goes off the deep end trying to "re-package Gnome" without the offensive pixmaps of doom, I'd rather they spend time hacking on some of the code. There are features that need to be completed before Gnome will represent the definitive MS-killer (though it's most of the way there, IMHO). Re:Ugh, slow down people!!! (Score:4) 2. KDE ranting where it doesn't belong. Miguel formed Ximian (Helix back then) because he thought that it was the right thing to do to keep Gnome growing, and get commercial acceptance. Given HP and Sun's moves, I agree. Gnome is still just as free as Mozilla (even though, like Mozilla many of the developers work for a commercial entity). If you don't like where Gnome is going, feel free to fork it. I think you'll have a little trouble just keeping up with the updates, but hey, that shouldn't stop you from trying! Then again, you could contribute.... This was yet another "but, if they're trying to make money it's not free, right?" articles that you see from time to time. It's always done by someone who a) just saw free software for the first time or b) has an axe to grind because they like another project more. He likes KDE. Cool, let 'im. He don't need to piss on our playground because he's got a pet desktop. Re:Provide Binaries (Score:5) Once you download and install Red Carpet, you have full verification of binaries all the way through the process. The go-gnome installer is a bootstrap process. You can download source, compile and begin the install if you want, but this is not grandmother compliant.... Lack of binaries/ease of use (Score:2) If Linux is indeed bent upon Total World Domination, then these sorts of things will have to be taken into consideration; end users are, for the most part, sure to run away screaming from such things. Re:Additional point (Score:2) Software support as a service is the reason why IBM has not gone the way of many others. It is where Microsoft has figured they need to go, and where all software goes in the end. Think about it; what's a bigger economy: servicing, repairing, and upgrading homes, or building prefab homes? It's also rather arrogant to characterize the folks from, e.g. Apache as "college student hackers." There are a lot of very professional, very accomplished programmers out there doing shit for reasons you will need years to understand (and guess what; IBM pays them to do it, too). Run with the big dogs a while, you'll not be so impressed with IBM's interest in people from your school. Or maybe you'll turn into another marketroid whose only asset is more facility with jargon than your victi^H^H^H^H^Hcustomers. You are right about the importance of service. Just don't forget that there is nothing to service if the real heavies don't write the code. Boss of nothin. Big deal. Son, go get daddy's hard plastic eyes. Re:Flamebait but... (Score:2) Probably not as well. There's a lot said for being lean, even hungry, during development. Example: NASA spent millions of dollars creating an ink pen that can write in zero gravity without making a mess. The Russians took pencils. - - - - - Re:Drivel (Score:2) THIRTEEN! MILLION! DOLLARS! (Score:2) Re:Truth is less interesting..... (Score:2) When I introduce people to the free software community, I introduce them to this side of it and they are often eager to share their developments with the world that they often realise they had no reason to keep secret in the first place. As long as we keep adding new coding members as well as people who just buy T-shirts, we should be doing fine ;-) Re:Don't kid yourself (Score:2) If you invested your money 6 months ago, it would have gone to those creditors _before_ they went bankrupt. As it is, they now owe creditors money and those creditors are _out_ by that much money. What if Ximian were owed money by Symantec and Symantec went out of business? Wouldn't we be upset that Ximian was out that much cash? Angry! (Score:3) Wow, that author was kind of angry, wasn't he? Still, without doing any research of my own, and not exactly following the works of either Ximian, Eazel (*cough* I kinda have a different [obsession.se] favorite fm, *cough*) or KDE, some serious-sounding issued were raised here... Do developers from competing companies actually fight over important subsystems in the GNOME code base? Scary. One thing that made it difficult to take seriously though was the (to my eyes) invented "paradox" that the FSF should somehow be aligned with the "information wants to be free" meme. [Ouch, trend alert, I said "meme".] Anyway, in my eyes, the FSF in general, and RMS in particular, are for free software. Not information... I believe there's still a point in making a difference between the two, at least in discussions such as this. I must admit, though, that it's kind of interesting to hear that their financial records are being kept so secret... Suspicious? I don't know. Re:Factual errors abound (Score:3) Because they are incompatible and inconsistent with each other, hence every desktop user will compare then to decide what they should you, and the reporters would like to provide a useful service by comparatively analysing them? You could fix this by either making them work together properly (no, half implemented xdnd is not working together properly) and be consistent, or by waiting for one to grow considerably larger than the other, causing vast amounts of pain to desktop users while we wait. Truth is less interesting..... (Score:4) Articles like this tend to be popular, simply because they make people either really mad, or elated. As I am a pretty big GNOME and FSF supporter, it made me mad. But, I was mad not because I discovered that these two organizations have been embarking on a sinister plot to ruin the "community," but because I was shocked at the lack of journalistic integrity demonstrated in the article. But hey, it drives a lot of traffic. First, the notion that the FSF's financial details are not available. That is plain false. Anyone can request them (politeness probably helps) - simply as them or the IRS for their tax forms. Others have stated in other forums that they have had no problems getting such reports. Second, the whole PayPal thing. This really bothers me. It was suggested on slashdot a little while ago by various members of this very forum that perhaps Eazel should accept donations somehow from grateful users, to show their support for the company. Eazel, being an *extremely* community-oriented company, complied. Bart Decrem even went so far as to suggest to people who just wanted to support Free Software in general to make donations to the FSF, since if Eazel goes under, they would be legally obligated to give funds to creditors. Eazel has, in many ways, made every attempt to encourage community feedback and involvement in all of its projects. Yet the supposed "community" that slashdot apparently represents essentially slaps them across the face with unwarranted accusations of unethical practices. As for Ximian and Eazel fighting for control of GNOME, and arguing over base libraries, this is really contrived. Yes, members of both companies have argued technical merits of various bits of software. Sometimes arguments get heated, especially when everyone is under a deadline (thanks to the demanding slashdot crowd who quickly complains about any slippage in schedule, then as soon as a product comes out on time, finds a fault and blames evil marketing machines for forcing products out early). But, as anyone can read by looking at the public mailing list archives, disputes are resolved, and the framework is improved in the end. This happens in any project. It just happens that in the Free Software world, these discussions are made public. Corporate control of GNOME is pretty much wrong in every way. The GNOME Foundation doesn't grant corporate entities voting rights. It is also against the GNOME Foundation's charter for more than 3 people from the same company to be on the Board at once. And all board members are elected by the general GNOME Foundation membership. It is true that a number of employees at Eazel and Ximian (as well as other companies) are actively involved in core parts of GNOME. But, they have been in that capacity for a while, long before these companies existed. They saw an opportunity to do what they loved doing full time, and get paid for it. Shouldn't this be lauded, rather than attacked? These people are making really excellent Free Software. Instead of thanking them, this supposed community alternately slams them for not producing more for free, or for having a "flawed business model." Make up your mind. I am feeling a growing disgust for the "masses" of the slashdot crowd, and the Free Software community at large. It used to be a real community - people actively exchanging ideas in a postive manner, everyone happy to see Linux in the news for some reason, and people actually working on projects to contribute back to the community. That doesn't really happen so much anymore. We have a few dedicated people that work harder than ever to further the causes so many people here pretend to care about, but at the end of the day, people just bitch at them for not making it exactly the way they wanted. But, of course, they can't be troubled to do anything like helping out. Because to people, it is selfishness that matters, not freedom. People attack people like RMS or Miguel or others, whether they be individuals or companies, while it is these people that have gotten Free Software to where it is today. But what do you all do? Attack them. Freedom comes at a price. Responsibility. I hope that some people eventually realize their responsibility and live up to it. But that is probably too much to hope for. "information wants to be free" (Score:3) It's more like a 2nd law of thermodynamics rule for information. You know, only more people can have information as time progresses, not fewer. Kinda like the "You can't put the crap back in the dog" law. Why do people continue to use that phrase anyway? it's something I pretty much only expect to see on alt.2600 or #HackWarezLinuxPhreakKlan or something. Re:Provide Binaries (Score:3) So, pissoff Microsoft boy! We're finished with your fancy Windows 2000. Re:Additional point (Score:2) These are the questions that need to be answered by businesses when they use Free software, whether it be from Apache, FSF, whatever. That's one of the reasosn why IBM's services division is doing so well (IBM Global Services wanted people familiar with Linux+Apache from my school, primarily), and is the business model upon which RedHat is trying to make money on. Fun stuff. -- Re:Additional point (Score:2) -- not applicable (Score:2) You're wrong. GNOME *is* controlled (Score:2) Doesn't surpise me. Ximian is incompetant at GUI's (Score:2) Real ease-of-use, as opposed to Microsoft/Ximian ease of use, is not about wiping the user's ass, it's about not kicking it. It is possible to have a simple interface that gets increasingly detailed as you go down. This is the principle called "Progressive disclosure". Unfortunately, ximian seems to be unfamiliar with this idea. Not that I am singling ximian out, because KDE is equally unfamiliar with this idea, except in the opposite way (the user encounters way too many controls/options initially. Clutters up the user's screen and mental bandwidth). UI design foibles that tend to screw the user over in the name of wiping their ass are the trademark of Microsoft. Since Ximian is heavily influenced by what microsoft does, it is not really surprising that such a mistake like the lack of custom package selection was made. The manual installer was another badly botched design. Last time I used the graphical installer to try and manually install from stuff I downloaded (I now just build from tarballs), it didn't work. There was no, real apparent way achieve a local install, despite the fact that an option for such a thing was listed in the installer. Being a geek, I got around this. But not everyone's grandmother is a geek. I'm not really trying to criticize Ximian too harshly. I'm just saying that I'm better at designing UI's than they are. Maybe I should apply for a job there, since obviously no one at Ximian is doing their's. Ximian sure has many talented programmers who are technically competant. I'm sure they know everything about corba, every gtk/gtk/glib/xlib API call by heart. But from what I've seen of their software, they don't have a single person on board who knows a damned thing about usability design. I have seen UI design mistake after design mistake repeated again and again with each new Ximian/Helix download. Miguel might give a good talk about making computers easy to use, but so far he hasn't been able to back up his words with action. Of course, debating this whole thing in a flame war on slashdot is pointless, since time doing that will take me away from my main GNOME activity, which is fixing Ximian's numerous UI idiocies and releasing the modified code in a forked version of GNOME. I'm sorry I wasted this much time already. too bad moderation ends here (Score:4) Re:Sigh... even communism fails (Score:2) All of these possibilities you list are correct, but they're not reasons for writing the software so much as the people who do so. They do it because they enjoy it, or because they believe information should be free (no one believes in the philosophy that "The project team are the user"). Try running Linux or a BSD sometime and see that free stuff isn't so rare, and that by installing it yourself you make it even less rare. "I may not have morals, but I have standards." Re:Drivel (Score:2) That was after. When I started using KDE beta 2 (I think gnome didn't exist at that time), the license was "free for non-commercial use", meaning that you couldn't even use KDE commercially. I think they allowed free (beer) use for free (speech) software around version 1.0, but I'm not sure. Re:Drivel (Score:4) If nobody had complained and the gnome project had not been started, we'd be in a really strange position now, with the only major Linux Desktop being excluded from companies. KDE does that and more... (Score:2) 2. KSpread [kde.org] 3. Aethera [thekompany.com] 4. KDE PIM [kde.com] 5. Kapital [kde.com] 6. KDevelop [kdevelop.org] and Kylix [borland.com] (Delphi for Linux. You have to here my Delphi-mad housemate ranting about how great this is...) 7. KMatplot [sourceforge.net] 8. Licq [licq.org] 9. LOTS more that I don't have time to type, however [kde.com] will show you. There's KIllustrator [uni-magdeburg.de] (photo-editing), Konqueror [konqueror.org] and Mozilla [linuxtoday.com] (web browsing, HTML editing etc), and again a good many others [kde.com]. Oh, and anti-aliased fonts [sourceforge.net] are very very nice, but that's just a bonus of a superior toolkit... Re:You're missing the point (Score:2) GNOME seems to be almost always playing a game of catchup with KDE.... Moderate me down, it's my opinion. Opinion is worth loss of useless karma... Re:If you want financial information about the FSF (Score:5) Yes, that was my blackjack winnings. John Carmack Don't kid yourself (Score:5) Gnome is controlled -- c'mon, don't kid yourself -- by two companies. Ximian and Eazel have exactly as much control over GNOME as IBM used to have over the PC market. There was a day, years ago, where IBM was the undisputed leader in the PC market. PCs were called "IBM PC compatibles" or "IBM clones". Everyone waited for IBM to come out with a new PC, and then carefully copied it in their own PCs. All that changed when IBM did two things: 0) they tried to get everyone to buy in on a platform completely controlled by IBM (the Microchannel Architecture or MCA; IBM had patents giving it full ownership of MCA) and 1) they delayed months without releasing a PC based on the Intel 386. Another company (Compaq) took the bold step of releasing a 386-based PC before IBM did, and the rest is history: IBM never got the leadership position back. These days IBM is just another vendor in the PC market. The situation with GNOME is similar. Ximian and Eazel can lead, and everyone will follow. But if the day ever comes that these companies try to lock people in to a proprietary solution, or if they stop releasing new stuff, then they will lose their leadership position. Others will pick up the development and run with it. In the case of PCs, it was free-market competition that prevented IBM from forcing the industry to follow its lead. In the case of GNOME, it is the GNU public license and the public release of the source code that prevents Ximian and Eazel from forcing the free software community to follow their lead. The free software license is important, even if Mr. Powell doesn't seem to understand it. Ximian and Eazel have control of GNOME for exactly as long as they deserve it. We can and will take it away from them if we ever need to. And that is why his article is ultimately pointless. Eazel and Ximian and the FSF and RMS could all be abducted by aliens tomorrow, and GNOME will still survive and prosper. Mr. Powell can sling his gossip and innuendo, but he's kidding himself if he thinks any of it really matters. P.S. I am somewhat on the same page with him about the cash donations. The idea of trying to donate cash in a way that keeps the money from going to creditors seems odd, perhaps even immoral. And what good will it do to contribute money to the Eazel company if it will go bankrupt for not paying its creditors? steveha Facts OK, interpretation bad (Score:3) Dennis Powell's consistent inflammatory rhetoric and constant GNOME-slagging are not an indication thathe is a troll. His consistently deliberate misinterpretation of the facts is the indication that he is a troll.He says: It is absolutely undeniable that the FSF has thrown its support behind a desktop controlled by two for-profit companies, one of which has an officer who sits on the FSF's board; He, and you, ignore the fact that the order was like this: First, Miguel started GNOME, which won the support of the FSF because it did not have the same restrictive license (restricive for DEVELOPERS, mind you-- the whole Free Software thing isn't about users having software that is free-for-use but developers having access to, and use of, platforms on which to develop their software, free of charge and free of interference from corporate entities like TrollTech.) Only later did Miguel start a company. DEP implies favoritism and nepotism where it does not exist. This company, Ximian, does not control GNOME, certainly not in the way Microsoft controls Windows, or the way TrollTech controls QT, or the way TheKompany controls Aethera. Ximian is certainly a major leader in GNOME, in the way that, say, HP is a leader in the PC-sales field, but we're competing in an open playing field that no one controls. And we're certainly not controlling GNOME, charging people to develop for GNOME, or anything of the sort. My personal opinions, of course. a. NOT just packaging (Score:2) Arguably though there should be no need for them. KDE manages to get by without needing someone to add polish later on. Perhaps GNOME should ask themselves why they can't do the same. I suspect the reason is that the KDE folks have never had some company to clean up after them so they're a lot more careful about the overall usability and style of the thing. what about qt? (Score:2) how much control, developing under qt, do you have over whether or not you project uses qt? if there is no clean answer to this question, the author of the article is a very, very confused person. touching qt requires that you either open source your project or pay trolltech $1550 per developer -- more than twice the cost of visual c++ pro and w2k combined. Gnome/FSF only require that you free your software if you modify GPL software...that is, system libs are lgpl, which lets oracle, netscape and a few others play in the linux sandbox. naturally, the fsf would prefer all software be free...but many people don't understand the true costs and collaboration between the KDE and QT people. Re:Provide Binaries (Score:2) Even that is a little too hard in my opinion. I remember when the nautilus (early) beta was out and one of my friends was interviewing there. She wanted to install nautilus on a linux box and play with it so she could talk intelligently about it at the interview. I remember being on the phone with her for an hour as we went through "Oh error message, eh? How about if you try rpm --force-something-or-the-other", or "Why not download all those rpms and go rpm -ivh *" or "Let's try to get rid of old rpms on your machine and try to reinstall". And so on and on, and it really did take an hour to get it installed. Now I understand that the nautilus install process and the gnome install process have come a long way since, but they still seem to be overly complicated. We need to make it as easy as Windows: click link - click execute from remote location in dialog - click yes on security warning - click next on a few boxes and we're off to the races (no reboot remember). Anything that involves an xterm or a shell is too complicated. period. If you want financial information about the FSF... (Score:5) There some interesting stuff there, esp. in the Form 990's: - No one seems to draw a salary - In '97, id software donated about $19K to the FSF, which was over $3K more than Red Hat did. (Is that the year Carmack won big at gambling & donated the proceeds?) All in all, seems like it's a pretty low budget organization. Congratulations to the author (Score:2) Re:"information wants to be free" (Score:2) Kinda like the "You can't put the crap back in the dog" law. Actually, my dog eats her own crap all the time. Summary (Score:2) Executive Summary The author, DEP, wants good code gratis. He expects for-profit organisations to offer code for free, and non-profit orgs to do the same. The author believes that the FSF should not co-operate with for-profit orgs, despite the GPL being quite clear on its stance regarding money. In more detail: Issue: Freedom RMS: Free as in "speech" DEP: I want Free as in "beer" DEP: Eazel are a for-profit company asking for money! This is ludicrous! KDE are a non-profit org who are not asking for money! READER: Err... point being? DEP: These FSF idiots don't understand the word "Free" DEP: KDE is Free by my definition; Eazel less so. Therefore the FSF shouldn't support Eazel. Issue: GPL DEP: And it would make an interesting case, GPLed code as an asset in a bankruptcy proceeding, wouldn't it? There is something symmetrically ironic about the GPL's first court test taking place in that milieu. ... READER: And the value of WordStar without Broderbund support? The value of PowerPoint without a Microsoft? The value of Doom just because it's been around a few years? DEP: Oh. Erm, well From the logic of these arguments, I can only surmise that the author (DEP) is, or expects to be, on trial for some serious crime, and is generating evidence prior to making an insanity plea. #include <stddiscl.h> Re:Provide Binaries (Score:2) Folks already have! That's the concept that drives the FreeBSD ports collection. There's even a GUI interface called "pib" that can handle your pointing and clicking needs. For the unenlightened on the subject, FreeBSD includes a directory stucture under cd Inside of this directory is a couple of small files. One is a script that has all the information needed to download the application, uncompress, configure, compile, and install it. A couple of other files in there are used to describe what the application is, so that GUI based clients can tell a user what it is they're installing. There's also a file that lists what all is going to get installed, and where so you can pretty accurately uninstall this thing. These scripts are capable of checking for file dependencies, like actually fixing them, not just spitting out bogus error messages. To install that example above of Mozilla, all you need to type is... make install Best of all, this tree is updated with a fairly simple script through CVS. Since we're only talking small text files, this is pretty darn quick to update even across a modem. There's over 4000 applications in that tree today, with more being added all the time. Whether you like FreeBSD or not, this is a really wonderful system that's been worked out over on this. No, it's not perfect as I'm sure some replies will mention, but the concept behind it is sound. Instead of having 1000's of people trying to figure out how to compile something as massive and complex as a Gnome or KDE, one person gets it all working proper for this OS and submits it in. Wasn't that the point of Open Source in the first place? $13-million dollar file manager (Score:2) heck .. Microsoft did that and more! Re:Don't kid yourself (Score:2) Think of it as a way to undo some of the harm the Slashdot, Gnotices and Powell trolls have done :). CmdTaco please stop doing drugs at work (Score:2) Re:I've had it (Score:2) How to make $$$ (Score:3) Impossible you say? No! We just have to monopolize the support area. Here's the idea: each of us (the 31337 unix admin/coders/users) enters in a "business partnership" with our favorite support-based company (SBC), i.e. Eazel, RedHat, etc. We agree to forbid ourselves from answering tech support questions online, i.e. in #linux on IRC, usenet, etc.. Instead, we redirect the luser who has a question to our personal address at our SBC where the luser can find out his answer--for a small fee (micropayment). Then, the resulting pool of money is collected and divided between us and our SBCs. It's a win win! Help the economy! Help yourself! Don't compromise your software's freedom! IRC Example: #linux Bob: Hi I'm bob I new to linux help me set up my isa winmodem [silence ensues due to all on the channel being bound by agreements] Cardhore: Okay Bob I'll help you. Bob: Okay thanks. My modem is not working in the redhat...how do i make driver for it? Cardhore: Well, I happen to have the answer right here: ? q=winmodem [bob goes to the url] Bob: YOU ASS HOLE I'M NOT PAYING $40 TO LEARN HOW TO DO THAT. Bob leaves. [twenty minutes pass] Bob has entered #linux. Bob: Cardhore...so are you still up for that offer? Success!! Bob successfuly gets his modem working, Cardhore makes $$$, and RedHat pleases its shareholders! For those who don't like "corporate" GNOME (Score:5) The project intends to provide binaries for most platforms so that you don't have to compile them yourself. Its binaries will also be un-branded--there will be no Eazel or Ximian logos, features, etc. Also, just because someone can compile GNOME himself, it doesn't mean that he wants to. In fact, on moderate hardware it will take about two days to compile this. Experienced power users don't necessarily have time to waste on this. From the article: ..where information wants to be free so long as it's other people's information. Do people who believe this agree with it when their personal information is free? Re:Flamebait but... (Score:3) On the other hand, they did hire a *lot* of developers. From the numbers thrown around in the different articles, it sounds like pre-layoffs they had over 30 paid developers, maybe more. And their services development can't really require the ongoing services of 10 developers, can it? Online disk space? Like I said, Konqueror has 1 paid dev. To rephrase, a company that only makes a file browser should not have blown through $13 million before releasing 1.0. Unsettling MOTD at my ISP. Flamebait but... (Score:5) Second, yeah, this is raw flamebait. But the RMS apologists always justify him by saying, "Sure, he's a vindictive nut. But we need people like that!" This is kind of a counterweight. Third, the "..the monkey chased the Eazel" stuff did make me laugh. Fourth (I only planned first and second when I started this), it really is remarkable how Eazel managed to blow through $13 million on a file browser. All of KDE 1 and 2, even including Qt, didn't cost that much or require that many paid developers. By comparison, Konqueror has one paid developer, David Faure. (Who admittedly is really, really good.) Yes, there are some TrollTech people working on khtml, but since Nautilus uses Gecko, they don't count for this comparison. Fifth,the reasoning by which the FSF gets dragged into this is pretty shaky. There's no real reason to think they're getting involved with Eazel. On the other hand, Powell is right that the Gnome leaders have committed to having companies drive their project and they'll have to live with the results. I'll throw in a sixth and preemptively point out to the people who always invoke the Kompany here that the role of the Kompany in KDE is completely unlike what Eazel and Ximian do in Gnome. The Kompany is not involved at all in core KDE development or planning and does not attempt to rebrand the desktop. Unsettling MOTD at my ISP. Boring and Pointless (Score:3) The writer spends a lot of energy blasting companies, that for the most part don't actually ship much in the way of products (previews, stuff like that don't count) and certainly dont earn much money, he then spends a lot of energy attacking FSF and doing his best not to kiss Stallmans ass, only to demand to know whats going on. Well, i got some advice for this writer. Shut the fuck up. If the FSF is full of shit (at it is, IMHO) and these companies might go out of business, then fuck em. Use your copy of Red Hat, download Eazel, don't download Eazel, whatever. They dont have to answer to you, just like Muslims dont have to answer to the Pope. Sorry for the troll, but even from my Mac using point of view, this guy is an idiot. Re:Boring and Pointless (Score:4) Now that I'm not trying to get a post early and make it end up sounding like an idiot on steroids wrote it, I'll address this. You're right, the FSF does have a "holier-than-thou attitude", to that I say: So what if they do? Does it make them any more correct or incorrect? Let me let you in on a little secret I learned years ago: everybody has a "holier-than-thou attitude", everybody does. I do, when I say the FSF is full of shit, I mean exactly that, they are full of shit. I don't like their made up definition of free, I don't like how Stallman comes off sounding like an asshole whenever someone says Open Source near him, I don't like how they view closed source with the same level of dogma that a Religious Fanatic would view an infidel. That my friend, is whats called a holier than thout attitude. It's also called being sure of your opinions. Big deal. If you want to attack the FSF, attack them for concrete things, like disputing their definition of free and giving an alternative definition. That's called a dialog. You're also right, it is "offensive (that) anyone (pissed) away $13 million", especially since the average life span of a person in Africa is now 40 years due to famine and HIV. But it isn't my money. I wasn't the idiot who gave it too them, and hopefully neither were you. Hopefully the Investment Capitalist who gave them that money have learned their lesson and will spend it wisely next time. OS/2 and Eazel ?? (Score:5) And the author's comment but it's no goofier than seven or eight years ago, when people who called themselves "Team OS/2" gave up evenings and weekends in unpaid volunteer support to be especially curious. Isn't this what open-source software is about ? Isn't this what we do when we post an answer to a question on Usenet, or on a bulletin board ? Isn't it what we do when we discuss things here ? The actions of Team OS/2 are no less "goofy" then open-sourcing software. How are these companies going to survive? (Score:3) thank you CmdrTaco (Score:4) Re:whats going on? (Score:4) I've had the same problems. Apparantly Slashdot has been Slashdotted. Perhaps you should Ask Slashdot [slashdot.org] about this. 13 MIL? (Score:5) I guess ferraris must be standard programming equipment nowadays. Otherwise I can't figure out how they would spend 13 million on making a file manager.
http://slashdot.org/story/01/05/02/1919238/on-the-subject-of-ximian-and-eazel?sdsrc=next
CC-MAIN-2014-52
refinedweb
9,812
72.05
Why I love learning functional programming August 1st, 2020 This is the first part of a series on my journey in learning functional programming (FP). In this first part, I'd like to share why I spend time on learning functional programming in the first place. At work, I mostly write imperative code and I still haven't written purely functional production software. However, I still spend time learning it every now and then, and here's why. It brings math to programming The first reason I like functional programming is that it brings math back to programming. At the university, I minored in math. I'll probably never have any practical use to the courses in topology, differential geometry or group theory, but I none of those courses were a waste of time. They all taught the power of abstraction, how to find and see the big concepts underlying seemingly unrelated problems. In functional programming, you encounter abstractions like functors and monads all the time. Functional programming has roots deep in category theory, a branch of mathematics studying objects and their relationships. Category theory tells us, for example, that monad is just a monoid in the category of endofunctors. What the heck do those words even mean? I have no idea, but I must find out! I've been learning category theory from the wonderful Category Theory for Programmers blog posts. They're are an easy and accessible way to access category theory. Maybe some day I'll be able to pick up a serious textbook on category theory! It forces you to think differently My second reason to learning functional programming is that it forces me to think differently. Putting aside playing with Basic in the 90s, I first learned programming at the university in Java and C. Programs were written using if-clauses and for-loops. Data was modified in-place with functions or method calls returning nothing. If-clauses, for-loops and in-place mutations are easy for us humans to understand, because that's how we intuitively process data. If you're given a list of N skills that you need to learn unless I already know the skill, here's the algorithm: - Set i=1 - Take the i'th skill from the list - Check if you know the skill. If you don't, learn the skill. - If i=N, exit. Otherwise, set i = i+1and go to 1. This is an imperative program, with one command after another modifying the program state (your skills). To us, world seems to be made of mutable objects. That's how computers also work, one statement after another modifying the program state. Now, imagine you're told you need to write code for a program without a single if-clause or for-loop. You are also forbidden to mutate objects. What you're allowed to do is create new objects and write pure or referentially transparent functions. Referential transparency means that a function call can be replaced by its return value without any change in the program. For example, this function is not referentially transparent: def square(x): print(f"Computing the square of {x}") return x*x You can't replace square(x) with x*x and expect the program to remain unchanged. It goes without saying that such constraints force you to think differently about writing code. To me, that's a very good thing. Recently I've been writing code mostly in Python and JavaScript. While I love both languages for their flexibility and simple syntax, and there's always something new to learn in both of them, I don't think they offer that many chances for learning new concepts. Last time I learned something genuinely new about Python was when we wrote a command-line tool making heavy use of asyncio or when I had to understand generics in the typing module. Most of the time, the code consists of the same if-clauses and for-loops, possibly in some new framework. With functional programming, programs will inevitably look different. Are they better? That's an ill-posed question, as there's no one best code for a particular task. It depends on factors like with whom you work and who will maintain the code. But I do think writing functional programs teaches me something fundamentally new about computing, and the more you know, the more likely it is that you can pick the best approach when new problems emerge. Of course, my employer most likely wouldn't appreciate me spending the whole morning figuring out how to make a HTTP call or spending the morning explaing my colleagues how data type Maybe replaces if. That's one reason why FP is mostly a hobby to me at the moment. For me to be truly productive in writing purely functional programs, I would need to be surrounded by colleagues supporting me, with a team where knowledge about solving problems in a functional way would spread. In such a team, the cost of learning new concepts would also be lower as those new concepts might improve everybody's code base. Above I referred to imperative programming as the "non-functional" way of coding. To see how that's not true, here's one excerpt of Scala code from the Functional Programming in Scala book ("the red book"): val factorialREPL: IO[Unit] = sequence_( IO { println(helpstring) }, doWhile { IO { readline } } { line => when (line != "q") { for { n <- factorial(line.toInt) _ <- IO { println("factorial: " + n) } } } yield () } ) That's a purely functional program written in imperative fashion. Why's there a for-loop? It's Scala's syntactic sugar for composing functions such as map, filter and flatMap. FP is a logical conclusion to many ideas considered good programming style The last reason to learn FP is that I think it pushes the boundaries of many ideas considered good programming style. My first touch to functional programming came from attending lectures in functional programming at CMU, when I was a visiting researcher there. I attended maybe six lectures, where the lecturer wrote formal proofs showing that given recursive function calls would terminate with the expected result. It all seemed very theoretical to me and I thought I would not meet FP again. However, as soon as I started in my first programming job, I was introduced to FP as more experienced programmers told me to avoid writing code with implicit side effects and mutable state where possible. I didn't understand at the time that the ideas had anything to do with FP, but I can see now how many such ideas are built-in to FP. As an example of how FP can help write cleaner code, let's say you have a function like this: const containsFinnishLapphund: (jpegBase64: String) => boolean = ... It checks if an image contains a Finnish lapphund. The signature says the function takes a base64 encoded string and returns a boolean. Based on the signature, I expect this function to not have implicit side effects. Therefore, I can safely call the function for 100 images in parallel without worrying, for example, about race conditions, deadlocks or hitting rate limits of external APIs. The key here is the word implicit. In the context of my TypeScript codebase, I do not mind if the function prints to console: my code would most likely already be interspersed with such logging statements. However, I would be very surprised if calling the function incremented a database counter or stored the image to Google storage. Such surprises could lead to hard-to-find bugs, let alone make testing a pain. In non-functional languages, it's the developer's responsibility to write code that is not surprising. In Haskell, however, a type signature such as containsFinnishLapphund :: String -> Bool would make it impossible for the implementation to have observable side effects such as storing the image somewhere. If the function insisted on making a network call or logging to console, it would need a type signature containsFinnishLapphund :: String -> IO Bool The IO typeclass here makes it explicit that the function is doing something with the external world. What does it do? For that, you'll need to read the code or trust the function docstring saying it doesn't do anything other than print to console. But at least, it's not a surprise anymore. Another example of an "FP idea" considered good programming style nowadays is declarative style. For example, most programmers would nowadays agree that to remove even elements from an array and square the rest, this const double = arr => arr.filter(v => v % 2 === 0).map(v => v * v) is preferred to this: const double = arr => { const newArr = [] for (const i = 0; i++; i < arr.length) { if (arr[i] % 2 === 0) { newArr.push(arr[i] * arr[i]) } } return newArr } In functional languages, the former would be the default way of solving the problem. Again, this doesn't mean declarative style is better than imperative, but it does show that declarative style has its pros. In FP, the declarative style can be pushed even further with function composition and point-free style: square :: Int -> Int square num = num * num isEven :: Int -> Bool isEven n = n `mod` 2 == 0 double :: [Int] -> [Int] double = map square . filter isEven To me, code like this is elegant and beautiful. While function composition and point-free style take time to get used to, I find it worth the effort. Conclusion That concludes the first part of the series. I love learning functional programming because it gives me reason to read math again, is forces me to think differently, and it pushes the boundaries of good programming style. Thanks for reading, please leave a comment if you have any!
https://kimmosaaskilahti.fi/blog/2020-08-01-why-i-love-learning-fp/
CC-MAIN-2022-40
refinedweb
1,624
62.17
A Beginners Guide to Titan Framework: How Titan Works This is the third article of the series and so far, I have discussed the importance and features of Titan Framework along with the basic setup. In the very first article, I discussed the three-step setup of Titan Framework which goes like this: - Set up your project. - Create options. - Get values. I explained the first step in the previous article, in which we learned that Titan Framework is a plug and play framework, i.e. it comes as a plugin, although it can also be integrated by embedding it within your web development project. So, let's continue from where I left off and get on with the next two steps. Here, I'll explain how and in what capacity Titan Framework works in your web development project. Then I'll jump to the concept of creating Instances and Options from which I will get the saved Values at the front-end. So, let's begin! 1. Set Up Your Project First of all, let's find out what you'll need to implement today's tutorial. We are going to create a simple WordPress theme with which we will use Titan Framework to explore the different set of options it can create. You'll need: - Local WordPress setup, e.g. I use DesktopServer (believe me, it's amazing!). - A base WordPress theme—I am going to use Neat for that purpose. I have created a new branch called Neat: TitanFramework for this tutorial. My Setup As I am going to use Neat WordPress theme, it's important that I explain its structure first. Open the theme folder in your text editor where inside the assets directory I've created a new folder named admin. Its purpose is to handle all the relevant code for admin options. Within it is another directory, titanframework, and a PHP file, admin-init.php. admin-init.php File This PHP file will handle all the admin-related activity. If you scroll through its code you'll find out that I've used the get_template_directory() function to include four files from the titanframework directory. The following code is pretty self-explanatory, but I will explain the purpose of each of these files in a short while. Here is the code for admin-init.php: <?php /** * Admin related initializations */ /** * Titan Framework Required to be installed * * It adds Titan Framework as a plugin * * */ if ( file_exists( get_template_directory() .'/assets/admin/titanframework/titan-framework-checker.php') ) { require_once( get_template_directory() .'/assets/admin/titanframework/titan-framework-checker.php' ); } /** * Create an options via Titan Framework * * */ // Admin panel options. if ( file_exists( get_template_directory() .'/assets/admin/titanframework/adminpanel-options-init.php') ) { require_once( get_template_directory() .'/assets/admin/titanframework/adminpanel-options-init.php' ); } // Metabox options. if ( file_exists( get_template_directory() .'/assets/admin/titanframework/metabox-options-init.php') ) { require_once( get_template_directory() .'/assets/admin/titanframework/metabox-options-init.php' ); } // Customizer options. if ( file_exists( get_template_directory() .'/assets/admin/titanframework/customizer-options-init.php') ) { require_once( get_template_directory() .'/assets/admin/titanframework/customizer-options-init.php' ); } Directory Called titanframework I have discussed previously that Titan Framework helps you to create Admin Panels and Tabs, Metaboxes and Theme Customizer sections and panels. So I have created separate files for each one of them. Obviously they can be created with separate lines of code. I'll be discussing each of these in my upcoming articles, but for now all you need to understand is what are these files for. titan-framework-checker.php: is responsible for including Titan Framework in your theme. adminpanel-options-init.php: contains the code for creating admin panel and tabs with a set of options. metabox-options-init.php: contains the code for creating metaboxes for post types with a set of options in them. customizer-options-init.php: contains the code for creating theme customizer panels and sections with a set of options. Including the admin-init.php File Up to now you must be wondering why I've created so many new files. Why didn't I just add all the code in the functions.php file? Well, I don't consider it to be a good architecture design approach. To build a maintainable product, you need to define a good design pattern. What's the point of messing up your functions.php file with so many lines of code? I have experienced this myself: towards the end of any development project, the code becomes so massive that it gets difficult to handle all of it in a single file, especially when it comes to debugging and fixing any errors. It's always better to create separate files, so think of these files as modules. Let's include admin-init.php in our functions.php file. <?php /** * Include admin-init.php * * File responsible for all admin relevant activity E.g. Settings & Metaboxes etc. */ if ( file_exists( get_template_directory() .'/assets/admin/admin-init.php') ) { require_once( get_template_directory() .'/assets/admin/admin-init.php' ); } Here I have added the admin-init.php file via the same get_template_directory() and require_once() functions. At this point, we've taken a look at the basic setup of a theme which I am going to use to explain how Titan Framework works. We've completed the setup and embedded Titan Framework in our WordPress theme. Let's create an admin panel with options and get the values for the front end. Working with Titan Framework To work with Titan Framework you need to follow a certain workflow which is: - Create a Titan Framework Instance per file. - Create Admin Panel/Tabs, Metaboxes or Theme Customizer Sections/Panels. - Create Options in them. - Get the Values. Let me first write the piece of code which I am going to use for this very purpose. This is the code for the adminpanels-options-init.php file which is present inside the assets/admin/titanframwork/ directory. <?php /** * Creating admin panel options via Titan Framework. * * Getting started: * Admin Panel: * Admin Tabs: * Options: * Getting Option Values: */ /** * `tf_create_options` is the hook used to create options. */ add_action( 'tf_create_options', 'aa_options_creating_function' ); function aa_options_creating_function() { // Initialize Titan with your theme name. $titan = TitanFramework::getInstance( 'neat' ); /** * First Admin panel. */ /** * Create admin panel options page called `$aa_panel`. * * This is a first admin panel and is called by its name i.e. `$aa_panel`. */ $aa_panel = $titan->createAdminPanel( array( 'name' => 'Neat Options' // Name the options menu ) ); /** * Create the options. * * Now we will create options for our panel that we just created called `$aa_panel`. */ $aa_panel->createOption( array( 'id' => 'aa_txt', // The ID which will be used to get the value of this option. 'type' => 'text', // Type of option we are creating. 'name' => 'My Text Option', // Name of the option which will be displayed in the admin panel. 'desc' => 'This is our option' // Description of the option which will be displayed in the admin panel. ) ); /** * Save button for options. * * When creating admin panel options, use this code to add an option "Save" button * since there is no other way for user to save the options.Your users can now save * (and reset) the options we just created. */ $aa_panel->createOption( array( 'type' => 'save' ) ); } At the very beginning I have added a few helping links from the documentation of Titan Framework. Now I am going to explain this code line by line. Line 17 Here we have a hook called tf_create_options, which is used to create options via Titan Framework using the aa_options_creating_function() function. Lines 19 We have created a function called aa_options_creating_function(), which will be responsible for creating these options. Line 22 At line 22, I have created an instance for Titan Framework. Creating an instance is an integral part of this framework, and it must be created in each file wherever we need to interact with Titan Framework. To make your instance unique, you can add your product's name in it. For example, I have added 'neat' as a parameter. Get an Instance of Titan Framework Creating an instance of Titan Framework is pretty simple. We get a unique instance to avoid any confusion just in case another plugin is using Titan Framework to create options. The author states that: The getInstance function creates/gets a unique instance of Titan Framework specific to “mytheme”. This is the namespacewe’re going to use to ensure that our settings won’t conflict with other plugins that use Titan Framework. Be sure to change it to your theme or plugin name. Here is a code example of getting an instance with Titan. $titan = TitanFramework::getInstance( 'my-theme' ); In the case of my Neat theme I will use the parameter of neat instead of my-theme to make my code unique, i.e. $titan = TitanFramework::getInstance( 'neat' ); Creating an Admin Panel; Lines 33–35 These lines will create an admin panel which I have named as "$aa_panel". Titan Framework helps to create sections like admin panel, admin tabs, metaboxes and theme customizer panels within your project. But for now I will only create an admin panel as an example to explain things. These lines of code are calling our createAdminPanel() function in Titan Framework which forms an array. This function will add a new section in your WordPress dashboard named Neat Options. The above image is a screenshot of the WordPress dashboard where you can find a new section being added in the Admin Panel. Just to summarize what I have done so far: I have set up my web developmental project, and then I added an instance to it, after which I created an Admin Panel. Right at this point, if I click the Neat Options button, then this section is empty. So, now I will create options within this newly created Admin Panel. Creating Options in Titan Framework Customizable WordPress themes are preferred because the end users mostly want to configure themes without writing a single line of code. This is made possible by adding flexible options during theme development. We can add such options in a separate admin panel, or in form of metaboxes or else options panels inside theme customizer. Options serve the purpose of storing the values which are later retrieved at the front-end. Lines 42–47 Now take a look at these lines of code. These will be used to create options within an admin panel or tab. Line 42 defines the createOption() function for the $aa_panel. This function is once again an array which bears certain parameters like id, type, name, description, etc. According to these lines I have created an option which is a text field and is named My Text Option. This is the screenshot which displays the option created within the Neat Options panel. Lines 56–58 The last two lines of the code create another option within this panel. But its purpose is to save the settings. For example, in the My Text Option field, the user fills it with John Doe. This means that the user wants to change the default setting, which is only possible if the user saves the custom settings. So, I again used the createOption() function and assigned the value of parameter "type = save". This is the final screenshot of the development which I have done so far. At this point, you have Titan Framework all set up, you have created a few options to get dynamic results, and now all that's left to do is get the values from the options you set up in the first place. Out of the three-step tagline, I've discussed the first two in the previous two articles. So, let's now get to the last and indeed the most important part of how Titan Framework works. Getting Values Creating options is done at the back-end, and now we need to retrieve the values against those options, set by an end user, to use them at the front-end. We can retrieve values set against the options via a simple function, i.e. getOption(). The following is the basic structure of code to retrieve the saved values: <?php function myFunction() { $titan = TitanFramework::getInstance( 'my-theme' ); $myTextOption = $titan->getOption( 'my_text_option' ); // Do stuff here. } So, I created a function called myFunction where I first registered an instance of Titan. Registering an instance is an important step, because it gets the object created by Titan Framework for your settings registered in a variable, i.e. $titan. You can see that our instance is specific to our theme, i.e. my-theme, which is supposed to be our theme name or anything unique. Retrieving Values at the Front-End Let's use the values set against the options at the front-end. I have created a blank custom page template. If you refer to the Neat theme folder, you will find a file called aa_titanframework.php in the root. You can do the same with your theme as well. Create a new file in your text editor and copy and paste the following lines of code. <?php /* Template Name: Titan Framework */ get_header(); /** * First Admin panel. */ // We will initialize $titan only once for every file where we use it. $titan = TitanFramework::getInstance( 'neat' ); $aa_txt_val = $titan->getOption( 'aa_txt' ); ?> <div class="aa_wrap"> <h1><?php the_title( ); ?></h1> <div class="aa_content"> <?php /** * First Admin panel options. */ // Print the value saved in `aa_txt` option. echo $aa_txt_val; // Let's use this value in HTML. ?> <h3><?php echo $aa_txt_val; ?></h3> </div> </div> <?php //get_sidebar(); get_footer(); ?> Before I explain the above-mentioned code, do refer to the code of my previous article where I created an Admin Panel and its options, because I am using the same names, IDs, etc., here as well. The first four lines of the code are for WordPress to register this custom page template, which is pretty standard—no rocket science there. Getting Option Values I will get the values of the options which I created in the adminpanel-options-init.php file (refer for its code to my previous article) here. Two steps are needed to achieve this: - Get a unique Titan Framework instance and save it to a variable. - Get the value via ID using the getOption()function. Line 12 Following the first step, I initialized a unique instance against the variable $titan, only once for every file in which I use it. My instance is unique since I have named it neat, i.e. the package name for my theme—you can name it anything unique. It is necessary so that if a plugin is using Titan and your theme is also using it, then there should be a way to differentiate between options set by that plugin and your theme. $titan = TitanFramework::getInstance( 'neat' ); Line 14 The second step is to make use of the ID and get the saved value for that option. The code for this line is: $aa_txt_val = $titan->getOption( 'aa_txt' ); I retrieved the value for aa_txt which is saved against the variable $aa_txt_val. The aa_txt parameter refers to the ID of the option which I created within my first Admin Panel (refer to my previous article). So, far I've set up the basic structure to get the saved values. Now let's use the saved values at the front-end. Lines 22–35 These lines of code are used to display the saved values on the front-end. Take a look at line 29 where I used the echo command to get the output. The same is done on line 35, but this time I'm displaying the output for $aa_txt_val in an H3 (heading 3) format. So now whatever value the end user sets for this option, it will be displayed at the front-end. Results In order to display the results for the code which I have explained above, follow these steps: - Go to your WordPress dashboard - Create a New Page via Pages > Add New. - Name the page Titan Framework (optional, you'd know that) The above screenshot shows the page which I have created. At the same time, you can also find the new Admin Panel Menu, i.e. Neat Options, where we created the options. Next choose the Page Template, i.e. Titan Framework, for this page before you publish it. The aa_titanframework.php file adds a new page Template called the "Titan Framework" which appears in the drop-down list. Choose that template. - Publish the page. - Next go to the Neat Options menu and add some value against the option named My Text Option. The image shows that I have filled this field with AA-Text-Option and then I clicked Save Changes. - Go back to the Titan Framework page and view the page. The above image displays the final result. This is the Titan Framework page. The saved option value (i.e. AA-Text-Option) for aa_txt is being displayed in two different forms. The first one is in paragraph format while the second is in h3 format. Conclusion By now you must have some understanding of Titan Framework and its working. This is the basic setup which is to be followed each time you develop something with Titan Framework. Now that you know how to set it up, create a few options and get the saved values; try it out and let me know in case of any queries via comments or reach out to me at Twitter. Next in this series we will explore the set of options we can create with Titan Framework and how to use them. Source: Tuts Plus … [Trackback] […] Read More here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] There you will find 34650 more Infos: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] There you will find 27837 more Infos: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] There you will find 71736 more Infos: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] There you will find 35054 more Infos: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Find More Informations here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Find More Informations here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Read More: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Informations on that Topic: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […] … [Trackback] […] Find More Informations here: designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/ […]
http://designncode.in/a-beginners-guide-to-titan-framework-how-titan-works/
CC-MAIN-2017-43
refinedweb
3,143
57.98
Introducing the new Microsoft.Data.SqlClient Diego This post was written by Vicky Harp, Program Manager on SqlClient and SQL Server Tools. Those of you who have been following .NET development closely have very likely seen Scott Hunter’s latest blog post, .NET Core is the Future of .NET. The change in focus of .NET Framework towards stability and new feature development moving to .NET Core means SQL Server needs to change in order to continue to provide the latest SQL features to .NET developers in the same timely manner that we have done in the past. System.Data.SqlClient is the ADO.NET provider you use to access SQL Server or Azure SQL Databases. Historically SQL has used System.Data.SqlClient in .NET Framework as the starting point for client-side development when proving our new SQL features, and then propagating those designs to other drivers. We would still like to do this going forward but at the same time those same new features should be available in .NET Core, too. Right now, we have two code bases and two different ways SqlClient is delivered to an application. In .NET Framework, versions are installed globally in Windows. In .NET Core, an application can pick a specific SqlClient version and ship with that. Wouldn’t it be nice if the .NET Core model of SqlClient delivery worked for .NET Framework, too? We couldn’t just ship a new package that replaces System.Data.SqlClient. That would conflict with what lives inside .NET Framework now. Which brings us to our chosen solution… Microsoft.Data.SqlClient The Microsoft.Data.SqlClient package, now available in preview on NuGet, will be the flagship data access driver for SQL Server going forward. This new package supports both .NET Core and .NET Framework. Creating a new SqlClient in a new namespace allows both the old System.Data.SqlClient and new Microsoft.Data.SqlClient to live side-by-side. While not automatic, there is a pretty straightforward path for applications to move from the old to the new. Simply add a NuGet dependency on Microsoft.Data.SqlClient and update any using references or qualified references. In keeping with our plans to accelerate feature delivery in this new model, we are happy to offer support for two new SQL Server features on both .NET Framework and .NET Core, along with bug fixes and performance improvements: - Data Classification – Available in Azure SQL Database and Microsoft SQL Server 2019 since CTP 2.0. - UTF-8 support – Available in Microsoft SQL Server SQL Server 2019 since CTP 2.3. Likewise, we have updated the .NET Core version of the provider with the long awaited support for Always Encrypted, including support for Enclaves: - Always Encrypted is available in Microsoft SQL Server 2016 and higher. - Enclave support was introduced in Microsoft SQL Server 2019 CTP 2.0. The binaries in the new package are based on the same code from System.Data.SqlClient in .NET Core and .NET Framework. This means there are multiple binaries in the package. In addition to the different binaries you would expect for supporting different operating systems, there are different binaries when you target .NET Framework versus when you target .NET Core. There was no magic code merge behind the scenes: we still have divergent code bases from .NET Framework and .NET Core (for now). This also means we still have divergent feature support between SqlClient targeting .NET Framework and SqlClient targeting .NET Core. If you want to migrate from .NET Framework to .NET Core but you are blocked because .NET Core does not yet support a feature (aside from Always Encrypted), the first preview release may not change that. But our top priority is bringing feature parity across those targets. What is the roadmap for Microsoft.Data.SqlClient? Our roadmap has more frequent releases lighting up features in Core as fast as we can implement them. The long-term goal is a single code base and it will come to that over time, but the immediate need is feature support in SqlClient on .NET Core, so that is what we are focusing on, while still being able to deliver new SQL features to .NET Framework applications, too. While we do not have dates for the above features, our goal is to have multiple releases throughout 2019. We anticipate Microsoft.Data.SqlClient moving from Preview to general availability sometime prior to the RTM releases of both SQL Server 2019 and .NET Core 3.0. What does this mean for System.Data.SqlClient? It means the development focus has changed. We have no intention of dropping support for System.Data.SqlClient any time soon. It will remain as-is and we will fix important bugs and security issues as they arise. If you have a typical application that doesn’t use any of the newest SQL features, then you will still be well served by a stable and reliable System.Data.SqlClient for many years. However, Microsoft.Data.SqlClient will be the only place we will be implementing new features going forward. We would encourage you to evaluate your needs and options and choose the right time for you to migrate your application or library from System.Data.SqlClient to Microsoft.Data.SqlClient. Closing Please, try the preview bits by installing the Microsoft.Data.SqlClient package. We want to hear from you! Although we haven’t finished preparing the source code for publishing, you can already use the issue tracker at to report any issues. Keep in mind that object-relational mappers such as EF Core, EF 6, or Dapper, and other non-Microsoft libraries, haven’t yet made the transition to the new provider, so you won’t be able to use the new features through any of these libraries. An updated versions of EF Core with support for Microsoft.Data.SqlClient are expected in an upcoming preview. We also encourage you to visit our Frequently Asked Questions and Release Notes pages in our GitHub repository. They contain additional information about the features available, how to get started, and our plans for the release. This post was written by Vicky Harp, Program Manager on SqlClient and SQL Server Tools.
https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/
CC-MAIN-2019-22
refinedweb
1,031
68.36
Need help with spaces I am trying to write a code that deletes numbers from a string and outputs words only. It worked just fine when the input is one word without a space, but what if the input had a space in it, the string will read what is before the space . Therefore I need help with removing numbers from a string with spaces. #include <iostream> #include <ctype.h> using namespace std ; int main (){ int i = 0; char b[i]; string a; cin >> b; while ( b[i]) { } if (isalpha(b[i])) { a = a+b[i]; cout << a; } if (b[i] = ' ') { a = a+b[i]; cout << a; } i++; } } 14 AnswersNew Answer You can use getline() and then convert to a c_string or char array. std::cin stops taking input when encountered by a whitespace character. Use getline() or fgets() to get strings with spaces in them. Hakam I just had a look at your program and are you sure it is the correct program you have shared here. and if it is then how is it even compiling without errors ? Because in its current state there are a some errors in it.( And also the initial size of the input string is 0 ) Arsenic getline() 😉 Hakam <ctype.h> is for C, for C++ we include <cctype> instead, note that no .h extension is used, just the name. #include <cctype> See this for more info about C headers usage in C++ 👇 Arsenic true it had errors, but now thanks to your previous comment, i managed to correct it. Here is my code after the edit. #include <iostream> #include <ctype.h> #include <string> #include <algorithm> using namespace std ; int main (){ int i = 0; char b[i]; string a, s; getline(cin, a); for (i = 0; i < a.length(); i++) { b[i] = a[i]; // i used ur advice here if (isalpha(b[i])) {s = s +b[i];} if (b[i] == ' ') {s = s+ " "; } } reverse(s.begin(), s.end()); cout << s; //i needed to reverse the output here } Ipang i am talking about <ctype.h> in this app's compiler. Whenever i try to include it, an error occurs that says file not found, therefors i had to include <ctype> instead getline() won't work since isalpha() doesn't work with strings, and getline works only with strings @Hakam. strings can be indexed just like char arrays, so is alpha() will work. There are two versions of getline one for char arrays and one for string.. cin.getline(strName, SIZE)....... for c-style string (char arrays) getline(cin, strName)....... for C++ strings. Ipang file not found in this compiler rodwynnejones still not working, I would appreciate it if you would demonstrate in a simple code Hello I don't speak English well. french?yes
https://www.sololearn.com/Discuss/2643409/need-help-with-spaces/
CC-MAIN-2022-21
refinedweb
459
80.31
Hi, first time here Im user of Notepad++ for long time and i set it like that when i select a certain text or paragraph or text and press ALT+P (for instance) that it wraps selection in That can easily be done with a plugin. Select Tools->New Plugin and save this: import sublime, sublime_plugin class WrapTextCommand(sublime_plugin.TextCommand): def run(self, edit): regions = ] for region in self.view.sel(): region = sublime.Region(region.begin(), region.end()) text = self.view.substr(region) self.view.replace(edit, region, "<tag>%s</tag>" % text) regions.append(sublime.Region(region.begin()+1, region.begin()+4)) regions.append(sublime.Region(region.end()+11-4, region.end()+11-1)) self.view.sel().clear() for region in regions: self.view.sel().add(region) Then create a keybinding in your user keybindings: { "keys": "alt+p"], "command": "wrap_text"}, Thank you quarnster, this one makes the switch from gvim to ST2 even easier! Regards,Highend wow, thanks quarnster. Thats almost what i needed! I wouldn't want to be rude but can i define ALT+I for , ALT+S for Sure, no problem. Here's the updated plugin code: import sublime, sublime_plugin class WrapTextCommand(sublime_plugin.TextCommand): def run(self, edit, tag="tag"): regions = ] for region in self.view.sel(): text = self.view.substr(region) self.view.replace(edit, region, "<%s>%s</%s>" % (tag, text, tag)) off = 1 regions.append(sublime.Region(region.begin()+off, region.begin()+off+len(tag))) off += len(tag)+3 regions.append(sublime.Region(region.end()+off, region.end()+off+len(tag))) self.view.sel().clear() for region in regions: self.view.sel().add(region) And then you can add keybindings such as this: { "keys": "alt+p"], "command": "wrap_text"}, { "keys": "ctrl+b"], "command": "wrap_text", "args": {"tag": "b"}}, { "keys": "alt+i"], "command": "wrap_text", "args": {"tag": "img"}}, Great. Sir, would you be interested in making this a official plugin? I have more ideas for development of this one but im kinda ashamed of asking for more as you already did more than i expected. Thanks for that! Sure, I've set up a github repository for the plugin at github.com/quarnster/WrapText, please open up issues there for feature requests to make sure I see them See github.com/quarnster/WrapText#setup for installation instructions. You probably have to delete the old plugin code. I think this is already built in with sublime: ctrl+shift+w That's a good point weslly. The current snippet doesn't do exactly what pentago wants, but using snippets provides for a much better solution to the problem. I'm probably going to delete the plugin soonish, but for anyone ending up here looking for a similar feature to Notepad++'s WebEdit plugin, here is a piece of python code to convert WebEdit commands to Sublime Text 2 key bindings + snippets. YES! Completelly satisfied, this is exactly what i wanted.It'll be a pleasure developing with ST2. Thanks guys, such amount of will to help is rarely seen in other communities. Hats off. Take care,P.
https://forum.sublimetext.com/t/need-a-text-surround-plugin-solution/7373/8
CC-MAIN-2017-22
refinedweb
506
50.12
When trying to create a user interface with Java, one of the most agonising decisions is which Layout Manager to use. Many an application has made use of nested Border and Grid Layouts, but in the end it either becomes too hard and too nested to work with, or the window expanding properties just don't work sensibly. More often than not the solution to this to put up with the mutilated window or set the window to a fixed size. Neither option is good as it makes the application look ugly or restricted in its usage. Fortunately, there is a solution to solve all your problems — GridBagLayout. Unfortunately many see this option as so complex and difficult to learn that they never try. GridBagLayout really can solve almost any interface layout problem and have your windows resize sensibly and more importantly, in the fashion that you wish. All it takes is a little forethought and some patience. The forethought GridBagLayout is not useful for toy interfaces. Using GridBagLayout on an interface you want to slap together in a couple of minutes to see what it looks like is akin to putting up scaffolding in your living room to remove a picture hook. For toy applications you are far better off using BorderLayout and GridLayout, if the application moves beyond toy status then you should switch to GridBagLayout. Of course, beginning with an application that you expect to be non-toy in the beginning is more efficient. Once you have decided on using GridBagLayout the next thing to do is grab a sheet of paper and a pencil, a keystroke should not be struck until you know exactly how the interface is going to look. This means you really should properly plan your application before creating any code. Our example for learning GridBagLayout will be a small application that will display a series of photos from a Flickr RSS feed. The final interface will look like this: Here is the original scribble I did of this interface: As you can see, the final result looks much the same as what was intended. You should be able to see some lines going down and across the intended interface in the mock. These lines are used to break the interface into columns and rows so we know a grid place to position each component. This would be the "Grid" part of GridBagLayout — the numbers around the interface are the grid numbers. In one sense, thinking about GridBagLayout is no different to how table based layouts were thought about in the mean old days of HTML 3 and 4, concepts like rowspan and colspan will come into play albeit with a different name. With our interface and grid set, it is time to layout the interface and get coding. The work For this section I am going to assume that you know about basic window and component creation. The end point we want to reach with this article is the ability to layout components within a frame, in later articles we will improve this interface to make it behave like a proper interface should. Therefore, to understand where we are heading, here is the completed code we are aiming for. import java.awt.*; import java.awt.event.*; import javax.swing.*; public class GridBagWindow extends JFrame { private JButton searchBtn; private JComboBox modeCombo; private JLabel tagLbl; private JLabel tagModeLbl; private JLabel previewLbl; private JTable resTable; private JTextField tagTxt; public GridBagWindow() { Container contentPane = getContentPane(); GridBagLayout gridbag = new GridBagLayout(); contentPane.setLayout(gridbag); GridBagConstraints c = new GridBagConstraints(); //setting a default constraint value c.fill = GridBagConstraints.HORIZONTAL; tagModeLbl = new JLabel("Tag Mode"); c.gridx = 0; c.gridy = 1; gridbag.setConstraints(tagModeLbl, c); contentPane.add(tagModeLbl);); searchBtn = new JButton("Search"); c.gridx = 1; c.gridy = 2; gridbag.setConstraints(searchBtn, c); contentPane.add(searchBtn); resTable = new JTable(5,3); c.gridx = 0; c.gridy = 3; c.gridwidth = 3; gridbag.setConstraints(resTable, c); contentPane.add(resTable); previewLbl = new JLabel("Preview goes here"); c.gridx = 0; c.gridy = 4; gridbag.setConstraints(previewLbl, c); contentPane.add(previewLbl); addWindowListener(new WindowAdapter() { public void windowClosing(WindowEvent e) { System.exit(0); } }); } public static void main(String args[]) { GridBagWindow window = new GridBagWindow(); window.setTitle("GridBagWindow"); window.pack(); window.setVisible(true); } } The opening lines until the constructor should not cause too much grief, they are fairly standard imports and variable creation. When we hit the constructor though, things get interesting: Container contentPane = getContentPanegetContentPane(); GridBagLayout gridbag = new GridBagLayout(); contentPane.setLayout(gridbag); We open with grabbing the GridBagWindow'sGridBagWindow's content pane, creating a GridBagLayout object, exactly the same as how we have created GridLayout and BorderLayout objects in the past and then setting our GridBagLayout object as the content pane's layout. GridBagConstraintsGridBagConstraints c = new GridBagConstraintsGridBagConstraints(); Then we come to the unique object within the whole process, GridBagConstraintsGridBagConstraints. This object controls the constraints placed on all components within the GridBagLayout. To add a component into your GridBagLayout, you must first associate it with a GridBagConstraintsGridBagConstraints object. GridBagConstraintsGridBagConstraints has 11 fields that it is possible to manipulate as well as a number of statics to help you out. Those fields are: - gridx - the row which the component will occupy within the grid. - gridy - the column which the component will occupy within the grid. - gridwidth - the number of columns which the component will occupy, this is analogous to colspan in HTML. - gridheight - the number of rows which the component will occupy, this is analogous to rowspan in HTML. - weightx - tells the layout manager how to distribute any extra horizontal space. - weighty - tells the layout manager how to distribute any extra vertical space. - anchor - tells the layout manager at which point to place the component within its grid space. - fill - how the component should behave if the display area is larger than the component. Can fill horizontally, vertically or both ways. - insets - the external padding between the component and the edges of its grid space. - ipadx - how many pixels of internal padding to add to the minimum width of the component. The width of the component is at least its minimum width plus ipadx pixels. - ipady - how many pixels of internal padding to add to the minimum height of the component. The height of the component is at least its minimum width plus ipady pixels. It is possible for each instance of a component that you create an individual GridBagConstraintsGridBagConstraints for it; however this is not recommended. It is far better to set-up your defaults for the object when you call it, then manipulate the fields you wish for each component. This is generally because some fields like insets, padx, pady and fill generally remain the same for every component, therefore it is easier to set a field and carry the changed field onto the next component. If you want to go back to the original field value after changing it then you should do it before adding the next component. It's an easy way to keep track of what you are modifying and is easier to follow than a series of object calls with 11 parameters. If it all appears as clear as mud currently, take solace in the fact that once you understand the GridBagConstraintsGridBagConstraints, then the hard work is done and the rest should fall easily into place. So now that we have covered the GridBagConstraintsGridBagConstraints in a fair amount of detail, let's see what we have to do to actually use it: All we do is instantiate our label, assign it a grid position, associate it with a constraint object and add it do our content pane. We know do the same for our next label: tagModeLbl = new JLabel("Tag Mode"); c.gridx = 0; c.gridy = 1; gridbag.setConstraints(tagModeLbl, c); contentPane.add(tagModeLbl); Notice that even though we have already set the gridx on our constraints object to be 0, we still set it again here — this is for no other reason than readability. Next we add our text field that store the keyword we want to search on and the combo box that will decide how to search on multiple keywords. The concepts are exactly the same as above, except that we wish to have the text field extend for two columns, then we need to reset that value before we add the combo box.); After this, we are simply adding the rest of the components onto the content pane using techniques that we have already seen; the rest of the code should not cause any problems. At this stage we should have an interface that looks somewhat like what we intended. The Patience Of course, the interface does not behave in an intelligent matter yet. Resize the window and see what happens. Why is it doing that? It's because we have yet to give the constraints object sensible values for the weightx, weighty and fill fields. This is something that we will be covering later, but if you wish to try for yourself then the GridBagLayout and GridBagConstraints API pages are good places to extend your.
https://www.techrepublic.com/article/getting-to-grips-with-gridbaglayout/
CC-MAIN-2018-30
refinedweb
1,504
54.73
Hi! I am using this code: import bpy import sys #Get command line arguments argv = sys.argv argv = argv[argv.index("--") + 1:] # get all args after "—" mdl_in = argv[0] fbx_out = argv[0] + ".fbx" bpy.ops.nvb.importmdl(filepath=mdl_in, filter_glob="*.mdl") bpy.ops.export_scene.fbx(filepath=fbx_out, axis_forward='-Z', axis_up='Y', version='BIN7400', use_selection=False, bake_space_transform=True, use_anim=False, bake_anim=False) with this batch file: FOR %%f IN (*.mdl) DO "C:\Path to blender\Blender2.73a\blender.exe" -b --python "C:\path to batch directory and script\convert_fbx.py" -- "%%f" It works but it can’t find the texture files which are in the same directory (and it spits out another error as well when exporting - about “duplis”). If I open Blender and manually paste the 2 operation lines of python code it works perfectly. I got the general code from here: the exact errors are: Error: Cannot read ‘filename’ : no such file or directory Error: Object does not have duplis These errors do not occur if I open Blender and copy and paste the 2 operation lines of python code, so somewhere along the line the path to the file being converted is being lost/not applied to the imported mesh when looking for the textures I’d guess, probably in the bat file but I don’t really know much about those or Python for that matter. I’d also like some direction to look in for batch importing exporting 3D files using Blender and Python, can’t seem to find anything concrete. This is my first foray into Python and Blender Scripting, but I am pretty solid with JS and C#. Thanks!
https://blenderartists.org/t/batch-3d-import-export-file-conversion-missing-cant-find-textures/643783
CC-MAIN-2021-04
refinedweb
275
58.01
Closed Bug 426732 Opened 12 years ago Closed 12 years ago Implement -moz-nativehyperlinktext Categories (Core :: Widget, defect) Tracking () mozilla1.9.1a1 People (Reporter: faaborg, Assigned: fittysix) References Details (Keywords: access, Whiteboard: [not needed for 1.9]) Attachments (7 files, 8 obsolete files) Currently hyperlinks in Firefox, both in chrome and content, are hard coded to (0,0,255). We use native platform colors for hyperlinks instead of full blue to give the product a softer and more integrated feel. Flags: blocking-firefox3? Example of (0,0,255) hyperlinks in chrome Example of default visited and unvisited link colors for hyperlinks in the content area. Example of the softer blue used in vista for hyperlinks, this case in the content area. Example of the softer color for hyperlinks in the content area, from windows media player. In case added code is in the scope of this bug, in GTK 2.10, Widgets have style attributes "visited-link-color" and "link-color" which would be perfect for this Here is an example of hyperlinks in mail.app There is a HotTrack system color for windows. After a little digging I found COLOR_HOTLIGHT: I'm not sure what it has to do with hotlights :P, but it does seem to return the right color on vista, and from what I've found this was implemented first in windows 98. This patch just sticks it in with the CSS user defined system colors, which is probably not quite the right way to do this. I'm not sure what winCE will do with this color either, so it might have to be #ifndefed out of that one. The next step I suppose is to just make -moz-hyperlinktext default to COLOR_HOTLIGHT, but change when the user changes it. Comment on attachment 314007 [details] [diff] [review] Implement COLOR_HOTLIGHT v1 Neil, is this the right way to expose the property? This isn't a blocker, but I don't think it's any worse for us to hard code the right colours if possible. Would take a minimally invasive patch, which might not be easy since I'm not sure that this is a theme thing as opposed to a widget thing :( Attachment #314007 - Flags: review?(enndeakin) Flags: wanted-firefox3+ Flags: blocking-firefox3? Flags: blocking-firefox3- Comment on attachment 314007 [details] [diff] [review] Implement COLOR_HOTLIGHT v1 Use '-moz-linktext' rather than 'hotlight' to be clearer and as it isn't a css defined value. Also, add the color to nsXPLookAndFeel.cpp for preference support. And of course, fix up the alignment in nsILookAndFeel.h Attachment #314007 - Flags: review?(enndeakin) → review- we already have a -moz-hyperlinktext, wouldn't -moz-linktext be a little redundant and confusing? I was thinking maybe -moz-nativelinktext or something (which could be filled by link-color on linux for example) or just filling -moz-hyperlinktext with %nativecolor by default, but somehow changing it to user pref when the user changes it. I was also wondering, since nsCSSProps is global, what would happen on linux/mac? I found color_hotlight is also not implemented on NT4, which (as of firefox2 anyways) is still on the supported OS list. I think we have a couple options: 1) Default browser.anchor_color (and therefore -moz-hyperlinktext) to the appropriate value per-os. This is certainly the easiest way afaik, just a few #ifs. Personally I think this would be a decent solution, since it's user configable anyways. 2) Implement native colors. We know windows provides us with one, which is editable using the Advanced Appearance dialog, apparently GTK gives us one (what does that do on kde?), and I can't find anything on OSX having such a property or not. It seems accessibility is brought up in every bug about theme/color, but windows at least does use different Hyperlink colors on high contrast themes, which is something to consider. We might be able to do a bit of both though, since the hard coded default of -moz-hyperlinktext is defined right here: we just need to set that to %nativecolor when applicable using #ifdef I think? (or does this get overridden by a default pref?) We wouldn't even need to add a CSS keyword for that one. Addresses everything in Comment #10, but with a different CSS keyword due to my concerns in Comment #11 To set this as the default color of -moz-hyperlinktext you simply set browser.anchor_color to -moz-nativelinktext since colors in user prefs are able to take CSS colors. This is the easiest way I've found to set default native colors until the user specifies otherwise, albeit a strange and somewhat round-about way. I was going to make a patch for this too, but I didn't know if this should be done in firefox.js or all.js. Either way it's a simple #ifdef XP_WIN to set the pref for now anyways, could change if we do other OS native link colors. Interesting tidbit since I'm sure everyone is now wondering: setting browser.anchor_color to -moz-hyperlinktext produces some strange seemingly random color, possibly some pointer being turned into a color? Attachment #314007 - Attachment is obsolete: true Attachment #314512 - Flags: review?(enndeakin) Wait, setting browser.anchor_color to -moz-nativelinktext doesn't work. It works in the preferences > content > colors window properly, but it doesn't work right in the actual content. That patch will give us a CSS color for native color hyperlinks on windows, but there's still no way to default to that native color. My code editors hate me :/ I think I've finally found the pref that's causing tabs to look like spaces. Fixes the alignment, see comment #13 for the rest Attachment #314512 - Attachment is obsolete: true Attachment #314528 - Flags: review?(enndeakin) Attachment #314512 - Flags: review?(enndeakin) Might as well get the ability to pull the colour, even if we're not going to use it immediately. Hurrah for third party themers. Comment on attachment 314528 [details] [diff] [review] Implement COLOR_HOTLIGHT v1.1.1 need sr for this... Looks OK to me but a style system person should sign off on it. I would want a comment somewhere explaining how this differs from -moz-hyperlinktext ... it seems the answer is "this one is not affected by user preferences". In addition to what roc said, you should document what background color it goes against. Should that be documented in code or MDC? probably both? I'm guessing a quick blurb in code like //extracted native hyperlink color from the system, not affected by user preferences MDC should probably mention that it's good on Window, ThreeDFace and -moz-dialog There is no official MSDN documentation on what background this color should be used on, I've just seen it personally on those in native chrome on windows. I'm also thinking that we should take this alongside the gnome color if we can get it (I don't really know how to do that one), and if we can't find anything for OSX then we should probably just return the user pref or a hard coded color that matches the screenshot in attachment 313314 [details] Assignee: nobody → fittysix So, does this make sense/work? I don't fully understand the workings of GTK or mozilla widget GTK code, but from what I can tell this appears to be correct. I would build & check this myself, but I don't have a linux machine that can build mozilla. file: case eColor__moz_nativelinktext: //"link-color" is implemented since GTK 2.10 #if GTK_CHECK_VERSION(2,10,0) GdkColor colorvalue = NULL; gtk_widget_style_get(MOZ_GTK_WINDOW, "link-color", &colorvalue, NULL); if (colorvalue) { aColor = GDK_COLOR_TO_NS_RGB(colorvalue); break; } // fall back to hard coded #endif //GTK_CHECK_VERSION(2,10,0) aColor = NS_RGB(0x00,0x00,0xEE); break; Specifically I'm not sure I'm calling gtk_widget_style_get correctly. I used MOZ_GTK_WINDOW because I think this is the GtkWidget for a window, and the link-color of a window is probably exactly what we want. It defaults to #0000EE because that's the current value of most hyperlinks. roc? I believe you're the guy to ask for Widget - GTK (In reply to comment #21) bah, I forgot the * for the pointer in the definition and aColor assignment I meant document in code comments, and this latest patch doesn't do that yet. We do need Mac and GTK2 code for this. We should also see the code to actually use this in Firefox. Michael Ventnor can probably test/finish the GTK2 path. Adds comments, and hard codes the mac color (assuming the underline is the non-aliased color) I searched and can not find any reference to a link, hyperlink or anchor text color anywhere in cocoa or as a kThemeTextColor*. There might be something if we hooked in to webkit, they have a -webkit-link, but I'm guessing we don't want to do that. Attachment #314528 - Attachment is obsolete: true Attachment #315439 - Flags: superreview?(roc) Attachment #315439 - Flags: review?(dbaron) Looks good, but still needs GTK love. >I searched and can not find any reference to a link, hyperlink or anchor text >color anywhere in cocoa or as a kThemeTextColor*. There might be something if >we hooked in to webkit, they have a -webkit-link I couldn't find this information either. As far as I can tell Safari doesn't use platform native hyperlink colors, and instead defaults to (0,0,255). I've gotten this far with the GTK version: This still doesn't work (link goes red, colorValuePtr remains null) and I can't figure out why. I'm attempting this on Kubuntu in KDE, but GTK should be all the same. Michael V, can you help here? Ryan, looking at your pastebin, that code doesn't belong there. It must go in InitLookAndFeel() and the resulting color saved to a static variable. I think the problem is he hasn't realized the window; this will be fixed (and a LOT of code removed) if he moves the code to InitLookAndFeel() and reuses the widgets and local variables there. I could be wrong though, but he should still move the code regardless. Also, I think colorValuePtr needs to be freed like what was done with the odd-row-color code. We also don't need the 2.10 check, we require 2.10 now. It works! And As far as I can tell, it has been working for some time now :/ Even the version on that pastebin works (which I meant to mention, was written that way for testing purposes, I was going to do it this way once I actually got something working.) Finding a gtk2 theme that actually specifies a link-color has proven more difficult than pulling it out. (I ended up just editing the gtkrc file of Raleigh) Interesting note: as far as I can tell any GtkWidget will work, we could use an already defined widget rather than creating the gtklinkbutton, but it is possible AFAIK to specify link-color per widget, so this is probably the best way to do it. Attachment #315439 - Attachment is obsolete: true Attachment #315439 - Flags: superreview?(roc) Attachment #315439 - Flags: review?(dbaron) Attachment #316102 - Flags: superreview?(roc) Attachment #316102 - Flags: review?(dbaron) Why don't you move the gdk_color_free call to within the first check for colorValuePtr? Good point, I thought of that as I was going over the diff, but left it like that because that's how the treeview was done. Now that I look over the treeview again though I see there's 2 instances where it could be used, which is obviously why it was done that way there. Attachment #316102 - Attachment is obsolete: true Attachment #316135 - Flags: superreview?(roc) Attachment #316135 - Flags: review?(dbaron) Attachment #316102 - Flags: superreview?(roc) Attachment #316102 - Flags: review?(dbaron) This patch is separate, but obviously depends on the other one. I had to add a special case in the part where it reads the pref, but I don't know if there's a better way to do this. It might be worth changing MakeColorPref to recognize named colors, but this is so far the first time we've wanted to do this. Whiteboard: [has patch][need review dbaron][needed for blocking bug 423718] I suppose I never did document which background colors this goes against, but since that isn't done on any of the other native colors I think it's something best left for MDC. On there we can note that there is no official documentation specifying a safe background color, but that it's used on dialogs and windows in native OS apps. Attachment #316135 - Attachment is obsolete: true Attachment #317890 - Flags: review?(dbaron) Attachment #316135 - Flags: review?(dbaron) the old one still works with fuzz, but might as well attach this Attachment #317890 - Attachment is obsolete: true Attachment #319518 - Flags: review?(dbaron) Attachment #317890 - Flags: review?(dbaron) Whiteboard: [has patch][need review dbaron][needed for blocking bug 423718] → [not needed for 1.9][has patch][need review dbaron] What about comment 19? (I didn't go through the other comments in detail to check that they were addressed; I just found one that wasn't addressed.) (In reply to comment #36) > What about comment 19? I kind of addressed it in comment 34. If it should be documented in code; is nsILookAndFeel.h the best place for that with the other comments? Comment on attachment 319518 [details] [diff] [review] -moz-nativelinktext v1.6 unbitrot 2 >+ eColor__moz_nativelinktext, //hyperlink color extracted from the system, not affected by user pref Could you call this value (and the corresponding CSS value) nativehyperlinktext rather than nativelinktext? I think that's more consistent with existing names. (Or was there a reason you chose what you did?) > // Colors which will hopefully become CSS3 Could you add your new color to the end of the section below this comment, rather than above it? And could you add an item to layout/style/test/property_database.js testing that the new value is parsed? r=dbaron with those changes (I don't seem to have permission to grant the review flag you requested, though I can grant the other review flag) I'm presuming that roc's superreview+ above means he reviewed the platform widget implementations; if not, somebody with some platform knowledge should review that. Attachment #319518 - Flags: review?(dbaron) → review+ Yeah, the platform widget implementations are reviewed. (In reply to comment #38) > And could you add an item to layout/style/test/property_database.js testing > that the new value is parsed? I've looked at this file, and tbh I'm not entirely certain on how to add this property to it. If I understand it correctly the Initial value would be entirely dependent on your system settings, so I could only guess on the initial values. For other_values the only thing that it wouldn't really be is transparent/semi-transparent. And the only invalid thing is non-longhand colors, since it should always return six character colors. With that, based on other items in the file I have this, but I don't know if it's correct: "-moz-nativehyperlinktext": { domProp: "MozNativeHyperLinkText", inherited: false, type: CSS_TYPE_LONGHAND, initial_values: [ "#0000ee", "#144fae", "#0066cc", "#000080", "#0000ff" ], other_values: [ "transparent", "rgba(255,128,0,0.5)" ], invalid_values: [ "#0", "#00", "#0000", "#00000", "#0000000", "#00000000", "#000000000" ] Other than that I've made the other changes mentioned, though I will wait for feedback on this before posting a patch. It was named that way mostly due to the suggestion in comment #10, but nativehyperlinktext makes more sense. David, comment 40 is for you. -moz-nativehyperlinktext isn't a property; it's a value for existing properties. You should add two items to the "other_values" line for the "color" property and then check that the mochitests in layout/style/test still show no failures. (In reply to comment #37) > I kind of addressed it in comment 34. If it should be documented in code; is > nsILookAndFeel.h the best place for that with the other comments? Yes. All comments should be addressed, I rearranged stuff to make a little more sense (these changes are at the bottom of the appropriate list unless they should be elsewhere), added a comment on the mac color in-line with other comments in the file, and clarified exactly which user pref it is not affected by. Attachment #319518 - Attachment is obsolete: true Keywords: checkin-needed Whiteboard: [not needed for 1.9][has patch][need review dbaron] → [not needed for 1.9] Status: NEW → RESOLVED Closed: 12 years ago Keywords: checkin-needed Resolution: --- → FIXED Target Milestone: --- → Firefox 3.1a1 Added to: Hope that's right. Bug 371870 and bug 437358 are about using the native color in chrome. Please file a new bug for using it in content. Component: Theme → Widget Flags: wanted-firefox3+ Flags: blocking-firefox3- Product: Firefox → Core Summary: Use native platform colors for hyperlinks both in chrome and content → Implement -moz-nativehyperlinktext Target Milestone: Firefox 3.1a1 → mozilla1.9.1a1 Attachment #329015 - Attachment description: -moz-nativelinktext v1.7 → -moz-nativehyperlinktext v1.7
https://bugzilla.mozilla.org/show_bug.cgi?id=426732
CC-MAIN-2019-51
refinedweb
2,857
62.98
A string literal consists of a sequence of characters (and/or escape sequences) enclosed in double quotation marks. Example: "Hello world!\n" Like character constants, string literals may contain all the characters in the source character set. The only exceptions are the double quotation mark ", the backslash \, and the newline character, which must be represented by escape sequences. The following printf statement first produces an alert tone, then indicates a documentation directory in quotation marks, substituting the string literal addressed by the pointer argument doc_path for the conversion specification %s: char doc_path[128] = ".\\share\\doc"; printf("\aSee the documentation in the directory \"%s\"\n", doc_path); A string literal is a static array of char that contains character codes followed by a string terminator, the null character \0 (see also Chapter 8). The empty string "" occupies exactly one byte in memory, which holds the terminating null character. Characters that cannot be represented in one byte are stored as multibyte characters. As illustrated in the previous example, you can use a string literal to initialize a char array. A string literal can also be used to initialize a pointer to char: char *pStr = "Hello, world!"; // pStr points to the first character, 'H' In such an initializer, the string literal represents the address of its first element, just as an array name would. In Example 3-1, the array error_msg contains three pointers to char, each of which is assigned the address of the first character of a string literal. #include <stdlib.h> #include <stdio.h> void error_exit(unsigned int error_n) // Print a last error message { // and exit the program. char * error_msg[ ] = { "Unknown error code.\n", "Insufficient memory.\n", "Illegal memory access.\n" }; unsigned int arr_len = sizeof(error_msg)/sizeof(char *); if ( error_n >= arr_len ) error_n = 0; fputs( error_msg[error_n], stderr ); exit(1); } Like wide-character constants, you can also specify string literals as strings of wide characters by using the prefix L: L"Here's a wide-string literal." A wide-string literal defines a null-terminated array whose elements have the type wchar_t. The array is initialized by converting the multibyte characters in the string literal to wide characters in the same way as the standard function mbstowcs( ) ("multibyte string to wide-character string") would do. Similarly, any universal character names indicated by escape sequences in the string literal are stored as individual wide characters. In the following example, \u03b1 is the universal name for the character a, and wprintf( ) is the wide-character version of the printf function, which formats and prints a string of wide characters: double angle_alpha = 90.0/3; wprintf( L"Angle \u03b1 measures %lf degrees.\n", angle_alpha ); If any multibyte character or escape sequence in a string literal is not representable in the execution character set, then the value of the string literal is not specifiedin other words, its value depends on the given compiler. The compiler's preprocessor concatenates any adjacent string literalsthat is, those which are separated only by whitespaceinto a single string. As the following example illustrates, this concatenation also makes it simple to break up a string into several lines for readability: #define PRG_NAME "EasyLine" char msg[ ] = "The installation of " PRG_NAME " is now complete."; If any of the adjacent component strings is a wide-string literal, then the string that results from their concatenation is also a wide-character string. Another way to break a string literal into several lines is to end a line with a backslash, as in this example: char info[ ] = "This is a string literal broken up into\ several source code lines.\nNow one more line:\n\ that's enough, the string ends here."; The string continues at the beginning of the next line: any spaces at the left margin, such as the space before several in the preceding example, are part of the string literal. Furthermore, the string literal defined here contains exactly two newline characters: one immediately before Now, and one immediately before that's. The compiler interprets escape sequences before concatenating adjacent strings (see the section "The C Compiler's Translation Phases" in Chapter 1). As a result, the following two string literals form one wide-character string that begins with the two characters '\xA7' and '2': L"\xA7" L"2 et cetera" However, if the string is written in one piece as L"\xA72 et cetera", then the first character in the string is the wide character '\xA72'. Although C does not strictly prohibit modifying string literals, you should not attempt to do so. In the following example, the second statement is an attempt to replace the first character of a string: char *p = "house"; // Initialize a pointer to char. *p = 'm'; // This is not a good idea! This statement is not portable, and causes a run-time error on some systems. For one thing, the compiler, treating the string literal as a constant, may place it in read-only memory, so that the attempted write operation causes a fault. For another, if two or more identical string literals are used in the program, the compiler may store them at the same location, so that modifying one causes unexpected results when you access another. However, if you use a string literal to initialize an array variable, you can then modify the contents of the array: char s[ ] = "house"; // Initialize an array of char. s[0] = 'm'; // Now the array contains the string "mouse".
http://books.gigatux.nl/mirror/cinanutshell/0596006977/cinanut-CHP-3-SECT-4.html
CC-MAIN-2018-43
refinedweb
897
50.36
Hi, I’m using profiling tool VTune Amplifier. What I’m interested in is parallel programming, both in thread level and instruction levels. The number of cores in my server is 16, and it supports AVX instructions. (not support AVX2, AVX512) lscpu gives:: 62 Model name: Intel® Xeon® CPU E5-2650 v2 @ 2.60GHz Stepping: 4 CPU MHz: 1200.433 CPU max MHz: 3400.0000 CPU min MHz: 1200.0000 BogoMIPS: 5201.92 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0,2,4,6,8,10,12,14 NUMA node1 CPU(s): 1,3,5,7,9,11,13 f16c rdrand lahf_lm cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d I’m profiling resnet18 training code below. I don’t copy the code of printing loss and accuracy. import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim import torchvision import torchvision.transforms as transforms import torchvision.models as models transform_train = transforms.Compose([ transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), ]) #transform_test = transforms.Compose([ # transforms.ToTensor(), # transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)), #]) trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform_train) trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=0) #testset = torchvision.datasets.CIFAR10(root='./data', train=False, # download=True, transform=transform_test) #testloader = torch.utils.data.DataLoader(testset, batch_size=100, # shuffle=False, num_workers=2) # get some random training images dataiter = iter(trainloader) images, labels = dataiter.next() # define network net = models.resnet18(pretrained=False) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4) for epoch in range(15): #() # calculate loss running_loss += loss.item() In my profiling result, I found that AVX dynamic codes (which are hotspots in my code) are mostly executed by 16 threads. (Total 48~49 threads are running, but 16 of them are terminated before training, and the other 16 of them are executing other codes) I have some interesting results. As I increase the number of training loops, some of CPU doesn’t work. I attached result images below with google drive link. Files numbered 1~4 are for epoch 5, 15, 25, and 50, respectively. The CPU Utilization metrics are 58.3%, 62.1%, 53%, and 49.4%, respectively. I think I have to mention some note. For epoch 50, I’ve profiled it twice because of the extremely low metric at the first time. It was 31.1%. The result image of this is in the link above, with the file name numbered 5. Is there anyone who could give me some insight about these results?
https://discuss.pytorch.org/t/some-of-cpu-cores-dont-work-when-i-increase-training-epochs/57439
CC-MAIN-2022-27
refinedweb
482
53.27
Content-type: text/html getc, fgetc, getc_unlocked, getchar, getchar_unlocked, getw - Get a byte or word from an input stream Standard C Library (libc.so, libc.a) #include <stdio.h> int getc( FILE *stream); int fgetc( FILE *stream); int getc_unlocked( FILE * stream); int getchar(void); int getchar_unlocked(void); int getw( FILE *stream); Interfaces documented on this reference page conform to industry standards as follows: getc_unlocked, getchar_unlocked: POSIX.1c fgetc(), getc(), getchar(), getw(): XPG4, XPG4-UNIX Refer to the standards(5) reference page for more information about industry standards and associated tags. Points to the file structure of an open file. The getc() function returns the next byte from the input specified by the stream parameter and moves the file pointer, if defined, ahead one byte in stream. The getc() function may be a macro (depending on compile-time definitions). See the NOTES section for more information. The fgetc() function performs the same function as getc(). The getchar() function returns the next byte from stdin, the standard input stream. Note that getchar() can also be a macro. [Digital] The reentrant versions of these functions are all locked against multiple threads calling them simultaneously. This will incur an overhead to ensure integrity of the stream. The unlocked versions of these calls, getc_unlocked() and getchar_unlocked() may be used to avoid the overhead. The getc_unlocked() and getchar_unlocked() functions are functionally identical to the getc() and getchar() functions, except that getc_unlocked() and getchar_unlocked() may be safely used only within a scope that is protected by the flockfile() and funlockfile() functions used as a pair. The caller must ensure that the stream is locked before these functions are used. The getc() and getchar() functions can also be macros. The getw() function reads the next word (int) from the stream. The size of a word is the size of an int, which may vary from one machine architecture to another. The getw() function returns the constant EOF at the end of the file or when an error occurs. Since EOF is a valid integer value, the feof() and ferror() functions can be used to check the success of getw(). The getw() function assumes no special alignment in the file. Because of possible differences in int length and byte ordering from one machine architecture to another, files written using the putw() subroutine are machine dependent and may not be readable using getw() on a different type of processor. The getc() and getchar() functions may be macros (depending on the compile-time definitions used in the source). Consequently, you cannot use these interfaces where a function is necessary; for example, a subroutine pointer cannot point to one of these interfaces. In addition, getc() does not work correctly with a stream parameter that has side effects. In particular, the following does not work: getc(*f++) In cases like this one, use the fgetc() function instead. Upon successful completion, these functions and macros return the next byte or word from the input stream. If the stream is at end-of-file, the end-of-file indicator for the stream is set and the integer constant EOF is returned. If a read error occurs, the error indicator for the stream is set, EOF is returned, and errno is set to indicate the error. The fgetc(), getc(), getc_unlocked(), getchar(), getchar_unlocked(), and getw() functions set errno to the specified value for the following conditions: The O_NONBLOCK flag is set for the underlying stream and the process would be delayed by the read operation. The file descriptor underlying the stream is not a valid file descriptor or is not open for reading. The read operation was interrupted by a signal which was caught and no data was transferred. The call is attempting to read from the process's controlling terminal and either the process is ignoring or blocking the SIGTTIN signal or the process group is orphaned. Functions: flockfile(3), funlockfile(3), gets(3), getwc(3), putc(3) Standards: standards(5) delim off
http://backdrift.org/man/tru64/man3/getc.3.html
CC-MAIN-2016-44
refinedweb
657
61.56
Opened 8 years ago Closed 2 years ago #14708 closed defect (fixed) Graph constructor forgets vertex labels Description sage: g = Graph() sage: g.add_vertex(0) sage: g.set_vertex(0, 'foo') sage: g.get_vertices() {0: 'foo'} sage: Graph(g).get_vertices() {0: None} Edge labels are remembered, though: sage: g.add_vertex(1) sage: g.add_edge(0,1, 'bar') sage: g.edges() [(0, 1, 'bar')] sage: Graph(g).edges() [(0, 1, 'bar')] Change History (27) comment:1 Changed 8 years ago by comment:2 Changed 8 years ago by Somewhat related: sage: g = Graph() sage: g.set_vertex('foo', 'bar') sage: g.get_vertices() {} Ok, I would have expected a ValueError when calling set_vertex(). But fine, lets continue... sage: g.add_vertex('foo') sage: g.get_vertices() {'foo': 'bar'} wat? comment:3 Changed 8 years ago by I think that these "vertex labels" are meant to associate non-hashable values to a vertex (whose name must be hashable). I never used it, I also think that it is useless (just use an external dictionary..) and that we would be better without it. Nathann comment:4 Changed 8 years ago by - Milestone changed from sage-5.11 to sage-5.12 comment:5 Changed 8 years ago by - Milestone changed from sage-6.1 to sage-6.2 comment:6 Changed 7 years ago by - Milestone changed from sage-6.2 to sage-6.3 comment:7 Changed 7 years ago by - Milestone changed from sage-6.3 to sage-6.4 comment:8 Changed 5 years ago by - Stopgaps set to wrongAnswerMarker comment:9 Changed 3 years ago by I would like to address the issue raised in the comments section. This is the source code for the set_vertex() function for generic graphs def set_vertex(self, vertex, object): if hasattr(self, '_assoc') is False: self._assoc = {} self._assoc[vertex] = object Since _assoc is a dictionary, even if the vertex isn't added beforehand (as in the example demonstrated in the comments section), the dictionary adds an entry into it. For example, >>> vert = {} >>> vert['foo'] = 'bar' >>> vert {'foo': 'bar'} But we are adding a new vertex only on the call of the method _backend.add_vertex() in the graph. Hence, on calling the method get_vertices(), 'foo' was not displayed. But remember that _alloc already has the dictionary entry {foo: bar}. So once the method add_vertex('foo') is called, the entry foo is added to the list of vertices in the graph, and so on calling get_vertices, the entry {foo: bar} is displayed. comment:10 Changed 3 years ago by The issue in the comments is addressed in comment:11 Changed 3 years ago by Are you also planning to work on the other issue: copy of vertex labels when calling Graph(g) or DiGraph(g) ? If so, there is a minor improvement to do in get_vertices: no need to build the list of vertices. So - if verts is None: - verts = list(self) + if verts is None: + verts = self comment:12 Changed 3 years ago by This solution did not work. The problem still persist sage: g.set_vertex(0, 'foo') sage: g.get_vertices() {0: 'foo', 1: None, 2: None, 3: None, 4: None} sage: Graph(g).get_vertices() {0: None, 1: None, 2: None, 3: None, 4: None} I think the problem is in not communicating the labels of vertices in g to Graph(g). If we look at the .set_vertex method, we have self._assoc[vertex] = object which, I believe, is not being transferred to Graph(g). On the other hand, in the .set_edge_label method, we have self._backend.set_edge_label(u, v, l, self._directed) I am not yet sure how these two different implementations affect Graph(g), but edge labels are retained in Graph(g) but not vertex labels comment:13 Changed 3 years ago by And this is one more thing I've found: if g has multiple edges and loops, then Graph(g) allows loops but not multiple edges. But Graph(g.edges()) allows both (with a warning to set multiedges flag to True). sage: g = digraphs.DeBruijn(2,2) sage: g1 = Graph(g) sage: g1.edges() [('00', '00', '0'), ('00', '01', '1'), ('00', '10', '0'), ('01', '10', '0'), ('01', '11', '1'), ('10', '11', '0'), ('11', '11', '1')] sage: g2 = Graph(g.edges()) sage: g2.edges() [('00', '00', '0'), ('00', '01', '1'), ('00', '10', '0'), ('01', '10', '0'), ('01', '10', '1'), ('01', '11', '1'), ('10', '11', '0'), ('11', '11', '1')] comment:14 Changed 3 years ago by When calling Graph(g), the constructor gives the same settings for loops and multiple edges to the resulting graph than g, and here, the digraph g has loops but no multiple edges. Hence the returned graph has no multiple edges. When a list of edges is given as input, we get a deprecation warning and the constructor sets parameter for multiple edges to True if necessary, but this behavior will soon be changed to False unless the user specifies multiedges=True. comment:15 follow-up: ↓ 16 Changed 3 years ago by Oh ok Sir. Anything suggestions on comment 13? comment:16 in reply to: ↑ 15 Changed 3 years ago by Replying to gh-Rithesh17: Oh ok Sir. Anything suggestions on comment 13? The current behavior is the right one. So nothing to do for #comment:13. There is however something to do for #comment:11 comment:17 Changed 3 years ago by Sorry Sir. I meant any suggestion on comment 12. comment:18 Changed 3 years ago by Check the Graph and DiGraph constructors. comment:19 Changed 3 years ago by I went through the __init__ constructa of both Graph and DiGraph classes and found this line in both of them: self.add_vertices(data.vertex_iterator()) self.add_edges(data.edge_iterator()) Now in the vertex_iterator there is no field for the labels sage: g = digraphs.DeBruijn(2,2) sage: for v in g.vertex_iterator(): ....: print(v) ....: 11 10 00 01 But the labels are displayed in the case of edge_iterator sage: for v in g.edge_iterator(): ....: print(v) ....: ('11', '10', '0') ('11', '11', '1') ('10', '00', '0') ('10', '01', '1') ('00', '00', '0') ('00', '01', '1') ('01', '10', '0') ('01', '11', '1') If we need to modify into the vertex_iterator method. we must change the back-end code. def vertex_iterator(self, vertices=None): return self._backend.iterator_verts(vertices) I'll have to look into the Pyrex file basic/c_graph.pyx to modify _backend.iterator_verts(vertices), but can I do it? I generally would not like to touch the backend files because there could be many more methods using it comment:20 Changed 3 years ago by Please don't change the backend. What you must do is simply add self.set_vertices(data.get_vertices()) at the right place in the __init__ methods. comment:21 Changed 3 years ago by Yeah. That's a simpler solution. I had to also modify to_undirected method in DiGraphs? and to_directed method in Graphs to support vertex labels. And now it is working. sage: g = Graph() sage: g.add_vertex(0) sage: g.set_vertex(0, 'foo') sage: g.get_vertices() {0: 'foo'} sage: Graph(g).get_vertices() {0: 'foo'} Do I need to add a doctest for this? comment:22 Changed 3 years ago by Yes, you need to add a doctest with a pointer to this ticket, i.e., :trac:`14708` comment:23 Changed 3 years ago by - Branch set to u/gh-Rithesh17/vertex-labels-in-graph-digraph - Commit set to 5f40353c94f2710ef7bbeee631daf8fb79a5693f - Status changed from new to needs_review New commits: comment:24 Changed 3 years ago by - Reviewers set to David Coudert Can you change the first example in digraph.py to only DiGraph, so - sage: g = Graph() + sage: g = DiGraph() Also, add (:trac:`14708`) to each of the added doctests. comment:25 Changed 3 years ago by - Commit changed from 5f40353c94f2710ef7bbeee631daf8fb79a5693f to e494dc55503e4d12af7d4bc4e60ac52b25840be9 Branch pushed to git repo; I updated commit sha1. New commits: comment:26 Changed 2 years ago by - Milestone changed from sage-6.4 to sage-8.8 - Status changed from needs_review to positive_review - Stopgaps wrongAnswerMarker deleted Just tested it over 8.8.beta3. LGTM. comment:27 Changed 2 years ago by - Branch changed from u/gh-Rithesh17/vertex-labels-in-graph-digraph to e494dc55503e4d12af7d4bc4e60ac52b25840be9 - Resolution set to fixed - Status changed from positive_review to closed I'm also totally confused about what vertex labels are supposed to achieve. Vertices are already some object. Then you can associate another object to the vertex object. How is that different to just using a pair (object1, object2)as vertex?
https://trac.sagemath.org/ticket/14708?cversion=0&cnum_hist=1
CC-MAIN-2021-43
refinedweb
1,411
75.61
Talk:Proposed features/landcover Natural/landuse treat secondary info as primary - Primary info: what is there on the ground (scrub? lawn? bedrock? wood?) This Information is required and quite easy to know. - Secondary info: why it is there (naturally there, man made there). This Information is optional and sometimes hard to know. Unfortunately the tags in use (landuse=*, natural=*) give secondary info along with the primary. Thus you have to know things you actually sometimes cannot know. Therefore landcover=* is favorable: it gives primary info (what is there) without the need to know secondary info (why is it there). All it takes for landcover=lawn (just an example) to have the same info as landuse=meadow is: man_made=yes (just an example again). A missing man_made=yes means: natural. So, yes, that's one tag more if it is man made. landcover=* plus man_made=*. And yes, there are some landcover=*s that are man_made by design: landcover=farmland implicitly is man_made=yes . - I agree that it has to be possible to map "what's on the ground" without being forced to add any secondary information. But please note that currently natural=* doesn't actually mean that the feature is "natural". That's a misconception that has unfortunately been turned into a rule in one case (natural=wood), but is not part of the definition of any other tag using that key. You can tag an artificial lake as natural=water, and a manually planted tree as natural=tree for example. --Tordanik 17:48, 14 December 2011 (UTC) Not a good idea We already have landuse=*, natural=* and surface=*. Your proposal is not an improvement (just renaming) and creates more confusion where newbies are asking for simplicity. --Pieren 15:30, 16 November 2010 (UTC) - +1 with pieren. I would prefer to keep backward compatibility as much as possible. Propose to add tags to the existing ones to increase information. Exemple, add water_type=ocean/sea/lake/... to natural=water while retaining compatibility with a simple natural=water without additions. Also I agree your landcover is a good word for a good purpose, but there is not that much benefit for a big confusion risk and loads of software, mind, usages to adapt. sletuffe 01:02, 17 November 2010 (UTC) - I know that we have landuse, but is has nothing to do with landcover. I know that we have natural, but it has almost nothing to do with landcover, and the few examples where there are the same values should be moved from natural to landcover IMHO (as they are creating inconsistency at the moment). E.g.: natural=water is used for all kinds of water bodies (even swimming pools) that have nothing to do with "natural". Natural=water doesn't fit into the logic of using natural for geographic features (which are the majority: e.g. coastline, bay, beach, cliff, cave_entrance, peak, spring, volcano). Right now you would tag natural=bay for bays and all other water-bodies as natural=water. Does this sound logical? Water is a physical substance, bay isn't. A lake neither. natural=water doesn't fit at all in the scheme, just that many people recognize this, because water is somehow "natural" --- simply this doesn't count IMHO. -- Dieterdreist 18:22, 17 November 2010 (UTC) - -1 with Pieren. The landcover proposal will enable us to make a more logical tagging language. The proposal is not about abolishing natural and surface, but to reorganize them, and in some cases (surface) let them return to their original and meaningful purpose. "Surface" makes sense for roads and the like, but surface=grass??? What do you tell the visitors that come to a summer barbeque at your house? "Step into my grass-surface?" ;-) --mok0 10:56, 25 November 2010 (UTC) - It seems that we have trouble with the semantics of the keys. It seems that most people here interprets the key "landuse" as something that should describe how a area is predominatly used for/planned for. One can then understand that it would be nice to have a more "physical" view, in the meaning of showing how an area looks like, what it is made up of or maybe how it could be used for transporting. - To bad that there seems to be some semantic issues with the existing landuse-alternatives. I believe that the key "natural" is supposed to mean a natural feature/geographic feature, on of those things that mankind have had names for since the beginning of speech and that is why there could be found a bunch of rather small scale and specific tags, e.g. cliff, mixed with more generalized water (meaning open water I suppose). The problem seem to be that we only use the key "natural" for things that are not formed/altered/maintained by man, and in our civilized world we will then get a lot of similar-looking features needing different tags because of how it have been used before. - I guess that the point of this proposal aims on this. To have a key that is more objective, more focused on how it is perceived than how it is used. Then we have to discuss or decide on what physical features we want to focus on when using the key "landcover", and on what scale. For me land-cover means a general description of the nature in a large swath of land, e.g. the land-cover of Belgium is farmlands in the West and forests in the East (The land is covered in farms and forests). As OSM do a lot of small area mapping I see a imminent risque that landcover could be confused with the cover of the ground, that is the properties of the surface looked on close, e.g gravel or grass. - So do we want a detailed description for small areas of how it looks and feels like from the soil and up, for vegetated areas this could be done dividing the volume in grass, shrubs and tree -canopy; for developed industrial or residential areas this could be a bit more exhaustive with its barriers and structures, but for parks and lawns the same scheme as for natural areas could be used./Johan Jönsson 22:34, 30 August 2011 (BST) - Yes, I fully agree that landuse is the actual, current landuse of an area, i.e. how it is used. I don't agree that this would be what an area is planned for. The latter might be the same as the former, but also not, and if it isn't the same I'd only look at how it is actually used. What you are after with planned for (and what is relevant for many planning projects) is the legal status or the permitted landuse, one that can't be observed and is maybe not even realized, but is the intention of the government how it should be. We might have a tag for this in future OSM, but it should not be landuse, and frankly, there is not a big point in having it, as we can't change or modify this (it would be more suitable for an overlay). --Dieterdreist 19:32, 20 October 2012 (BST) - -1 You forgot about leisure which is also used instead os landuse=*/natural=*. --Cracklinrain (talk) 08:35, 20 June 2013 (UTC) - +1 with Pieren. There are already too many redundant keys for the same kind of things. Inventing yet another makes it even worse. Remember that the schisma was not resolved by installing a third pope. --Fkv (talk) 06:16, 22 October 2013 (UTC) A very good idea The landcover tag is a major improvement, and allows for an orthogonal tagging scheme to landuse. I propose to include a generalized tag for vegetation (landcover=vegetation) with the type of vegetation specified in a vegetation=* tag. This would enable a granular tagging of for example a forest, which often has areas of characteristic vegetation, such as grass, pine, etc. that could be specified in this way. Other examples are landcover=desert, landcover=bush, landcover=trees, landcover=pampas, landcover=marsh, etc. In other words, landcover would typically describe the physical geography: - Geology (rocks and minerals) - Landforms - Soils - Vegetation - Environment - etc. where landuse would be used to describe the human geography: - Settlements - Agricultural systems - Recreational areas - Economic activities - etc. User:mok0 18:54, 16 November 2010 (UTC) - All of the "physical geography" features you mentioned here can be specified with natural tag. Use natural=scrub or natural=heath instead of landcover=bush; natural=wetland+wetland=marsh instead of landcover=marsh; natural=wood instead of landcover=trees and so on. There is natural=* (may be with surface=*) for "physical geography", and landuse=*, man_made=* for "human geography". There is no reason to introduce a new tag like "landcover" to describe landscapes. --Surly 15:56, 7 February 2011 (UTC) +1 for landcover=vegetation --Extropy 08:46, 23 November 2010 (UTC) - I find this key appealing but I have had some trouble to pin down exactly what it is about and what differentiates it from the natural-key. I have had a great help in a defintion and classification system by United Nations Food and Agriculture Org, FAO, FAO land cover definition. - They stress the importance for a classification system to be scalable and source independent. - (as opposed to the rendering/legend that should adopt to the situation) - For primarily vegetated areas they suggest a classification to be done something like this, - in decreasing importance: - ) vegetated or not? - ) main vegetation type: tree/shrub/herbs? - ) description of the main vegetation type in height/coverage/distribution - ) secondary vegetation type - ) third vegetation type - ) further details on the vegetation - I hope the link can be of some use. /Johan Jönsson 16:21, 6 February 2011 (UTC) I see the main use of landcover=* tagging scheme in it's orthogonality to landuse, while it actually could interfere with surface=* scheme. Current landuse=* and natural=* schemes are not describing just the surface or structures covering it. Landuse also means some intentional usage and, likely, man-made origin of an object. While natural=* strictly points to natural origin of it. It's common problem with many current tags - not only wide meaning (it's normal abstraction, good for maps) but mixed meanings (single tag+value pair corresponds to several entities or properties). With landcover=* tag we could show just what we want without assuming it's usage or origin. It could also be used as scheme of preliminary tagging during import of low resolution remote sensing products, because there we usually could see "trees" or "sand" without any knowledge if it's natural wood, commercial forest, sand quarry or natural beach. --BushmanK (talk) 08:36, 1 September 2013 (UTC) Very logical idea which would be an extension of the usability/universality of the OSM database. Separating landcover and usage is in practice often not necessary but very logical. Currently areas are defined by tags which combine both keys: natural, landuse, leisure and amenity. It works most of the time because landcover and usage are often linked unambiguously. But there are exceptions: Military, graveyard, park, trees, grass, picnic. However, regarding the sparation of landuse/landcover: It is not necessary for the mainstream renderer, because landuse is more important than landcover. A side note: I see the surface tag as road or vehicle-exclusive, so it shall be kept away from landuse/landcover. --Lukie80 (talk) 10:35, 14 June 2017 (UTC) Complicating things without benefit natural=desert, natural=srub (or bush if you like), natural=wood / landuse=forest, natural=pampas, natural=wetland, etc. I other words, this proposal complicates things for mappers (now you have to know / decide between three possible keys) without gaining any real advantage -- Ulfl 20:42, 16 November 2010 (UTC) - no, this would simplify tagging for people tremendously because it introduces some logics and creates a consistency of which we could only dream 'til now. landuse has a value "grass", what on earth does this mean? "grass" is no landuse. natural is all kind of features, from physical ones to abstract ones (peak), and hides this under a "natural" coating. There is no logics besides "tradition" for the values of natural. Nobody can tell if a new value would fit there, or not, because the already existing are not consistent. We should not go on like this. Mappers don't have to decide with these new values, they can use them all at the same time for the same object but different, well defined, aspects of them. -- Dieterdreist 18:28, 17 November 2010 (UTC) - Sorry, there is no well defined aspects here. How to tag e.g. trees above bushes (), do I have to tag the bushes or the trees above. Is a highway a physical landcover regarding this proposals or not? This proposal exchanges a not well defined state of the art with a more complex, again not well defined proposal. -- Ulfl 22:54, 20 November 2010 (UTC) - Trees above bushes would be simple to tag with landcover=vegetation, specifying vegetation=trees; vegetation=bushes. For your Blaubeerwald, you might even be fancy, and specify vegetation=la:Pinus radiata; vegetation=la:Vaccinium uliginosum :-) I find it strange that those opposing the landcover feature on one hand use the argument "this is too complicated", and on the other hand come up with all kinds of weird and complicated situations and ask "how would you tag this using landcover?" You can't have it both ways. -- mok0 10:16, 25 November 2010 (UTC) - To be preceise the highway (see Wikipedia:highway) would be tagged with a landuse=highway tag. Parts of the highway, including the carriageway and footway may be tagged with landcover=asphalt but the the Wikipedia:road verges are likely to be tagged with 'landcover=grass' (or possibly natural=scrub or similar). In most places the assumption will be that the road are covered with asphalt though so that it not that important a tag in my view. This confusion of landuse and landcover is exactly what this proposal aims to resolve. PeterIto 12:49, 28 December 2011 (UTC) - Landcover will simplify tagging for mappers, not complicate it. What is difficult for us newbies is when we find it difficult to find tags that characterize what we observe on the ground. You can have a park, that is a neat place with roses and lawns you aren't allowed to walk on, and you can have a park that is almost like a forest. Do you tag both landuse=park? The average newbie will spend a long time browsing the wiki and the mailing lists for answers, because it isn't obvious that both are just "parks" because they look so different. Plus, this is information you know about that you would like to convey. If you separate the landuse (park) from landcover (grass or trees or rocks or whatever) it all makes sense because it is logical. These are principles you can easily memorize. Using the verb "natural" for landcover is absolutely not a good idea, because it indicates something about the origin of the landcover, namely that it is natural and not man-made or cultivated. Is grass in a park "natural"? Is grass on a field "natural"? You don't know and it's hard to find out. What you do know is what you can observe, and that is the landcover. The whole point is to develop a logical, systematic and expressive tagging language, that will enable us to describe the features we observe on the ground as accurately as possible. That is what gives "crowd-sourcing" its value. -- mok0 10:34, 25 November 2010 (UTC) Deprecating landuse=forest I like this proposal. The two landuse=* values that I abuse far too often are landuse=grass and landuse=forest. I see forest isn't mentioned as a value that will be deprecated, but I believe it should. I'm mostly working in residential areas that have significant tree cover, and have been using landuse=forest for those areas of tree cover between houses (not just one or two trees, but many). I didn't use natural=wood because these are areas that are actively managed, i.e. if a tree falls across a path, it is removed. I would much rather use landuse=residential for the entire neighborhood and use landcover=trees for just the areas where I've been using landuse=forest. -- Joshdoe 19:48, 24 May 2011 (BST) - For this reason alone I like this proposal. I have marked a lot of trees - wrongly, I realized - with natural=wood in parks; landuse=forest seemed wrong [1] and natural=wood is also wrong. landcover=trees would solve my problem immediately. Can we start using it already? ([1] leisure=park also seems wrong, but it's another story. Shouldn't it be either amenity or landuse??) Mirar 19:04, 19 May 2012 (BST) landuse=forest is for those areas where the trees are used to produce something for example lumber for houses or furniture. The missuse of this tag demonstrates why the landcover tag would be useful ...for mappers it would clearly distinguish the use of the land (use the landuse key) and the coverage of the land (use the landcover tag). Warin61 (talk) 20:56, 9 January 2017 (UTC) Surface i see the need for a cleanup of the landuse tag, but isn't landcover=* just a synonym for surface=*? if not, you need to explicitly explain the difference in the proposal. --Flaimo 08:35, 25 May 2011 (BST) - From the discussions I've read it seems most agree that landcover=* is only to be used on areas defining the type of land cover, and surface=* is to be used on highway=* objects to define the very surface of the earth. In other words, you could have a national park mostly consisting of trees, which you would tag as an area with landcover=trees, and then if you have a road or hiking path through the area you would tag the way with surface=asphalt or whatever. Now certainly someone could come along and want to map to a finer level, and they would split the giant landcover=tree area into many areas, having areas covering the roads and paths separately, and presumably those could be tagged landcover=asphalt. However I don't think many would go to that level, at least not for many years. - There's certainly some overlap between the values of landcover=* and surface=*, however they're not identical. For example it wouldn't make much sense to tag a highway with surface=trees! Perhaps surface=* values could be a subset of landcover=* values. -- Joshdoe 11:55, 25 May 2011 (BST) - the wiki page for surface doesn't say, that it is only for highways. actually it says "The surface=* tag is one of the … tags, which can be used to supply extra information about the surface in conjunction with highway ways … and other features." so i think there isn't enough differentiation (or any at all) between landcover=grass and surface=grass for example. --Flaimo 16:29, 25 May 2011 (BST) - +1 for both of you! As OSM is centred around the ways, I assume that the key "surface" predominantly will be used to describe the surface of the road/path, as a sub-tag of the main one describing that it is a road in the first place. In the same way could the key "surface" be used to extend the information on an area mainly tagged by the key "natural", "landuse" or in this case "landcover". With surface I believe we mean the (near horizontal) surface which upon we walk or otherwise travels, e.g. the bottom-most level of a forest, the undergrowth. /Johan Jönsson 22:48, 30 August 2011 (BST) - Yes, in some circumstances surface has the same meaning than landcover (at least according to this proposal, e.g. for "sand"), on the other hand for vegetated areas surface doesn't make sense and especially in the case of trees the surface would be different than "trees". For simplicity I'd use landcover also for the cases that might be expressed with surface. --Dieterdreist 19:39, 20 October 2012 (BST) - You are just admitting that the landcover=* key mixes diffent informations that have nothing to do with each other. Your suggested values for landcover=* fall into 2 categories: 1) The nature of the surface. This is exactly what the surface=* key has been invented for. 2) Plants and vegetation. Your proposal classifies areas to be vegated by either trees, bushes, or grass. This is highly naive. E.g. a wood/forest is a complex ecosystem that consists not only of trees, but also of bushes, herbs, mosses, fungi, algae, lichens, and animals. Tagging a wood with landcover=trees would only be a small portion of the truth. A key like plant_community=* would be much more to the point. --Fkv (talk) 06:16, 22 October 2013 (UTC) - The landcover=* tag is a physical feature in itself. The surface=* tag is associated with some other physical feature. A good example is surface=wood ... that says lumber has been used to provide a suitable surface ..if someone were to tag landcover=wood ... most would take it to be trees growing there. As for the understore of trees ... I know of no proposal to tag/map it. Are you saying that is needed? Then where is your proposal? Warin61 (talk) 03:22, 5 February 2017 (UTC) - Landcover is redundant. For flat things, it duplicates "surface", For elevated natural things (e.g. trees, bushes) it duplicates "natural". We can already tag everything using these two keys. This one adds no extra flexibility but adds complexity. -- SwiftFast (talk) 06:27, 20 May 2017 (UTC) - Natural is both 'natural' and 'man affected' as such it is confusing! I do not like using it for a grassed area that is maintained in a park ... that is not a meadow. The landcover tag is much better. The surface key is a property of another key, so surface cannot be used by itself. Landcover solves both the confusion over 'natural' and provides a solution to tagging an area that may have no other properties. Rather than being redundant it is superior. Warin61 (talk) 06:48, 20 May 2017 (UTC) Table with all values For migrating purposes the proposal should list the top 20 of all currently used landuse values and which of them need to get transferred to a landcover key (or not). --Flaimo 08:37, 25 May 2011 (BST) - How about the top 25: - This is just my opinion, I'm not trying to match it up against the existing proposal. For the complete list of values see the TagInfo results here -- Joshdoe 12:30, 25 May 2011 (BST) - I think it is the mighty three it all boils down to: landcover=trees, landcover=grass, landcover=water as they can be used to tag both natural and man-made features that looks and behaves the same. And the greatest benefit is to get rid of landuse=grass, the second best thing to map a forest and a wood with the same tag. but as pointed out here, there are some flaws, uncertanties, sematic and defintion issues./Johan Jönsson 23:15, 30 August 2011 (BST) - This is not about "transfering" keys, it is about describing a different property, the polygons for landcover are not necessarily the same than those for landuse, indeed often they will not be. There are only very few values that are currently tagged for the landuse-key that should be transfered to landcover, from your list this is only: grass. --Dieterdreist 19:54, 20 October 2012 (BST) - I would suggest just adding the landcover tag to a few of these: - Please stop creating lists which landcover tags might be automatically added to which landuse area. This is not a proposal about automated edits. An area tagged with landuse=farm will most probably not have a landcover-tag. If you wanted to add landcover to such an area you will have to apply the landcover most probably to sub-areas of the landuse.--Dieterdreist 19:57, 20 October 2012 (BST) I started to create a "cleaned up" version of landcover, landuse and natural a while ago. It is in most parts identical to the tags this proposal suggests and it also contains a table comparing "old" and "new" tags. You can find it here. Maybe it is of some use here. --Imagic 10:30, 6 June 2012 (BST) - landuse=forest in most parts of Australia is a landuse - I have tried to move the others over to natural=wood + landcover=trees. landuse=grass is a 1:1 move over to landcover=grass .. there should be no contention here! Plantation to me is is land use .. not a landcover. farmland might be wheat one year and vegetables the next ... it is not something I'd be comfortable mapping unless I knew it was consistent. I don't see these tables as being helpfull as too many of the suggestions have difficulties with the conversion. Warin61 (talk) 07:01, 20 May 2017 (UTC) I think this is a good idea. landuse=grass, for example, is contradictory and just doesn't make sense (“grass” is not a use). Wikipedia has separate articles differentiating the concepts. (There is a type of thematic map called land use/land cover, but I think this breaks down when we consider the more detailed map scales that we have available in OpenStreetMap.) Some comments about the proposal: - “Geology, soil.” – Soil is not land cover, although it can influence land cover. Landcover maps and soil maps are two different kinds of thematic maps. Their subjects are, by definition, separated by the land's surface. - “... Climate” – I can't imagine how this relates. Some land cover types may be a result of climate, I suppose, but we're not mapping the climate. I would think we should use tags that are descriptive and independent of climate, e.g., forest, mixed forest, or perhaps rainforest, and not differentiate Pacific temperate rainforest. - “Landforms, water” – Land cover is, by definition, on all the bits that are not water. - “landcover=trees” – forest or woods is a land cover type, in which the presence of many trees is the main, but not only, characteristic. - “landcover=bushes” – similarly, brush or scrub is a land cover. “Bushes” specifically, are more precisely called shrubs. Also, to be confused with bush, a synonym for forest. - “landcover=bare_rock” – Why not just landcover=rock? - “landcover=ice” – That would be glacier? The proposal should define some range of scales that land cover is conceptualized at. I don't think we intend to be mapping very large-scale, complex, or heterogenous phenomena like ecozones, ecoregions, ecosystems, biomes, etc. —Michael Z. 2011-07-28 02:42 z - Great comments, +1 on geology and climate. Water:I wish there was a better term than landcover. I really think that there should be at least three things describing a forest: undergrowth, scrubs, trees, that implies the use of subsidiary tags and the existence of a supertag (=vegetation?). bare_rock is based on the many meanings of the word rock, especially the confusion with boulder and cliff, see natural=bare_rock, but I guess it wouldn´t be so confusing when used in the landcover key. Glacier is one of those nice things to use the key "natural" on./Johan Jönsson 23:04, 30 August 2011 (BST) - I support the proposal the principle. I note the suggestion that surface could be developed to cover landuse but think it may be better to create a landuse tag as you propose. I also agree with some of the comments above, that we should avoid tampering with existing tags which are functioning well if, they have an unambiguous use and only need some clearer documentation. For example riverbank is working just fine as far as I am concerned as is natural=water - these are both 'landcover' tags and be documented to make this clearer; if they are also primarily used as a major shipping route then they should also be tagged with a suitable landuse tag. I am particularly keen that we deprecate landuse=grass and clarify that landuse=forest is only to be used for areas of agroforestry and propose an alternative tag to describe landcover=trees. We also need the landuse=highway tag for areas of land devoted to road infrastructure. Can I suggest that we limit this proposal to resolving the more immediate issues and establishing the landcover tag for any 'missing' concepts and document other tags where there may be confusion. I have just reworked the text of this proposal to clarify what is being proposed. I have also created a new Landuse article for a general discussion of Landuse and created a redirect from Landcover to this proposal. Fyi, PeterIto 11:36, 28 December 2011 (UTC) I thoroughly approve of this idea. The current system is full of problems; land-use and land-cover are quite distinct things. Eg. 'grass' is not a land-use (ie. a piece of land cannot be said to be 'used' for grass- that's merely a description of what's on the land, which could be part of for instance a residential area. --SDavies 11:20, 22 January 2012 (UTC) - I really do like this idea. I would also suggest landcover=hedge for planted hedges (wikipedia) (but it could also be covered by shrubs or bushes?) and landcover=bedding for plant bedding (wikipedia). Could this also maybe improve city squares (highway=pedestrian in area=yes)? Mirar 19:12, 19 May 2012 (BST) References I've only skimmed these, but they may be useful in deciding whether and how to map land cover: - Food and Agriculture Organization of the United Nations, 2000, Land Cover Classification System (LCCS): Classification Concepts and User Manual - Fisher, et al, 2005, “What Is Land Cover?”, in Environment and Planning B - Fisher, et al, 2005, “Land Use and Land Cover: Contradiction or Complement, in Re-Presenting GIS, p 85 —Michael Z. 2011-07-28 03:04 z Good idea/concept I found this page while searching for my own - very similar proposal. If there is this one, I'll just add my ideas (bulletpoints for easier reading ;) ). Any tags names or namespaces are just for illustration and not meant as definitive - Reasoning - it is all complicated today, number of keys describe the same feature, one feature is described by more keys. - There should be a distinction between what is there (landcover) and to what purpose does it serve (landuse). - landuse and landcover are independent - there can be a tree-covered area (forest,wood...) used comercianlly, for recreational purposes, for scientific purposes etc. - Ridiculous combinations like landuse=grass could disapper. Grass is just cover, the use can be anything from golf course to wild grass areas - on a general map it is landcover that should be rendered (there is a forest, building...) not landuse (whatever is there is used commercially, for residence, for recreation). There can even be multiple landuses for the same place (unlike landcovers). - Useful for - rendering in higher zoom leves - "fake" aerial imagery - analysis visibility (I am standing in point A, can I see point B?) - environmental analysis (flood risks, animal migration possibilities...) - many more I cannot think of now (that is the point of open geo data) - Total coverage - Landuse should be designed in a way making possible that any part of earth is covered exactly by one landcover tag. Therefore: - two landcovers should never ever overlap - landcover should be tagged prominently as multipolygons (not on the way itself), probably with use of some sort of multiline. - Only the border should be tagged on the way (fence, curb/kerb, wall...) - Same cover divided should be tagged as separate multipolgon - if there is no physical divider, only source would be tagged on the way itsef - Ease of tagging (and automated data processing) - there should be a limited number of base tags to describe any landcover (landcover = [paved, vegetation, construction, water, non-vegetated]). - This should be rarely canged, if at all. - Anyone should be able to classify any given piece of land within the top class - each of these could than by expanded (most common values would be stable, other values would be allowed) - landcover:paved=[asphalt,concrete, setts,...] - landcover:vegetation=[forest,grass,bushes,...] - Difference between landcover and surface is mostly customary - surface for linear features such as highway - landcover for areas - lancover together with usability could in fact deprecate surface=*, but that is really a long run - surface became quite useless, since there are so many values (writing a router that could tell you where you can get is virtually impossible - one would need to enumerate all English words possibly describing surface) - Why it is really needed and not some sort of re-factoring exercise - numerous proposals around this have been made, mappers are botherd - nobody knows not only what value to use for certain real-world object, but not even the key - it should be as easy as "what_I_describe(:more_specifically)=how_I_describe_it" - landcover is, other combinations are just not - Having asked this question, I could not get any acceptable answer (besides splitting the places into smaller areas and tagging almost each flower separately) - because of missing landcover tag --LM 1 20:34, 26 January 2012 (UTC) Ease of tagging (As a response to the bullet, "ease of tagging" above) To avoid the situation with the many-valued surface, I think there need to be some kind of hierarchy in the land_cover keys. It would be practical if the top level could have something between three and seven values. There shouldn´t be to many levels either, two or three, more lvels only used for very specialized mapping. One possibility/problem is that top level categorization is easily done in a binary fashion, with questions like this (i.e.): - Land or water? - Vegetatated yes/no? - "urban" or "nature" (bad names) - and many more quetsions.t or This could give some branches on a tree: - industrial landscapes: Land/NotVegetated/"Urban" - bare_rock/desert: Land/NonVegetated/"Nature" If I avoid the urban land cover (buildings/industries/large areas of paved nothingness) I find it rather easy in the very top levels: - Water - Land/Vegetated - Land/NotVegetated this high in the hiearchy, one can easily skip one level and flatten the structure to: - Water - Vegetated (land) - Unvegetated land Unvegetated land in nature is a few areas , high mountains with visible bedrock, deserts and coastal cliffs. My little problem is that it also includes many urban areas, I really would like to find something that says unvegetated "nature".. /Johan Jönsson 20:16, 21 February 2012 (UTC) - I think your suggestion is a very good one. Is there any recognised hierarchy in professional/academic circles that we could adopt I wonder? PeterIto 13:46, 22 February 2012 (UTC) - Yes, there are a couple, they have been discussed. Can´t find the discussion of them above, it must have been held on the maillist. Anyhow, the UN-organization FAO have one: - See figure 7 in ch 2.3.1. in - Level1: Primarily_vegetated or non-vegetated - Level2: Terrestial or Aquatic/flooded - Level3: "Cultivated" or "natural" (a bit irritating land_use connection) - Giving in total: eight different top categories. - Example of the eight: - 1) Farmland/forest 2)Steppe/jungle 3)Rice_paddy 4)mangrove_marsh 5) Townland 6)Bare_rock 7)Dam 8)Lake - I think they are way to theoretic and we need to find a way to i.e. separate jungle from plains on the top_level. - Btw, I think the openstreetmap wetland-scheme is so good that we do not need to think more about all those odd cases (it covers the vegetated and unvegetated aquatic)./Johan Jönsson 19:30, 22 February 2012 (UTC) - I am not an expert in this field, but would be happy to support an proposal that made sense and came from an organisation of standing such as the FAO. My only comment is that from the article I am still not clear how to work at the more detailed level. How should one tag a 'forest' area which consists of tress that are only a few feet tall, or grassland with interspursed mature trees or thick cover of trees with a closed canopy. They allude to such a classification distinction but I don't see a complete hierarchy for us to implement. PeterIto 18:07, 23 February 2012 (UTC) - I wrote the above as a comment on the "ease of tagging"-bullet, where LM 1 asked for a set of a few top-level categories. The FAO-system tries to be a bit dynamic. At top-level there are 8 classes but each class then have its own tailored set of subdivisions, called classifiers. I am not suggesting that we use the classifiers-system, I was only looking for possible ways to form the top-level categories for land_cover. (More on FAO in a separate header, below) /Johan Jönsson 17:14, 23 February 2012 (UTC) - USGS land cover definition to be used for remote sensing, NLCD92, also have 8 top-level categories. (and only a total of 23 on second level). Here is a comparison between FAO and USGS - The eight FAO-categories, with my own names and some grouped together. - FAO-1) cultivated vegetation - FAO-2) natural vegatation - FAO-3 & 4) wetland - FAO-5) urban - FAO-6) barren - FAO-7 & 8) water - The eight NLCD92-categories, with corresponding FAO-category (NLCD92 have more diverse vegetation): - "cultivated" (herbaceous planted/cultivated) (FAO-1) - "orchard" non-natural woody (FAO-1) - "Grassland" (Herbaceous Upland) (FAO-2) - Shrubland (FAO-2) - Forest(ed upland) (FAO-2) - Wetlands (FAO-3) - Developed (FAO-5) - Barren (FAO-6) - Water (FAO-7) - /Johan Jönsson 18:49, 23 February 2012 (UTC) FAO land cover system for vegetation (Posted as a response to a detailed question on a discussed classifying system, if to be used in OSM simplifications is needed.) The FAO-system for classifying vegetatated areas uses the shape and form of the vegetation, rather than a species-related classification. the system is based on this method: - identify all vegetation - divide it into groups: tree, shrub and herbs - Determine for each group its "cover": is it sparse, open or closed - Choose the dominant lifeform, non-sparse trees goes first, then shrub, then herbs. For PeterIto questions above, the fAO answer would be: - How should one tag a 'forest' area which consists of tress that are only a few feet tall - " woody plants lower than 5 meter are classified as Shrubs" - I assume closed or open cover. - dominant lifeform is shrubs. - If the cover was closed it is called a "thicket". - If the cover was open it is called "shrubland" - grassland with interspursed mature trees - trees and herbs - I assume the trees are sparse. - Dominat lifeform is herbs. - It is called "herbaecous" - If the trees where not sprase, it would become a "woodland" - thick cover of trees with a closed canopy. - trees - closed - dominant lifeform is clearly trees - It is called "forest" After this initial work it is possible to add classifiers on height, leaftype, leaf phenology (decidious), spatial distribution. /Johan Jönsson 17:52, 23 February 2012 (UTC) - So this would allow one to say "The area consists of a closed herbaceous layer of unimproved grassland with scattered tall deciduous trees". That would be neat and I could see how a clever visualisation system in a computer game could have a go at creating a realistic representation of the scenery based on that description. I think that it pretty interesting. PeterIto - It is indeed very interesting, but it is more complex than what Johan is describing. The second question is actually if there is water and if the area is permanently or temporarily completely covered with water. They use a 2-phase-classification system, where the first phase consists of 3 steps (easy yes/no-questions) resulting in 8 type of basic classes. These get refined in the second phase.--Dieterdreist 20:01, 20 October 2012 (BST) - I think from the above two sections that the FAO system is definitely NOT what this osm key should be used for. Most notably landcover intentionally does not distinguish between managed and unmanaged plants. landcover=trees applies equally to anything from primordial coniferous woods to a commercial Christmas tree plantation, the distinction between the two is their land use, not their land cover. Similar for any other plant and non-plant land cover.Jbohmdk 05:06, 10 June 2013 (UTC) Status of proposal At what point should we say this proposal is essentially dead and without a viable future _or_ that it remains live and is ready for a vote? --Ceyockey 00:52, 22 November 2012 (UTC) - I wouldn't call it dead. I'm using it wherever possible, but I'm also aware that this topic is disputed. The main problem in my opinion is the fact that land cover is something you want to see on the map. As long as there is no support for the most important values in the main renderers people will not use the tag. And as long as the tags are not used, renderers ...... duh. --Imagic 12:17, 22 November 2012 (UTC) - As a relative newbie I found the above discussion very helpful to read, and I like the idea of the concept of landcover even if the proposal doesn't go anywhere. I had a hard time understanding natural=* for a long time because I kept fixating on whether it was untouched-by-man natural or not; eventually I realised that whatever the actual word "natural" means, in OSM it means landcover and is used to tag landcover (except when there's an appropriate landuse tag instead). This was useful for natural=water, where I spent a lot of time dithering and reading wiki pages as to whether something was a "basin" or not, before finally deciding that it's all water and can be tagged with a landuse later if that's appropriate. I also spent a lot of time trying to understand the debate over natural=wood vs landuse=forest, and in the absence of knowing who planted the trees or how often (if ever) they get logged (which you can't tell from the air or from a physical survey) I settled on natural=wood because it is neutral as to how the trees are actually used, if you understand the tag "natural" to mean "landcover". It also is amenable to overlapping with various other landuses, such as landuse=residential. - I've been using things like 'landuse=grass' and 'landuse=basin' for quite some time and I think it is useful to use 'landcover=grass' and 'natural=water' instead of these as they are merely descriptive rather than illustrative of an inferred use case. For instance → , where all the green and blue could be accurately revised to landcover and natural, respectively, retaining the landuse for the water cover as they are known to be basins for rainwater runoff. --Ceyockey 17:08, 25 November 2012 (UTC) - Isn't the proposal ready for voting? It's already more than 2 years old and there hasn't been any discussion for a while anymore. Getting this in to the wiki might boost the usage of it and the render hopefully will insert it into there themes. Should "area" be added to "landcover"? In other words, when we use "landcover=" should we also use "area=yes", or is "area=yes" provided by the interpretation of "landcover"? --Ceyockey 15:07, 24 November 2012 (UTC) - In my understanding the key landcover describes how an area of land is covered, therefore area=yes is implied. --Imagic 07:42, 25 November 2012 (UTC) bushes vs. scrub Lately I did some research amongst native speakers regarding "bushes". I mostly got the feedback that it is not a good word for land cover and that scrub(land) or shrub(land) would be better. As the usage numbers of "bushes" are still rather low I want to suggest to change bushes to scrub. This would also be more consistent with natural=scrub. (I myself am a german native speaker so I fully understand if some people prefer "bushes". But in OSM we should use british english words.) And if someone is interested: I started a JOSM style for landcover. --Imagic 09:11, 23 January 2013 (UTC) - I believe we need to distinguish two meanings of the words bush and bushes here. I suggest we use landcover=scrub for the landscape type consisting of low wild-looking vegetation found in places where trees cannot grow or simply have not yet grown high enough to be seen, while we use landcover=bushes for a dense but somewhat uniform growth of many instances of the low plant type individually known as a "bush". And yes, I have seen both within a short distance of each other. - Scrub is the landcover typically seen in landuses such as uncultivated natural "scrubland", landuse=greenfield and landuse=brownfield. Bushes on the other hand is a landcover typically seen in gardens, parks, hedges and plantations (e.g. tea and various berries). Either landcover may be found in places such as woods. In suburban areas, some gardens may have scrub, others bushes depending on the residents personal tastes and abilities. There is full orthogonality and mixability here, a plantation can include many acres of bushes grown for their commercial produce and some unused areas with scrub. A (semi-)natural wood can include some patches of scrub and some patches of bushes with wild berries. - Oh, and the African landscape type known as "the bush" is something quite different from either and more a landuse than a landcover, as it is diverse enough to hold multiple landcover types within it.Jbohmdk 04:52, 10 June 2013 (UTC) - Wahtever, I would simply add landcover=scrub to the table and see what mappers pick. landcover=bushes seems almost unused though. Another way to distinguish land cover, surface and land use From reading the comments, I realize that a way to clarify and untangle the concepts involved here: - land cover is what you can actually see in aerial photography or by looking down/around from normal human eye level, especially if it is biological or geological (as opposed to e.g. buildings and roads). It is what faces the sky: Tallest plants (not one by one), barren soil, barren mountain rocks etc. A land cover is typically (not always) larger than the typical C/A GPS uncertainty of 2.5m to 25m. Not all of planet Earth has a land cover. landcover can only be tagged on areas, not on ways or points. - surface is the upper layer of the planet if vegetation etc. is hypothetically stripped off (without actual ecological consequences as this is only hypothetical). This may be a natural surface such as rock, sand, soil, water or a man made surface such as asphalt, concrete, paving stones, or pebble stones. Surface is mostly tagged on mad-made barrent things such as the active parts of roads, plazas, sports pitches etc., but may also be tagged where the surface beneath a landcover is unusual, such as trees growing on water (mangrove) or grass growing on sand. surface can be tagged on areas but would mostly be used on ways and points that already carry a primary tag which makes it relevant. For instance a way tag as a highway will often have a surface to indicate if and how it is paved. - landuse is the how an area is used, which may or may not related to its landcover and/or surface. It is not the why or the what, it is the how. E.g. being legally designated as a public access commons (in England) is not a land use, being used as the commons of an English rural community is a land use. Similarly growing trees with a focus on producing valuable construction timber is a land use. Land use tags should be used where the use is known and not implied by some other tag. E.g. A highway is already a land usage excluding most others (it could still be part of an aerodrome or a car racing track), so there is no point in having a landuse=road tag. All 3 tag types should be simple but not oversimple classifications easily understood by novice mappers, such as "trees", "bushes", "grass", "mud", "barren" (specify with surface) or "steel", "asphalt", "concrete", "earth", "sand", "mud", "water". Fine distinctions unlikely to be understood by a mapper should be delegated to subtags such as trees=oak (not trees=deciduous, deciduous=oak which is too complicated). Implicit grouping of related predefined tag values into broader categories is a job for the data consuming software. Some examples: A village pond will often be landcover=waterplants, landuse=water, water=pond (implicit: surface=water). The Sargasso sea is probably landcover=seaweed, natural=ocean, (implicit: surface=water). Most of the Atlantic is probably natural=ocean, (implicit: surface=water) (implicit: no landcover if not specified) Memorial trees in a graveyard may be landcover=trees, surface=grass, landuse=cementary . A tree lined boulevard with trees so large their branches touch across the road may be highway=secondary, surface=asphalt. Jbohmdk 06:26, 10 June 2013 (UTC) Lancover=lawn I propose to change the name of the landcover=grass on landcover=lawn and add landcover:type. Type of grass can be tagged as grass:type. --Władysław Komorek (talk) 12:03, 29 September 2013 (UTC) - The usual convention is to tag the attributes, where "type" can refer to all sorts of different classification lists (by taxon? by (maximum) height? by usage? by soil moisture? by sharpness of the edges?). And even if there is a globally agreed primary classification list, a convention would be to use grass=* instead of any *:type key. Alv (talk) 21:47, 29 September 2013 (UTC) Rendering It appears to me the proposed and proper separation of landcover (or surface) and landuse would also help renderers to guess the vertical stacking of features. E.g. landuse= (just an administrative intention of usage of the area) would be the lowest level, then landcover= on it, then highway= and building=. Currently it is not so easy. Consider an area of highway=pedestrian+area=yes+surface=paved (or leisure=park+surface=paved) and then smaller areas overlapping it being landuse=grass. On Mapnik I get the whole area as grey with no trace of the grass (and layer=1 does not help). On other renderers I get grey main area with green areas where tagged. Maybe these are bugs in some of the renderers, I don't know. But I can't argue with them as long as the meaning is not well defined in OSM itself. Until the stacking is well defined, it sometimes is a hit and miss experimentation on which tags to use on which areas or whether to use a multipolygon with the grass areas being "inner" roles. For some areas multipolygon may be appropriate, for some it may not (e.g. the paved and grass areas are all formally part of a park or pedestrian zone, you do not want to exclude them). Currently, do we want the renderers to have special exceptions, e.g. to know that all landuse is on the lowest level, except when it is grass? Properly separating the layers would allow you to have big landuse=residential and small landcover=grass on top of it. Currently, when both are landuse, how should renderers choose the precedence? The one that is smaller is on top? - Aceman 17 Oct 2013. Update 2014 While mapping in the Alpes I also got a problem with the mishmasch of landcover features. After reading this proposal and the comments I studied two landcover classification systems: NLCD92 and LCCS. To get an overview of this existing landcover classification systems I created two tables with pictures: Then I tried to find a workable conclusion of this two systems in due consideration of some existing tagging-scheme in OSM. I concluded with 8 main categories and 36 subcategories, describing the whole earth's surface with one main key and 8 sub-keys. Please take a look at: User:Rudolf/draft_landcover --Rudolf (talk) 06:18, 29 April 2014 (UTC) - It may be a nice cleanup but not worth it. Reality is often more complex than "every place has a single landcover". How would you map underwater rocks, surfaces or "a mostly sandy area with intermittent water cover and sparse vegetation of several listed types"? RicoZ (talk) 12:19, 16 May 2015 (UTC) Landcover class or classifier? The Land Cover Classification System (LCCS) by FAO includes landcover classes and classifiers. The land cover classes are created by the combination of sets of pre-defined classifiers. These classifiers are tailored to each of the eight major land cover types. For every area you assign the related main type and then allocate all matching classifiers. This can be a number of different classifiers, e.g. trees, shrubs, herbaceous, lichen, mosses also sands, bare rock, gravel and so on. The landcover class (e.g. forest, woodland, shrubland, grassland) is the result after allocation of classifiers. Normally you get the result by software, e.g. dedicated LCCS software or rendering software. Very simple examples: - classifier = "trees" --> class = "forest" or "woodland" - classifier = "shrubs" --> class = "shrubland" - classifier = "herbaceous" --> class = "grassland" - classifier = "herbaceous"+"trees" --> class = "grassland with trees" - classifier = "trees"+"shrubs" --> class = "forest with shrubs" Therefore I propose to define the values trees and grass as landcover:classifier=*. The key landcover=* should be reserved for the landcover classes, such as woodland or grassland. Otherwise we got a mixture of classes and classifiers, which is very difficult to evaluate. If we use only the classifiers, then we got the result not until rendering. --Rudolf (talk) 21:15, 8 May 2014 (UTC) Is grass surface or landcover? Is grass surface or landcover? Bushes? Scrub? Rock? With exception of rare cases there will be always confusion. The only think which is obvious is trees (landcover, not surface), but it is not much :( Otherwise I like the proposal, but why not just to reuse the surface? (with exception of rare things like trees in water etc, which can be solved differently) --Jakubt (talk) 08:19, 26 August 2016 (UTC) - The surface=* is used to describe another physical feature e.g. a road. The landcover=* is a physical feature itself, so it needs no other tag. So it this 'grass' part of another physical feature - then add the tag surface=grass to it. If the 'grass' is not part of feature .. then tag it as landcover=grass Warin61 (talk) 04:02, 5 February 2017 (UTC) Exclude flat things I think this key would have been far less controversial and logical if it excluded flat things like grass, asphalt, etc. This ensures it does not overlap with surface=*, and it creates a nice order: - landcover=* would describe the non flat landcover (mainly trees or bushes), but it can be omitted when implied by other tags. esp. natural=*. - surface=* already describes the flat landcover, and it is already omitted when implied by other tags. e.g. natural=bare_rock - landuse=* already describes human use of the land. - natural=* already describes for naturally occurring features with no, or with minimal human intervention, and it often also implies the non flat landcover or flat landcover. - man_made=* already describes non naturally occurring features. Examples: - Military base with a grass surface. landuse=military, surface=grass. - Park with asphalt and trees which are clearly maintained and "non natural": leisure=park, surface=asphalt, landcover=trees. - Park with lawn and trees: leisure=park, surface=grass, landcover=trees. - Grass only park: leisure=park, surface=grass. (Or maybe landcover=none to explicitly state that non-flat landcover doesn't exist). This is backward compatible because landcover and surface are often implied by tags which are already widely used tags. e.g. tagging a forest with only natural=wood implies the non flat landcover is tree, and therefore is compatible. Additionally, all previous grass uses should be discouraged in favor of surface=grass for maximum consistency with the order above, but this is still backward compatible with current uses, e.g. landuse=grass implies surface=grass. (natural unmaintained grasslands would still be natural=grassland) This would require abandoning the weird distinction between "highway/routing surfaces that should have surface=* and area surfaces that should have landcover=*" which is suggested in the original proposal. As an aside, I think landuse=forest should be removed and landuse=wood_farm should be introduced to avoid confusion with natural=wood, but that's a different story. I am willing to draft a new proposal if people like this. Opinions? -- SwiftFast (talk) 09:23, 20 May 2017 (UTC) - Your flat vs. non-flat distinction does not help, as any theshold would be artificial. Grass grows from 0 to maybe 50 cm, bushes to 200, and trees to 2000, so your definition of flat is just a matter of scale. And I don't like surface to be mixed in when not describing the property of a particular feature.--Polarbear w (talk) 14:44, 20 May 2017 (UTC) - The threshold isn't the exact height. If it's something you walk on, it is flat. If it's something you walk beneath/beside, it is non-flat. Such a threshold is pragmatic, because any given piece of land can have any combination of things you walk on and things you walk beside/beneath. Since it's a combination, these are two distinct attributes requiring two distinct tags (which are sometimes implied and need not be explicit). Unlike manmade lawn, a grassland is an edge case, because it's the only thing you walk *through*, but this one is already covered by natural=grassland, so we're good. -- SwiftFast (talk) 17:40, 20 May 2017 (UTC) - This is getting even more confusing. So if the threshold is "walking on" it means cut lawn falls into flat, and non-flat when unmaintained. And what if people trample over the bushes?--Polarbear w (talk) 22:34, 20 May 2017 (UTC) - Yes. if it's nice to walk on, it's surface=grass. If it's jungly, it's natural=grassland. The map should distinguish the two. Please don't resort into Sorties Paradoxes, in 99.99% cases the distinction is obvious. And when it's right on the edge both tags are fine by me. (You could make such philosophical arguments to "invalidate" any spectrum related tags like hamlet/village, pond/lake) And I don't like surface to be mixed in when not describing the property of a particular feature. Is there any technical reason why this would be bad tagging? (e.g. area=yes, surface=asphalt). I thought not liking this is a subjective preference, but Osmose sometimes warns me about it. -- SwiftFast (talk) 06:21, 15 June 2017 (UTC) Tagging grass under trees? It can't be both landcover=grass and landcover=trees? So what do we do? what about sand under trees? -- SwiftFast (talk) 18:08, 21 May 2017 (UTC) - If you are on this level of detail, you can tag landcover=grass and draw the individual node for the tree, with natural=tree. Quite common method already. Would you mind calming down a bit? --Polarbear w (talk) 19:54, 21 May 2017 (UTC) - What if I don't want to go into this level of detail, and I still want to indicate this park is gravel-covered and that one is grass-covered? (could be useful information for picnics or ball games and such). I think this demonstrates what I meant by saying any piece of land has two attributes (flat and non flat covers) that can never be recorded by a single tag. -- SwiftFast (talk) 05:15, 22 May 2017 (UTC) - We desperately need a landcover definition encompassing vegetation levels plus topmost layers of geology. Started toying with ideas some time ago User:RicoZ/What_is_needed. RicoZ (talk) 10:56, 30 July 2017 (UTC) landcover=green What about a generic value for green landcover for situations where the presence of grass, bushes or trees cannot yet be verified, but what is a identified as a green area. The value description should of course encourage to refine the kind of green. This would help to resolve the common misuse of leisure=village_green for small green spots that do not qualify as park, garden or recreation ground.--Polarbear w (talk) 17:50, 1 December 2017 (UTC) Missing from the urban mapper's toolbox Mapping in a detailed urban area often entails defining areas of (planned) vegetation. Currently this mean abusing these three tags: - landuse=grass - natural=scrub - landuse=forest What is really being tagged are clumps of trees, bushes, and patches of grass in an area often already defined by another landuse (residential, industrial, railway, etc.). It makes sense to have these available as a consistent landcover set. JeroenHoek (talk) 08:02, 23 January 2018 (UTC) - Main problem is: you never get people to (re)tag to landcover values, unless these tags are rendered appropriately. If the rendering is there, many taggers would adopt the new more consistent scheme. Nobody likes tagging for not-rendered. So you need renderers to a. Assert the renderability, b. set out a course and a date to actual rendering.--Peter Elderson (talk) 09:47, 5 June 2018 (UTC) Usage - quick note about Paraguay import I run a quick check why landcover=trees usage massively increased in 2018. It turned out to be result of a single import in a Paraguay that added over 96 000 of landcover=tress elements. It is a great example that raw usage count is not a perfect metric to judge how popular is tag among mappers. Just because tag usage is growing (even quickly) it does not mean that real mappers are actually using it. Currently about half (depends on how it is counted) of landcover data is from this single import (and remaining data is not checked, even larger part of landcover tags may be imported) Mateusz Konieczny (talk) 08:59, 6 March 2019 (UTC) - --Peter Elderson (talk) 16:44, 8 July 2019 (UTC) I would like to add that Paraguay was a single project by real mappers, but not a single import. The landcover key was selected as best fit for the project. It would not be right to discard this. It is valid usage. Is this proposal planning to deprecate landuse=forest and natural=wood? It is not stated outright but implied that part of this proposal is deprecation of landuse=forest and natural=wood. I think that it would be a good idea to make explicit is deprecation of this tags proposed along deprecation of landuse=grass Mateusz Konieczny (talk) 14:14, 6 March 2019 (UTC) - --Peter Elderson (talk) 16:58, 8 July 2019 (UTC) At the same time it is not necessary to ban the old tags all at once. The proposal, in my view, is a preference for landcover over landuse-that-is-not-landuse.
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/landcover
CC-MAIN-2020-50
refinedweb
10,533
61.06
Android Data Binding: The Big Event And You Don’t Even Have to Dress Up In previous articles, I wrote about how to eliminate findViewById from Android applications and in some cases eliminate the need for View IDs altogether. One thing I didn’t explicitly mention in those articles is how to handle event listeners, such as View’s OnClickListener and TextView’s TextWatcher. Android Data Binding provides three mechanisms to set an event listener in the layout file and you can choose whichever is most convenient for you. Unlike the standard Android onClick attribute, none of the event data binding mechanisms use reflection, so performance is good whichever mechanism you choose. Listener Objects For any view with a listener that uses a set* call (as opposed to an add* call), you can bind a listener object to the attribute. For example: <View android:onClickListener="@{callbacks.clickListener}" .../> Where the listener is defined with a getter or a public field like: public class Callbacks { public View.OnClickListener clickListener; } There is also a shortcut for this where the “Listener” has been stripped: <View android:onClick="@{listeners.clickListener}" .../> Binding with listener objects is used when your application already uses them, but in most cases you’ll use one of the other two methods. Method References With method references, you can hook a method up to any event listener method individually. Any static or instance method may be used as long as it has the same parameters and return type as in the listener. For example: <EditText android:afterTextChanged="@{callbacks::nameChanged}" .../> where Callbacks has a nameChanged method declared like this: public class Callbacks { public void nameChanged(Editable editable) { //... } } The attribute used is in the “android” namespace and matches the name of the method in the listener. Though it isn’t recommended, you may do some logic in the binding as well: <EditText android:afterTextChanged= "@{user.hasName?callbacks::nameChanged:callbacks::idChanged}" .../> In most cases it is better to put logic in the called method. This becomes much easier when you can pass additional information to the method (like user above). You can do this with lambda expressions. Lambda Expressions You can supply a lambda expression and pass any parameters to your method that you wish. For example: <EditText android:afterTextChanged="@{(e)->callbacks.textChanged(user, e)}" ... /> And the textChanged method takes the passed parameters: public class Callbacks { public void textChanged(User user, Editable editable) { if (user.hasName()) { //... } else { //... } } } If you don’t need any of the parameters from the listener, you can remove them with this syntax: <EditText android:afterTextChanged="@{()->callbacks.textChanged(user)}" ... /> But you can’t take just some of them — it is all or none. The timing of expression evaluation also differs between method references and lambda expressions. With method references, the expression is evaluated at binding time. With lambda expressions, it is evaluated when the event occurs. Suppose, for example, the callbacks variable hasn’t been set. With a method reference, the expression evaluates to null and no listener will be set. With lambda expressions, a listener is always set and the expression is evaluated when the event is raised. Normally this doesn’t matter much, but when there is a return value, the default Java value will be returned instead of having no call. For example: <View android:onLongClick=”@{()->callbacks.longClick()}” …/> When callbacks is null, false is returned. You can use a longer expression to return the type you wish to return in such an error case: <View android:onLongClick=”@{()->callbacks == null ? true : callbacks.longClick()}” …/> You’ll more often just avoid that situation altogether by ensuring that you don’t have null expression evaluation. Lambda expressions may be used on the same attributes as method references, so you can easily switch between them. Which To Use? The most flexible mechanism is a lambda expression, which allows you to give different parameters to your callback than the event listener provides. In many cases, your callback will take the exact same parameters as given in the listener method. In that case, method references provide a shorter syntax and are slightly easier to read. In applications that you are converting to use Android Data Binding, you may already have listener objects that you were setting on views. You can pass the listener as a variable to the layout and assign it to the view.
https://medium.com/google-developers/android-data-binding-the-big-event-2697089dd0d7
CC-MAIN-2017-34
refinedweb
719
54.83
SimpleHTTP provides a back-end independent API for handling HTTP requests. By default, the built-in HTTP server will be used. However, other back-ends like CGI/FastCGI can used if so desired. So the general nature of simpleHTTP is no different than what you'd expect from a web application container. First you figure out which function is going to process your request, process the request to generate a response, then return that response to the client. The web application container is started with simpleHTTP, which takes a configuration and a response-building structure (ServerPartT which I'll return too in a moment), and picks the first handler that is willing to accept the request, passes the request into the handler. A simple hello world style HAppS simpleHTTP server looks like: main = simpleHTTP nullConf $ return "Hello World!" simpleHTTP nullConf creates a HTTP server on port 8000. return "Hello World!" creates a serverPartT that just returns that text. ServerPartT is the basic response builder. As you might expect, it's a container for a function that takes a Request and converts it a response suitable for sending back to the server. Most of the time though you don't even need to worry about that as ServerPartT hides almost all the machinery for building your response by exposing a few type classes. ServerPartT is a pretty rich monad. You can interact with your request, your response, do IO, etc. Here is a do block that validates basic authentication It takes a realm name as a string, a Map of username to password and a server part to run if authentication fails. basicAuth' acts like a guard, and only produces a response when authentication fails. So put it before any ServerPartT you want to demand authentication for in any collection of ServerPartTs. main = simpleHTTP nullConf $ myAuth, return "Hello World!" where myAuth = basicAuth' "Test" (M.fromList [("hello", "world")]) (return "Login Failed") basicAuth' realmName authMap unauthorizedPart = do let validLogin name pass = M.lookup name authMap == Just pass let parseHeader = break (':'==) . Base64.decode . B.unpack . B.drop 6 authHeader <- getHeaderM "authorization" case authHeader of Nothing -> err Just x -> case parseHeader x of (name, ':':pass) | validLogin name pass -> mzero | otherwise -> err _ -> err where err = do unauthorized () setHeaderM headerName headerValue unauthorizedPart headerValue = "Basic realm=\"" ++ realmName ++ "\"" headerName = "WWW-Authenticate" Here is another example that uses liftIO to embed IO in a request process main = simpleHTTP nullConf $ myPart myPart = do line <- liftIO $ do -- IO putStr "return? " getLine when (take 2 line /= "ok") $ (notfound () >> return "refused") return "Hello World!" This example will ask in the console "return? " if you type "ok" it will show "Hello World!" and if you type anything else it will return a 404. Run simpleHTTP with a previously bound socket. Useful if you want to run happstack as user on port 80. Use something like this: import System.Posix.User (setUserID, UserEntry(..), getUserEntryForName) main = do let conf = nullConf { port = 80 } socket <- bindPort conf -- do other stuff as root here getUserEntryForName "www" >>= setUserID . userID -- finally start handling incoming requests tid <- forkIO $ socketSimpleHTTP socket conf impl Note: It's important to use the same conf (or at least the same port) for bindPort and simpleHTTPWithSocket. Used to manipulate the containing monad. Very useful when embedding a monad into a ServerPartT, since simpleHTTP requires a ServerPartT IO a. Refer to Web ServerPartT IO a (e.g. simpleHTTP). You can provide the function: unpackErrorT:: (Monad m, Show e) => UnWebT (ErrorT e m) a -> UnWebT m a unpackErrorT handler et = do eitherV <- runErrorT et case eitherV of Left err -> return $ Just (Left Catastrophic failure ++ show e, Set $ Endo r -> r{rsCode = 500}) Right x -> return spUnwrapErrorT for a more sophisticated version of this function Just (Either Response a, SetAppend (Endo Response)) if mzero wasn't called. Inside the Maybe, there is a pair. The second element of the pair is our filter function FilterFun Response. FilterFun Response is a type alias for SetAppend (Dual (Endo Response)). This is just a wrapper for a Response->Response function with a particular Monoid behavior. The value Append (Dual (Endo f)) Causes f to be composed with the previous filter. Set (Dual (Endo f)) Causes f to not be composed with the previous filter. Finally, the first element of the pair is either Left Response or Right a. Another way of looking at all these pieces is from the behaviors they control. The Maybe controls the mzero behavior. Set (Endo f) comes from the setFilter behavior. Likewise, Append (Endo f) is from composeFilter. Left Response is what you get when you call finishWith and Right a is the normal exit. This class is used by path to parse a path component into a value. At present, the instances for number types (Int, Float, etc) just call readM. The instance for String however, just passes the path component straight through. This is so that you can read a path component which looks like this as a String: /somestring/ instead of requiring the path component to look like: Minimal definition: toMessage Used to convert arbitrary types into an HTTP response. You need to implement this if you want to pass ServerPartT m containing your type into simpleHTTP Ignores all previous alterations to your filter As an example: do composeFilter f setFilter g return "Hello World" setFilter g will cause the first composeFilter to be ignored. A monoid operation container. If a is a monoid, then SetAppend is a monoid with the following behaviors: Set x mappend Append y = Set (x mappend y) Append x mappend Append y = Append (x mappend y) _ mappend Set y = Set y A simple way of sumerizing this is, if the right side is Append, then the right is appended to the left. If the right side is Set, then the left side is ignored. A control structure It ends the computation and returns the Response you passed into it immediately. This provides an alternate escape route. In particular it has a monadic value of any type. And unless you call setFilter id first your response filters will be applied normally. Extremely useful when you're deep inside a monad and decide that you want to return a completely different content type, since it doesn't force you to convert all your return types to Response early just to accomodate this. pops any path element and ignores when chosing a ServerPartT to handle the request. used to read parse your request with a RqData (a ReaderT, basically) For example here is a simple GET or POST variable based authentication guard. It handles the request with errorHandler if authentication fails. myRqData = do username <- lookInput "username" password <- lookInput "password" return (username, password) checkAuth errorHandler = do d <- getData myRqDataA case d of Nothing -> errorHandler Just a | isValid a -> mzero Just a | otherwise -> errorHandler An varient of getData that uses FromData to chose your RqData for you. The example from getData becomes: myRqData = do username <- lookInput "username" password <- lookInput "password" return (username, password) instance FromData (String,String) where fromData = myRqData checkAuth errorHandler = do d <- getData' case d of Nothing -> errorHandler Just a | isValid a -> mzero Just a | otherwise -> errorHandler proxyServe is for creating ServerPartT's that proxy. The sole argument [String] is a list of allowed domains for proxying. This matches the domain part of the request and the wildcard * can be used. E.g. TODO: annoyingly enough, this method eventually calls escape, so any headers you set won't be used, and the computation immediatly ends. This is a reverse proxy implementation. see unrproxify TODO: this would be more useful if it didn't call escape, just like proxyServe' This ServerPart modifier enables the use of throwError and catchError inside the WebT actions, by adding the ErrorT monad transformer to the stack. You can wrap the complete second argument to simpleHTTP in this function. An example error Handler to be used with spUnWrapErrorT, which returns the error message as a plain text message to the browser. Another possibility is to store the error message, e.g. as a FlashMsg, and then redirect the user somewhere. This is a for use with mapServerPartT' It it unwraps the interior monad for use with simpleHTTP. If you have a ServerPartT (ErrorT e m) a, this will convert that monad into a ServerPartT m a. Used with mapServerPartT' to allow throwError and catchError inside your monad. Eg. simpleHTTP conf $ mapServerPartT' (spUnWrapErrorT failurePart) $ myPart `catchError` errorPart Note that failurePart will only be run if errorPart threw an error so it doesn't have to be very complex. Set the validator which should be used for this particular Response when validation is enabled. Calling this function does not enable validation. That can only be done by enabling the validation in the Conf that is passed to simpleHTTP. You do not need to call this function if the validator set in Conf does what you want already. Example: (use noopValidator instead of the default supplied by validateConf) simpleHTTP validateConf . anyRequest $ ok . setValidator noopValidator =<< htmlPage See also: validateConf, wdgHTMLValidator, noopValidator, lazyProcValidator ServerPart version of setValidator Example: (Set validator to noopValidator) simpleHTTP validateConf $ setValidatorSP noopValidator (dir ajax ... ) See also: setValidator This extends nullConf by enabling validation and setting wdgHTMLValidator as the default validator for text/html. Example: simpleHTTP validateConf . anyRequest $ ok htmlPage Actually perform the validation on a Response Run the validator specified in the Response. If none is provide use the supplied default instead. Note: This function will run validation unconditionally. You probably want setValidator or validateConf. Validate text/html content with WDG HTML Validator. This function expects the executable to be named validate and it must be in the default PATH. See also: setValidator, validateConf, lazyProcValidator A validator which always succeeds. Useful for selectively disabling validation. For example, if you are sending down HTML fragments to an AJAX application and the default validator only understands complete documents. Validate the Response using an external application. If the external application returns 0, the original response is returned unmodified. If the external application returns non-zero, a Response containing the error messages and original response body is returned instead. This function also takes a predicate filter which is applied to the content-type of the response. The filter will only be applied if the predicate returns true. NOTE: This function requirse the use of -threaded to avoid blocking. However, you probably need that for Happstack anyway. See also: wdgHTMLValidator
http://hackage.haskell.org/package/happstack-server-0.4.1/docs/Happstack-Server-SimpleHTTP.html
CC-MAIN-2015-40
refinedweb
1,726
55.13
I need to install OpenCL on a 32-bit machine with CentOS6.0 Anyone can help me? Solved! Go to Solution. Hi You will be able to install opencl on this os. You can download the package from the below link-... Please post your feedback after installation. Hi You will be able to install opencl on this os. You can download the package from the below link-... Please post your feedback after installation. Hello. The installation was successful but when compiling a program I get the following error: 'size_t' was not declared In this scope This error is repeated with other variables. How I can fix? I have to install something else? Thanks. Hi Just make sure that you have included stddef.h . size_t is defined in this header file. This is my code: #include "simpleCL.h" int main() { char buf[]="Hello, World!"; size_t global_size[2], local_size[2]; int found, worksize; sclHard hardware; sclSoft software; // Target buffer just so we show we got the data from OpenCL worksize = strlen(buf); char buf2[worksize]; buf2[0]='?'; buf2[worksize]=0; // Get the hardware hardware = sclGetGPUHardware( 0, &found ); // Get the software software = sclGetCLSoftware( "example.cl", "example", hardware ); // Set NDRange dimensions global_size[0] = strlen(buf); global_size[1] = 1; local_size[0] = global_size[0]; local_size[1] = 1; sclManageArgsLaunchKernel( hardware, software, global_size, local_size, " %r %w ", worksize, buf, worksize, buf2 ); // Finally, output out happy message. puts(buf2); } __kernel void example( __global char* buf, __global char* buf2 ){ int x = get_global_id(0); buf2 } and this is the output now: simple.cpp:1:22: error: simpleCL.h: No such file or directory simple.cpp: In function ‘int main()’: simple.cpp:8: error: ‘sclHard’ was not declared in this scope simple.cpp:8: error: expected ‘;’ before ‘hardware’ simple.cpp:9: error: ‘sclSoft’ was not declared in this scope simple.cpp:9: error: expected ‘;’ before ‘software’ simple.cpp:12: error: ‘strlen’ was not declared in this scope simple.cpp:18: error: ‘hardware’ was not declared in this scope simple.cpp:18: error: ‘sclGetGPUHardware’ was not declared in this scope simple.cpp:20: error: ‘software’ was not declared in this scope simple.cpp:20: error: ‘sclGetCLSoftware’ was not declared in this scope simple.cpp:27: error: ‘sclManageArgsLaunchKernel’ was not declared in this scope simple.cpp:30: error: ‘puts’ was not declared in this scope simple.cpp: At global scope: simple.cpp:34: error: expected constructor, destructor, or type conversion before ‘void’ Thanks!! Could you please attach your complete application folder in zip format here. Because, you must place your .h files in a approprate path to access. By the error which you mentioned above shows that your .h file is somewhere and u r just giving the path as "simpleCL.h". Make sure the path
https://community.amd.com/t5/archives-discussions/opencl-in-centos6-0/m-p/395813/highlight/true
CC-MAIN-2021-25
refinedweb
451
61.83
All JetBrains products are known for their free, quality technical support. We at PyCharm are here to help you with any problem you may come across while using the IDE. Whenever you have a question, doubt, bug or technical issue, rest assured we’re eager to help! Let me just give you some pointers on how it’s best to contact us to get your problems solved as quickly and easily as possible. In short, here are the channels you’re welcome to use to get support: bug tracker, blog, Twitter, forum, Facebook, sales support, and of course technical support. That said, some issues may be more quickly and easily solved on your own. If you have a question about product usage or a specific feature, first please consult this webpage which lists many useful links: PyCharm tutorials, keymap references and PyCharm online help, as well as demo videos, webinars and screencasts from our YouTube channel. In addition, this blog also has a lot of feature highlights. Do also check our community forum to see if someone has already answered your question. If no answer is in sight after this, use one of these two powerful tools: 1. PyCharm’s public bug tracker. No matter which PyCharm edition you’re using, feel free to report a bug if you experience any technical problems. You can also look through issues others have reported, vote for them, and request new features. Whenever your issue is updated, you will received a notification: Our developer team does use this tracker internally: we go through, prioritize and resolve dozens of issues every day. 2. PyCharm Technical Support. When bugs are not the issue, but you need help with setup, customization or some general questions, you’re welcome to submit a technical support request. Go to and switch to the “SUBMIT A REQUEST” tab. Here, please provide your email address, specify the product you’re using, and describe your question or problem in as much detail as possible: Our support team will respond to your request by email as quickly as humanly possible. To track your request history, use the “CHECK YOUR EXISTING REQUEST” tab. Just a couple of hints to speed up the process and avoid any delays in getting help: - Make sure you provide all the information required in the form. We especially need the details of your installation, including the operating system, PyCharm version and the build number. - It’s always good to attach screenshots, screencasts, specific projects or files, so we can gain deeper insight into the problem you’re having. - In most cases, the support team will ask for your PyCharm log files. For instructions on how to get them, click here. - For troubleshooting performance issues like hanging or frozen UI, we have special instructions. Additional instructions, known issues and FAQs can be found on the JetBrains Support homepage. While it’s possible to submit anonymous requests, we recommend that you register for a JetBrains Account and use to log in to various JetBrains services (including Support). A JetBrains Account is a useful tool for managing your interactions with us, as well as your licenses, orders and other things. To open your JetBrains Account, simply log in on the support page or go directly to JetBrains Account page. An alternative way to reach us is by writing an email directly to the PyCharm Support team. To receive speedy help, please make sure to provide the same details as in the online form described above. That’s about it. Of course, we’re always listening to your questions, thoughts and any other feedback on our social media, including PyCharm Twitter, Facebook, forum, and this blog. Even our online help supports comments. Whatever problems you may encounter, we’ll solve them together! Develop with Pleasure! Sigh. Pretty much the sole reason our company bought PyCharm licenses was to use the remote debugger with Vagrant, but this feature has been outright broken since 4.0 and the bug reports about this have seen no activity for months (,), even after we asked support to have someone revise the priority of these issues. Can’t help but be a tiny bit skeptical about everything in this blog post. I’ve had good interactions with the bug tracker. It’s hard to get to everything, but things that are outright bugs are often fixed in the next release, which has been great. The forum, on the other hand, seems neglected. Many posts don’t have answers. Some don’t even have views. I’ve been discouraged by that. I have mixed feelings with this blog post. On one hand, JetBrains delivers an amazing product, part of which for free. On the other hand, out of the many bugs I have filed, a select few have been fixed very quickly, but the vast majority have been quickly triaged and then apparently forgotten (FWIW, PY-12206 PY-12408 and PY-12205 are the ones irritating me the most), which makes me hesitant to report other issues. The issues I refer to are quite minor: you can quickly fix up what PyCharm did wrong, but since they happen all the time, it is both irritating and time wasting (it quickly adds up). To be honest, those issues have very few votes from other users. I have wondered why, and I suppose part of the reason is that it is so quick to fix what PyCharm did wrong, that those that do get into the issue do not bother going to the bug tracker. If only these parts of PyCharm were coded in Python, I would have already (tried to) fix(ed) those myself… Dmitry, I like your enthusiasm, but there is not much point using your forum when the rate of reply / resolution is so small. How about giving some attention to this channel? Indeed the forum sometimes seems abandoned. Currently we’re trying to re-arrange our channels and the way we communicate with users including forum. I wanted to solve an equation. How can I do that I have a problem with Gurobi. Everything works well in terminal when I execute “python ” but it pycharm I get this error: /usr/bin/python /home/amir/PycharmProjects/test2/test2.py Traceback (most recent call last): File “/home/amir/PycharmProjects/test2/test2.py”, line 97, in from gurobipy import * File “/usr/local/lib/python2.7/dist-packages/gurobipy/__init__.py”, line 1, in Have you already solved this question? I encountered this problem as well. If you have the solution, please let me know. Many thanks. My email is martinss.40@163.com In my laptop when i install pyCharm there is problem in python interpreter . pycharm said there is no python interpreter selected . how to fix it please help You can select an interpreter in the project settings: File | Settings | Project Settings | Project Interpreter. If you don’t have a version of Python installed yet, you should grab it from. Hello, I have a question about the packages of project interpreter installed on Pycharm. I installed a package ‘deribit-api’ on project interpreter on Pycharm and I want to change its code in source file. When I try to get access to its source file, Pycharm asks me if I will change the source code anyway, I choose no. So I lost the chance to change the source file code later but it only asks once about I will change the source code. I have no chance to change it again. How can I make this source file editable? Thanks I have problem with import in Pycharm I have installed Anaconda3 with Python 3.7 and succsessfully installed torch. I create a new project setting Anaconda3 as interpreter when I run the code file: “import torch” I get this error Traceback (most recent call last): File “C:/Users/bened/PycharmProjects/untitled4/main.py”, line 1, in import torch File “C:\Users\bened\Anaconda3\lib\site-packages\torch\__init__.py”, line 80, in from torch._C import * ImportError: DLL load failed: Impossible to find the specified module. How can I fix it? How to setup https proxy in pycharm and webstorm. Please help Here is a help system page: Hi Community New to pycharm, want a quick info on pycharm community edition, Is there a way I can enable step definition creation in Community edition, If yes please let me know. If you mean step definitions for BDD tests, that’s a pro feature. BDD is only available in PyCharm Professional Edition. How to link Google cloud machine learning engine API to app built using Android studio and pycharm? usr/bin/python: can’t find ‘__main__’ module in ‘mac changer/’ I am using kali in virtual env, and pycharm as IDE, while I try to run any code (as simple as hello(“world”)) via IDE it works fine, but while I use terminal to run the same file, the above error messages gets displayed. So far I cant figure out what is wrong.(was working fine till yesterday) ps: script path is fine in “edit configuration tab in pycharm”
https://blog.jetbrains.com/pycharm/2015/06/pycharm-support-for-any-problem-you-may-have-well-find-a-solution/
CC-MAIN-2019-22
refinedweb
1,518
71.95
The new GST return forms would be introduced from January 1 after successful beta-testing of the software, Finance Secretary Hasmukh Adhia said today. The new GST return forms would be introduced from January 1 after successful beta-testing of the software, Finance Secretary Hasmukh Adhia said today. He said wrong input tax credit claims are a potential area of evasion in the Goods and Services Tax (GST) regime and one-to-one invoice matching is the key to checking evasion. On scope for rationalisation in the highest tax bracket of 28 per cent, Adhia, at an event organised jointly by CNBC-TV18 and PwC India, said reducing the number of items in the tax slab would depend on revenue position of the government. To a question on how soon the new return filing system would be implemented, Adhia said the target is to roll it out within 6 months. “I would like it to go through a trial run. So we will sort of put it in a pilot in beta version and we will ask some people to test. We will do it sometime between October-December. By January 1 it should be implemented,” Adhia said at the event ‘GST Decoded’. He said invoice matching has to happen in the GST system. “The only thing is the way we collect details about invoices is what matters. So in the new system of filing return we will have all the sales invoice being uploaded along with returns only for B2B. For B2C you don’t need to give invoice wise details at all,” Adhia said. The GST Council had in its last meeting on May 4 approved the design of new return forms. It was decided that the current system of filing summary returns (GSTR-3B) and final sales return (GSTR-1) would continue for 6 months. The secretary said while GST collection every month is about Rs 94,000 crore to Rs 1 lakh crore, the total tax liability discharged by businesses every month is Rs 5 lakh crore. “But Rs 4 lakh crore of this liability is being discharged by way of input tax credit claims. Now you can understand what a scope (is) for improving our revenue if we streamline and if we have a check on input tax credit,” Adhia said. He said the only place where there is a scope for tax evasion is in the area of claiming wrong ITC and that’s an area which can be plugged by using technology. “Even if we make a 10 per cent difference in Rs 4 lakh crore by keeping an eye on them (wrong ITC claims), by having a one-to-one invoice matching, what an improvement in revenue collection can happen,” Adhia said. On rationalisation of the 28 per cent bracket, he said there are 50 items left in the highest tax slab and most of these items attract a cess as these are either demerit or luxury goods. Besides there are some construction and automobile items. “…We have not still reached the magic number of Rs 1 lakh crore (revenue) every month consistently. We will have to keep watching our revenue and whenever there is a scope created for giving more concessions, then we can one by one or in batches we can start looking at those items. “Immediately I think it (rationalisation) is not going to be so feasible because we have to watch our revenue first,” Adh.
https://www.financialexpress.com/economy/target-to-roll-out-new-gst-return-forms-by-jan-1-says-finance-secretary-hasmukh-adhia/1224343/
CC-MAIN-2021-10
refinedweb
577
65.46
Event scheduling with persistence for short-lived processes i.e. AWS Lambda Project description Event Magic Event magic is a simple scheduling / event management package aimed at services like AWS Lambda. Typically, Event scheduling applications rely on you creating objects and then running those in a loop, so a long running process. The issue then appears with Lambda that you need to be idempotent so you need to reload the events and make sure they executed losing any data like number of executions. To solve this, event magic can be run entirely in memory and has additional functionality to simply store and load the events from a mysql DB. Additionally event magic allows you to specify a function to execute before the main event and provides a range of 'complete' criteria tests, such as: - Until_success (Run for ever until it is successful), - Until X count of successful executions - Until a completed function returns True Events can be broken down as follows: Event - The thing you want to do Schedule - When you want to do it. The challenge is knowing when the event executed, Did it succeed or fail? Should the event repeat? Should it repeat X times or forever? What If it needs to repeat forever but only until a certain state is reach? This module was written specifically to work (agnostically) on a FaaS platform (AWS Lambda for example). Should I use Event Magic? Yes, If you have Lambdas in AWS and you need scheduling but do not want to set up Redis or create more lambdas and you already have a mysql DB then this is good. Current State of Development Currently in alpha as it does not have all of the features I originally intended and there's some areas that need a drastic rework, such as the database interactions. Once all of the features are in place and I have implemented it fully in a.n.other product it will move to Beta. Production will be decided on usage. i.e. I don't wont to say it's production ready until it stops having odd issues. So either a period of time or a number of implementations (Please let me know if you are using it / need help just raise a ticket) Setup DB Not finished on this needs a total re-write but... pragmatism. However for now, simply copy the db_setup.sql and run it against your DB. To set your DB credentials do the following: import eventmagic eventmagic.HOST = "localhost" eventmagic.PORT = "3306" eventmagic.USERNAME = "root" eventmagic.PASSWORD = "thisisroot" eventmagic.DATABASE = "eventmagic" Creating an event from eventmagic.schedule import Schedule from eventmagic.event import Event # import datetime # import time def oneOffFunc(): """My One off Func.""" print("Hello world!") return True # This event completed as soon as oneOffFunc returns true event = Event(oneOffFunc, until_success=True) schedule1 = Schedule() schedule1.job(event) # Below is how you would set it with a datetime object # schedule1.when(datetime.datetime.now() + datetime.timedelta(seconds=2) # Sleep for 5 seconds so When is no longer in the future... # time.sleep(5) # Standard Crontab schedule1.when("* * * * *") # If it is only a one off you can simply do this and an Event object will be # created for you: schedule2 = Schedule() schedule2.job(oneOffFunc) # Extended Crontab schedule2.when("* * * * * * *") # Execute once (NB This won't execute as when is set 1 min in the future) schedule1.execute() # Execute until successful while not schedule2.completed: schedule2.execute() see parse-crontab for more info on what is accepted as a crontab Recurring events: from eventmagic.schedule import Schedule from eventmagic.event import Event def runForEver(): """Run for ever""" print("Hello World") return True event = Event(oneOffFunc, count=10) schedule1 = Schedule() schedule1.job(event) schedule1.when("* * * * * * *") while not schedule1.completed: schedule1.execute() see example.py for more info Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/eventmagic/
CC-MAIN-2018-34
refinedweb
658
57.47
The Basics of MVVM While training and consulting with clients around the country, I find that many developers still have trouble grasping the concept of Model-View-View Model (MVVM) as used in Silverlight or WPF. In this blog post I thought I would show two examples side-by-side to help you learn how to move from the more traditional model of development to MVVM. Why use MVVM The reasons why programmers are adopting MVVM or the Model View Controller (MVC) design patterns are the same reasons why programmers adopted Object Oriented Programming (OOP) almost 30 years ago: reusability, maintainability and testability. Wrapping the logic of your application into classes allows you to reuse those classes in many different applications. Maintaining logic in classes allows you to fix any bugs in just one place and any other classes using that class automatically get the fix. When you don’t use global variables in an application, testing your application becomes much simpler. Wrapping all variables and logic that operates upon those variables into one class allows you to create a set of tests to check each property and method in the class quickly and easily. The thing to remember with MVVM and MVC is all you are doing is moving more of the logic out of the User Interface (UI) and into a class that simply has properties and methods that you will bind to the user interface. One example of the type of logic you move into a class is setting the DataContext property of a list box (in Silverlight/WPF) when you want to display a collection of objects in that list box control. Instead of setting the DataContext property directly you simply set a property in the class where you load that collection of objects. If you bind this property to the list box control then the list box will automatically redraw itself when this property changes. To use MVVM you must be using classes and not just writing all your code in the code behind of your UI. For VB.NET programmers this also means you are NOT using modules. the Click event of a button. Load a List Box without MVVM To start, let’s write code to load a list box control in Silverlight without using the MVVM design pattern. Create a new Silverlight project and to the MainPage.xaml add a ListBox control to the page and set the Name property to lstData. Add a Loaded event procedure to the XAML so it will create a UserControl_Loaded event procedure. <UserControl ... The sample project that comes with this posting (see the end of this blog for instructions on download) has a WCF service that will return a collection of Product objects. In the UserControl_Loaded event procedure you write code to load the Project data and give that data to the list box. private void UserControl_Loaded(object sender, RoutedEventArgs) { lstData.DataContext = e.Result; } The code in the Loaded event procedure instantiates a generated WCF service client object called ProductServiceClient. Next it hooks up the GetProductsCompleted event procedure that is called when the asynchronous call to the GetProductsAsync() method is complete. In the GetProductsCompleted event procedure you take the e.Result property and give it to the DataContext property of the list box called lstData. In the list box named lstData you need to set the ItemsSource property to {Binding}. This tells the list box that you will be binding the data at runtime by setting the DataContext property. You also fill in the DisplayMemberPath property with a property of the Product object that you wish to be displayed in each row of the list box. In this case you are using the ProductName property. <ListBox ItemsSource="{Binding}" DisplayMemberPath="ProductName" Name="lstData" /> That is all there is to writing the code behind and having it load data from a Product WCF service. The problem with the above code is in order to test this code someone has to actually run the program and verify that this code works as it is supposed to. If you make a change to the service or to the UI you will then need to have someone run the application again to ensure it still works. You have to repeat this testing each time a change is made. This becomes very tedious and time consuming for the developer and the tester. Load a List Box using MVVM To take the previous sample and convert it into an MVVM design pattern, you simply need to create a class for your Silverlight user control to bind to. This class, called ProductViewModel recreates the UserControl_Loaded event procedure and the GetProductsCompleted event procedure as the LoadAll method in the class and the GetProductsCompleted event procedure. When the data is returned from the WCF service in the GetProductsCompleted event, that data is placed into a property called DataCollection as opposed to setting the DataContext property of a list box directly. This property is of the type ObservableCollection<Product>. The only thing unique about this property is in the “set” procedure; you not only set the value to the _DataCollection field, you also call a method named RaisePropertyChanged. This method raises the PropertyChanged event procedure. This is a standard event in XAML whose sole job is to inform any UI elements bound to this property that they should call the “get” procedure because the data has just been updated. I will let you look up the implementation of the PropertyChanged event procedure in the sample code that you download for this posting. Other than that, the complete View Model class is shown in the list below: public class ProductViewModel : INotifyPropertyChanged { #region INotifyPropertyChanged /// <summary> /// The PropertyChanged Event Informs the UI /// when a Property in this class changes /// </summary> /// IMPLEMENTATION GOES HERE #endregion private ObservableCollection<Product> _DataCollection; public ObservableCollection<Product> DataCollection { get { return _DataCollection; } set { _DataCollection = value; RaisePropertyChanged("DataCollection"); } } public void Load) { DataCollection = e.Result; } } The above class is not that much more code that you wrote in the code behind. In fact, the implementation of the PropertyChanged event can be put into a base class that all your view models inherit from. This will avoid duplicating this code in each view model class. Bind ProductViewModel Class to XAML Once you have the ProductViewModel class created; you now bind your User Control to this class. Any Silverlight user control may create an instance of any class in your application within XAML. First you create an XML namespace as shown by the arrow next to “Callout 1” in Figure 1. Figure 1: XAML can create an instance of a class in the UserControl.Resources section of your user control. The XML namespace is given an alias and references a .NET namespace from your Silverlight application. In this XAML the alias for the namespace is called “vm”. Next, you create a UserControl.Resources section in the XAML and using an XML element you specify the XML namespace followed a colon and then by the class name you wish to instantiate as shown by “Callout 2” in Figure 1. The following line of code in the XAML is what creates an instance of the ProductViewModel class and assigns it to the static resource variable name listed in the x:Key attribute. <vm:ProductViewModel x: The above line is the equivalent of the following in .NET: using vm = SL_MVVM_Easy; vm.ProductViewModel viewModel = new vm.ProductViewModel(); Bind the instance of this class to the list box control you created on your Silverlight user control using the standard {Binding} syntax as shown in Figure 2. Figure 2: Bind the ListBox to the resource you created In the ItemsSource of the list box you specify the source of the data as coming from the instance of the ProductViewModel class you created in the resources section. The Path property in the Binding specifies the property in the class you wish to get the collection of data from. In this case the name of the property in the ProductViewModel class is called DataCollection (Figure 3). Figure 3: The Path attribute refers to the property where the data is coming from in the class to which this UI object is bound Write the Code Behind In your User Control you still need the UserControl_Loaded event procedure just like you did when you were not using the MVVM design pattern. The only difference is you will only have one line of code to call the LoadAll() method in the ProductViewModel class. Since the XAML created the ProductViewModel class you need to get a reference to that specific instance so you can use it in the code behind. In the constructor you will use the Resources collection to access the x:Key you assigned to the view model. In this case the key name is “viewModel”. So you access the resources on this page using this.Resources[“viewModel”] and casting the result as a ProductViewModel class and putting it into a field variable on this user control called _ViewModel. You can now call any method or access any property in the ProductViewModel class through this field. public partial class ucMVVM : UserControl { ProductViewModel _ViewModel; public ucMVVM() { InitializeComponent(); _ViewModel = (ProductViewModel)this.Resources["viewModel"]; } private void UserControl_Loaded(object sender, System.Windows.RoutedEventArgs e) { _ViewModel.LoadAll(); } } That is all there is to migrating your code into an MVVM design pattern. The primary advantage of this design pattern is you can now write a unit test to test the view model class. You do not need to have a tester run the application in order to verify that everything worked. In addition you also now have a view model class that you can reuse in a WPF or Windows Phone application. You could even reuse this view model class in an ASP.NET application. Summary Hopefully this post helped you see how easy it is to move code from the code behind of your user interface and put it into a class. That is the whole key to MVVM; simply moving code into a class. Do not worry about being 100% “code behind free”. That is an almost impossible goal and most often requires you to write more code! That is certainly not a good thing. If your event procedures in your UI are simply doing UI code or making a single call to a method in your View Model, then you have accomplished the goals of MVVM; namely reusability, maintainability and testability. You also get one more benefit; having event procedures in your UI makes it easier to follow the flow of the application. NOTE: You can download this article and many samples like the one shown in this blog entry at my website.. Select “Tips and Tricks”, then “The Basics of MVVM” from the drop down list. Good Luck with your Coding, Paul Sheriff ** SPECIAL OFFER FOR MY BLOG READERS ** We frequently offer a FREE gift for readers of my blog. Visit for your FREE gift!
http://weblogs.asp.net/psheriff/the-basics-of-mvvm
CC-MAIN-2014-49
refinedweb
1,822
60.04
On Wed, 2013-11-13 at 11:33 +0000, Paul Durrant wrote: > > -----Original Message----- > > From: Ian Campbell > > Sent: 13 November 2013 11:25 > > To: Paul Durrant > > Cc: xen-devel; Ian Jackson; Stefano Stabellini > > Subject: Re: Multiple platform PCI device ID registries? > > > > On Wed, 2013-11-13 at 11:01 +0000, Paul Durrant wrote: > > > > -----Original Message----- > > > > From: Ian Campbell > > > > Sent: 13 November 2013 09:41 > > > > To: xen-devel > > > > Cc: Paul Durrant; Ian Jackson; Stefano Stabellini > > > > Subject: Multiple platform PCI device ID registries? > > > > > > > >- > > > > reservations.txt > > > > vs > > > >- > > > > staging/hypercall/x86_64/include,public,hvm,pvdrivers.h.html > > > > > > > > Are they distinct namespaces? Can someone clarify with a patch to one or > > > > both what the relationship is? How does this relate to the additional > > > > platform device thing which Paul added to qemu? > > > > > > > > I'm particularly concerned that 0x0002 is different in the two of > > > > them... > > > > > > > > > > They are distinct namespaces. The former is PCI device ID, the latter > > > is an abstract 'product number' which is used as part of the QEMU > > > unplug protocol (and actually means nothing to the upstream QEMU > > > platform device code anyway). > > > > I'm confused then. qemu.git: > > commit 8fbab3b62a271526c782110aed0ae160eb38c296 > > Author: Paul Durrant <paul.durrant@xxxxxxxxxx> > > Date: Mon Jul 29 10:58:01 2013 +0000 > > > > Xen PV Device > > > > Introduces a new Xen PV PCI device which will act as a binding > > point for > > PV drivers for Xen. > > The device has parameterized vendor-id, device-id and revision > > to > > allow to > > be configured as a binding point for any vendor's PV drivers. > > > > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx> > > Signed-off-by: Stefano Stabellini > > <stefano.stabellini@xxxxxxxxxxxxx> > > Reviewed-by: Andreas FÃrber <afaerber@xxxxxxx> > > > > Adds a new device with parameterized vendor and device-id, which sounds > > like a pci-device-reservations.txt thing. But you reserved entries in > > the pvdrivers.h list which is a "product number" thing? Maybe I'm > > confusing two different events? > > > > Yes, they are completely distinct things. Not related in the slightest. OK! > The former was to add a parameterized device which we can use to hang > PV drivers off for Windows Update purposes. > The latter was add reservations for a product numbers that is already > in use in XenServer, and another one which we will use going forward > that - by reserving it - should now not conflict with anyone else's PV > drivers in future (if anyone cares to add product-number-based > blacklisting into upstream QEMU or amend the code in trad.) > > > That patch uses device ID > > #define PCI_DEVICE_ID_XEN_PVDEVICE 0x0002 > > > > which appears to conflict with pci-device-reservations.txt which says > > that devid 2 is "Citrix XenServer (grandfathered allocation for > > XenServer 6.1)" Or maybe the comment there is just out of date? > > > > Using 2 seems safe as it doesn't conflict with the platform device ID > and we have now registered that ID. You are referring to the registration in pci-device-reservations.txt, right? I'm concerned because that comment refers to XenServer 6.1 but it now appears to be being reused as the default device ID for the "Xen pvdevice". Maybe it is safe to reuse this in this way, but the docs should be updated I think. > In practice it should never be used as the device ID should always be > specified by the toolstack. But it defaults to the Citrix ID -- is that wise? Wouldn't it be better to default to nothing? It also seems to be causing a moderate amount of fallout in the "[Xen-devel] [BUG] Xen vm kernel crash in get_free_entries." thread. I'm busy wondering if maybe Qemu should blacklist non-xenserver product ids when it has been configured with the Xen PV device using the Citrix dev.
https://lists.xenproject.org/archives/html/xen-devel/2013-11/msg01828.html
CC-MAIN-2021-31
refinedweb
610
64.51
On 4/7/07, Martijn Dashorst <martijn.dashorst@gmail.com> wrote: > Hi.). some routes may have creative content. in particular, those involving an element of non-algorithmic selection. example: the best route for a shopper round a mall. > That said, I'll add the header. i agree it's a marginal case. i wouldn't -1 a release for something like this so long as people involved understood the issues. my personal line in the sand is whether something is arguably creative. if a reasonable case could be made then i'd like to see a license header. this approach saves questions from lawyers later. > >. ok > > ------------ > >. contains a discussion of manifests MANIFESTs are a little controversial (there are multiple specs which are open to interpretation). i'm of the maximal school of thought: putting everything in which people think are required stops sniping. > > i'd recommend creating release notes (but i hope that these are > > missing since this is only an audit release). > > Normally we have those as well, containing links to our documentation > etc. Are the release notes part of what is voted on? the vote is about the complete and final artifacts for distribution. if these the release notes are part of the final artifact then they are voted on as part of that process. i recommend shipping release notes as part of the release artifact. not all downstream users will download wicket from the apache site so it's a chance to reach them with the same content. apache releases have long lifespans. all releases are archived permanently. it's good to use the release notes content as a basis for a permanent summary record of the release on the website (remember to includes the release date). this is useful for users and search engines indexes this content well. it's also good to post up the documentation for each release on the website. > > BTW i see the current namespace is. do > > you plan to change this upon (sometime)? > > Yep, however, we think we should only change this after graduation as > we are still not officially an Apache project :). fine > Thanks for looking into this and providing us with some solid > feedback. I expect to have a new release available somewhere next week > with all problems solved and questions answered. great - robert --------------------------------------------------------------------- To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org For additional commands, e-mail: general-help@incubator.apache.org
http://mail-archives.apache.org/mod_mbox/incubator-general/200704.mbox/%3Cf470f68e0704080005v774bf61dq4c4990641ba75b56@mail.gmail.com%3E
CC-MAIN-2017-09
refinedweb
404
67.96
fputws() Write a wide-character string to an output stream Synopsis: #include <wchar.h> int fputws( const wchar_t * ws, FILE * fp ); Since: BlackBerry 10.0.0 Arguments: - buf - The wide-character string you want to write. - fp - The stream you want to write the string to. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The fputws() function writes the wide-character string specified by ws to the output stream specified by fp. The terminating NUL wide character isn't written. Errors: - EAGAIN - The O_NONBLOCK flag is set for fp and would have been blocked by this operation. - EBADF - The stream specified
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/fputws.html
CC-MAIN-2018-13
refinedweb
114
69.07
setvbuf() Associate a buffer with a stream Synopsis: #include <stdio.h> int setvbuf( FILE *fp, char *buf, int mode, size_t size ); Arguments: - fp - The stream that you want to associate with a buffer. - buffer - NULL, or a pointer to the buffer; see below. - mode - How you want the stream to be buffered: - _IOFBF — input and output are fully buffered. - _IOLBF — output is line buffered (i.e. the buffer is flushed when a newline character is written, when the buffer is full, or when input is requested). - _IONBF — input and output are completely unbuffered. - size - The size of the buffer. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description:. Returns: - 0 - Success. - EINVAL - The mode argument isn't valid. - ENOMEM - The buf argument is NULL, size isn't 0, and there isn't enough memory available to allocate a buffer. Examples: ; }
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/s/setvbuf.html
CC-MAIN-2021-21
refinedweb
152
68.47
aw JAVA AWT BASE PROJECT JAVA AWT BASE PROJECT suggest meaningful java AWT-base project java awt calender java awt calender java awt code for calender to include beside a textfield Java AWT Package Example Java AWT Package Example In this section you will learn about the AWT package of the Java. Many running examples are provided that will help you master AWT package. Example awt jdbc awt jdbc programm in java to accept the details of doctor (dno,dname,salary)user & insert it into the database(use prerparedstatement class how to use JTray in java give the answer with demonstration or example please SWINGS - Swing AWT more information,Examples and Tutorials on Swing,AWT visit to : Package Example. java - Swing AWT java How can my java program run a non-java application. Will it be possible to invoke a gui application using java Event handling in Java AWT Event handling in Java AWT  ... events in java awt. Here, this is done through the java.awt.*; package of java. Events are the integral part of the java platform. You can see the concepts java - Swing AWT java Write Snake Game using Swings Program for Calculator - Swing AWT Program for Calculator write a program for calculator? Hi Friend, Please visit the following link: Hope that it will be helpful Look and Feel - Swing AWT : Hope - Swing AWT java hello sir.. i want to start the project of chat server in java please help me out how to start it?? urgently.... Hi friend, To solve problem to visit this link....... slider - Swing AWT :// Thanks... Example"); Container content = frame.getContentPane(); JSlider slider scrolling a drawing..... - Swing AWT information. java - Swing AWT java Hello Sir/Mam, I am doing my java mini project.In that, i want to upload any .jpg,.gif,.png format image from system & display... for upload image in JAVA SWING.... Hi Friend, Try the following code hi - Swing AWT information, visit the following link: Thanks Line Drawing - Swing AWT ) { System.out.println("Line draw example using java Swing"); JFrame frame = new...Line Drawing How to Draw Line using Java Swings in Graph chart... using java Swing import javax.swing.*; import java.awt.Color; import Help Required - Swing AWT JFrame("password example in java"); frame.setDefaultCloseOperation...(); } }); } } ------------------------------- Read for more information.... the password by searching this example's\n" + "source. DrawingCircle - Swing AWT : Thanks JAVA - Swing AWT JAVA Sir how can i design flow chart and Synopsis for random password generation by using swing in java . Hi Friend, Try the following code: import java.io.*; import java.util.*; import java.awt.*; import swings - Swing AWT :// What is Java Swing Technologies? Hi friend,import javax.swing.*;import java.awt.*;import javax.swing.JFrame;public java - Swing AWT Java - Swing AWT
http://www.roseindia.net/tutorialhelp/comment/55321
CC-MAIN-2013-48
refinedweb
457
58.48
A Flask extension adding a decorator for CORS support Project description. A full list of options can be found in the documentation. CRSF protection before doing so! Simple Usage In the simplest case, initialize the Flask-Cors extension with default arguments in order to allow CORS for all domains on all routes. app = Flask(__name__) cors = CORS(app) @app.route("/") def helloWorld(): return "Hello, cross-origin-world!" Resource specific CORS Alternatively, you can specify CORS options on a resource and origin level of granularity by passing a dictionary as the ‘resources’ option, mapping paths to a set of options.(..) to allow CORS on a given route. @app.route("/") @cross_origin() def helloWorld(): return "Hello, cross-origin-world!" Logging Flask-Cors uses standard Python logging, using the logger name ‘app.logger_name.cors’. The app’s logger name attribute is usually the same as the name of the app. You can read more about logging from Flask’s documentation. import logging # make your awesome app logging.basicConfig(level=logging.INFO) @corydolphin or send me an email. I do my best to include every contribution proposed in any way that I can. Credits This Flask extension is based upon the Decorator for the HTTP Access Control written by Armin Ronacher. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/Flask-Cors/2.0.1/
CC-MAIN-2019-51
refinedweb
233
59.4
User:Heerenveen/PLS From Uncyclopedia, the content-free encyclopedia Consult the talk page or me for any questions or concerns. See the Poo Lit Archives for the results from past competitions. edit The competition edit What is the Poo Lit Surprise? A writing competition held on Uncyclopedia for a small cash prize. It is designed to jump-start writing quality at Uncyclopedia. After the previous eight conditions and limitations apply. Judges are barred from entering the competition (see here for the list of judges). Non-noobs (people who've been here longer than 3 months) cannot enter the "Best Article By a Noob" category, and noobs are not restricted to the "Best Article By a Noob" category. A checkuser will be run to ensure that no supposed "noobs" are sockpuppets of existing users. Sockpuppet accounts will be disqualified and banned. Unless your sockpuppet is BENSON of course.) edit Where should I put my entry? The article should be placed on your namespace (i.e. the title should be in the format User:[username here]/[article name here]). Between May 10th ― May 23rd, you should post a link to your article in the entry section below and sign as you would a forum post or vote. If you're a paranoid, or afflicted with OCD, or just want to tell people to shove off until after the PLS, add {{PLS-WIP}} to your entry. edit Boxers or briefs?.
http://uncyclopedia.wikia.com/wiki/User:Heerenveen/PLS?oldid=3645545
CC-MAIN-2015-27
refinedweb
237
66.44
Creating apps with observables. My biggest concern with Functional Reactive Programming, related to building applications, is the too simplistic view on input -> output. Applications are not as simple as input -> output, there are lots of things happening between the input and the output. I really like the idea of using observables at the core of driving your application state changes, like in CycleJS, but there are some things I do not understand. Here are some open questions I am having a hard time finding the answers to: - When the observables in my application needs to produce multiple state changes, how do I express that? - When my state changes can take multiple paths, have conditional paths etc. how do I express that with observables? - How do I keep my UI optimized, not doing unnecessary renders? - How do I let my observables change state across each other? So one observable handling a click needs to change the same state that some other observable handling a different click does So when I can not find answers I try to come up with some myself. Maybe you can help me validate them? I built a small lib for this article called HotDiggeDy, just to emphasize it is not a serious project. It implements observables as the “application state driver” and uses Snabbdom to expose state and observables as first class citizens to the components. I have high respects for projects like Elm and CycleJS, but I still have no idea how you can build actual web applications with these projects. There are no examples of full scale complex web applications and I can not see how you can structure your application in a way that scales to hundreds of views and hundreds of models, where views can extract any state and request any state change. To allow any view/component to extract any state and ask for any state change is what I think needs to be at the core to create a scalable application. A simple application I am going to use React as an example to show you what I mean when stating “observables driving your application state changes”. I am using RxJs5 to create these observables, which HotDiggeDy also does. import React from 'react'; import {Oservable, Subject} from 'rxjs'; class Counter extends React.Component { constructor() { // We create some initial state for our component this.state = {count: 0}; // We create two observables for increasing and decreasing the count this.increase$ = new Subject(); this.decrease$ = new Subject(); // We merge these two observables and whenever they trigger // we "reduce" a count. Then we subscribe to it all to run // setState on our component Observable.merge(this.increase$, this.decrease$) .scan((count, value) => count + value, 0) .subscribe((count) => this.setState({count: count})); } render() { return ( <div> <h1>Count: {this.state.count}</h1> <button onClick={() => this.increase$.next(1)}>Increase</button> <button onClick={() => this.decrease$.next(-1)}>Decrease</button> </div> ); } } So this looks like a lot of unnecessary boilerplate to do a very simple thing. And yeah, I completely agree. Too bad most examples of using observables are like this. Either counters or BMI calculators. But there are a couple of things to take away from this: - We can use observables to handle our click events and really anything else. That means having a common concept of “something happening in our app”, it being a mouse click, a websocket message, an ajax response etc. - We can transform these observables into state. In the example above we just created a count, but this could actually be an object holding lots of different state. We also moved the count to the state of the component, isolating it, but we could create our own state concept which made the count available to any component The need for a global state store So to build an application we have to move our state outside of our components. Flux Stores, Redux Reducers or Elm Models might come to mind… and yeah, that is basically what we are going to do. But lets first talk about the scope of this store/reducer/model. When something happens in our applications we want to produce some new state. In the example above we just changed a counter, but a more typical example would be: - Reset an array, - Set a state to isLoading: true - Fetch some data from the server - Update posts array with data from server - Set the isLoading state to isLoading: false - Set an error state if server response fails So we need the observables of our app to access a lot more than a count state. The question is how we can share this state produced between different observables. Like the count on the initial example can only be changed through the created observables, not a completely new one inside a different component. To solve this we create a global state store. Personally I have very good experience with creating a single state store/tree for the application. This prevents you getting into issues where one part of your state is isolated from other state. We will use ImmutableJS to hold the state. This will also allow for render optimizations later and gives a nice API for updating the state in our state store. Producing state with observables Lets see how our little framework creates a new application with some state and an observable which changes some state. import HotDiggeDy from 'HotDiggeDy'; // Define the initial state of the app const initialState = { isLoading: false, posts: [], error: null }; // Define observables const observables = { fetchClicked(observable) { return observable.map(() => state => state.set('isLoading', true)); } }; const app = HotDiggeDy(initialState, observables); Lets first take a look at return observable.map(() => state => state.set('isLoading', true));. What is happening here? Let us first break it down to normal JavaScript: return observable.map(function () { return function (state) { return state.set('isLoading', true); } }); What we are doing here is mapping to a function that will change the state of the app. This is what I mean by “generically changing the state”. This function receives the current state of the app and returns the new state. So how is this state changing function used? Lets imagine we just call the fetchClicked function and it returns our state changing observable. const state$ = fetchClickedStateChange$ .scan((state, changeFunction) => ( return changeFunction(state); ), Immutable.fromJS({ isLoading: false, posts: [], error: null }) state$.subscribe(function (state) { // Whenever the state changes }); This piece of code effectively takes observables that emits this state change function that returns new state. So if we had many observables in our application we could: const state$ = Observable.merge( fetchClickedStateChange$, someOtherStateChange$, someCoolStateChange$ ) .scan((state, changeFunction) => ( return changeFunction(state); ), Immutable.fromJS({ isLoading: false, posts: [], error: null }); state$.subscribe(function (state) { // Whenever the state changes }); The important thing is that every type of observable we define returns one or merged observables that produces this “change state function”. So let us see how we can let one observable produce two state changes: const observables = { fetchClicked(observable) { const setLoading$ = observable .map(() => state => state.set('isLoading', true)); const resetPosts$ = observable .map(() => state => state.set('posts', Immutable.fromJS([]))); return Observable.merge(setLoading$, resetPosts$); } }; So now we are able to express multiple state changes on one observable in our application. This is a really important concept for application development that is very often lost in Observable examples. And I think the reason is that Observables are really about plain input -> output transformation at its core. Driving complex application state changes needs to be an abstraction over it. But lets get more into it, we need an even more important feature. We need to handle side effects that also eventually produces state. Typically ajax requests. So lets add that: const observables = { fetchClicked(observable) { // Whenever we click we start fetching posts const posts$ = observable .flatMap(() => Observable.fromPromise(axios.get('')) ) .map(result => result.data) .share(); // Make it HOT (run once regardless of subscriber count) // Whenever we click we also want to reset the posts array const resetPosts$ = observable .map(() => state => state.set('posts', Immutable.fromJS([]))); // And we want to set a loading state to true const fetchingStarted$ = observable .map(() => state => state.set('isLoading', true)); // Though setting loading to false should happen after the posts // are fetched, so we map over the "posts{{BLOG}}quot; observable instead const fetchingStopped$ = posts$ .map(() => state => state.set('isLoading', false)); // Also setting the news posts, or an error, should happen after // the posts are fetched const setNewPosts$ = posts$ .map(posts => state => state.set('posts', Immutable.fromJS(posts))) .catch(err => state => state.set('error', err.message)); // We expose all the state changes as one observable return Observable.merge( resetPosts$, setNewPosts$, fetchingStarted$, fetchingStopped$ ); } }; So now you might say… is this really how observables should be used? Should not the click of a button be just one observable producing some state? Yeah, it can be, like with a counter. But it can not be that simple in a real application because an event, like a button click, might produce a lot of different state changes. What we want, at least what I want, is a way to express what is happening on that button click as “one coherent thing”. I want to be able to easily read all the possible requests for state change in the application and I want to easily read what exactly happens when these requests occur, top down. And that is exactly what we have achieved above. Side effects So a big question in application development is “where to trigger side effects?”. With side effects I mean for example the ajax request in the example above. Like in Elm you return side effects from a request for state change, which triggers new requests for state change. With Redux you just run your side effects in the action creators. The one single reason I favor the Redux approach is the readability. If you have a complex request for state change, like above, you have to compose a lot more in your head if you can not just read “top down” what happens. Like in Elm you would have to create an Effect separated from your state changing logic, “at the edge of your app”. That means when you compose in your head how some request for state change runs you have to jump back and forth between actions and effects to understand the complete flow. What I like about the approach in the example above is that you return normal state changes and side effects both as observables, allowing you to compose everything in your head by just reading the logic of your defined observable (fetchClicked). Creating the UI So any event in our application now gets its own observable that will return a new observable of merged state changes. Some events might just lead to a single state change, but others might be more complex, like the one in the example above. So now let us attach a UI to our application. I chose Snabbdom as it allows me to easily wrap my own first class citizens, those being the state of our application and the observables we have created. This is how our code looks: import {Component, DOM} from 'HotDiggeDy'; const renderPost = post => <li>{post.title}</li>; const Demo = Component((props, state, observables) => ( <div> <h1>Number of posts {state('posts').size}</h1> <button on-click={observables.fetchClicked} disabled={state('isLoading')}> Fetch </button> {state('isLoading') ? <div>Loading stuff</div> : ''} { state('posts').size ? <ul>{state('posts').toJS().map(renderPost)}</ul> : '' } </div> )); So this is a typical component. It receives some props, state and observables. This is all we really care about in components. We need to be able to grab properties passed by parent components, grab state from the application state store and trigger requests for state change. State and request for state change is often what leads to insane amounts of boilerplate in our apps, but with Snabbdom we can just make it a natural part of our component concept. What to notice specifically here is how state is grabbed. It is grabbed using a function. Basically it translates to Immutable JS: state.get('foo'). But what is very interesting about this approach is that it is possible to analyze the state requirements of the component. Optimizing UI What I can not find any information on in Elm and CycleJS is how views/components are optimized, maybe they are not? When scaling up an application you will need to optimize the need for render. What this basically means is: - A new render of the component is requested - The props passed and the state required is checked - If there are no changes to props or state the previous render is returned, as nothing changed. This saves up a lot of processing in large applications The really great thing about requesting state as a function, instead of exposing all the state as a plain object ( state.isLoading), is that it can be automized. An example of this can be seen here CerebralJS Computed. So that means when the component renders the first time the state requested is tracked and on the next render the component already knows what state is to be checked to verify the need for another render. This is not implemented in HotDiggeDy, but it could be and I think it is an interesting concept to bring up. In an ideal world we would not have to care about optimizing rendering, we would just rerender everything on every state change, but that is not realistic currently. That said, an approach like this would completely hide it. Scaling up! So how would an application like this scale? First of all our views/components are completely detached from the state and the observables. That means any component can grab any state and trigger any observable. But more importantly, how do we structure our state and observables in big applications? What I have come to realize, through experiences with Cerebral, is that scalability is often better handled with namespacing rather than isolation. What I specifically mean is: const initialState = { isLoading: false, posts: [], error: null }; // With namespacing const initialState = { posts: { isLoading: false, list: [], error: null } }; // With isolation const adminState = { isLoading: false }; const postsState = { isLoading: false, posts: [], error: null }; The key difference here is that isolation prevents you from changing the admin state from posts and posts state from admin. But what you really just wanted was to structure your state. Namespacing does that for you and still gives you the freedom to change any state from any observable. The same goes for the observables themselves: const observables = { fetchClicked(observable) {} } const observables = { posts: { fetchClicked(observable) {} } } Debugging I also want to bring up thoughts on debugging. When debugging observables in the context of creating an application you do not care about how each and every observable run, that is just way too much information. What you care about is what “requests for state change” are made, what state is changed and when is the state changed. That means in HotDiggeDy it would be easy to map over all the observables defined to figure out when they trigger. It would also be just as easy to map over the returned state changing observables to figure out what state they change. To get some inspiration on this, take a look at the Cerebral Debugger which I think gives the information you care about when developing an application in terms of state changes. Summary So I have been jumping a bit all over the place here. I hope I got the message through that observables can drive applications, but it is difficult to wrap a concept around them that handles everything between the input and the output in a way that is flexible and scales. If you feel I am completely lost or that my statements are completely wrong, GREAT!, please comment and I will learn something! :) If the ideas resonates with you, please also comment, as I would love to to get some validation on the thoughts and ideas. You can check out HotDiggeDy at this repo, also including a demo of the example above. Thanks for reading!
https://christianalfoni.herokuapp.com/articles/2016_03_27_Creating-apps-with-observables
CC-MAIN-2019-35
refinedweb
2,670
63.49
Base class for both client and server lobby. The lobbies are started when a server opens a game, or when a client joins a game. It is used to exchange data about the race settings, like kart selection. More... #include <lobby_protocol.hpp> Base class for both client and server lobby. The lobbies are started when a server opens a game, or when a client joins a game. It is used to exchange data about the race settings, like kart selection. Lists all lobby events (LE). Adds a vote. Creates either a client or server lobby protocol as a singleton. Returns the singleton client or server lobby protocol. Returns all voting data. Returns the game setup data structure. Returns the maximum floating time in seconds. Returns the number of votes received so far. Returns the remaining voting time in seconds. Returns the voting data for one host. Returns NULL if the vote from the given host id has not yet arrived (or if it is an invalid host id). Returns if the voting period is over. Starts the sychronization protocol and the RaceEventManager. It then sets the player structures up, creates the active player, and loads the world. This is called on the client when the server informs them that the world can be loaded (LE_LOAD_WORLD) and on the server in state LOAD_WORLD (i.e. just after informing all clients). A previous GameSetup is deleted and a new one is created. Implemented in ServerLobby, and ClientLobby. Starts the voting period time with the specified maximum time. Called by the protocol listener, synchronously with the main loop. Must be re-defined. Implemented in ServerLobby, and ClientLobby. Store current playing track in name. Mutex to protect m_current_track. Timer user for voting periods in both lobbies. Estimated current started game progress in 0-100%, uint32_t max if not available. Estimated current started game remaining time, uint32_t max if not available. Stores data about the online game to play. Save the last live join ticks, for physical objects to update current transformation in server, and reset smooth network body in client. The maximum voting time. Vote from each peer. The host id is used as a key. Note that host ids can be non-consecutive, so we cannot use std::vector.
https://doxygen.supertuxkart.net/classLobbyProtocol.html
CC-MAIN-2020-24
refinedweb
376
70.29
show() blocks in pylab mode with ipython 0.10.1 Bug Description Binary package hint: ipython From the upstream bug: https:/ After updating to ipython 0.10.1 on 64-bit Linux, I noticed that plotting commands no longer automatically display anything when ipython is started with the -pylab option. Moreover, the show() command now appears to block in pylab mode. This issue is fixed in the latest stable release of ipython: 0.10.2 ProblemType: Bug DistroRelease: Ubuntu 11.04 Package: ipython 0.10.1-1 ProcVersionSign Uname: Linux 2.6.38-8-generic x86_64 NonfreeKernelMo Architecture: amd64 Date: Wed May 4 22:37:33 2011 PackageArchitec ProcEnviron: LANGUAGE=en_US:en PATH=(custom, user) LANG=en_US.UTF-8 SHELL=/bin/bash SourcePackage: ipython UpgradeStatus: Upgraded to natty on 2011-03-31 (33 days ago) Related branches - Barry Warsaw (community): Approve on 2011-06-02 - Ubuntu branches: Pending requested 2011-05-25 - Diff: 69 lines (+49/-0)3 files modifieddebian/changelog (+6/-0) debian/patches/fix_blocking_show.patch (+42/-0) debian/patches/series (+1/-0) I also have this problem with ipython & natty. Even if TkAgg works fine, not being able to use GTK(Agg) means that figures cannot be saved directly as jpg's - which is a problem when you want to save a lot (10000's) of 2D plots without using too much disk space. Also, I can't seem to be able tu use the wx backend either.. Is there any other workaround ? Another ipython version ? png is supported which is lossless and compresses very well unless your plotting photos. You can convert these to jpg later with e.g. the convert command from imagemagick. I have 0.10.2 in my ppa: https:/ but it may get replaced by 0.11 soon. which will introduce many new bugs so you may want to pin the version Answering my own query: apparently installing ipython 0.10.2 (from oneiric) fixes the issue: http:// Is there any chance this could be pushed as an update for natty (when installing, no dependency conflict appeared) ? > png is supported which is lossless and compresses very well unless your plotting photos. Unfortunately I am plotting images recorded from 2D detectors, so compression is not that good with png (yes I can post-convert to jpg but I'd rather avoid that sort of kludge). > I have 0.10.2 in my ppa: > https:/ > but it may get replaced by 0.11 soon. which will introduce > many new bugs so you may want to pin the version Since ipython 0.10.1 also induces problems with mayavi's mlab when using 'ipython -wthread' (see discussion in https:/ I'm currently giving lectures about using python for scientific computing and the thread issue is not giving a very enjoyable experience of ipython to the student using nattys (which is a real pity, because it's clearly indispensable for scientific plotting in 1D/2D/3D). If you think it qualifies for a SRU have a look here: https:/ Better ask the release team for an Ok before backporting the fix. Or you could do a backport: https:/ I really believe this qualifies for an SRU: * the thread-handling of ipython represents a large part of its added value compared to a standard python console * it affects matplotlib, although a workaround using a different backend is possible, it removes some functionality (described in this bug https:/ * it affects mayavi/mlab, and no workaround exists (you lose the ability to use the console while the GUI is functional) (bug report https:/ Also: * the bug is well described and has been corrected upstream (https:/ * only one package (ipython) needs to be upgraded, no additional dependency needs be updated in natty * all that is needed is to use the package already in oneiric TEST CASE: 1) install ipython, python- 2) Start "ipython -wthread", then use the following two commands: from enthought.mayavi import mlab mlab.test_ => the GUI window is unresponsive unless the 'show()' command is passed - this is fixed by the new package (see upstream report) END TEST CASE So I think this is a really good case for a SRU... It is very similar to what was reported for https:/ I don't think a backport would be appropriate- keeping 0.10.1 in the main tree would only create problems for many users who won't find the backport repository. Who can push this for a SRU ? Apparently fixed in 0.10.2, so marking Fix Released for oneiric I'll sponsor the upload to natty-proposed and subscribe ubuntu-sru. The patch in the merge proposal works for me and looks good. Thanks for your contribution to Ubuntu! Oh, please be sure to test the package in natty-proposed once it's uploaded. See the SRU link above for details. Accepted ipython into natty-proposed, the package will build now and be available in a few hours. Please test and give feedback here. See https:/ Thanks for the quick upload ! This looks OK, I installed the new package using: wget http:// dpkg -i ipython_ And with this new version, both ipython -pylab and ipython -wthread seem to work fine (i.e. graphical windows are interactive, with the python console also accepting input), using matplotlib and mayavi.mlab. ipython still reports 0.10.1 version, but seems to work as if it is the corrected 0.10.2 version. vincefn: right, this isn't 0.10.2, but just 0.10.1 with a backported patch to fix just the immediate problem. Thanks for testing it! this only affects the matplotlib GTK backend which is not used by default. The default backend (TKAgg) works fine. You can change backend by copying /etc/matplotlibrc to $HOME/.matplotlib/ and changing the "backend:" line
https://bugs.launchpad.net/ubuntu/+source/ipython/+bug/777420
CC-MAIN-2017-26
refinedweb
964
64.1
Quite often, you want to customize the colors shown in your dialogs. Maybe a unique, original look is required (and you don't want to go all the way into skinning); perhaps a red background seems suitable for critical error messages; if you've developed a complex dialog, parts of which serve as drop targets, you might want to emphasize those: or, in a form with required fields, you might want a different color for them. The easy way to do this, is handle the WM_CTLCOLOR family of messages: the easy way to handle messages in WTL, is to use a mixin class which does most of the grunt work. In winuser.h, you can find the definitions of WM_CTLCOLORMSGBOX, WM_CTLCOLOREDIT, WM_CTLCOLORLISTBOX, WM_CTLCOLORBTN, WM_CTLCOLORDLG, WM_CTLCOLORSCROLLBAR, and WM_CTLCOLORSTATIC. By checking MSDN, you find out that all their handlers have a lot in common: HDC) for the relevant control in their wParam . HWND) in their lParam. HBRUSH), which will be used to erase the control's background (unless, of course, you handle WM_ERASEBKGNDyourself) What's left to do? Implement a mixin with a message map which calls the same handler for all of them, with one overrideable function and a few data members for customization, and you're done! Well, that mixin is already written. I hope you'll find it as useful as I found so many CodeProject samples. By the way, "Deleted Windows Programming Elements" in MSDN mentions WM_CTLCOLORMSGBOX as gone, never again to return. It was part of 16-bit Windows. Five lines of code, that's all it takes: CCtlColor.h). CCtlColored<>) to your inheritance list, with whatever flags you consider relevant. COLOR_WINDOWand COLOR_WINDOWTEXTare not suitable for your purpose. Here is a sample, which repaints a wizard-generated 'about box'. #include <CCtlColor.h> // (One) class CAboutDlg : public CDialogImpl<CAboutDlg> , public CCtlColored<CAboutDlg> // Add this line (Two) { public: enum { IDD = IDD_ABOUTBOX }; BEGIN_MSG_MAP(CAboutDlg) MESSAGE_HANDLER(WM_INITDIALOG, OnInitDialog) COMMAND_ID_HANDLER(IDOK, OnCloseCmd) COMMAND_ID_HANDLER(IDCANCEL, OnCloseCmd) // Add this line. CColoredThis is typedefed // inside CCtlColored<> for your comfort. CHAIN_MSG_MAP(CColoredThis) // (Three) END_MSG_MAP() LRESULT CAboutDlg::OnInitDialog(UINT uMsg, WPARAM wParam, LPARAM lParam, BOOL& bHandled) { // Add next two lines... SetTextBackGround(0xFfbF9F); // Lightish kind of blue (Four) SetTextColor(RGB(0X60, 0, 0)); // Dark red // (Five lines, as promised!) // ...or, if that's your pleasure, the next two... SetTextColor(::GetSysColor(COLOR_INFOTEXT)); (Four) SetBkBrush(COLOR_INFOBK); // (Five lines, as promised!) // ...or, if you're satisfied with the default // COLOR_WINDOW/COLOR_WINDOWTEXT, do nothing! CenterWindow(GetParent()); return TRUE; } LRESULT CAboutDlg::OnCloseCmd(WORD wNotifyCode, WORD wID, HWND hWndCtl, BOOL& bHandled) { EndDialog(wID); return 0; } }; You can change the dialog's appearance at run time, using the exposed functions: // Function name : SetTextColor // Description : Replaces the current text color. // Return type : COLORREF (the former text color) // Argument : COLORREF newColor - The new text color. COLORREF SetTextColor(COLORREF newColor); // Function name : SetTextBackGround // Description : Sets the passed color as text background, // and creates a solid // brush from it, to erase the background // of controls before // drawing on them. // Return type : COLORREF (The former text background) // Argument : COLORREF newColor - The new text background. COLORREF SetTextBackGround(COLORREF newColor); // Function name : SetBkBrush // Description : This function sets the background color and brush, // using ::GetSysColorBrush(nIndex) // and ::GetSysColor(nIndex). // It returns the former brush (in case // you want to delete it). // Return type : HBRUSH - The former brush // Argument : int nIndex - One of the ::GetSysColor() indexes. HBRUSH SetBkBrush(int nIndex); // Function name : SetBkBrush // Description : This function gives the caller maximum latitude, // letting // it set any brush (not necessarily solid) // and any background // color (not necessarily similar to the brush's color). // Return type : HBRUSH - The former brush // Argument : HBRUSH NewBrush - The new brush you'd like to set. // Argument : bool bManaged - // If true, the class will adopt the brush // and delete it as needed. // Argument : COLORREF clrBackGround - Since any brush // goes, the caller // should send a background color as similar // as possible to that of the brush. HBRUSH SetBkBrush(HBRUSH NewBrush, bool bManaged = false, COLORREF clrBackGround = CLR_INVALID); Little can be added. The first two are the ones I personally used most often, the third is an easy shortcut when you want to use system colors; the last one is the heaviest tool, both powerful and hard to use. A set of flags, defined in an enum at the beginning of the header, enables managing the messages the class will handle. For each of the WM_CTLCOLOR* messages there's a FLG_HANDLE_* flag, that, if it's set (either at creation time or at run time), will enable managing the corresponding message. The flags are passed as a parameter to the template (as usual, with a "reasonable default"), and can be modified through the protected member m_Flags. As mentioned above, "Deleted Windows Programming Elements" in MSDN mentions WM_CTLCOLORMSGBOX is obsolete (part of 16-bit Windows only), so it is commented out in the source, and so is the corresponding flag. Who knows, some day they might be back. LRESULT DoHandleCtlColor( UINT uMsg, // One of the WM_CTLCOLOR* messages. HDC hdc, // DC of the control which sent the message. HWND hw) // Which control sent the message. The default handler sets the text color and background of the passed HDC using the relevant member variables, and then returns the member brush handle, which will be used by Windows' DefWindowProc() when handling WM_ERASEBKGND. In the sample app, there are a couple of overrides: one in class CMainDlg, which shows read-only edit boxes in a color scheme different from that used for writeable edit boxes (and everything else), the other in class CRequiredDlg, which paints the background of "required fields" in a different color (light green). Basically, you can do whatever you like in an override (if you decide to use one), but always return a valid brush. All data members are protected, which means they are accessible by your class. Still, only one is meant to be directly modified: m_Flags. All others are accessible just to enable using them if you override DoHandleCtlColor(), putting accessors for them seemed a bit of an overkill. Object-oriented purists might complain, but then, they all write in Eiffel, don't they? Sorry, it's not that easy! You have to create a common dialog, and subclass it before its shown. That. at least, is the theory: I must confess that I didn't, yet. I hope I'll be able to add that feature in the near future. A TrackBar control does not repaint itself until it gets the focus. So, if you enable run time changes of appearance, and you have one, you'll have to set and reset the focus in order to show it properly. Scroll bars respond to the WM_CTLCOLORSCROLLBAR message only if they are direct children of your dialog: according to MSDN, WM_CTLCOLORSCROLLBARmessage is used only by child scrollbar controls. Scrollbars attached to a window ( WS_SCROLLand WS_VSCROLL) do not generate this message. To customize the appearance of scrollbars attached to a window, use the flat scroll bar functions. FlatSB_SetScrollProp(), which requires calling InitializeFlatSB()on control initialization, and even then, there will be no result after calling these functions, if your OS is running Windows XP (first one with Comctl32.dll version 6), or later. WS_SCROLL, WS_VSCROLL), I'll be glad to learn. Buttons do not respond to whatever settings you used while handling WM_CTLCOLORBTN, unless they're owner-drawn. On owner-drawn buttons, quote MSDN: WM_INITDIALOG WM_COMMAND - WM_DRAWITEM When you must paint an owner-drawn button, the system sends the parent window a WM_DRAWITEM message whose lParam parameter is a pointer to a DRAWITEMSTRUCT structure. Use this structure with all owner-drawn controls to provide the application with the information it requires to paint the control. The itemAction and itemState members of the DRAWITEMSTRUCT structure define how to paint an owner-drawn button." (end of MSDN quote) In other words, quite a lot of work just to paint a button, and then themes might make your life even harder... Personally, I don't consider it's worth the effort. The sample application is a dialog-based WTL application, which has a few checkboxes to alter its appearance, and two interesting buttons: About... opens an 'About box', which has a static variable that controls its appearance: there are four color sets, shown below. Each time you click the button, you'll get a different one. Required..., shown below, is more interesting. The dialog paints the background of all 'required' fields in light green: if one of those is missing, you cannot close the dialog by clicking OK, only by clicking CANCEL. In order to enable what you see, the class CRequired holds a brush member (in addition to the one inherited from CCtlColored), and overrides DoHandleCtlColor() so that it checks if the control is one of the 'required' edit boxes, and if so, it paints its background, otherwise letting the default implementation of DoHandleCtlColor() handle things. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/wtl/ChangeColorWtlDlg.aspx
crawl-002
refinedweb
1,475
52.7
While referring to the “EYE on YOUR LIFE” section on page 389 of the textbook, discuss the change in the U.S. unemployment rate and inflation rate over the past year based on the Phillips curve concepts. Did they change in the same direction or in opposite directions? Explain if the change was a movement along a short-run Phillips curve or a shifting short-run Phillips curve. Can you think of reasons why the short-run Phillips curve might have shifted? Finally, based on these economic concepts as well as your own point-of-view, discuss and explain what is worse for our U.S. economy, too much inflation or too much unemployment?<?xml:namespace prefix = o Please give correct answer!!!!!!!
http://www.chegg.com/homework-help/questions-and-answers/referring-eye-life-section-page-389-textbook-discuss-change-us-unemployment-rate-inflation-q3964151
CC-MAIN-2015-22
refinedweb
122
74.08
Scraping with BeautifulSoup BeautifulSoup is a handy library for web scraping that’s mature, easy to use and feature complete. It can be regarded as jQuery’s equivalent in the Python world. In this post we’re going to scrape the front page of wooptoo.com and output a clean, JSON version of it. The library can also handle DOM manipulation, i.e. adding elements to the HTML document, but that’s beyond the scope of this article. from bs4 import BeautifulSoup import requests resp = requests.get('') page = BeautifulSoup(resp.content) The bread and butter of scraping with BS is the find_all method. select works similarly but it uses the CSS selector syntax à la jQuery. Their output will be identical in this case. posts = page.find_all(attrs={'class':'post'}) _posts = page.select('.post') posts[0] is _posts[0] > True One catch to be aware of is that BS will work with special bs4 data structures, which inherit the built-in Python structures. So a list of posts will yield a bs4.element.ResultSet and each individual entry will be a bs4.element.Tag. The find_all method allows us to also select html elements using native regular expressions. This enables us to fetch all the posts from 2014 for example: import re d2014 = page.find_all('time', {'datetime': re.compile('^2014')}) [p.parent.parent for p in d2014] We can select the child of an element either using chained calls to find or using the select_one method. Both will only fetch the first match. titles = [p.find(class_='post-title').find('a').text for p in posts] titles = [p.select_one('.post-title a').text for p in posts] Putting it all together: import itertools import json _titles = [p.select_one('.post-title a') for p in posts] titles = [t.text for t in _titles] urls = [u.get('href') for u in _titles] datetimes = [p.find('time').get('datetime') for p in posts] tags = [t.select_one('.meta-tags a').text for t in posts] summaries = [s.find(class_='post-summary').text.strip() for s in posts] _posts = zip(titles, urls, datetimes, tags, summaries, itertools.count(1)) _keys = ('title', 'url', 'datetime', 'tags', 'summary', 'number') output = [dict(zip(_keys, p)) for p in _posts] json.dumps(output) The _posts will yield a generator object which can be iterated over only once, as opposed to the _titles list which does not have the same drawback. The BeautifulSoup library is much more complex than the example provided here. It allows for things like walking the DOM tree in a Javascript-esque manner: page.body.footer.p, fetching children nodes, parents, siblings on the same level, and much more. Read more - CSS Selector Syntax - BeautifulSoup Docs - Source code files for this post
http://wooptoo.com/blog/scraping-with-beautifulsoup/
CC-MAIN-2018-22
refinedweb
454
60.01
Natural. NLP: The Basics of Sentiment Analysis If you have been reading AI related news in the last few years, you were probably reading about Reinforcement Learning. However, next to Google’s AlphaGo and the poker AI called Libratus that out-bluffed some of the best human players, there have been a lot of chat bots that made it into the news. For instance, there is the Microsoft’s chatbot that turned racist in less than a day. And there is the chatbot that made news when it convinced 10 out 30 judges at the University of Reading’s 2014 Turing Test that it was human, thus winning the contest. NLP is the exciting field in AI that aims at enabling machines to understand and speak human language. One of the most popular commercial products is the IBM Watson. And while I am already planning a post regarding IBM’s NLP tech, with this first NLP post, I will start with the some very basic NLP. The Data: Reviews and Labels The data consists of 25000 IMDB reviews. Each review is stored as a single line in the file reviews.txt. The reviews have already been preprocessed a bit and contain only lower case characters. The labels.txt file contains the corresponding labels. Each review is either labeled as POSITIVE or NEGATIVE. Let’s read the data and print some of it. import numpy as np import matplotlib.pyplot as plt %matplotlib inline with open('data/reviews.txt','r') as file_handler: reviews = np.array(list(map(lambda x:x[:-1], file_handler.readlines()))) with open('data/labels.txt','r') as file_handler: labels = np.array(list(map(lambda x:x[:-1].upper(), file_handler.readlines()))) unique, counts = np.unique(labels, return_counts=True) print('Reviews', len(reviews), 'Labels', len(labels), dict(zip(unique, counts))) for i in range(10): print(labels[i] + "\t:\t" + reviews[i][:80] + "...") Reviews 25000 Labels 25000 {'POSITIVE': 12500, 'NEGATIVE': 12500} POSITIVE : bromwell high is a cartoon comedy . it ran at the same time as some other progra... NEGATIVE : story of a man who has unnatural feelings for a pig . starts out with a opening ... POSITIVE : homelessness or houselessness as george carlin stated has been an issue for ye... NEGATIVE : airport starts as a brand new luxury plane is loaded up with valuable pain... POSITIVE : brilliant over acting by lesley ann warren . best dramatic hobo lady i have eve... NEGATIVE : this film lacked something i couldn t put my finger on at first charisma on the... POSITIVE : this is easily the most underrated film inn the brooks cannon . sure its flawed... NEGATIVE : sorry everyone i know this is supposed to be an art film but wow they sh... POSITIVE : this is not the typical mel brooks film . it was much less slapstick than most o... NEGATIVE : when i was little my parents took me along to the theater to see interiors . it ... The dataset is perfectly balanced across the two categories POSITIVE and NEGATIVE. Counting words Let’s build up a simple sentiment theory. It is common sense that some of the words are more common in positive reviews and some are more frequently found in negative reviews. For example, I expect words like “superb”, “impressive”, “magnificent”, etc. to be common in positive reviews, while words like “miserable”, “bad”, “horrible”, etc. to appear in negative reviews. Let’s count the words in order to see what words are most common and what words appear most frequently the positive and the negative reviews. from collections import Counter positive_counts = Counter() negative_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): negative_counts[word] += 1 total_counts[word] += 1 # Examine the counts of the most common words in positive reviews print('Most common words:', total_counts.most_common()[0:30]) print('\nMost common words in NEGATIVE reviews:', negative_counts.most_common()[0:30]) print('\nMost common words in POSITIVE reviews:', positive_counts.most_common()[0:30]) Most common words: [('', 1111930), ('the', 336713), ('.', 327192), ('and', 164107), ('a', 163009), ('of', 145864), ('to', 135720), ('is', 107328), ('br', 101872), ('it', 96352), ('in', 93968), ('i', 87623), ('this', 76000), ('that', 73245), ('s', 65361), ('was', 48208), ('as', 46933), ('for', 44343), ('with', 44125), ('movie', 44039), ('but', 42603), ('film', 40155), ('you', 34230), ('on', 34200), ('t', 34081), ('not', 30626), ('he', 30138), ('are', 29430), ('his', 29374), ('have', 27731)] Most common words in NEGATIVE reviews: [('', 561462), ('.', 167538), ('the', 163389), ('a', 79321), ('and', 74385), ('of', 69009), ('to', 68974), ('br', 52637), ('is', 50083), ('it', 48327), ('i', 46880), ('in', 43753), ('this', 40920), ('that', 37615), ('s', 31546), ('was', 26291), ('movie', 24965), ('for', 21927), ('but', 21781), ('with', 20878), ('as', 20625), ('t', 20361), ('film', 19218), ('you', 17549), ('on', 17192), ('not', 16354), ('have', 15144), ('are', 14623), ('be', 14541), ('he', 13856)] Most common words in POSITIVE reviews: [('', 550468), ('the', 173324), ('.', 159654), ('and', 89722), ('a', 83688), ('of', 76855), ('to', 66746), ('is', 57245), ('in', 50215), ('br', 49235), ('it', 48025), ('i', 40743), ('that', 35630), ('this', 35080), ('s', 33815), ('as', 26308), ('with', 23247), ('for', 22416), ('was', 21917), ('film', 20937), ('but', 20822), ('movie', 19074), ('his', 17227), ('on', 17008), ('you', 16681), ('he', 16282), ('are', 14807), ('not', 14272), ('t', 13720), ('one', 13655)] Well, at a first glance, that seems disappointing. As expected, the most common words are some linking words like “the”, “of”, “for”, “at”, etc. Counting the words for POSITIVE and NEGATIVE reviews separately might appear pointless at first, as the same linking words are found among the most common for both the POSITIVE and NEGATIVE reviews. Sentiment Ratio However, counting the words that way would allow us to build a far more meaningful metric, called the sentiment ratio. A word with a sentiment ratio of 1 is used only in POSITIVE reviews. A word with a sentiment ratio of -1 is used only in NEGATIVE reviews. A word with sentiment ratio of 0 is neither POSITIVE, nor NEGATIVE, but are neutral. Hence, linking words like the one shown above are expected to be close to the neutral 0. Let’s draw the sentiment ratio for all words. I am expecting to see figure showing a beautiful normal distribution. import seaborn as sns sentiment_ratio = Counter() for word, count in list(total_counts.most_common()): sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5 print('Total words in sentiment ratio', len(sentiment_ratio)) sns.distplot(list(sentiment_ratio.values())); Total words in sentiment ratio 74074 Well, that looks like a normal distribution with a considerable amount of words that were used only in POSITIVE and only in NEGATIVE reviews. Could it be, those are words that occur only once or twice in the review corpus? They are not necessarily useful when identifying the sentiment, as they occur only in one of few reviews. If that is the case it would be better to exclude these words. We want our models to generalize well instead of overfitting on some very rare words. Let’s exclude all words that occur less than ‘min_occurance’ times in the whole review corpus. min_occurance = 100 sentiment_ratio = Counter() for word, count in list(total_counts.most_common()): if total_counts[word] >= min_occurance: # only consider words sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5 print('Total words in sentiment ratio', len(sentiment_ratio)) sns.distplot(list(sentiment_ratio.values())); Total words in sentiment ratio 4276 And that is the beautiful normal distribution that I was expecting. The total word count shrank from 74074 to 4276. Hence, there are many words that have been used only a few times. Looking at the figure, there are a lot of neutral words in our new sentiment selection, but there are also some words that are used almost exclusively in POSITIVE or NEGATIVE reviews. You can try different values for ‘min_occurance’ and observe how the number of total words and the plot is changing. Let’s check out the words for min_occurance = 100. print('Words with the most POSITIVE sentiment' ,sentiment_ratio.most_common()[:30]) print('\nWords with the most NEGATIVE sentiment' ,sentiment_ratio.most_common()[-30:]) Words with the most POSITIVE sentiment [('edie', 1.0), ('paulie', 0.9831932773109244), ('felix', 0.9338842975206612), ('polanski', 0.9056603773584906), ('matthau', 0.8980891719745223), ('victoria', 0.8798283261802575), ('mildred', 0.8782608695652174), ('gandhi', 0.8688524590163935), ('flawless', 0.8560000000000001), ('superbly', 0.8253968253968254), ('perfection', 0.8055555555555556), ('astaire', 0.803030303030303), ('voight', 0.7837837837837838), ('captures', 0.7777777777777777), ('wonderfully', 0.771604938271605), ('brosnan', 0.765625), ('powell', 0.7652582159624413), ('lily', 0.7575757575757576), ('bakshi', 0.7538461538461538), ('lincoln', 0.75), ('lemmon', 0.7431192660550459), ('breathtaking', 0.7380952380952381), ('refreshing', 0.7378640776699028), ('bourne', 0.736842105263158), ('flynn', 0.727891156462585), ('homer', 0.7254901960784315), ('soccer', 0.7227722772277227), ('delightful', 0.7226277372262773), ('andrews', 0.7218543046357615), ('lumet', 0.72)] Words with the most NEGATIVE sentiment [('insult', -0.755868544600939), ('uninspired', -0.7560975609756098), ('lame', -0.7574123989218329), ('sucks', -0.7580071174377224), ('miserably', -0.7580645161290323), ('boredom', -0.7588652482269503), ('existent', -0.7763975155279503), ('remotely', -0.798941798941799), ('wasting', -0.8), ('poorly', -0.803921568627451), ('awful', -0.8052173913043479), ('laughable', -0.8113207547169812), ('worst', -0.8155197657393851), ('lousy', -0.8181818181818181), ('drivel', -0.8240000000000001), ('prom', -0.8260869565217391), ('redeeming', -0.8282208588957055), ('atrocious', -0.8367346938775511), ('pointless', -0.8415841584158416), ('horrid', -0.8448275862068966), ('blah', -0.8571428571428572), ('waste', -0.8641043239533288), ('unfunny', -0.8726591760299626), ('incoherent', -0.8985507246376812), ('mst', -0.9), ('stinker', -0.9215686274509804), ('unwatchable', -0.9252336448598131), ('seagal', -0.9487179487179487), ('uwe', -0.9803921568627451), ('boll', -0.9861111111111112)] There are a lot of names among the words with positive sentiment. For example, edie (probably from edie falco, who won 2 Golden Globes and another 21 wins & 70 nominations), polanski (probably from roman polanski, who won 1 oscar and another 83 wins & 75 nominations). But there are also words like “superbly”, “breathtaking”, “refreshing”, etc. Those are exactly the positive sentiment loaded words I was looking for. Similarly, there are words like “insult”, “uninspired”, “lame”, “sucks”, “miserably”, “boredom” that no director would be happy to read in the reviews regarding his movie. One name catches the eye - that is “seagal”, (probably from Steven Seagal). Well, I won’t comment on that. Naive Sentiment Classifier Let’s build a naive machine learning classifier. This classifier is very simple and does not utilize any special kind of models like linear regression, trees or neural networks. However, it is still a machine LEARNING classifier as you need data that it fits on in order to use it for predictions. It is largely based on the sentiment radio that we previously discussed and has only two parameters ‘min_word_count’ and ‘sentiment_threshold’. Here it is: class NaiveSentimentClassifier: def __init__(self, min_word_count, sentiment_threshold): self.min_word_count = min_word_count self.sentiment_threshold = sentiment_threshold def fit(self, reviews, labels): positive_counts = Counter() total_counts = Counter() for i in range(len(reviews)): if(labels[i] == 'POSITIVE'): for word in reviews[i].split(" "): positive_counts[word] += 1 total_counts[word] += 1 else: for word in reviews[i].split(" "): total_counts[word] += 1 self.sentiment_ratios = Counter() for word, count in total_counts.items(): if(count > self.min_word_count): self.sentiment_ratios[word] = \ ((positive_counts[word] / count) - 0.5) / 0.5 def predict(self, reviews): predictions = [] for review in reviews: sum_review_sentiment = 0 for word in review.split(" "): if abs(self.sentiment_ratios[word]) >= self.sentiment_threshold: sum_review_sentiment += self.sentiment_ratios[word] if sum_review_sentiment >= 0: predictions.append('POSITIVE') else: predictions.append('NEGATIVE') return predictions The classifier has only two parameters - ‘min_word_count’ and ‘sentiment_threshold’. A min_word_count of 20 means the classifier will only consider words that occur at least 20 times in the review corpus. The ‘sentiment_threshhold’ allows you to ignore words with arather neutral sentiment. A ‘sentiment_threshhold’ of 0.3 means that only words with sentiment ratio of more than 0.3 or less than -0.3 would be considered in the prediction process. What the classifier does is creating the sentiment ratio like previously shown. When predicting the sentiment, the classifier uses the sentiment ratio dict, to sum up all sentiment ratios of all the words used in the review. If the overall sum is positive the sentiment is also positive. If the overall sum is negative the sentiment is also negative. It is pretty simple, isn’t it? Let’s measure the performance in a 5 fold cross-validation setting: from sklearn.model_selection import KFold from sklearn.metrics import accuracy_score all_predictions = [] all_true_labels = [] for train_index, validation_index in KFold(n_splits=5, random_state=42, shuffle=True).split(labels): trainX, trainY = reviews[train_index], labels[train_index] validationX, validationY = reviews[validation_index], labels[validation_index] classifier = NaiveSentimentClassifier(20, 0.3) classifier.fit(trainX, trainY) predictions = classifier.predict(validationX) print('Fold accuracy', accuracy_score(validationY, predictions)) all_predictions += predictions all_true_labels += list(validationY) print('CV accuracy', accuracy_score(all_true_labels, all_predictions)) Fold accuracy 0.8576 Fold accuracy 0.8582 Fold accuracy 0.8546 Fold accuracy 0.858 Fold accuracy 0.859 CV accuracy 0.85748 A cross-validation accuracy of 85.7% is not bad for this naive approach and a classifier that trains in only a few seconds. At this point you will be asking yourself, can this score be easily beaten with the use of a neural network. Let’s see. Neural Networks can do better To train a neural network you should transform the data to a format the neural network can understand. Hence, first you need to convert the reviews to numerical vectors. Let’s assume the neural network would be only interested in the words: “breathtaking”, “refreshing”, “sucks” and “lame”. Thus, we have an input vector of size 4. If the review does not contain any of these words the input vector would contain only zeros: [0, 0, 0, 0]. If the review is “Wow, that was such a refreshing experience. I was impressed by the breathtaking acting and the breathtaking visual effects.”, the input vector would look like this: [1, 2, 0, 0]. A negative review such as “Wow, that was some lame acting and a lame music. Totally, lame. Sad.” would be transformed to an input vector like this [0, 0, 0, 3]. Anyway, you need to create a word2index dictionary that points to an index of the vector for a given word: def create_word2index(min_occurance, sentiment_threshold): word2index = {} index = 0 sentiment_ratio = Counter() for word, count in list(total_counts.most_common()): sentiment_ratio[word] = ((positive_counts[word] / total_counts[word]) - 0.5) / 0.5 is_word_eligable = lambda word: word not in word2index and \ total_counts[word] >= min_occurance and \ abs(sentiment_ratio[word]) >= sentiment_threshold for i in range(len(reviews)): for word in reviews[i].split(" "): if is_word_eligable(word): word2index[word] = index index += 1 print("Word2index contains", len(word2index), 'words.') return word2index Same as before, the create_word2index has the two parameters ‘min_occurance’ and ‘sentiment_threshhold’ Check the explanation of those two in the previous section. Anyway, once you have the word2index dict, you can encode the reviews with the function below: def encode_reviews_by_word_count(word2index): encoded_reviews = [] for i in range(len(reviews)): review_array = np.zeros(len(word2index)) for word in reviews[i].split(" "): if word in word2index: review_array[word2index[word]] += 1 encoded_reviews.append(review_array) encoded_reviews = np.array(encoded_reviews) print('Encoded reviews matrix shape', encoded_reviews.shape) return encoded_reviews Labels are easily one-hot encoded. Check out this explanation on why one-hot encoding is needed: def encode_labels(): encoded_labels = [] for label in labels: if label == 'POSITIVE': encoded_labels.append([0, 1]) else: encoded_labels.append([1, 0]) return np.array(encoded_labels) At this point, you can transform both the reviews and the labels into data that the neural network can understand. Let’s do that: word2index = create_word2index(min_occurance=10, sentiment_threshold=0.2) encoded_reviews = encode_reviews_by_word_count(word2index) encoded_labels = encode_labels() Word2index contains 11567 words. Encoded reviews matrix shape (25000, 11567) You are good to go and train the neural network. In the example below, I am using a simple neural network consisting of two fully connected layers. Trying different things out, I found Dropout before the first layer can reduce overfitting. Dropout between the first and the second layer, however, made the performance worse. Increasing the number of the hidden units in the two layers did not lead to better performance, but to more overfitting. Increasing the number of layers made no difference. from sklearn.model_selection import KFold from sklearn.metrics import accuracy_score from keras.callbacks import ModelCheckpoint from keras.models import Sequential from keras.layers import Dense, Dropout, Input from keras import metrics all_predictions = [] all_true_labels = [] model_index = 0 for train_index, validation_index in \ KFold(n_splits=5, random_state=42, shuffle=True).split(encoded_labels): model_index +=1 model_path= 'models/model_' + str(model_index) print('Training model: ', model_path) train_X, train_Y = encoded_reviews[train_index], encoded_labels[train_index] validation_X, validation_Y = encoded_reviews[validation_index], encoded_labels[validation_index] save_best_model = ModelCheckpoint( model_path, monitor='val_loss', save_best_only=True, save_weights_only=True) model = Sequential() model.add(Dropout(0.3, input_shape=(len(word2index),))) model.add(Dense(10, activation="relu")) model.add(Dense(10, activation="relu")) model.add(Dense(2, activation="softmax")) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=[metrics.categorical_accuracy]) model.fit(train_X, train_Y, validation_data=(validation_X, validation_Y), callbacks = [save_best_model], epochs=20, batch_size=32, verbose=0) model.load_weights(model_path) all_true_labels += list(validation_Y[:, 0]) all_predictions += list(model.predict(validation_X)[:, 0] > 0.5) print('CV accuracy', accuracy_score(all_true_labels, all_predictions)) Training model: models/model_1 Training model: models/model_2 Training model: models/model_3 Training model: models/model_4 Training model: models/model_5 CV accuracy 0.90196 A performance of 90,2% is significant improvement towards the naive classifier that has been previously build. Without a doubt the architecture above can be tuned and a bit better performance can be reached. Nevertheless, there are points in this NLP approach that when bettered can lead to much bigger performance gains. One of most important aspects when approaching a NLP task are the data pre-processing and the feature engineering. I will dicuss many interesting techniques such as e.g. TF-IDF. Until then make sure you check out this repo and run the code on your own: sentiment-analysis.
http://machinememos.com/natural%20language%20processing/sentiment%20analysis/python/keras/tensorflow/artificial%20intelligence/machine%20learning/neural%20networks/tfidf/word2vec/word%20embeddings/2017/08/06/basics-sentiment-analysis.html
CC-MAIN-2019-51
refinedweb
2,871
52.66
A shared-resource-locking queue system using python and redis. Project description Turn Introduction Turn is a shared-resource-locking queue system using python and Redis. Use it in separate programs that acess the same shared resource to make sure each program waits for its turn to handle the resource. It is inspired on a the queueing system that is sometimes found in small shops, consisting of a serial number dispenser and a wall indicator. Turn comes with a commandline tool for resetting and direct inspection of queues, and listening to message channels for one or more resources. Installation Install turn with pip: $ pip install turn Of course, you should also have a Redis server at hand. Usage Basic usage goes like this: import turn # a locker corresponds to a reusable Redis client locker = turn.Locker(host='localhost', port=6379, db=0) resource = 'my_valuable_resource' label = 'This shows up in messages.' with locker.lock(resource=resource, label=label): pass # do your careful work on the resource here lock() accepts two extra keyword arguments: expire: maximum expire value for a users presence (default 60) If a user crashes hard, its presence in the queue will be kept alive at most expire seconds. This value affects how often the EXPIRE command will be sent to redis to signal the continuing presence of a user in the queue. patience: period of waiting before bumping the queue (default 60) If a program waits longer than this value without receiving any progression messages on the queues pubsub channel, it will bump the queue to see if any users have left the queue in an unusual way. Tools The state of users and queues can be monitered by inspection of Redis values and subscription to Redis channels. Inspection can be done using the console script requesting a snap-shot status report: $ turn status my_valuable_resource --host localhost my_valuable_resource 5 ------------------------------------------------------------ This shows up in status reports and messages. 5 Running turn status without specifying any resources, produces a summary of all queues within the database. Alternatively, one or more subscriptions to the Redis PubSub channels for a particular resource can be followed: $ turn follow my_valuable_resource --port 6379 my_valuable_resource: 5 assigned to "This shows up in messages." my_valuable_resource: 5 started my_valuable_resource: 5 completed by "This shows up in messages." my_valuable_resource: 6 granted Similar to the status command, running turn follow without specifying any resources starts following the channels for any queue currently within the database. Note that new queues are not automatically added to the subscription. Queues can also be reset (removed) from redis using turn reset optionally followed by resources queues to reset. Reset without resource names resets all available queues in the server. If a queue for a resource shows activity, it will not be reset and in addition a message will be produced. A reset command for a resource will also ‘bump’ the queue for that resource. Implementation details When a lock is requested, a unique serial number is obtained from Redis via INCR on a Redis value, called the dispenser. The lock is acquired only if another value, called the indicator, corresponds to the unique serial number. There are two mechanisms that can change the indicator: - The user with the corresponding serial number is finished acting on the shared resource and increases the number, notifying any other subscribed waiting users. This is the preferred way to handle things. - Another user gets impatient and calls the bump procedure. This procedure checks if the user corresponding to the indicator is still active and if necessary sets the indicator to an appropriate value. Activity is monitored via an expiring key-value pair in Redis. The turn library automatically arranges a thread that keeps updating the expiration time, to make sure the presence does not expire during waiting for, or handling of the resource. Credits - Written by Arjan Verkerk Changes 0.3.1 (2015-04-30) - Nothing changed yet. 0.3 (2015-04-30) - All code covered by tests. - Extend docs. 0.2.1 (2015-04-28) - Patience adjusted to seconds - PubSub connections closed automatically - Make patience an argument of lock() - Docs updated 0.2 (2015-04-28) - Documentation adjustments. - Move console stuff to separate module. - Use select and not poll, to make increase platform independency. - Use the name ‘Locker’ for the reusable object that locks things. 0.1.1 (2015-04-23) - U can use pip now. 0.1 (2015-04-23) - Initial project structure created with nensskel 1.36.dev0. - First working version. Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/turn/0.3.1/
CC-MAIN-2018-51
refinedweb
768
54.22
Porting Symbian Qt Apps to Nokia N9 This article explains how to port Symbian Qt applications to the Nokia N9, which is based on the MeeGo 1.2 Harmattan platform. Note also that most of the methods in this article can also be used to port from Harmattan to Symbian. Prerequisites There are some differences between Symbian and MeeGo 1.2 Harmattan development processes. If you haven't yet developed for Harmattan we recommend you read Getting started with Harmattan (in MeeGo 1.2 Harmattan Development Documentation). This covers, among other things, selecting the correct target and setting up the Harmattan emulator (QEMU) or the device. Introduction Symbian and. Using QWidgets in Symbian has also been deprecated from Qt 4.7.4 release. Although QWidgets work as earlier it is highly recommended to not to use them anymore, see Symbian platform notes. - Symbian apps written using Qt Quick may use Qt Quick Components for Symbian for a native look and feel. Harmattan developers use the Harmatttan-specific. For a comparison between the Symbian and Harmattan components, see UI migration from N9 to updated Symbian style. -. A Qt application that only uses Qt Quick but not Qt Quick Components falls to this category. This approach is also Components a custom TextLabel element that works on both platforms: import QtQuick 1.0 Rectangle { property alias text: labelText.text ... gradient: Gradient { GradientStop { position: 0.0; color: "#555555" } GradientStop { position: 1.0; color: "#222222" } } // Inner rectangle to make borders Rectangle { ... } Text { id: labelText ... } } interface classes and dynamic binding usually solves the issue:); #endif } ... In the source file (.cpp) construct a different instance depending on the target platform. For information on how to define a Harmattan-specific code scope, see here. Step 2: Project file and application deployment Add the Harmattan configurations into the project file. The following block contains the configurations in the project file of in the Projects tab page in the Qt SDK (see the following figure). Harmattan platform uses a concept called Resource Policy Framework to handle the different roles and modes that a modern smartphone is used in. For example, you can make the hardware volume keys control the audio volume of your game application. For more information, see Selecting the resource application class. Example: 1. Create a <your application>.conf file (the following snippets are from main.cpp so that the correct QML file is loaded when the application is launched (note that this is only required if the QML files are deployed onto the device instead of putting them into resources): QML and JavaScript files can be compiled into the binary using the Qt resource system. When using the resource system, you don't have to deploy (copy) the QML files onto the device. However, due to a bug in Qt Quick 1.0 concerning Symbian QML files using the component, icons cannot be placed in resources. The bug is fixed in Qt Quick 1.1 release for Symbian that is available in Qt 4.7.4. If the main QML file is loaded from resources and you have created separate resource files for both platforms, no changes are required: Build the application using the Harmattan target and fix any errors found. Build errors help you to find the platform-specific code that needs to be rewritten for Harmattan. The #ifdef approach usually works. Step 4: Adapting to Harmattan Qt Quick Components If your Symbian Qt Quick application uses Qt Quick Components 1.0 and components such as PageStack, StatusBar, or ToolBar, you probably need to modify at least the main.qml file of the Harmattan build. The following snippets show the possible differences in main.qml between Symbian and Harmattan: import QtQuick 1.0 import com.nokia.symbian 1.0 // Symbian Qt Quick Components and Qt Quick Components 1.1 for Symbian, Symbian components will also be equipped with the PageStackWindow element. Next, run/debug your application with an emulator or on a device and locate the possible errors in the debug log. Also fix possible scaling issues since the resolution until your application looks and behaves the way you want it to. Don't forget to test that your original Symbian version still works! If your application uses Qt Quick Components, note that the component set does not match perfectly. For example, Qt Quick Components 1.0 for Symbian lack the PageStackWindow element and Harmattan components version 1.0 lack the ListItem element which you need to implement yourself when porting an application (see the following snippets). Moving to Qt Quick Components 1.1 for Symbian usage will ease the porting to Harmattan as there's less differences in fundamental elements like PageStackWindow. Generally, applications will get other benefits too like the split-view inputs which is similar how text input happens in Harmattan., that are ported using the approach covered in this section: - Diner - RentBook example - RSS Reader - Tic-Tac-Toe over Sockets .
http://developer.nokia.com/community/wiki/Porting_Symbian_Qt_Apps_to_Nokia_N9
CC-MAIN-2014-23
refinedweb
818
58.28
#include <utmpx.h>cc ...-lc updwtmpx (const char *wfilex, struct utmpx *utmpx); #include <utmpx.h> void getutmp (const struct utmpx *utmpx, struct utmp *utmp); void getutmpx (const struct utmp *utmp, struct utmpx *utmpx); void updwtmp (const char *wfile, struct utmp *utmp); getutxent(S), getutxid(S), getutxline(S) and pututxline(S) each return a pointer to a utmpx structure. (See utmpx(F).) getutxent( ) reads in the next entry from a utmpx-like file. If the file is not already open, it opens it. If it reaches the end of the file, it fails. The action of getutxid( ) depends on the type of entry. If the type specified is RUN_LVL, BOOT_TIME, OLD_TIME, or NEW_TIME, getutxid( ) searches forward from the current point in the utmpx file until it finds an entry with a ut_type matching id->ut_type. But if the type specified in id is one of INIT_PROCESS, LOGIN_PROCESS, USER_PROCESS, or DEAD_PROCESS, it returns a pointer to the first entry whose type is one of these four and whose ut_id field matches id->ut_id. If getutxid( ) reaches the end of file without a match, it fails. getutxline( ) searches forward from the current point in the utmpx file until it finds an entry of the type LOGIN_PROCESS or USER_PROCESS which also has a ut_line string matching the line->ut_line string. If it reaches the end of file without a match, it fails. pututxline( ) writes out the supplied utmpx structure into the utmpx file. If it is not already at the proper place, it uses getutxid( ) to search forward for the proper place. Normally, the user of pututxline( ) searches for the proper entry using one of the getutx(S) routines. If so, pututxline( ) does not search. If pututxline( ) does not find a matching slot for the new entry, it adds a new entry to the end of the file. It returns a pointer to the utmpx structure. setutxent(S) resets the input stream to the beginning of the file. Do this before each search for a new entry if you want to examine the entire file. endutxent(S) closes the currently open file. utmpxname(S) allows the user to change the name of the file examined, from /var/adm/utmpx to any other file. This other file is usually /var/adm/wtmpx. If the file does not exist, that is not apparent until the first attempt to reference the file is made. utmpxname( ) does not open the file. It just closes the old file if it is currently open and saves the new file name. The new file name must end with ``x'' to allow the name of the corresponding utmp file to be easily obtainable (otherwise an error code of 0 is returned). getutmp(S) copies the information stored in the fields of the utmpx structure to the corresponding fields of the utmp structure. If the information in any field of utmpx does not fit in the corresponding utmp field, the data is truncated. getutmpx(S) copies the information stored in the fields of the utmp structure to the corresponding fields of the utmpx structure. updwtmp(S) checks the existence of wfile and its parallel file wfilex, whose name is obtained by appending an ``x'' to wfile. If only one of them exists, the other is created and initialized to reflect the state of the existing file. utmp is written to wfile and the corresponding utmpx structure is written to the parallel file. If neither file exists nothing happens. updwtmpx(S) checks the existence of wfilex and its parallel file wfile, whose name is obtained by removing the final ``x'' from wfilex. If only one of them exists, the other is created and initialized to reflect the state of the existing file. utmpx is written to wfilex, and the corresponding utmp structure is written to the parallel file. If neither file exists nothing happens. There is one exception to the rule about emptying the structure before further reads are done. The implicit read done by pututxline( ) (if it finds that it is not already at the correct place in the file) does not hurt the contents of the static structure returned by getutxent( ), getutxid( ), or getutxline( ), if you have just modified those contents and passed the pointer back to pututxline( ). These routines use buffered standard I/O for input, but pututxline( ) uses an unbuffered write to avoid race conditions between processes trying to modify the utmpx and wtmpx files. getutxent(S), getutxid(S), getutxline(S), pututxline(S), setutxent(S), and endutxent(S) are conformant with: X/Open CAE Specification, System Interfaces and Headers, Issue 4, Version 2.
http://osr507doc.xinuos.com/cgi-bin/man?mansearchword=getutxline&mansection=S&lang=en
CC-MAIN-2020-50
refinedweb
763
70.33
Constructors Constructors are special methods that are used to instantiate(construct) the object. Constructors have certain characteristics. Constructor Characteristics 1. Have the same name as the class. 2. Have no return type. 3. A default, no-argument constructor is automatically provided by the compiler if you do not create one. 4. Constructors can be overloaded 5. Needs to be declared as public to be available outside the package to other classes public class Socket{ } The socket class have a default constructor, since we did not create a custom one. The default constructor will be as follows public Socket() { } Socket class default constructor. Calling the Constructor The constructor is called to create a new object from the class using the new operator. This creates a new object called socket from the Socket class. In Java the convention is usually to name the new object with the same name as the class but start the name with a lowercase letter. Custom Constructor The following defines a custom constructor for the Socket class. The consturctor takes in a single parameter url of type String. public class Socket{ public Socket(String url) { } } When you invoke the consturctor, the arguments used must match the declaration's parameters in type and order. We can create a new socket object by passing the url argument String as follows Socket socket = new Socket(""); Note that when you create a custom constructor, the compiler will not create the default, no-argument constructor. The default constructor will need to be explicitly created as follows : public class Socket{ public Socket(String url) { } public Socket(){ } } And you can still either create socket objects by calling the default constructor or the custom constructor. Socket socket = new Socket(""); //or the default, no-agument constuctor Socket socket = new Socket();
https://java-book.peruzal.com/class/constructors.html
CC-MAIN-2018-22
refinedweb
294
55.34
Memory Access Patterns¶ If you have not figured this out by now, computer architecture is tied to the software it runs. Designers study programming patterns, and use that data to find ways to improve performance. Let’s look at some code, and see how we might process data. Note Some of this code is silly. We could just as easily do a few calculations and come up with the data we will uncover. However, building the code shown here is helpful in exploring aspects of modern architecture. Processing Big Data¶ The rage today is working on “big data”. We are not quite sure what that data is, but we are told it is “BIG”. In my career, I have processed a lot of “big data”, long before that term had even been mentioned. In my work as a Computational Fluid Dynamicist, I routinely worked on piles of data, Gigabytes big. So big, in fact, that we had a hard time moving it back and forth into the machine to do the calculations. Let’s build a simple model of such work. Our model will be like all models, a small version of the real thing. In these experiments, the actual processing is not important. The fetching of data from some storage area for processing is important. Therefore, we will set up our model data storage areas as arrays of data items, say 64-bits wide. Our arrays can be any size, the “bigger” the better, but we will keep things under control for our model. We will need three different storage areas for this work: - Registers - small - Memory - fairly big, but slower - Disk - huge but very slow Just for fun, let’s size these data areas based on the number of bits it takes to address all of them. (Hey we are studying computer architecture, and the address bus is an important part of this!) Data Stores¶ Here is a start on setting up the system’s storage areas: #include <iostream> #include <cstdint> #include <string> //set up data areas const uint64_t raddr_bits = 6; const uint64_t reg_size = 1 << raddr_bits; const uint64_t maddr_bits = 16; const uint64_t mem_size = 1 << maddr_bits; const uint64_t daddr_bits = 18; const uint64_t data_size = 1 << daddr_bits; // byte arrays uint8_t registers[reg_size]; uint8_t memory[mem_size]; uint8_t data[data_size]; I wonder how big they are! int main( void ) { std::cout << "Register Size: " << reg_size << std::endl; std::cout << "Memory Size : " << mem_size << std::endl; std::cout << "Data Size : " << data_size << std::endl; } Here is the output: Register Size: 64 Memory Size : 65536 Data Size : 262144 (Hey, these is not really big! They are tiny by today’s standards, but back in 1975, when I started building my own computers, this was what we had available. Access Patterns¶ In reviewing Gauss’s Busy Work code, we see that we have several access patterns to look at. Actually, there are only two. One involves strictly sequential access to memory, and the other is fairly random access. We hop all over the data store fetching our data. If we want to model a real computer in these experiments, we need to consider how the different devices work. Each one has some total capacity, that much should be obvious. But there are two other characteristics we need to consider as well: Access Time¶ Every device takes some time to get ready. We deliver an address to the store, and we wait for the result. The delay is called the access time. For a simple device that time may be all we need. But other devices work differently. For example if our data store is a rotating hard disk, the address reaches the device, and we need to wait while the read-write head positions to the right track, and then wait for the disk to spin around into the right spot on that track. We can read our data at that moment. Stream Time¶ If we access totally random data stored on this disk, the access time is the delay we will experience (although it will not be a fixed time (depending on where the next data item lives). However, if the next fetch is located next to the first fetch, we can do better. In fact, we can “stream” data from sequential storage locations much faster than what we will see for random accesses. The reads happen as the disk spins, we never move the read-write head. This faster access rate can be measured by a different term, we will call it the stream_time, a measure of the time between sequential fetches after the first fetch is ready. Modeling a Storage Device¶ Let’s create a simple data structure (a C++ struct) to store all of this information about a store: // data area management structure struct Store { std::string name; uint8_t *data; uint64_t addr_bits; uint64_t size; uint64_t start_address; uint8_t access_time; uint8_t stream_time; }; Note In case you have not seen this feature of C++, a struct is basically a class with no methods. In fact, some programmers never use this old pattern, inherited from “C”, and set up classes with no methods. Internally, they are the same thing. After doing some intense “Googling”, I came up with this set of values for the delays in accessing our three data storage areas: // access times const uint8_t reg_access = 1; const uint8_t reg_stream = 1; const uint8_t mem_access = 10; const uint8_t mem_stream = 2; const uint8_t disk_access = 100; const uint8_t disk_stream = 24; Here is the code needed to set up the first two of our data stores using these structures: Store a1 = { "REG", registers, raddr_bits, reg_size, 0, reg_access, reg_stream}; Store a2 = { "MEM", memory, maddr_bits, mem_size, 0, mem_access, mem_stream}; Utility Routines¶ To assist in our work, we need a utility routine to initialize a data area with a sequence of numbers. Here is code to do tat: // initialize an array with a sequence of numbers void init( uint8_t *array, uint64_t size, int n) { for(uint64_t i=n; i<size;i++) { array[i] = i; } } We will hand this routine a management structure, and it will initialize that array for us. Modeling Memory Access¶ We will skip the first version of Gauss’s Busy Work code we showed earlier, and start off with the one-dimensional array version. The heart of this code was a simple loop that accessed each data item in order to do the work. Experiment 1: Modeling Random Access¶ If our data fetches are random, all fetches will result in the delay specified by the access _time variable. To model this, we set up a simple loop that looks like this: // experiment 1: process data in different areas: rtime = mtime = dtime = 0; for(uint64_t i=0; i < data_size; i++) { rtime += reg_access; mtime += mem_access; dtime += disk_access; } std::cout << "Time to process data in registers: " << rtime << std::endl; std::cout << "Time to process data in memory : " << mtime << std::endl; std::cout << "Time to process data on disk : " << dtime << std::endl; We do not really need to fetch th data, each fetch will happen in the time specified by the access time, so we just calculate that time here to get a baseline number for reference. The code examines the time it would take to fetch each item from each data store: Here is the output: Time to process data in registers: 262144 Time to process data in memory : 2621440 Time to process data on disk : 26214400 Experiment 2: Moving from Memory to Registers¶ In this next experiment, we need to work harder. We want to process all of the numbers stored in memory, using the registers. Since we have more memory than registers, we need to pull the data in from the memory in blocks. Once the data is in our registers, we can do the work. Our program code is unaware of all of this, it is simple running through a loop adding up numbers. In doing that it generates a sequence of addresses that head off to the controller. In this experiment, the raw data is in memory, so our addresses represent locations in memory. Obviously those addresses will be beyond anything available for the registers, so we need to translate the addresses. Here is the idea: Break each address up into two parts: - offset - the low bites (equal to register address bit size) - tag: all of the other bits. If you view the memory area as a set of blocks, each one exactly the same size as our register area, then a memory address ends up looking like this: - offset, - index into any block - tag: block number We can watch the tag part of an address and check to see if that tag is currently loaded in the registers. If so, we add in the number from the register indicated by the offset part. If the tag does not match, we need to load that block into the register store. In all of this, we will use the access time needed to measure our time. We will add in the streaming improvement later. Here is a routine to load a block: // general purpose load routine (from area2 to area1) uint64_t load( Store area1, Store area2, uint64_t block_size) { uint64_t time = 0; uint64_t a1_addr = area1.start_address; uint64_t a2_addr = area2.start_address; for(uint64_t i=0; i < block_size; i++) { // check for transfer faults if(a1_addr + 1 >= area1.size) { //area 1 fault std::cout << area1.name << " fault" << std::endl; std::exit(1); } if(a2_addr + 1 >= area2.size) { //area 2 fault std::cout << area2.name << " fault" << std::endl; std::exit(1); } // no faults, do the transfer area1.data[a1_addr + i] = area2.data[a2_addr + i]; if(i == 0) // access time delay time += area2.access_time; // a2 is always slower else time += area2.stream_time; } return time; } The interesting part of this code is checking to see that the transfer will work. If we step out of bounds in either block, we generate a fault message, and exit our program. Processing Loop¶ The processing loop is simple. We will not do any actual processing, just set up the data fetches. We can figure out the data we will get easily. If we are going to process n bytes from slow memory into faster memory, then process the data from faster memory, the total time to do the work is: - n * access_time1 + n * access_time 2. As an example, if we move 65566 bytes from memory, which has an access time of 10 clocks, it will take 655360 clocks to complete the data transfers. If the data must all be processed through the registers, which have an access time of 1 clock tick, it will take 65536 clock ticks to do the work. The total is 655360+65536 = 720896. Let’s hope our code works properly! Note This seems silly. The time it takes to do this processing is longer than just accessing the data directly from the slower of the two. In our example, we are modeling moving data from memory into registers where we will do the actual work. Registers are faster than the memory, but we have very few registers, and much more memory. The real story is a bit more complex, this is just a start. Lets build some code and see if we get the right numbers: // experiment 2: process data memory array through registers rtime = 0; Store a1 = { "REG", registers, raddr_bits, reg_size, 0, reg_access, reg_stream}; Store a2 = { "MEM", memory, maddr_bits, mem_size, 0, mem_access, mem_stream}; uint32_t tag; uint32_t offset; uint32_t current_tag = 99; // force initial load // process the data for(uint64_t i = 0; i < a2.size; i++) { tag = i >> a1.addr_bits; offset = i & (a1.size - 1); if(tag != current_tag) { rtime += load(a1,a2,a1.size); current_tag = tag; std::cout << "\t\tloading tag " << tag << " "; std::cout << a2.name << " address = " << i << std::endl; } a1.data[i] = 0; rtime += a1.access_time; } std::cout << "EX2: process time: " << rtime << std::endl; Running this code give a lot of output, most of which is generated by the loads of 64 bytes at a time into the registers. The last few lines tell the story: loading tag 1020 MEM address = 65280 loading tag 1021 MEM address = 65344 loading tag 1022 MEM address = 65408 loading tag 1023 MEM address = 65472 EX2: process time: 720896 Hey, we got the right number. That was sure a lot of work to generate a number we could just figure out by hand. But, hey, we are programmers here, this was much more interesting. The Real Story¶ Storage devices to not work in such a simple way. The access time is the time it takes to get that first byte ready to move across a bus to the destination. If our access into this device was totally random, then that access time might be the number we need to calculate throughput. However, if we fetch data sequentially the device can do much better. It can stream data at a much higher rate. RAM has its own clock. A typical memory module is clocked at a rate of 1.333GHz which works out to about half the speed of the processor. Therefore, the burst transfer rate works out to one byte every two clock ticks. (twice as slow as transfers inside the chip. Of course, this only works if the data access is sequential. Introduce random access, and we are back to using the latency numbers. Here are some numbers we can work with: - - Registers: - - Latency: 1 clock - stream rate: 1 (no improvement here) - - Memory (DDR3-1333): - - Latency: 13 clocks - stream rate: 2 clock ticks. - - Disk (7200rpm) - - Latency: 100 tocks - Stream rate: 23 Let’s see what this does to our code. Adding Stream Rate¶ The stream rate improves performance in our test. When we move a block of data, say n bytes large, the total time needed works out as follows: - time = 1 * access_time + (n-1) * stream time If the stream time is the same as the access time, the time is just n * access time, which is what we used in the previous example code. Note There is no new code for this experiment. The stream delay calculation is already in the code, I just made it equal to the access time for the last experiment! Lazy me! Now, considering this improvement, our total processing time looks like this: EX3: process time: 204800 Looks like we improved things by a factor of three. Not bad! Non-Sequential processing¶ Unfortunately, programmers do not always do the “right thing” Let’s consider a two dimensional array that we need to process: #include <iostream> const int nrows = 5; const int ncols = 10; int addr; int data[nrows][ncols]; int sum1, sum2 = 0; int main( void ) { for(int i = 0; i < nrows; i++) { for (int j = 0; j < ncols; j++) { data[i][j] = 1; sum1 += data[i][j]; addr = i * ncols + j; sum2 += data[0][addr]; } } std::cout << sum1 << std::endl; std::cout << sum2 << std::endl; } That code looks a bit strange. I am building up a sum1 by placing a one in each memory location, then adding it to the sum variable. This is just a count of the cells. Look at that silly code that calculates sum2. What in the world is going on there. Note I am cheating here, I am over indexing the column index on purpose. I am also always aiming the processor at column zero. The effect is to access each item in the array using the address I calculate from the row number and the column number. This is exactly what the compiler does, after laying out your array in the correct form. The accesses are always using an address to fetch data inside the processor! If you are not convinced that this will work, run the code! Compare that to this code: #include <iostream> const int nrows = 5; const int ncols = 10; int addr; int data[nrows][ncols]; int sum1, sum2 = 0; int main( void ) { for(int j = 0; j < ncols; j++) { for (int i = 0; i < nrows; i++) { data[i][j] = 1; sum1 += data[i][j]; addr = i * ncols + j; sum2 += data[0][addr]; } } std::cout << sum1 << std::endl; std::cout << sum2 << std::endl; } Do you see the difference? The first example access the data column by column with a single row, then moves on to the next row. Based on how an array is stored in memory (row major) this is back to our sequential memory access time. On the other hand, the second example access data row by row, working down one column at a time. This is not going to be as efficient. The reason why should be clear by now. If we are not going to fetch the next byte in sequence from memory, we lose the advantage of that stream rate for data transfer. Every fetch if going to incur the full access time delay.
http://www.co-pylit.org/courses/cosc2325/memory-heirarchy/03-memory-access-patterns.html
CC-MAIN-2018-17
refinedweb
2,784
68.7
#include <db.h> int memp_fget(DB_MPOOLFILE *mpf, db_pgno_t *pgnoaddr, u_int32_t flags, void **pagep); The memp_fget function copies a pointer to the page with the page number specified by pgnoaddr, from the source file in the DB_MPOOLFILE, into the memory location to which pagep refers. If the page does not exist or cannot be retrieved, memp_fget will fail. Page numbers begin at 0; that is, the first page in the file is page number 0, not page number 1. The returned page is size_t type aligned. The flags value must be set to 0 or by bitwise inclusively OR'ing together one or more of the following values: The DB_MPOOL_CREATE, DB_MPOOL_LAST, and DB_MPOOL_NEW flags are mutually exclusive. Fully or partially created pages have all their bytes set to 0, unless other behavior was specified when the file was opened. All pages returned by memp_fget will be retained (that is, pinned), in the pool until a subsequent call to memp_fput. The memp_fget function returns a non-zero error value on failure, 0 on success, and returns DB_PAGE_NOTFOUND if the requested page does not exist and DB_MPOOL_CREATE was not set. The memp_fget function may fail and return a non-zero error for the following conditions: The DB_MPOOL_NEW flag was set, and the source file was not opened for writing. More than one of DB_MPOOL_CREATE, DB_MPOOL_LAST, and DB_MPOOL_NEW was set. The memp_fget function may fail and return a non-zero error for errors specified for other Berkeley DB and C library or system functions. If a catastrophic error has occurred, the memp_fget function may fail and return DB_RUNRECOVERY, in which case all subsequent Berkeley DB calls will fail in the same way.
http://pybsddb.sourceforge.net/api_c/memp_fget.html
CC-MAIN-2017-47
refinedweb
278
56.89
Log aggregation with ElasticSearch, Fluentd and Kibana stack on ARM64 Kubernetes cluster This article was updated on 18/jan/2019 to reflect the updates on the repository, images to 6.5.4 and support to multi-arch cluster composed of X64 (Intel) and ARM64 hosts. The project will be updated so it might be newer than the one described here. A while back, as a proof of concept, I’ve set a full logging aggregation stack for Kubernetes with ElasticSearch, Fluentd and Kibana on ARM64 SBCs using my Rock64 Kubernetes cluster. This deployment was based on this great project by Paulo Pires with some adaptations. Since then, Pires discontinued the project but in my fork you can find all the latest files for the project including the manifests for the cluster, image Dockerfiles and build script, Kibana dashboard and detailed information in the Readme:. Images All included images that depend from Java, were built using OpenJDK 1.8.0 181. I recently wrote a post testing the performance for multiple JVMs on ARM64 and found this version provides the best performance. Previously I used Oracle Java in a custom Docker image that is still in the repository. The project is composed of the ElasticSearch base image, and the ES image with custom configuration for the Kubernetes cluster. The Dockerfiles are a little big so I won’t paste them here but you can check on the repository. I also created an image for ElasticSearch Curator, a tool to cleanup the ES indexes after a determined amount of days and the Kibana image. The Fluentd image was based on the Kubernetes addons image found here with minor adaptations. All the image sources can be found in the repo and are pushed to my Dockerhub account. Having separate images for these steps makes it easier to update separate components . The images are all multi-architecture with X64 (Intel) and ARM64 with a manifest that point to both. Deployment The deployment is done on the “logging” namespace and there is a script to automate it and tear it down. You can either use it or do it manually to follow the deployment step by step. The standard manifest in the root level deploy a three-node cluster where all nodes perform all roles (Master, Ingest and Data). I found this to be more tailored to SBCs like the ones I use. There is also a separate set of manifests to deploy a four-node cluster where each node have it's role(1 Ingest, 1 Master and 2 Data). To deploy this option, use the included deploy script in the separate-roles directory. First create the namespace and an alias to ease the deployment commands: kubectl create namespace logging alias kctl='kubectl --namespace logging' Full roles To deploy the simpler version, there is a set of manifests in the root dir that can be installed with: kctl apply -f es-full-svc.yaml kctl apply -f es-full-statefulset.yaml Separate Roles To deploy in separate roles, I have a deeper description on how it’s deployed. Services and Ingresses There are three services on the stack, one for Kibana web interface, one for ElasticSearch API interface on port 9200 and the last one for ElasticSearch internal node communication on port 9300. To accompany the services, there are two (actually three in my cluster) ingresses that allow external access to the stack. One for Kibana web interface, another for Kibana external access (in my case since I use two Traefik Ingresses, one for the internal domain and another for the external, internet valid domain) and the last for ElasticSearch API. Adjust the domains and need for these ingresses according to your environment. Start Deploying the services for ES: kctl apply -f ./separate-roles/es-master-svc.yaml kctl apply -f ./separate-roles/es-ingest-svc.yaml kctl apply -f ./separate-roles/es-data-svc.yaml ElasticSearch In ES, you have the master nodes that control and coordinate the cluster, the data nodes responsible for storing, searching and replicating the data and the client nodes responsible for the API calls and ingesting the data from the collectors. More details on each role here. Master node ElasticSearch master nodes can be deployed in a single instance or a quorum with many nodes. This can be controlled in the “replicas” parameter on the deployment manifest and also need to be adjusted in the NUMBER_OF_MASTER ENV in the manifest according to the documentation. In my deployment I have only one master with one requested CPU and a maximum of 512Mb of memory limited in Java’s Xms and Xmx parameters. The master also has a PersistentVolume that in my case uses a StorageClass allocated in a NFS server. More info on my Kubernetes Cluster post linked in the start of the article. A good explanation by Paulo on the replicas and masters is quoted here: Why does NUMBER_OF_MASTERSdiffer from number of master-replicas? The default value for this environment variable is 2, meaning a cluster will need a minimum of 2 master nodes to operate. If a cluster has 3 masters and one dies, the cluster still works. Minimum master nodes are usually n/2 + 1, where nis the number of master nodes in a cluster. If a cluster has 5 master nodes, one should have a minimum of 3, less than that and the cluster stops. If one scales the number of masters, make sure to update the minimum number of master nodes through the Elasticsearch API as setting environment variable will only work on cluster setup. To deploy the Master statefulSet, and the configMap use: kctl apply -f es-configmap.yaml kctl apply -f ./separate-roles/es-master-statefulset.yaml You can check the Master was deployed and it’s logs before proceeding: $ kctl rollout status statefulset es-master statefulset "es-master" successfully rolled out# List the master node logs kctl get pods kckctl logs es-master-0 kctl logs [master node pod] The configuration can be overridden with the elasticsearch-configmap that is shared among all nodes. Ingest Node The ingest node are the ones that receive the API calls and ingest the data into the cluster. I deployed only one node but this parameter can be adjusted on the “replicas” parameter of the manifest. The node have 2 requested CPUs, 1Gb of memory and it’s persistent data is stored in the StorageClass. The format of the file is similar to the Master node below. Deploy it with: kctl apply -f ./separate-roles/es-ingest-statefulset.yaml kctl rollout status deployment es-client You can check the node logs with kctl logs es-ingest-0 command until you see the lines similar to: Data Node The Data nodes are the ones responsible for the heavy work on the cluster. Since they replicate data, I’ve deployed them as a StatefulSet with two nodes and have the data in the StorageClass. ElasticSearch defines the cluster health as colors as red, yellow and green. If you have your Data node without replication, ES will mark your cluster and your indexes (where the data is stored) as yellow. This post have a nice explanation of this. More details in checking your cluster health below. The Data nodes are allocated with 1 requested CPU and 3 CPU limit and 1Gb Java memory. Also the set have Liveness and Readiness probes to control rollouts and restart the pods in case of problems and anti-affinity to avoid running the Data nodes in the same server. To deploy, use: kctl apply -f ./separate-roles/es-data-svc.yaml until kctl rollout status statefulset es-data > /dev/null 2>&1; do sleep 1; printf "."; done Check the node logs with kctl logs es-data-0 and kctl logs es-data-1 until you see lines similar to: Cluster Status First check all elements were deployed correctly in Kubernetes: $ kubectl get svc,ingresses,deployments,pods,statefulsets -l component=elasticsearch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/elasticsearch NodePort 10.111.92.223 <none> 9200:32658/TCP 30h service/elasticsearch-discovery ClusterIP 10.96.98.85 <none> 9300/TCP 30h NAME READY STATUS RESTARTS AGE pod/es-full-0 1/1 Running 0 28h pod/es-full-1 1/1 Running 2 24h pod/es-full-2 1/1 Running 0 30h NAME READY AGE statefulset.apps/es-full 3/3 30h After this, you can query the ElasticSearch cluster for it’s status: $ curl { "name" : "es-full-0", "cluster_name" : "myesdb", "cluster_uuid" : "k1j8Cqw6TySMQkS6MRuYMg", "version" : { "number" : "6.5.4", "build_flavor" : "default", " } And health: $ curl { "cluster_name": "myesdb", "status": "green", "timed_out": false, "number_of_nodes": 3, "number_of_data_nodes": 3, "active_primary_shards": 24, "active_shards": 27, "relocating_shards": 0, "initializing_shards": 0, "unassigned_shards": 0, "delayed_unassigned_shards": 0, "number_of_pending_tasks": 0, "number_of_in_flight_fetch": 0, "task_max_waiting_in_queue_millis": 0, "active_shards_percent_as_number": 100 } And check each node status and load: $ curl ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name 10.36.0.4 51 91 26 4.13 5.52 4.51 mdi - es-full-2 10.32.0.15 40 85 19 0.63 1.22 1.32 mdi * es-full-0 10.46.0.11 25 90 15 1.87 1.90 2.03 mdi - es-full-1 This presents that you have a healthy cluster (all green) and the nodes “found each other”. Now on to the ingest and upkeep chains. Cluster Manager — Cerebro To manage the cluster, I’ve built and deployed Cerebro, a tool that evolved from kops. The manifest contains the deployment, service and the ingress to access it’s web interface. Adjust to your own domain. kctl apply -f cerebro.yaml kctl apply -f cerebro-external-ingress.yaml Curator Deploy ElasticSearch curator to keep your cluster clean and tidy. You can set the amount of days it will keep your data on the configMap before loading. The curator creates a Kubernetes cronjob that’s ran every night. This can be adjusted in the es-curator-cronjob.yaml manifest. ... unit: days unit_count: 10 ... Deploy with: kctl apply -f es-curator-configmap.yaml kctl apply -f es-curator-cronjob.yaml Fluentd Finally deploy Fluentd, the daemon that runs in all your nodes and collect the logs from all deployed containers. You can have many more options, collecting data from the node itself but this would require additional configuration. This deployment used the configs provided by Kubernetes add-on repository. kctl apply -f fluentd-configmap.yaml kctl apply -f fluentd-daemonset.yaml After deploying this, you can see the data being ingested into the cluster with the command curl -s “" I’ve set a watch command to keep monitoring the ingest progress: watch -n 5 'curl -s ""; echo "\n\n"; curl -s ""|head -30' This might take some time depending on the amount of logs you have. In my case took more than 2 hours. Also the data nodes replicate the data between themselves so in the beginning you might see the health of each index as “yellow” but after a while they should become “green”. Kibana Finally Kibana, after all you want to see your data. I’ve not delved too much into customizing and studying it and the Lucene syntax for the queries but there are lots of good posts around. Deploy it using: kctl apply -f kibana-configmap.yaml kctl apply -f kibana-svc.yaml kctl apply -f kibana-deployment.yaml kctl apply -f kibana-external-ingress.yaml And access on the URL based on your ingress configuration. First you need to tell Kibana how to identify your index patterns. The application helps you out on this in the “Discover” menu but it’s just a matter of creating a new index pattern with “logstash*” as the index name and associating the @timestamp as the time series field. You can also see and search the logs in “Discover” and add a couple of columns to ease up visualization like I did with the node name, pod name and the log itself: This is a simple dashboard I created to show the amount of logs ingested, the distribution between all pods and below it I filter some errors and count them. Showing the logs and error logs: Also a heat-map: To create this, I’ve set some visualizations and assembled them into the Dashboard: I’ve exported my data and uploaded it to the repository. You can import on Management -> Saved Objects -> Import. I’ve never tried importing data so YMMV. It’s also included Conclusion This stack can give you a good overview and a pretty functional log aggregation platform. Just be aware that due to limitations on CPU and memory in these SBCs, the response times for queries might be a little high (I had to adjust Kibana timeout config to handle this). Please send me feedback, corrections and suggestions on and issues or pull requests on the project site in.
https://medium.com/@carlosedp/log-aggregation-with-elasticsearch-fluentd-and-kibana-stack-on-arm64-kubernetes-cluster-516fb64025f9
CC-MAIN-2020-29
refinedweb
2,146
62.27
How to plot pie chart in excel sheet using Python After this tutorial, you will be able to plot a pie chart in an excel sheet using Python. We will use the xlsxwriter module that is available in Python. First, you have to know about the xlsxwriter module. xlsxwriter: xlsxwriter is a library in Python that is used to perform operations on excel files. Using the xlsxwriter library, we can do the following operation. we can, - Create an excel file - Write to an excel file - Perform arithmetic operations - Plot charts - Merge cells - Perform data validation. Program - At first, we have imported the xlsxwriter module. - Then we created a workbook object using Workbook() that takes the filename as the argument. - After that, the workbook object is used to add a new worksheet using the add_worksheet() method. - And then, we created a format object to format cells in worksheets using the add_format() method. - Then we created data lists called head and data. - We have written those data along the row and column using write_row() and write_column() methods. write_row() and write_column() methods take cell, data, font format as arguments. - Next, we created a chart object which can be added to a worksheet using the add_chart() method. The add_chart() method takes a type of chart as an argument. - Then we added a data series to a chart using the add_series() method. - The set_title() method is used to set the chart title. - We can set the style of an excel chart using the set_style() method. - The insert_chart() method is used to insert the chart into the excel sheet. - At last, we closed the excel file using the close() method. Let’s take a look at the implementation. import xlsxwriter wbook = xlsxwriter.Workbook('PieChart.xlsx') wsheet = wbook.add_worksheet() font = wbook.add_format({'bold': 1}) head = ['Vehicle', 'Count'] data = [ ['Car', 'Bike', 'Bus'],[10, 50, 70],] wsheet.write_row('A1', head, font) wsheet.write_column('A2', data[0]) wsheet.write_column('B2', data[1]) chart = wbook.add_chart({'type': 'pie'}) chart.add_series({ 'name': 'Transport data', 'categories': ['Sheet1', 1, 0, 3, 0], 'values': ['Sheet1', 1, 1, 3, 1], }) chart.set_title({'name': 'Transport data'}) chart.set_style(5) wsheet.insert_chart('D2', chart, {'x_offset': 25, 'y_offset': 10}) wbook.close() After executing the above Python code, our excel sheet will look like below, I hope that you learned something new from this tutorial.
https://www.codespeedy.com/how-to-plot-pie-chart-in-excel-sheet-using-python/
CC-MAIN-2020-45
refinedweb
385
68.87
Node:How to Code, Next:How to Format, Previous:How to Implement, Up:Top A program consists of different libraries, and a library consists of different object files (built from different source files). The following is not only true for the relationship between libraries, but also for the relationship between the different sources of a library, as well as the different sources of a program. If every module can be understood without reading the source of the other modules, people will understand the whole program better. If every module can be tested without relying on other modules, your code can get more stable. And, if every module can be debugged without digging through all the other code, you will be able to fix bugs much faster than otherwise. Avoid circular dependencies in your modules. Avoid dependencies on too many other modules in a single module. Avoid modules that are tied together too strong. For C, the header files are the faces of your modules. If you put something in a header, you have to expect that people rely on it. Don't put implementation specific stuff in a header. Don't define structures in your header - define the structure in the main source, and simply put a type definition in the header, to make your structures opaque. As every source file is a `low-level module', every source file must have it's own header if it exports symbols that other source files of the same `high-level module' (library or application) uses. Don't write a `big' header file in which you define all symbols that are shared among your library or application, as that would make the internal dependency structure of your modules very unclear. And, of course, protect all headers against multiple including. The #include preprocessor directives are a way of documenting dependencies. Don't include what you don't need. Explicitly include every header you directly depend on, even if it's implicitly included in another header. Be careful to use #include<...> for system headers and headers that are external to your project and #include "..." for your own headers. In header files, only include other headers when the code in this header needs the other header. Foreign headers you include in the header of your library have to be present on every system where your library should be used. In source files, include all headers the source depends on, even those already included in the source's own header. This documents clearly what your source depends on. If your project has a global configuration file (like autoconf's config.h), this must be included in every source file, and it must be included as the very first line of your code after comments, so that all other header files can react on the defines. config.h may only include #define's. Nobody expects code in such a file. Don't include config.h in a header file, unless you want to force all projects that use your header to have a config.h, too. Next should be your source's own header file (where this source exports its external symbols). By putting no other includes before this, you implicitly check whether your header file is self-contained, i.e. if it contains all #include's it needs to compile. Then, include all needed header files that are external to your project, with the most usual ones first. Last, include the needed header files of the other modules of your project. Never put any code before the #include's. Nobody searches them somewhere else as at the very top of your source or header file. Nobody expects real code in files that end in .h. When you write a module, you will write code to test it. That code is a part of the module. Put it in it's own file, document it, and make it a program that others can use to test if the module behaves correctly on their system. GNU Automake provides a very good means for running automatic tests when a program is built with make check. Choose a prefix for a module, and use that prefix for all symbols in that module. If the module prefix is foo, then all public symbols of that module should start with foo_, and all private symbols with _foo_. If the module foo defines a structure bar, name the structure tag _foo_bar, define the structure tag in the main source, and define a type foo_bar in the header. Name all the functions operating on this structure beginning with foo_bar_. For example, name a function that frees the memory of the structure, foo_bar_free. Well, that's the reason why we have static symbols anyway, isn't it?
http://www.nongnu.org/style-guide/download/style-guide/How-to-Code.html
crawl-002
refinedweb
791
72.56
In the previous blog post. Policy pattern descriptions This section briefly describes each policy pattern provided with the Business Service Composer. The general structure of the input that you must provide for each policy pattern is outlined as well as additional information that aims to outline the appropriate use of each policy pattern..loadBusinessComposerDefinitions’. See the previous blog post for more details on this task. Upon successful BSC project load and processing, the services panel should display a structure model resembling the one shown below. If things don’t look right or didn’t show up at all you may need to try these things: · If things still don’t show up after the BSC project has been successfully loaded and processed you’ll have to troubleshoot the policy and pattern logic you’ve used to ensure it matches what’s in the SCR and for those resources.rd section of the manuals discusses how to create and use a custom namespace (e.g. common data model) for TBSM and the SCR. If you implement something like this the use of the BSC may become even easier within your complex multi-vendor, multi-product environment. I look forward to hearing from you!
https://www.ibm.com/developerworks/mydeveloperworks/blogs/7d5ebce8-2dd8-449c-a58e-4676134e3eb8/entry/dynamic_resource_linking_via_business_service_composer_bsc_project_policies_bsm_solution_development_series_and_demo_development37?lang=en_us
CC-MAIN-2018-13
refinedweb
202
52.09
Cutting Edge - Expando Objects in C# 4.0 By Dino Esposito | July 2010 Most of the code written for the Microsoft .NET Framework is based on static typing, even though .NET supports dynamic typing via reflection. Moreover, JScript had a dynamic type system on top of the .NET 10 years ago, as did Visual Basic. Static typing means that every expression is of a known type. Types and assignments are validated at compile time and most of the possible typing errors are caught in advance. The well-known exception is when you attempt a cast at run time, which may sometimes result in a dynamic error if the source type is not compatible with the target type. Static typing is great for performance and for clarity, but it’s based on the assumption that you know nearly everything about your code (and data) beforehand. Today, there’s a strong need for relaxing this constraint a bit. Going beyond static typing typically means looking at three distinct options: dynamic typing, dynamic objects, and indirect or reflection-based programming. In .NET programming, reflection has been available since the .NET Framework 1.0 and has been widely employed to fuel special frameworks, like Inversion of Control (IoC) containers. These frameworks work by resolving type dependencies at run time, thus enabling your code to work against an interface without having to know the concrete type behind the object and its actual behavior. Using .NET reflection, you can implement forms of indirect programming where your code talks to an intermediate object that in turn dispatches calls to a fixed interface. You pass the name of the member to invoke as a string, thus granting yourself the flexibility of reading it from some external source. The interface of the target object is fixed and immutable—there’s always a well-known interface behind any calls you place through reflection. Dynamic typing means that your compiled code ignores the static structure of types that can be detected at compile time. In fact, dynamic typing delays any type checks until run time. The interface you code against is still fixed and immutable, but the value you use may return different interfaces at different times. The .NET Framework 4 introduces some new features that enable you to go beyond static types. I covered the new dynamic keyword in the May 2010 issue. In this article, I’ll explore the support for dynamically defined types such as expando objects and dynamic objects. With dynamic objects, you can define the interface of the type programmatically instead of reading it from a definition statically stored in some assemblies. Dynamic objects wed the formal cleanliness of static typed objects with the flexibility of dynamic types. Scenarios for Dynamic Objects Dynamic objects are not here to replace the good qualities of static types. Static types are, and will remain for the foreseeable future, at the foundation of software development. With static typing, you can find type errors reliably at compile time and produce code that, because of this, is free of runtime checks and runs faster. In addition, the need to pass the compile step leads developers and architects to take care in the design of the software and in the definition of public interfaces for interacting layers. There are, however, situations in which you have relatively well-structured blocks of data to be consumed programmatically. Ideally, you’d love to have this data exposed through objects. But, instead, whether it reaches you over a network connection or you read it from a disk file, you receive it as a plain stream of data. You have two options to work against this data: using an indirect approach or using an ad hoc type. In the first case, you employ a generic API that acts as a proxy and arranges queries and updates for you. In the second case, you have a specific type that perfectly models the data you’re working with. The question is, who’s going to create such an ad hoc type? In some segments of the .NET Framework, you already have good examples of internal modules creating ad hoc types for specific blocks of data. One obvious example is ASP.NET Web Forms. When you place a request for an ASPX resource, the Web server retrieves the content of the ASPX server file. That content is then loaded into a string to be processed into an HTML response. So you have a relatively well-structured piece of text with which to work. To do something with this data, you need to understand what references you have to server controls, instantiate them properly and link them together into a page. This can be definitely done using an XML-based parser for each request. In doing so, though, you end up paying the extra costs of the parser for every request, which is probably an unacceptable cost. Due to these added costs of parsing data, the ASP.NET team decided to introduce a one-time step to parse the markup into a class that can be dynamically compiled. The result is that a simple chunk of markup like this is consumed via an ad hoc class derived from the code-behind class of the Web Forms page: Figure 1 shows the runtime structure of the class created out of the markup. The method names in gray refer to internal procedures used to parse elements with the runat=server elements into instances of server controls. Figure 1 The Structure of a Dynamically Created Web Forms Class You can apply this approach to nearly any situation in which your application receives external data to process repeatedly. For example, consider a stream of XML data that flows into the application. There are several APIs available to deal with XML data, from XML DOM to LINQ-to-XML. In any case, you have to either proceed indirectly by querying the XML DOM or LINQ-to-XML API, or use the same APIs to parse the raw data into ad hoc objects. In the .NET Framework 4, dynamic objects offer an alternative, simpler API to create types dynamically based on some raw data. As a quick example, consider the following XML string: To transform that into a programmable type, in the .NET Framework 3.5 you’d probably use something like the code in Figure 2. var persons = GetPersonsFromXml(file); foreach(var p in persons) Console.WriteLine(p.GetFullName()); // Load XML data and copy into a list object var doc = XDocument.Load(@"..\..\sample.xml"); public static IList<Person> GetPersonsFromXml(String file) { var persons = new List<Person>(); var doc = XDocument.Load(file); var nodes = from node in doc.Root.Descendants("Person") select node; foreach (var n in nodes) { var person = new Person(); foreach (var child in n.Descendants()) { if (child.Name == "FirstName") person.FirstName = child.Value.Trim(); else if (child.Name == "LastName") person.LastName = child.Value.Trim(); } persons.Add(person); } return persons; } The code uses LINQ-to-XML to load raw content into an instance of the Person class: The .NET Framework 4 offers a different API to achieve the same thing. Centered on the new ExpandoObject class, this API is more direct to write and doesn’t require that you plan, write, debug, test and maintain a Person class. Let’s find out more about ExpandoObject. Using the ExpandoObject Class Expando objects were not invented for the .NET Framework; in fact, they appeared several years before .NET. I first heard the term used to describe JScript objects in the mid-1990s. An expando is a sort of inflatable object whose structure is entirely defined at run time. In the .NET Framework 4, you use it as if it were a classic managed object except that its structure is not read out of any assembly, but is built entirely dynamically. An expando object is ideal to model dynamically changing information such as the content of a configuration file. Let’s see how to use the ExpandoObject class to store the content of the aforementioned XML document. The full source code is shown in Figure 3. public static IList<dynamic> GetExpandoFromXml(String file) { var persons = new List<dynamic>(); var doc = XDocument.Load(file); var nodes = from node in doc.Root.Descendants("Person") select node; foreach (var n in nodes) { dynamic person = new ExpandoObject(); foreach (var child in n.Descendants()) { var p = person as IDictionary<String, object>); p[child.Name] = child.Value.Trim(); } persons.Add(person); } return persons; } The function returns a list of dynamically defined objects. Using LINQ-to-XML, you parse out the nodes in the markup and create an ExpandoObject instance for each of them. The name of each node below <Person> becomes a new property on the expando object. The value of the property is the inner text of the node. Based on the XML content, you end up with an expando object with a FirstName property set to Dino. In Figure 3, however, you see an indexer syntax used to populate the expando object. That requires a bit more explanation. Inside the ExpandoObject Class The ExpandoObject class belongs to the System.Dynamic namespace and is defined in the System.Core assembly. ExpandoObject represents an object whose members can be dynamically added and removed at run time. The class is sealed and implements a number of interfaces: As you can see, the class exposes its content using various enumerable interfaces, including IDictionary<String, Object> and IEnumerable. In addition, it also implements IDynamicMetaObjectProvider. This is the standard interface that enables an object to be shared within the Dynamic Language Runtime (DLR) by programs written in accordance with the DLR interoperability model. In other words, only objects that implement the IDynamicMetaObjectProvider interface can be shared across .NET dynamic languages. An expando object can be passed to, say, an IronRuby component. You can’t do that easily with a regular .NET managed object. Or, rather, you can, but you just don’t get the dynamic behavior. The ExpandoObject class also implements the INotifyPropertyChanged interface. This enables the class to raise a PropertyChanged event when a member is added or modified. Support of the INotifyPropertyChanged interface is key to using expando objects in Silverlight and Windows Presentation Foundation application front ends. You create an ExpandoObject instance as you do with any other .NET object, except that the variable to store the instance is of type dynamic: At this point, to add a property to the expando you simply assign it a new value, as below: It doesn’t matter that no information exists about the FirstName member, its type or its visibility. This is dynamic code; for this reason, it makes a huge difference if you use the var keyword to assign an ExpandoObject instance to a variable: This code will compile and work just fine. However, with this definition you’re not allowed to assign any value to a FirstName property. The ExpandoObject class as defined in System.Core has no such member. More precisely, the ExpandoObject class has no public members. This is a key point. When the static type of an expando is dynamic, the operations are bound as dynamic operations, including looking up members. When the static type is ExpandoObject, then operations are bound as regular compile-time member lookups. So the compiler knows that dynamic is a special type, but does not know that ExpandoObject is a special type. In Figure 4, you see the Visual Studio 2010 IntelliSense options when an expando object is declared as a dynamic type and when it’s treated as a plain .NET object. In the latter case, IntelliSense shows you the default System.Object members plus the list of extension methods for collection classes. Figure 4 Visual Studio 2010 IntelliSense and Expando Objects It should also be noted that some commercial tools in some circumstances go beyond this basic behavior. Figure 5shows ReSharper 5.0, which captures the list of members currently defined on the object. This doesn’t happen if members are added programmatically via an indexer. Figure 5 The ReSharper 5.0 IntelliSense at Work with Expando Objects To add a method to an expando object, you just define it as a property, except you use an Action<T> or Func<T> delegate to express the behavior. Here’s an example: The method GetFullName returns a String obtained by combining the last name and first name properties assumed to be available on the expando object. If you attempt to access a missing member on expando objects, you’ll receive a RuntimeBinderException exception. XML-Driven Programs To tie together the concepts I’ve shown you so far, let me guide you through an example where the structure of the data and the structure of the UI are defined in an XML file. The content of the file is parsed to a collection of expando objects and processed by the application. The application, however, works only with dynamically presented information and is not bound to any static type. The code in Figure 3 defines a list of dynamically defined person expando objects. As you’d expect, if you add a new node to the XML schema, a new property will be created in the expando object. If you need to read the name of the member from an external source, you should employ the indexer API to add it to the expando. The ExpandoObject class implements the IDictionary<String, Object> interface explicitly. This means you need to segregate the ExpandoObject interface from the dictionary type in order to use the indexer API or the Add method: Because of this behavior, you just need to edit the XML file to make a different data set available. But how can you consume this dynamically changing data? Your UI will need to be flexible enough to receive a variable set of data. Let’s consider a simple example where all you do is display data through the console. Suppose the XML file contains a section that describes the expected UI—whatever that means in your context. For the purpose of example, here’s what I have: This information will be loaded into another expando object using the following code: The main procedure will have the following structure: public static void Run(String file) { dynamic settings = GetExpandoSettings(file); dynamic persons = GetExpandoFromXml(file); foreach (var p in persons) { var memberNames = (settings.Parameters as String). Split(','); var realValues = GetValuesFromExpandoObject(p, memberNames); Console.WriteLine(settings.Format, realValues); } } The expando object contains the format of the output, plus the names of the members whose values are to be displayed. Given the person dynamic object, you need to load the values for the specified members, using code like this: Because an expando object implements IDictionary<String, Object>, you can use the indexer API to get and set values. Finally, the list of values retrieved from the expando object are passed to the console for actual display. Figure 6 shows two screens for the sample console application, whose only difference is the structure of the underlying XML file. Figure 6 Two Sample Console Applications Driven by an XML File Admittedly, this is a trivial example, but the mechanics required to make it work are similar to that of more interesting examples. Try it out and share your feedback! Dino Esposito is the author of “Programming ASP.NET MVC” from Microsoft Press and coauthor of “Microsoft .NET: Architecting Applications for the Enterprise” (Microsoft Press, 2008). Based in Italy, Esposito is a frequent speaker at industry events worldwide. Thanks to the following technical expert for reviewing this article: Eric Lippert Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
https://msdn.microsoft.com/magazine/a217a482-f9cb-4535-84f5-473df14e1e99
CC-MAIN-2018-09
refinedweb
2,612
55.13
Working on various pieces of software those last years, I noticed that there's always a feature that requires implementing some DSL. The problem with DSL is that it is never the road that you want to go. I remember how creating my first DSL was fascinating: after using programming languages for years, I was finally designing my own tiny language! A new language that my users would have to learn and master. Oh, it had nothing new, it was a subset of something, inspired by my years of C, Perl or Python, who knows. And that's the terrible part about DSL: they are an marvelous tradeoff between the power that they give to users, allowing them to define precisely their needs and the cumbersomeness of learning a language that will be useful in only one specific situation. In this blog post, I would like to introduce a very unsophisticated way of implementing the syntax tree that could be used as a basis for a DSL. The goal of that syntax tree will be filtering. The problem it will solve is the following: having a piece of data, we want the user to tell us if the data matches their conditions or not. To give a concrete example: a machine wants to grant the user the ability to filter the beans that it should keep. What the machine passes to the filter is the size of the current grain, and the filter should return either true or false, based on the condition defined by the user: for example, only keep beans that are bigger that are between 1 and 2 centimeters or between 4 and 6 centimeters. The number of conditions that the users can define could be quite considerable, and we want to provide at least a basic set of predicate operators: equal, greater than and lesser than. We also want the user to be able to combine those, so we'll add the logical operators or and and. A set of conditions can be seen as a tree, where leaves are either predicates, and in that case, do not have children, or are logical operators, and have children. For example, the propositional logic formula φ1 ∨ (φ2 ∨ φ3) can be represented with as a tree like this: Starting with this in mind, it appears that the natural solution is going to be recursive: handle the predicate as terminal, and if the node is a logical operator, recurse over its children. Since we will be doing Python, we're going to use Python to evaluate our syntax tree. The simplest way to write a tree in Python is going to be using dictionaries. A dictionary will represent one node and will have only one key and one value: the key will be the name of the operator ( equal, greater than, or, and…) and the value will be the argument of this operator if it is a predicate, or a list of children (as dictionaries) if it is a logical operator. For example, to filter our bean, we would create a tree such as: {"or": [ {"and": [ {"ge": 1}, {"le": 2}, ]}, {"and": [ {"ge": 4}, {"le": 6}, ]}, ]} The goal here is to walk through the tree and evaluate each of the leaves of the tree and returning the final result: if we passed 5 to this filter, it would return True, and if we passed 10 to this filter, it would return False. Here's how we could implement a very depthless filter that only handles predicates (for now): import operator class InvalidQuery(Exception): pass class Filter(object): binary_operators = { "eq": operator.eq, "gt": operator.gt, "ge": operator.ge, "lt": operator.lt, "le": operator.le, } def __init__(self, tree): # Parse the tree and store the evaluator self._eval = self.build_evaluator(tree) def __call__(self, value): # Call the evaluator with the value return self._eval(value) def build_evaluator(self, tree): try: # Pick the first item of the dictionary. # If the dictionary has multiple keys/values # the first one (= random) will be picked. # The key is the operator name (e.g. "eq") # and the value is the argument for it operator, nodes = list(tree.items())[0] except Exception: raise InvalidQuery("Unable to parse tree %s" % tree) try: # Lookup the operator name op = self.binary_operators[operator] except KeyError: raise InvalidQuery("Unknown operator %s" % operator) # Return a function (lambda) that takes # the filtered value as argument and returns # the result of the predicate evaluation return lambda value: op(value, nodes) You can use this Filter class by passing a predicate such as {"eq": 4}: >>> f = Filter({"eq": 4}) >>> f(2) False >>> f(4) True This Filter class works but is quite limited as we did not provide logical operators. Here's a complete implementation that supports binary operators and and or: import operator class InvalidQuery(Exception): pass class Filter(object): binary_operators = { u"=": operator.eq, u"==": operator.eq, u"eq": operator.eq, u"<": operator.lt, u"lt": operator.lt, u">": operator.gt, u"gt": operator.gt, u"<=": operator.le, u"≤": operator.le, u"le": operator.le, u">=": operator.ge, u"≥": operator.ge, u"ge": operator.ge, u"!=": operator.ne, u"≠": operator.ne, u"ne": operator.ne, } multiple_operators = { u"or": any, u"∨": any, u"and": all, u"∧": all, } def __init__(self, tree): self._eval = self.build_evaluator(tree) def __call__(self, value): return self._eval(value) def build_evaluator(self, tree): try: operator, nodes = list(tree.items())[0] except Exception: raise InvalidQuery("Unable to parse tree %s" % tree) try: op = self.multiple_operators[operator] except KeyError: try: op = self.binary_operators[operator] except KeyError: raise InvalidQuery("Unknown operator %s" % operator) return lambda value: op(value, nodes) # Iterate over every item in the list of the value linked # to the logical operator, and compile it down to its own # evaluator. elements = [self.build_evaluator(node) for node in nodes] return lambda value: op((e(value) for e in elements)) To support the and and or operators, we leverage the all and any built-in Python functions. They are called with an argument that is a generator that evaluates each one of the sub-evaluator, doing the trick. Unicode is the new sexy, so I've also added Unicode symbols support. And it is now possible to implement our full example: >>> f = Filter( ... {"∨": [ ... {"∧": [ ... {"≥": 1}, ... {"≤": 2}, ... ]}, ... {"∧": [ ... {"≥": 4}, ... {"≤": 6}, ... ]}, ... ]}) >>> f(5) True >>> f(8) False >>> f(1) True As an exercise, you could try to add the not operator, which deserve its own category as it is a unary operator! In the next blog post, we will see how to improve that filter with more features, and how to implement a domain-specific language on top of it, to make humans happy when writing the filter! In this drawing, the artist represents the deepness of functional programming and how its horse power can help you escape many dark situations.
https://julien.danjou.info/simple-filtering-syntax-tree-in-python/
CC-MAIN-2018-39
refinedweb
1,131
53.81
Hi, I am sorry but i really don’t understand how to create a Cookie refering to the doc… I correctly added it to the quasar.conf but when i try to write one it does nothing. I just want to be able to create a simple cookie in a view. Can someone give me an example how to do that ? Thanks for the help ! - rstoenescu Admin last edited by Edit quasar.conf to tell Quasar to use the Cookies Quasar plugin: framework: { plugins: [‘Cookies’] } Reading: // outside of a Vue file import { Cookies } from ‘quasar’ var value = Cookies.get(‘cookie_name’) When cookie is not set, the return value is undefined. // inside of a Vue file this.$q.cookies.get(‘cookie_name’) - Writing // outside of a Vue file import { Cookies } from ‘quasar’ Cookies.set(‘cookie_name’, cookie_value, options) options is an Object which can have the following properties: expire, path, domain, secure. They are explained below. // outside of a Vue file import { Cookies } from ‘quasar’ Cookies.set(‘quasar’, ‘framework’, { secure: true }) // inside of a Vue file this.$q.cookies.set(‘cookie_name’, cookie_value, options) Check document.cookieafter updates to make sure everything is working. If it’s not working, Cookies are either blocked or you’re in Incognito browser mode. This post is deleted! Thanks ! My misstake was that i didn’t set option with path: '/'. Now it’s working.
https://forum.quasar-framework.org/topic/2020/cookies-0-15-8
CC-MAIN-2021-10
refinedweb
226
69.58
Android AsyncTask Class Helps Avoid ANRs When Android starts an application (App) it assigns it to run on a single thread process, also known as the User Interface (UI) thread. All the components in the App run on this main thread by default. The UI thread, as the name suggests, handles the interface. It posts events for the various widgets on the screen and then runs the code in the listeners for those events. If that code takes too long to execute it can cause problems for the UI. This article looks at that issue and discusses a solution with an Android AsyncTask example. (This tutorial assumes that an App can be created and run in Android Studio. If not familiar with App programming in Studio this site has beginners articles.) Android ANR Timeout If code that is executed from a listener takes too long it will slow down event processing, including the UI events that tell widgets to redraw. The UI then becomes unresponsive. Any slow code running on the UI thread can, ultimately, result in the Application Not Responding (ANR) error. This can occur if screen elements do not get a chance to process their pending requests. This Android ANR timeout occurs after after about five seconds. When an ANR appears the user can then forcibly close the App (and then probably remove it since no one likes an App that appears to crash). The main UI thread should just be that, keeping the UI going and updated. Any heavy duty, regularly executing or potentially slow code needs to be on a background task, the Android AsyncTask class is ideal for that job. Tasks that can chew up UI thread CPU cycles included: - Accessing large amounts of data, especially through slow connections or peripherals. - Jittery connections when accessing networks and Internet services. - The need to run some recurring code, for animation, polling, timing. - Parsing large data files or XML files. - Creating, reading, updating, deleting large database records. - Code with too many loops. - Intensive graphics operations. - Game loops. Android ANR Example The following Android example code has an Activity with a TextView. A Button calls a method, named ThisTakesAWhile(), which mimics a slow process. The code tries to keep the UI updated on progress by updating the TextView. To use this code create a new project in Studio. In this article the App is called Slow Process and starts with an Empty Activity (the activity and layout name defaults are used). Drop a Button onto the App's layout. Move the existing TextView below the button. Make both the width of the layout. Set the button text to GO and the TextView text to Press GO to start. The text size was set to 18sp (sp for scaled pixels). The TextView id should be set to textView: The activity_main.xml should look something.slowprocess.MainActivity"> <TextView android: <Button android: In the MainActivity class file add the function to simulate a long running process. Called ThisTakesAWhile() it uses the sleep() function for one second ten times. It updates the TextView using a a reference called tv declared (TextView tv;) in the first line of MainActivity{}. Add this function before the closing brace (curly bracket). Studio will prompt to add the SystemClock import, do this with Alt-Enter: private void ThisTakesAWhile() { //mimic long running code int count = 0; do{ SystemClock.sleep(1000); count++; tv.setText("Processed " + count + " of 10."); } while(count<10); } Add the inner class for an OnClickListener. This updates the TextView, calls the long running process. Then changes the text to Finished when the process ends. Add it after the OnCreate(){...}, pressing Alt-Enter when prompted to add the View and OnClickListener (for a View) imports: class doButtonClick implements OnClickListener { public void onClick(View v) { tv.setText("Processing, please wait."); ThisTakesAWhile(); tv.setText("Finished."); } } Before the closing brace of the onCreate() use the findViewById() function to assign tv and call setOnClickListener() (to assign the ClickListener). The full code should look like this (add any missing imports, e.g. place the cursor on TextView and press Alt-Enter): package com.example.slowprocess; import android.os.SystemClock; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.TextView; public class MainActivity extends AppCompatActivity { TextView tv; //for class wide reference to update status ()); } class doButtonClick implements View.OnClickListener { public void onClick(View v) { tv.setText("Processing, please wait."); ThisTakesAWhile(); tv.setText("Finished."); } } private void ThisTakesAWhile() { //mimic long running code int count = 0; do{ SystemClock.sleep(1000); count++; tv.setText("Processed " + count + " of 10."); } while(count<10); } } When this code is run and the Button pressed the text only changes after ten seconds to Finished. The messages that are meant to provide feedback counting from 1 to 10 are never shown. This illustrates how the UI can be blocked by code executing on the main thread. In fact bash the button a few times and an ANR occurs. The solution is to run the time consuming code away from UI events. Ideally for potentially slow operations we want to start them from the UI in a background thread, get regular reports on progress, cancel them if need be and get a result when they have finished. In Android the AsyncTask class does all of that easily without having to crank out a lot of code. With AsyncTask the time consuming code is placed into a doInBackground() method, there are onPreExecute() and onPostExecute() methods (for pre and post task work), and an onProgressUpdate() method to provide feedback. The background task can be cancelled by calling the cancel() method, causing onCancelled() to execute. The above example is changed to demonstrate using an AsyncTask object. The count variable is moved to the main class level and a couple more module variables are added (to store the Button reference and a processing state). The button reference is set up in onCreate(). The method used to run the slow process is replaced with an AsyncTask object, also given the name ThisTakesAWhile. The button click handler uses this new object. The execute() method starts the ball rolling. The long running process can be stopped with the button if required (using the cancel() method). Here is the resulting MainActivity class with the Imports for AsyncTask and Button added: package com.example.slowprocess; import android.os.AsyncTask; import android.os.SystemClock; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.view.View; import android.widget.Button; import android.widget.TextView; public class MainActivity extends AppCompatActivity { TextView tv; //for class wide reference to update status int count; //number of times process has run, used for feedback boolean processing; //defaults false, set true when the slow process starts Button bt; //used to update button caption ()); bt=(Button) findViewById(R.id.button); } class doButtonClick implements View.OnClickListener { ThisTakesAWhile ttaw;//defaults null public void onClick(View v) { if(!processing){ ttaw = new ThisTakesAWhile(); ttaw.execute(10); //loop 10 times } else { ttaw.cancel(true); } } } class ThisTakesAWhile extends AsyncTask<Integer, Integer, Integer>{ int numcycles; //total number of times to execute process protected void onPreExecute(){ //Executes in UI thread before task begins //Can be used to set things up in UI such as showing progress bar count=0; //count number of cycles processing=true; tv.setText("Processing, please wait."); bt.setText("STOP"); } protected Integer doInBackground(Integer... arg0) { //Runs in a background thread //Used to run code that could block the UI numcycles=arg0[0]; //Run arg0 times //Need to check isCancelled to see if cancel was called while(count < numcycles && !isCancelled()) { //wait one second (simulate a long process) SystemClock.sleep(1000); //count cycles count++; //signal to the UI (via onProgressUpdate) //class arg1 determines type of data sent publishProgress(count); } //return value sent to UI via onPostExecute //class arg2 determines result type sent return count; } protected void onProgressUpdate(Integer... arg1){ //called when background task calls publishProgress //in doInBackground if(isCancelled()) { tv.setText("Cancelled! Completed " + arg1[0] + " processes."); } else { tv.setText("Processed " + arg1[0] + " of " + numcycles + "."); } } protected void onPostExecute(Integer result){ //result comes from return value of doInBackground //runs on UI thread, not called if task cancelled tv.setText("Processed " + result + ", finished!"); processing=false; bt.setText("GO"); } protected void onCancelled() { //run on UI thread if task is cancelled processing=false; bt.setText("GO"); } } } With the slow process wrapped up in the AsyncTask object the UI thread is freed from waiting and the feedback messages get displayed to tell the users what is happening. When cancel is called the background process may not stop immediately. For example two cycles have completed and the third starts just as cancel is called. Then the third cycle could still complete. In some scenarios this process may need to be thrown away, rolled back, or handled differently. With version 3.0 of Android the AsyncTask object was extended to provide an onCancelled(Object) method, again run when cancel() is executed. This new version of onCancelled takes the same object as that returned by doInBackground(). This allows the program to determine the state of the background task when cancelled. If this newer version of AsyncTask is used onCancelled can be defined differently. protected void onCancelled(Integer result) { //run on UI thread if task is cancelled //result comes from return value of doInBackground tv.setText("Cancelled called after "+ result + " processes."); processing=false; bt.setText("GO"); } For the above scenario it would be known that cancelled was called after two processes completed even though the third then completed. Depending upon the requirements the work done by the third process could then be rolled back. This article has shown how to move code that could potentially frustrate users and cause poor performance into the useful AsyncTask object. If required download slowprocess.zip for the project and code from this AsyncTask tutorial. Extract the zip contents and import the project into Studio. See Also For further details see AsyncTask on the Android Developers web site. The source code link for this tutorial is also on the Android Example Projects page. Archived Comments mcwong on June 12, 2012 at 2:35 pm said: Thanks, great tutorial among so many on AsyncTask. I can understand and grasp how it is done! Author:Daniel S. Fowler Published: Updated:
https://tekeye.uk/android/examples/asynctask-helps-avoid-anr
CC-MAIN-2018-26
refinedweb
1,694
58.79
-main-is flag should change exports in default module header The -main-is option to GHC should probably change the export list for the default module header. It doesn't. $ cat Main.hs program = return () $ ghc -main-is Main.program Main.hs [1 of 1] Compiling Main ( Main.hs, Main.o ) Main.hs:1:1: error: Not in scope: ‘main’ Perhaps you meant ‘min’ (imported from Prelude) Main.hs:1:1: error: The main IO action ‘program’ is not exported by module ‘Main’ I cannot imagine any possible use case for a feature that changes the entry point name to something else, and then deliberately fails to export the symbol by that name. This seems like an obvious thing to fix.
https://gitlab.haskell.org/ghc/ghc/-/issues/13704
CC-MAIN-2021-10
refinedweb
121
70.43
Hi im writing a fill-in-the-blanks program and I am stuck on how to delete a node from a liked list. I am suppost to follow this psuedocode to finish the program. I think I am close to a solution, can anyone point out what im doing wrong? The list already has values stored and the user is asked from Main to enter a value to delete (int item is passed to delete function). The commented code is the steps I am suppost to take to solve this. My attempt is the actual code. I must use the function template given. // LinkedList.cpp void LinkedList::deleteItem (int item) { ListElement *currPtr = head; ListElement *deletePtr; if (item == head->datum) { deletePtr = head; head = head->next; } else { currPtr = head; while (currPtr->next->datum != item) { head = head->next; } deletePtr = head; } delete head; /* If "item" is in the first node { Set a delete-pointer to the first node. Change the "head" pointer to the second node. } Else { Set a current-pointer to the "head". While current-pointer->next->datum is not equal to "item" { Set the current-pointer to the "next" node in the list. } Set a delete-pointer to the node to be deleted (Note: it is the node after current-pointer). Set the link member ("next") of the current-pointer to the "next" node after delete-pointer. } Delete the node indicated by delete-pointer. */ } // LinkedList.h #ifndef LINKEDLIST_H #define LINKEDLIST_H #include <cstdlib> #include <iostream> using namespace std; class LinkedList; // needed for friend line below class ListElement { private: int datum; ListElement* next; public: ListElement (int, ListElement*); int getDatum () const; ListElement const* getNext () const; friend class LinkedList; // gives LinkedList access to datum and next }; class LinkedList { private: ListElement* head; public: LinkedList (); void insertItem (int); void makeList (); void appendItem (int); void deleteItem (int); void printList (); }; #endif
https://www.daniweb.com/programming/software-development/threads/440008/deleting-nodes-from-a-linked-list
CC-MAIN-2019-47
refinedweb
301
70.73
Hi Will, We have hit some road blocks. The main issue here is that cloudstack sees the VM as a set of disks, while a OVA contains VM definition including instructions on pre boot steps, delays and maybe more. So even if we are able to reliably get disks from the OVA and orchestrate these with vCenter, we still may end up with some non-booting VM. Following is the flow: 1. Split the OVA into disks and iso, assume boot disk. 2. Boot disk is the parent template and rest of the disks and iso are child templates, created in cloudstack. 3. Map disk offerings to disks, cloudstack then orchestrates the boot disk and additional disks as a venter VM. 4. Attach the ISO. This works with some limitations. The cloudstack VMs exported as OVA work, but some of the appliances that I tested it with show errors on vCenter and checking the console reveals booting problem. Have you faced similar issues ? How do we go about these. I am not sure if you are in a position to share parts of the work that might be relevant. We have kept our PR private as it is still under works, but if it is useful we can share it. Basically it is based on previous similar effort. Regards, -abhi abhinandan.prateek@shapeblue.com 53 Chandos Place, Covent Garden, London WC2N 4HSUK @shapeblue On 04/05/17, 3:06 PM, "Abhinandan Prateek" <abhinandan.prateek@shapeblue.com> wrote: > >The template generation related actions are better done on SSVM as they will be dealing/moving with various data/boot disks. Vim(VMWare infrastructure management), works with vCenter context and as such cannot be used inside SSVM. Vim gives a much better validation of OVF as it can make compatibility checks with vCenter. Currently it is part of vmware hypervisor plugin. Due to these dependency I ended parsing and generating OVF using standard dom api. >Due to the nature of OVF we also ended up making several assumptions. Like the one that says the first disk is the boot disk. The few OVF file that I have seems to work for now. (Other than that the one OVA had second disk as boot disk). Though the OVF file does contain the OS of the VM but weirdly it does not link it to the disk that has it. > > >Regards, >-abhi > > > >On 03/05/17, 5:57 PM, "Will Stevens" <williamstevens@gmail.com> wrote: > >>Cool. Let me know if you have questions. >> >>My instinct is that we probably want to keep the Ova manipulation in the >>context of vmware since I don't believe it will be used outside that >>context. Trying to manipulate the ovf files with generic tools may prove to >>be more complicated to manage going forward as it is almost guaranteed to >>require some 'hacks' to make it work. If we can avoid those by using the >>vim jars, it may be worth it. I have not reviewed anything on the vim jars >>side, so I don't know how good of an option that is. >> >>Are there key benefits to not using the vim jars? >> >>Cheers, >> >>Will >> >>On May 3, 2017 3:34 AM, "Abhinandan Prateek" < >>abhinandan.prateek@shapeblue.com> wrote: >> >>> Hi Will, >>> >>> I am improving the multiple disk OVA feature. As part of revamp I am >>> moving out some OVF manipulation code from the vmware hypervisor plugin >>> context to secondary storage component. The existing code was using vim25 >>> and managed objects to query and rewrite the OVF file. I have rewritten >>> that, using standard java w3c dom parser. >>> >>> The overall flow is mostly similar and as below: >>> 1. Decompress OVA and read the OVF file. OVF file will give information >>> about various disks >>> 3. Create the regular cloudstack template out for the boot disk and >>> rewrite the OVF file, minus the information about other disks. >>> 4. For each additional disk create data disk templates and capture the >>> relationship in db. >>> 5. This can then be followed by creating the multi-disk cloudstack VM. >>> >>> Essentially I am rewriting the original OVF file after removing the File >>> and Disk information that refers to the other disks. Given that the the >>> VMWare is picky, I think it will require some more cleanup and massaging. >>> Your inputs will definitely help. >>> >>> Overall I think the two pieces, the tool that you have and the cloudstack >>> multi disk OVA functionality can nicely complement each other. Will post my >>> learning here. >>> >>> Thanks and regards, >>> -abhi >>> >>> >>> >>> >>> On 02/05/17, 6:05 PM, "williamstevens@gmail.com on behalf of Will >>> Stevens" <williamstevens@gmail.com on behalf of wstevens@cloudops.com> >>> wrote: >>> >>> >Hey Abhinandan, >>> >First, can you give us a bit more context regarding what you are doing so >>> >we can highlight potential areas to watch out for? I have done some OVF >>> >parsing/modification and there are a bunch of gotchas to be aware of. I >>> >will try to outline some of the ones I found. I have not tried to use the >>> >vim25.jar, so I can't really help on that front. >>> > >>> >In my use case, I was exporting VMs via the ovftool from a source VMware >>> >environment, and I was migrating them to an ACS managed VMware >>> >environment. In doing so, I also wanted to support VMs with multiple >>> disks >>> >using a Root volume and multiple Data volumes, as well as change the nic >>> >type (vmxnet3), assign static IPs, etc... I have not had the time to open >>> >source my migration tool, but it is on my todo list. >>> > >>> >My general flow was: >>> >- Export the VM with ovftool >>> >- Extract the resulting OVA into its parts (OVF, VMDKs, Manifest) >>> >- Duplicate the OVF file, once per VMDK >>> >- Modify a OVF file to be specific for each of the VMDKs (one OVF per >>> VMDK) >>> >- Take each VMDK and the corresponding OVF and recompress them back into >>> an >>> >OVA >>> >- Treat the first OVA as a template and the rest as data disks >>> > >>> >My initial (naive) approach was to just treat the OVF as a well behaved >>> XML >>> >file and use standard XML libs (in my case in Python) to parse and >>> >manipulate the OVF file. This approach had a few pitfalls which I will >>> >outline here. >>> > >>> >VMware is VERY picky about the format of the OVF file, if the file is not >>> >perfect, VMware won't import it (or at least the VM won't launch). There >>> >were two main items which caused me issues. >>> > >>> >a) The <Envelope> tag MUST have all of the namespace definitions even if >>> >they are not used in the file. This is something that most XML parsers >>> are >>> >confused by. Most XML parsers will only include the namespaces used in >>> the >>> >file when the file is saved. I had to ensure that the resulting OVF files >>> >had all of the original namespace definitions for the file to import >>> >correctly. If I remember correctly, they even had to be in the right >>> >order. I did this by changing the resulting file after saving it with the >>> >XML lib. >>> > >>> >b) VMware uses namespaces which actually collide with each other. For >>> >example, both the default namespace and the 'ovf' namespace share the same >>> >URL. Again, XML libraries don't understand this, so I had to manage that >>> >manually. Luckily, the way VMware handles these namespaces is relatively >>> >consistent, so I was able to find a workaround. Basically, the default >>> >namespace will apply to all of the elements, and the 'ovf' namespace will >>> >be applied only in the attributes. Because of this I was able to just use >>> >the 'ovf' namespace and then after exporting the file, I did a find >>> replace >>> >from '<ovf:' and '</ovf:' to '<' and '</' respectively. >>> > >>> >Those are the main gotchas which I encountered. >>> > >>> >I put the OVA Split function I wrote into a Gist [1] (for now) for your >>> >reference in case reviewing the code is helpful. I was under a lot of >>> time >>> >pressure when building this tool, so I have a bunch of cleanup to do >>> before >>> >I release it as open source, but I can rush it out and clean it up after >>> >release if you are solving the same(ish) problem and my code will be >>> useful. >>> > >>> >[1] >>> > >>> >I hope this is helpful in your case. >>> > >>> >Cheers, >>> > >>> >*Will STEVENS* >>> >Lead Developer >>> > >>> ><> >>> > >>> >On Tue, May 2, 2017 at 3:49 AM, Abhinandan Prateek < >>> >abhinandan.prateek@shapeblue.com> wrote: >>> > >>> >> Hello, >>> >> >>> >> I am looking at vim25.jar to put together ovftool like functionality >>> >> specially around parsing and generating OVF files. vim25.jar is already >>> >> included as non-oss dependency and used by vmware hypervisor plugin. I >>> see >>> >> that some OVF parsing capabilities are present in this jar, but it >>> seems to >>> >> be tied to host connection/context. Can anyone who has used this can >>> tell >>> >> me if I can use it as a standalone OVF manipulation api any pointer to >>> good >>> >> resource on that will be nice. >>> >> >>> >> Regards, >>> >> -abhi >>> >> >>> >> >>> >> >>> >> > > >
http://mail-archives.apache.org/mod_mbox/cloudstack-dev/201705.mbox/%3CC12AC875-4290-4CF4-BC3F-F21172BAAD9F@shapeblue.com%3E
CC-MAIN-2017-47
refinedweb
1,488
71.75
"What harmony is this? My good friends, hark!" Alonso. Tempest[III,3]. William Shakespeare. Previous blogs A Simple Arduino Music Box Simple Arduino Music Box: Voices Introduction This is more on the Arduino music box that I did for the Project14 Simple Music Maker competition. The original project played one note at a time. That's a bit tame - let's see if we can manage some polyphony and play some chords. Although playing chords might seem to be considerably more complex, in practice it is quite easy and naturally falls out of what I've done so far. To get multiple notes playing I just need a table index for each note and the appropriate step size to give the required note. Then I simply sum the results and scale for the output. It is more processing, but it might work, so let's give it a try. The Sketch Here's the sketch. The include at the start is #include <Arduino.h> /* Arduino Music Box: Chords */ #include <Arduino.h> volatile unsigned int tableOffset1 = 0; // wave table index, note 1 volatile unsigned int tableOffset2 = 0; // wave table index, note 2 volatile unsigned int tableOffset3 = 0; // wave table index, note 3 volatile unsigned int tableOffset4 = 0; // wave table index, note 3 volatile unsigned int tableStep1 = 0; // wave table step, note 1 volatile unsigned int tableStep2 = 0; // wave table step, note 1 volatile unsigned int tableStep3 = 0; // wave table step, note 1 volatile unsigned int tableStep4 = 0; // wave table step, note 1 unsigned int noteTime = 0xc35; // note duration count unsigned int outputValue = 0; // int i=0; // Twinkle, Twinkle, Little Star // Each line is four notes and a duration. 0 if no note. // Line of 5 zeroes marks the end. unsigned int tuneNotes[] = { 343, 229, 144, 0, 0x4000, 385, 229, 153, 0, 0x4000, 343, 229, 144, 0, 0x4000, 385, 229, 153, 0, 0x4000, // 343, 229, 144, 0, 0x4000, 385, 216, 153, 0, 0x4000, 343, 229, 144, 0, 0x8000, // 343, 229, 192, 0, 0x4000, 306, 229, 0, 0, 0x4000, 288, 229, 171, 0, 0x4000, 229, 144, 0, 0, 0x4000, // 192, 162, 0, 0, 0x4000, 306, 216, 128, 0, 0x4000, 288, 229, 114, 0, 0x8000, // 171, 85, 0, 0, 0x4000, 192, 96, 0, 0, 0x4000, 229, 114, 0, 0, 0x4000, 257, 128, 0, 0, 0x4000, // 288, 144, 0, 0, 0x4000, 306, 162, 0, 0, 0x4000, 343, 171, 0, 0, 0x8000, // 343, 216, 171, 144, 0x4000, 288, 0, 0, 0, 0x4000, 229, 192, 144, 96, 0x4000, 306, 192, 162, 64, 0x4000, // 257, 216, 162, 108, 0x4000, 257, 216, 162, 48, 0x4000, 229, 144, 64, 0, 0x8000, // 0,0,0,0,0 }; // // --- Wave table - generated by waveTable.exe // unsigned int waveTable[512] = { 0x8000,0x862d,0x8c57,0x927a,0x9890,0x9e98,0xa48c,0xaa69,0xb02c,0xb5d1,0xbb55,0xc0b4,0xc5eb,0xcaf7,0xcfd5,0xd483, 0xd8fd,0xdd41,0xe14d,0xe520,0xe8b6,0xec0e,0xef27,0xf1ff,0xf496,0xf6ea,0xf8fa,0xfac6,0xfc4f,0xfd93,0xfe93,0xff4f, 0xffc8,0xfffe,0xfff4,0xffa8,0xff1e,0xfe57,0xfd54,0xfc17,0xfaa2,0xf8f8,0xf71a,0xf50c,0xf2d0,0xf069,0xedd9,0xeb24, 0xe84d,0xe556,0xe243,0xdf17,0xdbd6,0xd883,0xd520,0xd1b2,0xce3c,0xcac0,0xc743,0xc3c8,0xc051,0xbce2,0xb97e,0xb628, 0xb2e2,0xafb0,0xac94,0xa990,0xa6a8,0xa3dc,0xa130,0x9ea5,0x9c3e,0x99fb,0x97de,0x95e8,0x941b,0x9278,0x90fe,0x8fb0, 0x8e8d,0x8d95,0x8cc8,0x8c27,0x8bb0,0x8b64,0x8b42,0x8b49,0x8b77,0x8bcd,0x8c47,0x8ce6,0x8da7,0x8e89,0x8f8a,0x90a8, 0x91e1,0x9332,0x949a,0x9617,0x97a5,0x9943,0x9aee,0x9ca3,0x9e61,0xa024,0xa1eb,0xa3b1,0xa577,0xa737,0xa8f1,0xaaa2, 0xac48,0xade0,0xaf69,0xb0e0,0xb243,0xb391,0xb4c8,0xb5e6,0xb6eb,0xb7d3,0xb89f,0xb94e,0xb9de,0xba4e,0xba9f,0xbad0, 0xbae0,0xbad0,0xba9f,0xba4e,0xb9de,0xb94e,0xb89f,0xb7d3,0xb6eb,0xb5e6,0xb4c8,0xb391,0xb243,0xb0e0,0xaf69,0xade0, 0xac48,0xaaa2,0xa8f1,0xa737,0xa577,0xa3b1,0xa1eb,0xa024,0x9e61,0x9ca3,0x9aee,0x9943,0x97a5,0x9617,0x949a,0x9332, 0x91e1,0x90a8,0x8f8a,0x8e89,0x8da7,0x8ce6,0x8c47,0x8bcd,0x8b77,0x8b49,0x8b42,0x8b64,0x8bb0,0x8c27,0x8cc8,0x8d95, 0x8e8d,0x8fb0,0x90fe,0x9278,0x941b,0x95e8,0x97de,0x99fb,0x9c3e,0x9ea5,0xa130,0xa3dc,0xa6a8,0xa990,0xac94,0xafb0, 0xb2e2,0xb628,0xb97e,0xbce2,0xc051,0xc3c8,0xc743,0xcac0,0xce3c,0xd1b2,0xd520,0xd883,0xdbd6,0xdf17,0xe243,0xe556, 0xe84d,0xeb24,0xedd9,0xf069,0xf2d0,0xf50c,0xf71a,0xf8f8,0xfaa2,0xfc17,0xfd54,0xfe57,0xff1e,0xffa8,0xfff4,0xffff, 0xffc8,0xff4f,0xfe93,0xfd93,0xfc4f,0xfac6,0xf8fa,0xf6ea,0xf496,0xf1ff,0xef27,0xec0e,0xe8b6,0xe520,0xe14d,0xdd41, 0xd8fd,0xd483,0xcfd5,0xcaf7,0xc5eb,0xc0b4,0xbb55,0xb5d1,0xb02c,0xaa69,0xa48c,0x9e98,0x9890,0x927a,0x8c57,0x862d, 0x8000,0x79d2,0x73a8,0x6d85,0x676f,0x6167,0x5b73,0x5596,0x4fd3,0x4a2e,0x44aa,0x3f4b,0x3a14,0x3508,0x302a,0x2b7c, 0x2702,0x22be,0x1eb2,0x1adf,0x1749,0x13f1,0x10d8,0x0e00,0x0b69,0x0915,0x0705,0x0539,0x03b0,0x026c,0x016c,0x00b0, 0x0037,0x0001,0x000b,0x0057,0x00e1,0x01a8,0x02ab,0x03e8,0x055d,0x0707,0x08e5,0x0af3,0x0d2f,0x0f96,0x1226,0x14db, 0x17b2,0x1aa9,0x1dbc,0x20e8,0x2429,0x277c,0x2adf,0x2e4d,0x31c3,0x353f,0x38bc,0x3c37,0x3fae,0x431d,0x4681,0x49d7, 0x4d1d,0x504f,0x536b,0x566f,0x5957,0x5c23,0x5ecf,0x615a,0x63c1,0x6604,0x6821,0x6a17,0x6be4,0x6d87,0x6f01,0x704f, 0x7172,0x726a,0x7337,0x73d8,0x744f,0x749b,0x74bd,0x74b6,0x7488,0x7432,0x73b8,0x7319,0x7258,0x7176,0x7075,0x6f57, 0x6e1e,0x6ccd,0x6b65,0x69e8,0x685a,0x66bc,0x6511,0x635c,0x619e,0x5fdb,0x5e14,0x5c4e,0x5a88,0x58c8,0x570e,0x555d, 0x53b7,0x521f,0x5096,0x4f1f,0x4dbc,0x4c6e,0x4b37,0x4a19,0x4914,0x482c,0x4760,0x46b1,0x4621,0x45b1,0x4560,0x452f, 0x451f,0x452f,0x4560,0x45b1,0x4621,0x46b1,0x4760,0x482c,0x4914,0x4a19,0x4b37,0x4c6e,0x4dbc,0x4f1f,0x5096,0x521f, 0x53b7,0x555d,0x570e,0x58c8,0x5a88,0x5c4e,0x5e14,0x5fdb,0x619e,0x635c,0x6511,0x66bc,0x685a,0x69e8,0x6b65,0x6ccd, 0x6e1e,0x6f57,0x7075,0x7176,0x7258,0x7319,0x73b8,0x7432,0x7488,0x74b6,0x74bd,0x749b,0x744f,0x73d8,0x7337,0x726a, 0x7172,0x704f,0x6f01,0x6d87,0x6be4,0x6a17,0x6821,0x6604,0x63c1,0x615a,0x5ecf,0x5c23,0x5957,0x566f,0x536b,0x504f, 0x4d1d,0x49d7,0x4681,0x431d,0x3fae,0x3c37,0x38bc,0x353f,0x31c3,0x2e4d,0x2adf,0x277c,0x2429,0x20e8,0x1dbc,0x1aa9, 0x17b2,0x14db,0x1226,0x0f96,0x0d2f,0x0af3,0x08e5,0x0707,0x055d,0x03e8,0x02ab,0x01a8,0x00e1,0x0057,0x000b,0x0000, 0x0037,0x00b0,0x016c,0x026c,0x03b0,0x0539,0x0705,0x0915,0x0b69,0x0e00,0x10d8,0x13f1,0x1749,0x1adf,0x1eb2,0x22be, 0x2702,0x2b7c,0x302a,0x3508,0x3a14,0x3f4b,0x44aa,0x4a2e,0x4fd3,0x5596,0x5b73,0x6167,0x676f,0x6d85,0x73a8,0x79d2}; void setup() { // set the digital pins as outputs: pinMode(0, OUTPUT); pinMode(1, OUTPUT); pinMode(2, OUTPUT); pinMode(3, OUTPUT); pinMode(4, OUTPUT); pinMode(5, OUTPUT); pinMode(6, OUTPUT); pinMode(7, OUTPUT); pinMode(8, OUTPUT); // test [100ksps] OCR2A = 239; // period = 240 x 1/16MHz = 15uS [66.6ksps]; if(tableStep1 == 0) { PORTD = 0x80; noteTime = noteTime - 1; if(noteTime==0) { tableStep1 = tuneNotes[i++]; tableStep2 = tuneNotes[i++]; tableStep3 = tuneNotes[i++]; tableStep4 = tuneNotes[i++]; noteTime = tuneNotes[i++]; tableOffset1 = 0; tableOffset2 = 0; tableOffset3 = 0; } } else { tableOffset1 = tableOffset1 + tableStep1; tableOffset2 = tableOffset2 + tableStep2; tableOffset3 = tableOffset3 + tableStep3; tableOffset4 = tableOffset4 + tableStep4; outputValue = waveTable[tableOffset1 >> 7] >> 8; outputValue = outputValue + (waveTable[tableOffset2 >> 7] >> 8); outputValue = outputValue + (waveTable[tableOffset3 >> 7] >> 8); outputValue = outputValue + (waveTable[tableOffset4 >> 7] >> 8); PORTD = outputValue >> 2; noteTime = noteTime - 1; if(noteTime==0) { tableOffset1 = 0; tableOffset2 = 0; tableOffset3 = 0; tableOffset4 = 0; tableStep1 = 0; tableStep2 = 0; tableStep3 = 0; tableStep4 = 0; noteTime = 0xc35; if (tuneNotes[i]==0) i=0; } } PORTB &= 0xFE; } void loop() { } It's much as I had before except everything is done four times over - four step additions and four look-ups. The division by four of the summed values, to scale the final result, is done with a shift. It does work, but the time it takes is around the same as the 10uS that the timer period was set to. To give it a bit more space, I've slowed the sample rate down to 66.6ksps (15uS). That transposes all my notes down and slows the note durations but it still sounds ok (I'm feeling too lazy to go back and recalculate them). The relationships between the notes remain the same, so unless you have perfect pitch you won't know that the notes are now wrong. The way I knew that it was taking all of the processing time was to set an output pin at the start and reset it at the end of the interrupt code. That's the write to PORTB that you can see in the code. Here's the scope trace, but with a longer timer period. Bear in mind that the interrupt needs time for entry and exit too, so it was over-running the 10uS. An Example And here's what it sounds like. This is the Twinkle, Twinkle tune that I used for the original music box but played with chords rather than just the melody line. It's now much more complex in sound, even though it's all a bit basic in hardware terms (keep in mind we're at the grunge end of the hardware spectrum, not the hi-fi end). I'm quite proud of that, even though it is just a silly piece of hackery. Next thing to look at is adding an envelope generator: Simple Arduino Music Box: Envelopes
https://www.element14.com/community/people/jc2048/blog/2018/05/28/simple-arduino-music-box-chords
CC-MAIN-2019-30
refinedweb
1,417
52.12
In Spring, bean scope is defined by it’s type, different type of Spring bean instance can be used in different scenarial. There are totally 5 scope type by default. Default Bean Scope Type - singleton : This is the default Scope, which means that there will be only one instance in the entire spring container, or in the entire java application. - prototype : Multi – instance type, container will create a new bean instance for every request to a bean by client. - request : For spring web application only. When a web action request an instance of this bean, it will be created by Spring container, and then saved in HttpServletRequestobject. When the request complete, the bean will be out of scope and wait for garbage collection. - session : This bean scope type is valid only in web application also. Defined with this scope, when a web action request this bean, Spring container will also create an instance of it then save it in HttpSessionobject. When the session timeout or invalidate, the bean is invalidate also. - application : This is also for web application only. This kind of bean will exist in the web application context. One web application will has only one instance. - globalSession : When you use Portlet container to create Portlet application, there are a number of portlets in it. Each portlet will save variable in their own sesion by default. But how to share a global variable object to all the portlets in this Portlet application? Then global-session concept come out. You can give your spring bean global-session scope, then it can be accessed by all portlet in the application. This scope is not so much different from session scope in Servlet based java web applications. How To Specify Bean Scope - In Spring bean configuration xml file. <bean id="helloWorld" class="com.dev2qa.HelloWorld" scope="prototype"/> - Via Java Annotation. When you want to use annotation to define, you should first add below xml in bean configuration file. Then Spring will add corresponding bean definition by scanning annotations. <context:component-scan Then you need to use annotations such as @Component on the corresponding java class to indicate that it needs to be added as a bean definition to the corresponding container. @Scope annotation will just be used to point out the scope value. @Component @Scope("prototype") public class HelloWorld { }
https://www.dev2qa.com/spring-bean-scopes-introduction/
CC-MAIN-2019-51
refinedweb
387
57.37
En savoir plus à propos de l'abonnement Scribd Découvrez tout ce que Scribd a à offrir, dont les livres et les livres audio des principaux éditeurs. American Philological Association The Hero Echetlaeus Author(s): M. H. Jameson Source: Transactions and Proceedings of the American Philological Association, Vol. 82 (1951), pp. 49-61 Published by: The Johns Hopkins University Press Stable URL: . Accessed: 05/02/2011 19 Philological Association and The Johns Hopkins University Press are collaborating with JSTOR to digitize, preserve and extend access to Transactions and Proceedings of the American Philological Association. Vol. lxxxii] The Hero Echetlaeus 49 III.-The Hero Echetlaeus M. H. JAMESON UNIVERSITY OF MISSOURI Pausanias, in speaking of the stoa poikile in the Agora at Athens, describes the famous painting of the battle of Marathon, which is He says, in dated in the second quarter of the fifth century B.c.' part (1.15.3): Here also are depicted Marathon, a hero after whom the plain is from out of the ground, and Athena named, and Theseus, and Heracles;for it was first by the Marathonians, as they themselves say, that Heracles was considereda god. cially conspicuous in the painting are Callimachus, who was chosen by the Athenians to serve as one of those who served strategos, and a hero called Echetlus, tion later. It is this figure Echetlus or Echetlaeus whom I propose to consider. The later passage (Paus. 1.32.4-5) occurs in a description of the deme of Marathon and the battle which took place on its plain. The Marathonians worship those who died in battle, calling them its name, and heroes, and also Heracles; they claim that among the Greeks it was first by them that Heracleswas considereda He killed many of the barbarianswith a plow and, after the battle, dis- appeared. only hero. The assistance of gods and heroes in battle is a familiar pheno- menon. I need only mention here the Aeacids whose help had been asked before Salamis and who, as armed men, seemed to stretch out their hands from Aegina before the Greek triremes (Plut., Them. 15, cf. Hdt. 8.64 and 84), while the hero Cychreus of Salamis appeared in the form of a snake on the Greek ships (Paus. 1.36.1). this answer about him: he orderedthem to honor Echetlaeus as a When the Athenians inquired of the [Delphic] god, he gave be present in the battle, in appearance and outfit a countryman. They also say that a man happened to as of whom I shall also make men- Among the combatants espe- shown rising polemarch,Miltiades, Marathon, from whom the deme god. gets On the date and the disputed authorship of the painting see E. Pfuhl, Malerei und Zeichnung (Munich 1923) 2, pp. 637, 660-62. E. Vanderpool, Hesperia 18 (1949) p. 130, fig. 1, and p. 136; on the recently discovered architectural fragments see H. A. Thompson, Hesperia 19 (1950) 327-9. On the location of the Stoa see 50 [1951 Before the same battle, lacchus cry of the Eleusinian mysteries moved towards the Greek camp, testifying to divine support for the Greeks and the impending doom of the Persians (Hdt. 8.65). Other examples are at hand from all over the Greek world.2 If we consider who the figures are who give aid at such crises, we see that the heroes usually have some special connection with the place or the people, as in the case of Ajax and Telamon on Salamis (Hdt. 8.64) and the seven archegetai of Plataea (Plut., Arist. 11), or they have to be fetched to the spot as Aeacus and the rest of the Aeacids had to be brought from Aegina (Hdt. 8.64, 84) and were sent on another occasion by the Aeginetans to Thebes (Hdt. issued the M. H. Jameson a cloud of dust from which 5.80-81). Gods in the classical period gave help from a distance, as did Pan at Marathon (Hdt. 6.105.2-3), and it is probably in this way that we should understand Athena's position in this painting; but even so the gods with local attachments were more likely to be of service.3 Here, Marathon and Heracles were worshipped locally for we know, besides what Pausanias tells us, that the Athenians camped on Heracles' precinct (Hdt. 6.108 and 116) and that games were held at Marathon in his honor (Pindar, 01. 9.134 ff., with scholiast). Athena and Theseus are the champions of the nation, and there is the legend of Theseus and the bull of Marathon (Plut., Thes. 14, Apollod., Epit. 1.5, Paus. 1.27.10); Plutarch knew 2E.g., 5.75) and Asclepius (IG 42.1.128.58 ff. [Isyllus]) at Ajax the son of Oileus, for whom a space was left in the ranks of the Italian the Dioscuri (Hdt. Cf. M. Sparta; Locrians and who in one battle wounded the Crotonian commander (Paus. 3.19.12-13; Conon fr. 18, Jacoby). (Munich 1941) 678-9; 1913) 5, on Paus. 10.23.2. P. Nilsson, Geschichteder griechischen Religion, I J. G. Frazer, Pausanias's Description of Greece (2nd ed.; London 3 The evidence does not show that gods were thought to take the field in person in the classical period (but we do not know the date of the battle between Tanagra and Eretria, in which Hermes appeared as an ephebe, Paus. 9.22.2). Nilsson suggests that there was a reluctance to imagine gods, worshipped in more than one city, as fighting for one particular city tion in battle against the arose it is doubtful that such contradictions would be felt, especially since supernatural figures generally appeared in victory, side they - a term that includes in the stories from less localized and more universal. in battle barbarians). At the level, however, at which these stories (above, note 2; this would hardly prevent their participa- and not defeat, in itself an indication of which had decided to favor. Nor does this account for the reappearance of gods in Hellenistic times when the gods are, if anything, It would seem that the predominance of heroes figures of diverse origin (cf. the classical period is due character, and the belief that many had been great men and especially great warriors, rather than to any reluctance to visualize the gods in battle. A. D. Nock, HThR 37 [1944] 162-6) - to the popularity of their cults, their local and intimate 51 a tradition that Theseus appeared in battle for the Greeks at Marathon (Plut., Thes. 35). Help may also come from a hero whose at Delphi, Phylacus, who had a shrine near the sanctuary of Athena Pronaia southeast of Apollo's precinct, along with Autonous, another local hero, appeared as giant hoplites and attacked the Persians (Hdt. 8.38-9; Paus. 10.8.7). Again, when the Gauls attacked Deiphi, Phylacus helped to ward off the enemy (Paus. 10.23.2). The very name of Phylacus suggests the role he played.4 In such company Echetlaeus seems out of place if, in fact, he was neither a local nor national hero and was unknown to the Athenians before the battle. Nor does his name - "he of the plow- handle" - suggest his epiphany in battle. It is, therefore, not surprising that the tradition recorded by Pausanias has been more it is probably a pseudo-historic aetiological story invented to explain or less vigorously rejected. prime function is to give aid when the country is attacked: Farnell, for one, believed that a name and a half-forgotten cult, and should not be regarded as proof that the latter [Echetlaeus] originated in the fifth century B.C." He classes him with such other local agricultural heroes as Eunostus ("he of the good return") of Tanagra.5 Indeed, comparison with other heroes who fight in battle strongly suggests the prior existence of Echetlaeus, but, as I hope to show, we need not therefore discount the tradition which says that he had no organized cult up to that time. In native Greek art, no representation of Echetlaeus is known, but a series of Etruscan funerary urns shows a figure in battle this figure wielding a plow. It has not been possible to identify presumably a guardian of the city walls. Hopfner, RE 17 [1937] 1516-17, s.v.), but Horo- or Oro-phylax (JHS 8 [1887] 236, Kibyratis) may be the title of an (Drexler in Roscher, Lex., s.v.) and Apollo is attached to a variety of gods and heroes, cf. J. Schmidt, RE 20 (1941) 991-2. In addition to his name, the location of the shrine of to the main sanctuary may be significant for his function. Phylax Cf. also Hoplophylax used of Heracles 4Cf. Teichophylax at Myrina (Hesychius, s.v.), There is also a daim6n nyktophylax (Lucian, official. Peregr. 27-28, p. 350, cf. Th. Prophylax (IG 12.7.419, Amorgos). Phylacus on an approach So far his temenoshas not been certainly identified, cf. R. Demangel, Fouilles de Delphes, II. 5, Topographie et architecture. Le sanctuaire d'Athina Pronaia, 3. Topographie du sanctuaire (Paris 1926) 74, 105-7. 5L. R. Farnell, Greek Hero Cults and Ideas of Immortality (Oxford 1921) 88. Cf. C. Robert, "Die Marathonschlacht in der Poikile und Weiteres iiber Polygnot," Hallisches Winckelmannsprogramm 18 (1895) 34 f., and W. Helbig, Fuhrer durch die offentlichen Sammlungen klassischer E. Altertiimer in Rom (3rd ed. by W. Amelung, Reisch, F. Weege; Leipzig 1913) 2, p. 424. 52 as the Attic hero, nor as a native demon of death. He appears to be a native Etruscan, but a rustic, warrior hero, of similar origin but independent of the Attic Echetlaeus.6 In themselves the Etruscan representations do not add to our knowledge of Echetlaeus. It is to his name and his distinctive weapon that we must look for further understanding. Echetlaeus performed his feats with the plow, the arotron. His name, however, in the second form given by Pausanias is an adjec- tive derived from the echetle, the plow-handle, -tail, or -stilt, by which the plowman grasps and guides the plow and on which a strong pressure, according to the Geoponica (2.2.3), ensures a deep furrow (Echetlus is probably a shortened form of Echetlaeus).7 6 See especially G. Zoega, G. K6rte in H. Brunn, I rilievi delle urne etrusche, III (Berlin Winckelmann called this figure 1916) p. 11, fig. Echetlaeus (Werke 3 [edd. H. Meyer & J. Schulze; Dresden 1809] 91, 170, 380-81; cf. pi. 40 and pp. 304-306) and C. Robert (above, note 5) pp. 32-34, attempting to recon- struct the painting The con- trary view has Die antiken Basreliefe von Rom [trans. and ed. Welcker; Giessen 1811] 3, pp. 12-16, and plates iv-vii. of the battle of Marathon, reaffirmedthe identification. been presented by Giancarlo Conestabile della Staffa, Dei monumenti that the Etruscans may have acquired their plow hero in their 3:31 by his Philistines is slew six hundred usual translation. The suggestion is di p. Lex., s.v. "Echetlos"; A. S. F. Gow, JHS 34 (1914-15) p. 252, note 10. Perugia etrusca e romana (Perugia 1855-70) on pl. 47-73 [non vidi, cited by Robert, note 5) 2, p. 424, cf. p. 277; Schultz in Roscher, 32, note 39]; W. Helbig (above, Korte suggested earlier home in the Near East, on the basis of the interpretation of Judges friend R. Smend: the malmad with which Shamgar explained as a plow instead of an ox-goad, the hazardous in the extreme. 7 On the early Greek plow see A. S. F. Gow, "The Ancient Plough," JHS 34 (1914- W. Mair, Hesiod: The Poems and Fragments (Oxford 1908) 158-62; lines 427 and 433; E. A. Cambridge 1906) p. 539, whole upright stick while the actually grasped by the hand is the cheirolabis,cf. Hesiod Op. 60. The distinc- E. Saglio, 15) 249-75; A. T. A. Sinclair, Hesiod's Works and Days (London 1932), on Gardner in A Companion to GreekStudies (ed. L. Whibley; fig. III and p. 540. Pollux (1.252) uses echetlgof the short horizontal piece 467-68 &KpovkTXr)7s Ixetpl Xa/wv and Gow (above), p. 250 and note tion is not made elsewhere, e.g., Photius, Suidas, Hesychius, "Aratrum," DarSag with the plow, as warrior with Echetlaeus. Et. Mag., s.v. 1.354 describes the beam in his fig. the echetle; he is evidently influenced 429, the Etruscan warrior by his identification of the The adjective echetlaios is one of a number of divine or heroic epithets derived herkosand Zeus Herkeios; tegos and the hero Epitegios (IG 12.310.82- from objects: cf. 83; 22.5071); pyle and Epogmios (Anth. Pal. 6.258). alog, "worked land"; they had graves Crete (Plin. Nat. Hist. 7.73; Sall. ap. shipped as heroes on Naxos (Diod. Sic. 5.51.2; With Echetios as a shortened name Hermes Pylaios and Propylaios; ogmos ("furrow") and The Aloades (Otos and in Anthedon in Boeotia (Paus. IG 12.5.56). It is extremely unlikely Such Demeter Ephialtes) may come from the 9.22.6) and in Serv. in Verg. Aen. 3.578) and they were wor- derived from Echetlaios cf. Iphiklos from that parallel forms, Iphiklees, Echetlosand Echetlaios, originally existed side by side Perimos from Perimedes, etc. parallel forms as cheimet- To describe Echetlaeus as "the hero of the ploughshare" as does Farnell, or even of the plow-share or plow-handle, as does Schwenn, is misleading.8 The plow-share, in particular the metal tip, the hynis, because of its phallic and fertilizing function, is used in modern Greek customs directed to human fertility and especially the birth of male children, as well as to the fertility of the fields, and these practices find parallels, for example, in India; possibly the modern Greek customs have their origin in the ancient world.9 But Echetlaeus is the hero of the plow-handle, which is not the object of attention in such practices. The plow as a whole is involved in a number of interesting rituals and a hero is most likely to be derived from the plow when it is the focus of religious attention, though, strange to say, no divine or heroic epithet is derived directly from the name of the plow or the act of plowing in Greece.l? The "ox-yoker," Bouzyges, in Attica is the closest parallel. We should first see, however, when the plow-handle in particular is the center of interest. Because of the importance of plowing in an agricultural society, in many cultures the task is attended by various ceremonies; these 53 ion besides cheimetle ("chilblain") are no pendent form Echetlos. Echetlaios from echetle but Echetlos, on the analogy of Melissos (from melissa), "being like a bee," Kor6nos (from kor6ne), "being like a crow," would and for such a meaning there is no support. help for the meaning of a presumed inde- clearly means "he of the plow-handle." have to mean "being like a plow-handle," 8Farnell (above, note 5) 71; F. Schwenn, "Triptolemos," RE 7A (1939) 216. I Hesychius's gloss on echetl1,where only ("the furrow") KacI i airr.7 'roi &p6rpov. A. S. F. Gow (JHS 34 are they intrinsically probable." present day Mani (the middle of the three prongs of the Peloponnesus) I have heard it used of the brace between the part of the share to which the metal tip, the hynis, is attached. The Lexikon Hellgniko-Gallikon(ed. A. spathe as the part of the plow to which is sense) in the dialects describe the plow beam when the pole today can be seen in JHS 26 (1906) p. 201, fig. 8 and JHS 34 (1914-15) p. 258, fig. 10, but in the Mani and Crete, at least, the upright echetlgof Laographia 7 (1923) 346-68. (Indian)," in Hastings, ERE 8.289. Cf. H. A. Rose, "Magic attached the histoboe ("the pole" - I heard only stavari in this Epites; Athens 1926), s.v., described the beam, to which the pole is attached, and the stock, the wooden I know of no ancient use of spathU in this sense but in the do not believe their translations are based on in our sources, after the usual explanation, there is added Kai 7 aviXa [1914-15] 265, note 46) comments: "For these meanings there is no other evidence nor of Epiros and Karpathos; this would seem to and beam are separate. The usuAlGreek plow of 9 antiquity survives. Cf. K. A. Romaios, O0But cf. perhaps Hesychius "Apwcros-'HpaKX^srapa MaKeS6aQv. The second out of alphabetical order between &pir76v (Gottingen 1906) 93. For epithets from vowel is uncertain since the word is placed and &pOeos.Cf. O. Hoffman, Die Makedonen the plow in Indian mythology, see below, note 14. are attached primarily to the commencement of plowing and sowing, and in Greece they were a joint operation (cf. Hesiod, Op. 391 with the comments of Sinclair [above, note 7] and Mair [above, note 7], pp. 126, 130, 135-6). initial acts survives in Greece today in the blessing of a sheaf of grain in church before plowing begins." used for the first furrow, Pliny tells us (Nat. Hist. 28.267), was put on the focus Larum to keep wolves from the fields. In Indo-China the plow with which the ceremonial plowing of the rice-fields is performed is purified in advance with water.'2 In Athens the very first plow ever yoked to oxen was dedicated to Athena and preserved on the Acropolis (Schol. Aeschines, p. 56.23-25 Dindorf). A Greek superstition which held that one should curse and swear while sowing cumin to ensure a good crop (Theophr., Hist. Plant. 7.3.3, 9.8.8; Plut., Quaest. Conv. 7.2.2, 700F-701A) is paralleled in public ritual by the Bouzygeioi arai, the curses pronounced by the Bouzyges ("the ox-yoker") at the time of the sacred plowing at Athens, against those who do not share water or fire, or do not show the lost the way (Diphilus fr. 62 Kock [II, p. 561] ap. Athen. 6, 238F; Paroem. Graec. 1.388) and against those who suffer a body to lie unburied (Schol. Soph. Ant. 255). The magical, apotropaic practices of the sower of cumin are echoed and elaborated on the public level as curses against any member of the community who by offend- ing the gods endangers the success of this state plowing anl con- sequently all plowing in the state. In Attica alone we know of three sacred, public plowings (Plut., Con. Praecep. 42, 144A-B) and undoubtedly similar ceremonies took place elsewhere in Greece. It seems probable that originally Eleusis and Athens each had a public plowing, the one on the Rarian plain, the other below the Acropolis near the sanctuary of Demeter Chloe (cf. IG 22.5006), conducted by the family of the Bouzygai who traced their descent from the original Bouzyges who first yoked oxen to the plow. The third plowing, at Sciron, is thought to be a compromise between the two states at the time of 54 This example of the special importance of In Italy the plow-share n Cf. K. Sittl, Hesiodou Ta Hapanta (Athens 1889) on Op. 465. 12 W. D. Wallis, Religion in Primitive Society (New York 1939) 99. Comparative P. V. Glob on votive deposits of wooden plows in Scandi- material is widespread, e.g., navia and a rock carving of a plowing rite in south Sweden, Strena Archaeologica A. M. Tallgren Sexagenario Dedicata (Helsingfors 1945) summarized in AJA 52 (1948) 225. 55 their union.13 The intent of all three plowings, we may be sure, was to procure a good crop by ensuring that all the plowing in Attica was performed properly and with the good will of the appro- priate divinities. The plowman, the minister of agriculture ex oficio, is called Bala- deva, the name of an Echetlaeus-like figure of Indian mythology who performs various feats with his plow and has various titles derived from the plow.14 Other Greek rituals which are close to the plowing ceremonies are the Proerosia, the "before plowing ceremonies," at Eleusis, and the Thesmophoria at Athens, not to mention the Eleusinian mysteries. The Proerosia were held on the fifth of Pyanopsion, the Attic month of plowing, and were, according to a gloss in Suidas, s.v., "sacrifices taking place before plowing for the future crops, that they may be brought to maturity." A fragment of a song addressed to Kore seems to indicate that she was conceived as participating in the A similar state ceremony is performed in Siam. ap. Proclus in Hes. Op. 389). The Thesmophoria took place at the time of sowing (cf. Plut., Is. et Os. 69, 378E; Cornutus 28, p. 55.7 Lang) and included the bringing up from the underground megara of the pigs and other fertility early in the summer in the month of Scirophorion, and the mixing of the remains with the seed to be sown.15 Thus the ceremonies at the time of plowing and sowing are not isolated rites but continue the power preserved charms thrown down after harvest, subsequent sacred plowing (Plut. 13See O. Kern, RE 2 (1896) 1215-17; L. Deubner, Attische Feste (Berlin 1932) 46-48; C. Robert, Hermes 20 (1885) 378. The contention that the Bouzygai also performed the Eleusinian plowing is not proved. The Scholiast on Aristides Rhetor (II, p. 175 Dindorf) says that those who cared for the sacred cows used for plowing at Eleusis were called Bouzygai and an inscription of the second century A.D.shows one official among others called Bouzyges (IG 22.1092, B, line 32). This does not mean In view of the prominence of the Eteoboutadai rather than the Bouzygai at the plowing at he was of the clan of the Bouzygai, cf. 0. Rubensohn, AM 24 (1899) 59-60. Sciron (Lysimachides, fr. 3 Jacoby), presumed to be a compromise between the two national plowings, it would be strange to find the Bouzygai dominant at both the other two. Epithets derived from the plow 14See G. E. Gerini in Hastings ERE 5. 886B. attached to Baladeva, also known as Balarama, include Haledhara, Halabhrit,Sirabhrit, Strapdni, Haldyudha, Halin; cf. Hastings ERE 7. 195B. I am grateful to a member of the audience at the time of oral presentation of this paper for reminding me of the ceremonial plowing formerly performed each spring by the Chinese emperor at the See L. C. Arlington Hsien Nung T'an, the so-called & W. Lewisohn, In Search of Old Peking (Pei-p'ing 1935) 113-116. 15Cf. Deubner (above, note 13) 68-69 for the Proerosia, 50-52 for the Thesmo- phoria. Cf. the month Praratios (Sept./Oct.) at Epidaurus, e.g., IG 42.1.103, line 136 Altar of Agriculture, in Pei-p'ing. 56 throughout the summer months by ensuring that the seed enters the ground safely and set in motion the whole process of growth which will be fostered throughout the year by other rites16 and will culminate in the harvest and its ceremonies. At Athens the Scira, the Thesmophoria, and the sacred plowing at Sciron, and perhaps the Acropolis, Arretophoria sacred plowing below spanned the period between harvest and sowing, and there are traces of a similar continuum in present-day customs reported from Lygourion in the Argolid.17 At Magnesia on the Maeander a ritual spanned the period between sowing and harvest. A bull was dedicated to Zeus Sosipolis at the beginning of sowing with a prayer for the safety of the city and its people, for peace, wealth, and the bringing forth of wheat, all other crops, and herds (SIG3 589.7, 26-31). The bull was fed throughout the winter and killed, it seems probable, at harvest time. This is but one example of a distinctive type of sacrifice of which the best-known representative is the Attic Bouphonia.18 The object of these public rites on a considerably more elaborate scale and performed for the whole state corresponds to the desires of the individual farmer as he performs his seasonal tasks. Vergil tells no one to begin the harvest before making simple dances and songs for Ceres (Georg. 1.347-50). No one, says Epictetus (Arrian, Epict. 3.21.12), leaves port before sacrificing to the gods for his safety and no one starts to sow before praying to Demeter. Maximus of Tyre shows that the individual farmer performed his own blood- less Proerosia among other agricultural rites (292.16 ff. Hobein). The ceremonies at the public plowings correspond, as Frazer pointed out, to the individual farmer's prayer as he starts his own plowing.19 Chloe, two sows, one pregnant of Lenaion: to Zeus Chthonios and Ge Chthonie, black yearling sheep "for the crops," SIG3 1024 (von Prott, Fasti Sacri [Leipzig 1896] No. 4). For Attica, e.g., the Pompaia and Zeus Melichios in the month of Maimakterion. Deubner (above, note 13) 157-58, "Zeus 16 E.g., on Myconos, crop" (along winter . with .10th early spring, 12th Posideon: sow to Bouleus) of Lenaion: Kore verwandt." a pregnant Zeus (primipara) "for the sacrifices Melichios ist ja den Zeus XO6vtosnachst 17 Cf. D. Euaggelides, Laographia 3 (1912) 675-6; K. A. Romaios, 7 (1923) the survival of this type of sacrifice, see C. [ = K.] A. Romaios, Cultes populaires de la Thrace (Collection de l'institut francais d'Athenes 18: Cahiers d'hellinisme, I; Athens 1949) 50-67. The reviling of the slayer of the plow 365. 18 Cf. Nilsson (above, note 2) 140-43. On ox should not be confused the cursing at the time of plowing. 19 J. G. Frazer, The Golden Bough (3rd ed.; London 1911) 7.50 and 53. Vol. lxxxii] 57 In some parts of' Greece this prayer was perhaps called the pratasia, a word which Hesychius, s.v., also equates with Proerosia. Hesiod says Demeter that the holy grain of Demeter may be ripe and heavy as you first begin plowing when, having taken the top of the plow- handle [the echetle]in your hand, you come down with the goad on the backs of the oxen as they tug at the strap-pin." The pose described is to be seen on a number of vase-paintings.20 Here the echetle is the center of attention. In the simple ritual, without procession, priests, and sacrifice, the farmer asks blessing on his immediate task, the plowing and sowing. Well done and with the gods' help he may truly hope for a good return. He grasps the handle of the plow as he makes his prayer, in effect, asking the gods to put their hands to what he is holding; touch plays no little part in magic and ritual.21 In battle Echetlaeus in properly heroic fashion wields the whole "Pray to Zeus of the earth and pure (Op. 465-9): plow, the arotron, but his point of origin is the plow-handle as the farmer holds it and makes his prayer. need only be the daimon developing from this prayer, a figure evolved to hear the prayer and become the recipient of whatever simple offerings, perhaps at one time predeistic, may have been made at this moment, as some modern Greek survivals strongly suggest (see above, note 17). He may, indeed, have had more of a cult than this; there is the parallel of the bloodless, private Proerosia and we know that by the fourth century B.C. the Fasti of the At the simplest, Echetlaeus 20 E.g., interior of a black-figured kylix, Berlin (1806, signed by Nicosthenes as Gerhard, Trinkschalen und Gefdsse des koeniglichen Museums zu Berlin potter, F. [Berlin 1848-50] pl. I [whence A. Baumeister, Denkmiler des klassischen Altertums (Munich & Leipzig 1885) I, pl. I, fig. 12, a, [whence J. C. Hoppin, Handbook of Greek b], Wiener Vorlegeblatter[1889] pl. VII, 2a Black-Figured Vases (Paris 1924) British Museum, 185]); 1906.12-15. pi. 10, 6b, JHS 66 (1946) pl. iii, b, f, g; black-figured kylix exterior of a black-figured kylix (Siana cup) in the 1, Corpus Vasorum III He, (band-cup) in the Louvre, F 77, 290, A and B (Baumeister [above] Sacks. Ber. [1867] pl. 1, 2). See also below, note 26. Encyclopedie photographique de l'art (Paris 1936) II, fig. 13, a, b, shows an old drawing taken from Jahn, 21 It is as an example of the combination of Plutarch (Superstit. 169B) cites one line of this pious prayer and practical action that passage (Op. 465) adding in his own to the echetle"from two lines later, and includes the passage with of the come up and, as he prays, lays his hand to tiller words "holding on other prayers offered in critical helmsman who sees a sudden storm and sail. (All such combinations of with the behaviour of the superstitious man who is by implication, with the atheist who refuses all piety.) (peristatikois) circumstances, such as that pious and practical action Plutarch contrasts incapable of practical action and, 58 Marathonian Tetrapolis paid ample attention to minor local heroes, e.g., a certain Galios to whom a ram was sacrificed, significantly, on the day before the Scira.22 But as such rites are elaborated the attention themselves. stage. not received formal, certainly not public, cult before Marathon. His name and the tradition point to the same conclusion and it is unnecessary to follow Farnell and suppose that the tradition of the founding of a public cult is only the aetiological explanation of a half-forgotten name and cult. As a figure close to the hearts of the Marathonian peasants it is not surprising that he should have appeared in battle on their fields and have been elevated to full heroic rank. We might recall the miraculous manifestation of a cloud of dust from which issued the Iacchus cry at the battle of Salamis (Hdt. 8.65) and the fact that Iacchus himself is little more than a personification of the Iacchus cry; the incident before Salamis may have had much to do with his emergence as an individ- ual, even as the worship of Pan in Attica dates from Marathon.23 Demeter seems to songs: Ioulo, from ioulos, a sheaf of grain and a song (Semus fr. 19 Miiller ap. Athen. 14, 618D-E), and Himalis (Polemo Hist. frs. 39 and 74 Miiller ap. Athen. 3, 109A and 10, 416B), himalis being the return (nostos) of grain (cf. the hero Eunostos) but also a song sung at the mill (Tryphon ap. Athen. 14, 618D; Hesych., s.v.; Pollux 4.53). The mysterious Lityerses was at least a figure in story as well as the name of a reapers' song (Pollux 4.54). Hesiod recommends prayer to Zeus Chthonios and Demeter. Characteristically, he elects the two most common patrons of agriculture throughout the Greek world: Zeus, not the god of the underworld, but of the fertile soil, the farmer's god who receives sacrifice "for the crops" in company with Ge Chthonie on Myconos (SIG3 1024.24-25). Demeter herself is Chthonia in Hermione (Paus. 2.35.5-8) and Sparta (Paus. 3.14.5). Under other titles shifts from the inconspicuous plow-handle to the rites owes his origin at least simpler Echetlaeus We may interpret Pausanias' report to mean that he had have derived two of her' epithets from farmers' 22 IG 22.1358 (von Prott, Fasti Sacri [Leipzig 1896] No. 26) B 51, 30-33. the heroes of the Tetrapolis, see S. Solders, Die A usserstadtischen Kiilte und die Einigung Attikas 83-84; (Lund M. P. 1931) 93-97. 23 Cf. Nilsson, "Die eleusinischen Gottheiten," Cf. Linos ArchRW and the cry ailinon, 32 (1935) O. Kern, "Iakchos," RE 9 (1914) 613-22. and hymen hymenaie. Hymenaeus 59 both are found in numerous agricultural roles. It is Demeter who dominates the Scira and Thesmophoria, the Eleusinian mysteries and the Proerosia, and with Zeus-like figures in Asia Minor are connected reliefs of oxen drawing the plow.24 There is reason, however, to believe that there was considerable local variation in the identity of the gods who watched over the plowing. Figures who later appear as culture heroes, the reputed inventors of the plow or the art of yoking oxen, must often be regarded as originally involved in the ceremonies at plowing time. Bouzyges, we may well believe, was no inventor but the eponym of the clan whose name derived from their hereditary role as sacred plowmen. Dionysus was also described as the first to yoke oxen to the plow (Diod. Sic. 3.64.1 and 4.4.2) and in a modern Thracian plowing ritual which shows strong traces of a Dionysiac origin, a prayer is made which Hesiod would surely have echoed: "May wheat be ten piastres the bushel! Rye five piastres the bushel! Barley three piastres the bushel! Amen, O God, that the poor may eat! Yea, O God, that poor folk be filled!"25 To Poseidon, whose agricultural ties are well recognized, were dedicated at Corinth votive plaques with representations of oxen, once apparently draw- ing a plow (JDAI In Athens, Athena Polias participated in the plowing at Sciron and, no doubt, originally dominated the plowing below the Acropolis. Athena appears beside Bouzyges and his plow on an Attic vase of about 425 B.C.26 A curious story found only in Servius (in Verg. Aen. 4.402) suggests even some special connection between Athena and the echetle: Athena invented the plow to match Demeter's 12 [1897] 31, F 729, cf. 44, No. 85). 24 For the agricultural epithets of Demeter see H. Usener, Gotternamen (2nd ed.; harvest time to Demeter Epogmios Bonn 1929) 242-47. Cf. especially the prayer at for a good crop, Anth. Pal. 6.258, and Demeter Eual6sia plow and Zeus in Asia Minor, e.g. Zeus Bronton, see AM 25 (1900) 417-18; BCH 33 For the (Hesychius, s.v.). (1909) 290-300, No. 47, 48, 52, figs. 19, 20, 24. For Zeus Dios, see GGA 159 (1897) 409-10. On gravestones in the form of dedications, pl. 55; No. 322, especially to Zeus Bronton, see MAMA 5, xxxiv-vii, and cf. MAMA 6, No. 275, pl. 49; No. 311, pl. 56; No. first yoked mules for the 362, pl. 63. The reference in Et. Mag., s.v. zeuxai, to Zeus as the one who sowing of crops is probably purely etymological. 26 25R. M. Dawkins, JHS (1906) 193-204, and, in a most important study, C. A. Romaios (above, note 18) 121 ff. in the collection of Professor (1931) p. 153, fig. 1, Corpus Religion, III, 1 (Cambridge 1940) p. 606, pl. XLV, J. D. Beazley, Attic Red-Figured Vase-Painters (Oxford 1942) p. 391, No. 19. VasorumII, pl. 48, 2, A. B. Cook, Zeus: A Study in Ancient D. M. Robinson, now in Oxford, Mississippi, AJA 26 Red-figured bell krater by the Hephaestus painter 35 60 MA.H. Jameson agricultural benefactions; a favorite of Athena's, an Attic maiden by the name of Myrmix, stole the stiva, i.e., echetle of course, the plow could not be guided and the share would not bite the earth), and claimed the invention for her own. She was turned into an ant, the farmer's enemy, as punishment. Zeus in pity saw to it that the illustrious Myrmidons were descended from her.27 Even in Hesiod's Boeotia, where the plowing month was called Damatrios (Plut., Is. et Os. 69, 378E), Athena had the title Boarmia (Lycophr. 520 and schol.) and in Thessaly she was Boudeia (Lycophr. 359 and Tzetzes ad loc.), both names referring to her yoking Bien plus que des documents. Découvrez tout ce que Scribd a à offrir, dont les livres et les livres audio des principaux éditeurs.Annulez à tout moment.
https://fr.scribd.com/document/207444602/Jameson-The-Hero-Echetlaeus-pdf
CC-MAIN-2020-45
refinedweb
5,997
72.26
GitHub repository MicroPython works great on ESP32, but the most serious issue is still (as on most other MicroPython boards) limited amount of free memory.MicroPython works great on ESP32, but the most serious issue is still (as on most other MicroPython boards) limited amount of free memory.The repository can be used to build MicroPython for modules/boards with SPIRAM as well as for regular ESP32 modules/boards without SPIRAM. As of Sep 18, 2017 full support for psRAM is included into esp-idf and xtensa toolchain. Building on Linux, MacOS and Windows is supported. You can support this project by donating via PayPal ESP32 can use external SPI RAM (psRAM) to expand available RAM up to 16MB. Currently, there are several modules & development boards which incorporates 4MB of SPIRAM: ESP-WROVER-KIT boards from Espressif, AnalogLamb or Electrodragon. ESP-WROVER from Espressif, AnalogLamb or Electrodragon. ALB32-WROVER from AnalogLamb. S01 and L01 OEM modules from Pycom. The repository contains all the tools and sources necessary to build working MicroPython firmware which can fully use the advantages of 4MB (or more) of SPIRAM It is huge difference between MicroPython running with less than 100KB of free memory and running with 4MB of free memory. Some basic documentation specific to this MicroPython port is available. It will soon be updated to include the documentation for all added/changed modules. Some examples can be found in modules_examples directory. The MicroPython firmware is built as esp-idf component This means the regular esp-idf menuconfig system can be used for configuration. Besides the ESP32 configuration itself, some MicroPython options can also be configured via menuconfig. This way many features not available in standard ESP32 MicroPython are enabled, like unicore/dualcore, all Flash speed/mode options etc. No manual sdkconfig.h editing and tweaking is necessary. Features and some differences from standard MicroPython ESP32 build - MicroPython build is based on latest build (1.9.2) from main MicroPython repository. - ESP32 build is based on MicroPython's ESP32 build) with added changes needed to build on ESP32 with SPIRAM and with esp-idf build system. - Default configuration for SPIRAM build has 2MB of MicroPython heap, 20KB of MicroPython stack, ~200KB of free DRAM heap for C modules and functions - MicroPython can be built in unicore (FreeRTOS & MicroPython task running only on the first ESP32 core, or dualcore configuration (MicroPython task running on ESP32 App core). - ESP32 Flash can be configured in any mode, QIO, QOUT, DIO, DOUT - BUILD.sh script is provided to make building MicroPython firmware as easy as possible - Internal filesystem is built with esp-idf wear leveling driver, so there is less danger of damaging the flash with frequent writes. File system parameters (start address, size, ...) can be set via menuconfig. - sdcard module is included which uses esp-idf sdmmc driver and can work in SD mode (1-bit and 4-bit) or in SPI mode (sd card can be connected to any pins). On ESP-WROVER-KIT it works without changes, for information on how to connect sdcard on other boards check the dicumentation - Native ESP32 VFS support is used for spi Flash (FatFS or SPIFS) & sdcard filesystems. - SPIFFS filesystem support, can be used instead of FatFS on SPI Flash. Configurable via menuconfig - RTC Class is added to machine module, including methods for synchronization of system time to ntp server, deepsleep, wakeup from deepsleep on external pin level, ... - Time zone can be configured via menuconfig and is used when syncronizing time from NTP server - files timestamp is correctly set to system time both on internal fat filesysten and on sdcard - Built-in ymodem module for fast transfer of text or binary files of any size to/from host. Uses the same uart on which REPL runs. - Some additional frozen modules are added, like pye editor, urequests, functools, logging, ... - Btree module is included, can be Enabled/Disabled via menuconfig. - _threads module is greatly improved, inter-thread notifications and messaging included - Neopixel module using ESP32 RMT peripheral with many new features, unlimited number of pixels - i2c module uses ESP32 hardware i2c driver - spi module uses ESP32 hardware spi driver - curl module added, http/https get,post, send mail (including gMail), ftp client (get, put, list) - ssh module added, sftp get, put, list, mkdir, execute any command on host - display module added with full support for spi TFT displays - DHT module implemented using ESP32 RMT peripheral - mqtt module added, implemented in C, runs in separate task - telnet module added, connect to **REPL via WiFi** using telnet protocol - ftp server module added, runs as separate ESP32 task - GSM module with PPPoS support, all network functions works the same as with WiFi, SMS, AT commands, ... - NVS support in machine module - Eclipse project files are included. To include the project into Eclipse goto File->Import->Existing Projects into Workspace->Select root directory->[select MicroPython_BUILD directory]->Finish. After opening, rebuild the index. How to Build Clone the repository: Code: Select all git clone Goto MicroPython_BUILD directory To change some ESP32 & Micropython options run: Code: Select all ./BUILD.sh menuconfig Code: Select all ./BUILD.sh If using too high N the build may fail, if that happens, run build again or run without the -j option. If no errors are detected, you can now flash the MicroPython firmware to your board. Run: Code: Select all ./BUILD.sh flash You can also run ./BUILD.sh monitor to use esp-idf's terminal program, it will reset the board automatically. BUILD.sh Included BUILD.sh script makes building MicroPython firmware easy. Usage: - ./BUILD.sh - run the build, create MicroPython firmware - ./BUILD.sh -jn - run the build on multicore system, much faster build. Replace n with the number of cores on your system - ./BUILD.sh menuconfig - run menuconfig to configure ESP32/MicroPython - ./BUILD.sh clean - clean the build - ./BUILD.sh flash - flash MicroPython firmware to ESP32 - ./BUILD.sh erase - erase the whole ESP32 Flash - ./BUILD.sh monitor - run esp-idf terminal program - ./BUILD.sh makefs - create SPIFFS file system image which can be flashed to ESP32 - ./BUILD.sh flashfs - flash SPIFFS file system image to ESP32, if not created, create it first - ./BUILD.sh copyfs - flash the default SPIFFS file system image to ESP32 - ./BUILD.sh makefatfs - create FatFS file system image which can be flashed to ESP32 - ./BUILD.sh flashfatfs - flash FatFS file system image to ESP32, if not created, create it first - ./BUILD.sh copyfatfs - flash the default FatFS file system image to ESP32 To build with SPIRAM support: In menuconfig select → Component config → ESP32-specific → Support for external, SPI-connected RAM In menuconfig select → Component config → ESP32-specific → SPI RAM config → Make RAM allocatable using heap_caps_malloc After the successful build the firmware files will be placed into firmware directory. flash.sh script will also be created which can be used for flashing the firmware without building it first. Using SPIFFS filesystem SPIFFS filesystem can be used on internal spi Flash instead of FatFS. If you want to use it configure it via menuconfig → MicroPython → File systems → Use SPIFFS Prepared image file can be flashed to ESP32, if not flashed, filesystem will be formated after first boot. SFPIFFS image can be prepared on host and flashed to ESP32: Copy the files to be included on spiffs into components/spiffs_image/image/ directory. Subdirectories can also be added. Execute: Code: Select all ./BUILD.sh makefs Execute: Code: Select all ./BUILD.sh flashfs Execute: Code: Select all ./BUILD.sh copyfs Some examples Using new machine methods and RTC: Code: Select all import machine rtc = machine.RTC() rtc.init((2017, 6, 12, 14, 35, 20)) rtc.now() rtc.ntp_sync(server="<ntp_server>" [,update_period=]) <ntp_server> can be empty string, then the default server is used ("pool.ntp.org") rtc.synced() returns True if time synchronized to NTP server rtc.wake_on_ext0(Pin, level) rtc.wake_on_ext1(Pin, level) wake up from deepsleep on pin level machine.deepsleep(time_ms) machine.wake_reason() returns tuple with reset & wakeup reasons machine.wake_description() returns tuple with strings describing reset & wakeup reasons Code: Select all import uos uos.mountsd() uos.listdir('/sd') Code: Select all >>> import uos >>> uos.mountsd(True) --------------------- Mode: SD (4bit) Name: NCard Type: SDHC/SDXC Speed: default speed (25 MHz) Size: 15079 MB CSD: ver=1, sector_size=512, capacity=30881792 read_bl_len=9 SCR: sd_spec=2, bus_width=5 >>> uos.listdir() ['overlays', 'bcm2708-rpi-0-w.dtb', ...... Code: Select all rst:0x1 (POWERON_RESET),boot:0x30010,len:4 load:0x3fff0014,len:5656 load:0x40078000,len:0 ho 12 tail 0 room 4 load:0x40078000,len:13220 entry 0x40078fe4 W (36) rtc_clk: Possibly invalid CONFIG_ESP32_XTAL_FREQ setting (40MHz). Detected 40 MHz. I (59) boot: ESP-IDF ESP32_LoBo_v1.9.1-13-gfecf988-dirty 2nd stage bootloader I (60) boot: compile time 21:07:29 I (108) boot: Enabling RNG early entropy source... I (108) boot: SPI Speed : 40MHz I (108) boot: SPI Mode : DIO I (115) boot: SPI Flash Size : 4MB I (128) boot: Partition Table: I (139) boot: ## Label Usage Type ST Offset Length I (162) boot: 0 nvs WiFi data 01 02 00009000 00006000 I (185) boot: 1 phy_init RF data 01 01 0000f000 00001000 I (209) boot: 2 MicroPython factory app 00 00 00010000 00270000 I (232) boot: 3 internalfs Unknown data 01 81 00280000 00140000 I (255) boot: End of partition table I (268) esp_image: segment 0: paddr=0x00010020 vaddr=0x3f400020 size=0x48a74 (297588) map I (613) esp_image: segment 1: paddr=0x00058a9c vaddr=0x3ffb0000 size=0x07574 ( 30068) load I (650) esp_image: segment 2: paddr=0x00060018 vaddr=0x400d0018 size=0xc83f4 (820212) map 0x400d0018: _stext at ??:? I (1525) esp_image: segment 3: paddr=0x00128414 vaddr=0x3ffb7574 size=0x052d0 ( 21200) load I (1551) esp_image: segment 4: paddr=0x0012d6ec vaddr=0x40080000 size=0x00400 ( 1024) load 0x40080000: _iram_start at /home/LoBo2_Razno/ESP32/MicroPython/MicroPython_ESP32_psRAM_LoBo/Tools/esp-idf/components/freertos/./xtensa_vectors.S:1675 I (1553) esp_image: segment 5: paddr=0x0012daf4 vaddr=0x40080400 size=0x1a744 (108356) load I (1711) esp_image: segment 6: paddr=0x00148240 vaddr=0x400c0000 size=0x0006c ( 108) load I (1712) esp_image: segment 7: paddr=0x001482b4 vaddr=0x50000000 size=0x00400 ( 1024) load I (1794) boot: Loaded app from partition at offset 0x10000 I (1794) boot: Disabling RNG early entropy source... I (1800) spiram: SPI RAM mode: flash 40m sram 40m I (1812) spiram: PSRAM initialized, cache is in low/high (2-core) mode. I (1834) cpu_start: Pro cpu up. I (1846) cpu_start: Starting app cpu, entry point is 0x400814e4 0x400814e4: call_start_cpu1 at /home/LoBo2_Razno/ESP32/MicroPython/MicroPython_ESP32_psRAM_LoBo/Tools/esp-idf/components/esp32/./cpu_start.c:219 I (0) cpu_start: App cpu up. I (4612) spiram: SPI SRAM memory test OK I (4614) heap_init: Initializing. RAM available for dynamic allocation: I (4615) heap_init: At 3FFAE2A0 len 00001D60 (7 KiB): DRAM I (4633) heap_init: At 3FFC30C0 len 0001CF40 (115 KiB): DRAM I (4653) heap_init: At 3FFE0440 len 00003BC0 (14 KiB): D/IRAM I (4672) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM I (4692) heap_init: At 4009AB44 len 000054BC (21 KiB): IRAM I (4712) cpu_start: Pro cpu start user code I (4777) cpu_start: Starting scheduler on PRO CPU. I (2920) cpu_start: Starting scheduler on APP CPU. FreeRTOS running on BOTH CORES, MicroPython task started on App Core. uPY stack size = 19456 bytes uPY heap size = 2097152 bytes (in SPIRAM using heap_caps_malloc) Reset reason: Power on reset Wakeup: Power on wake I (3130) phy: phy_version: 359.0, e79c19d, Aug 31 2017, 17:06:07, 0, 0 Starting WiFi ... WiFi started Synchronize time from NTP server ... Time set MicroPython ESP32_LoBo_v2.0.2 - 2017-09-19 on ESP32 board with ESP32 Type "help()" for more information. >>> >>> import micropython, machine >>> micropython.mem_info() stack: 736 out of 19456 GC: total: 2049088, used: 6848, free: 2042240 No. of 1-blocks: 37, 2-blocks: 9, max blk sz: 329, max free sz: 127565 >>> machine.heap_info() Free heap outside of MicroPython heap: total=2232108, SPISRAM=2097108, DRAM=135000 >>> >>> a = ['esp32'] * 200000 >>> >>> a[123456] 'esp32' >>> >>> micropython.mem_info() stack: 736 out of 19456 GC: total: 2049088, used: 807104, free: 1241984 No. of 1-blocks: 44, 2-blocks: 13, max blk sz: 50000, max free sz: 77565 >>> Tested on ESP-WROVER-KIT v3, Adafruit HUZZAH32 - ESP32 Feather
https://forum.micropython.org/viewtopic.php?p=20833
CC-MAIN-2019-39
refinedweb
2,005
54.32
As its name implies, MyJarExplorer is a lightweight and easy to use application that you can use to view and edit the contents of JAR files located on your computer. The program does not provide users with a standard interface, but once installed on your system, it adds a new item in the context menu of each JAR file, enabling you to edit it easily. Simply double click on any JAR to explore its contents. The Explorer window provides a number of views for JAR resources. Thus, you can explore the file manifest (displays the version, author and main class) and view information about all the classes, methods and fields. The entry data is editable, so you just have to make the desired modifications and press the 'Save' button. Once you do so, the class-path and the main class attributes of the manifest file are automatically updated. The case-sensitive search tool is designed to help you easily navigate between classes, but you can also search for a certain string in a class definition. In addition, the application allows you to run executable JARs, create new manifests and manage dependencies, which means that it can store a list of files that are required for opening the current JAR Moving and renaming the JAR file is also possible and you are advised to use these options, as MyJarExplorer can keep track of the dependencies, while moving the file manually might make it nonfunctional. Since it is created using the Java programming language, the application can be installed on any platform: Windows, MAC OSX or Linux. MyJarExplorer is designed to provide you with a simple solution for exploring and editing JAR files. It can prove to be a handy tool for any Java developer out there. MyJarExplorer Crack + Free Download [March-2022] This small application lets you explore JAR files and modify their manifest data without opening them. With MyJarExplorer Crack Keygen, you have the following functionalities: • View JAR information: the version, the author and the main class name. • Edit the manifest: add, modify and remove entries. • View programmatic entry data: fields, methods and their parameter types. • Open executable JAR files. • Create, edit and manage dependencies. • View, move and rename a JAR file. • Check if the JAR file is digitally signed. This application is freely available for Windows and Linux. MyJarExplorer is a free program that can be used to explore the contents of JAR files on Windows and Linux systems. What’s New in This Release: – Minor correction in all the possible menus. – New sections for signatures and dependencies. Windows 2000 and XP This small utility creates a text file in the “C:\Program Files\Common Files” folder with the name “LICENCE.” Licence.txt stores the Windows Software License agreement. Connectivity Serial cables This program can detect the USB, Serial, Ethernet, Phone and Printer cus. It has the capacity to create a chart of the cus connected to your computer. Great program… Doesn’t work with Windows 7 though, it hangs on start up. I love this program. I am using it every time I change an email or do something related to my email. It just makes so much more sense. The only thing I would like is if I can get some sort of notification when I have the emails ready to go. For example, I get the email, then download to my iPhone. I then go into the phone and look for a different email and select the one I downloaded to the iPhone. I come back to my computer and do the same thing again. With this program I have to keep my computer running all the time. Not only does this drive me crazy, it makes me want to skip the email. It’s absolutely wonderful! It’s exactly what I needed. It’s good. If you haven’t tried it, you should. It’s free. It has some problems running under Windows 7. But the developer seems to be working on it. The description is better than the program. It describes very clearly the MyJarExplorer Crack+ With License Key Free [Latest] MyJarExplorer Screen Shots: View videos from YouTube, FaceBook and many more, all in one place, from a single application. Use gestures to navigate, and search for the videos you want to watch. Search is case sensitive, so make sure you are typing the correct spelling. The videos you watch are stored directly to your computer without any permission needed from YouTube. The videos are also downloaded with their original resolution. Use the Brightness and Contrast tools to adjust the details. Videos you watch are deleted after you finish watching. All the videos you download are stored in the Videos folder of the program’s main folder, which is located in the same place as the main folder of the program. If you have both program and app installed, they communicate together, so both applications are synchronized. You are free to download videos only from the program. Videos can be added or deleted from the app. The program is designed to be very stable and efficient. If you don’t find the video you want, a search can find it for you. Settings are saved in the program’s main configuration folder, which is stored in the same place as the main program folder. Once you open the program, a dialog box appears in which you select your language. Settings saved from the dialog box are stored to the program’s main configuration folder, which is located in the same place as the main folder of the program. While viewing videos, you can turn your Android phone or tablet off. You can have more than one instance of the application running at the same time. Jar File Explorer allows you to open and view all the files, folders and JAR files in a Windows Explorer like window. You can open and copy, delete and organize files and folders in a Windows Explorer like tree. The main window displays an index of the files and folders in a tree, so you can use it to navigate to specific files and folders very easily. You are able to customize the window, changing it to the size you want, making it transparent and arranging your buttons. The file explorer is heavily customizable. You can use resizing if you want. You can even make it to show/hide specific type of files or folders. You can enable/disable the search engine, organizing your tree in the way 09e8f5149f MyJarExplorer Activation Code With Keygen Free Download MyJarExplorer is a lightweight application for opening and viewing JAR files and their content. About the Download.com Installer The Download.com Installer securely delivers software from Download.com’s servers to your computer. During this process, the Download.com Installer may offer other free applications provided by our partners. All offers are optional for your personal enjoyment. No purchases are required to download or install the application. The Download.com Installer is designed to automatically detect your environment. Its mission is to install your software and entitle you to enjoy your computer boundless amounts of free entertainment and functionality. Simply click the yellow button below and follow the instructions. The entire process will take about five minutes.Q: Datetime to String conversion in python I am working on a code which takes pandas dataframe values and attempts to parse it as a datetime (this is the only datatype, I cannot change the data type as I have thousands of values at the end of this column). When I convert a datetime to a string with the format “YYYYMMDD” it returns an empty string. import pandas as pd index = pd.DatetimeIndex(data=pd.to_datetime(df[‘Datum’])) df = df[pd.to_datetime(df[‘Datum’])] df.loc[index] This returns: If I do the same thing but changing the format to “YYYYMMDD-HHMMSS” then it works! Why does it work if I use the “YYYYMMDD-HHMMSS” format, but not “YYYYMMDD”? A: The first argument to the.loc accessor should be an index or slice. So instead of df.loc[pd.to_datetime(df[‘Datum’])] try df.loc[pd.to_datetime(df[‘Datum’]).strftime(‘%Y%m%d%H%M%S’)] The use of diagnostic ultrasound in the treatment of portosystemic venous shunts. Diagnostic ultrasound (DUS) is a complementary, non-invasive, and non-radiographic method for screening the portal and splenic veins. In the last two decades, DUS has gained an increasing What’s New In? “MyJarExplorer is a small Java application that allows you to explore the contents of JAR files on your computer. “MyJarExplorer allows you to open any JAR file by double-clicking on it, and displays information about its classes, methods and fields. You can edit the contents of the manifest file and run the application. The option to do so makes it possible to uninstall the application from your computer, or to delete all its files. “MyJarExplorer should work on any platform, but it is especially recommended for those who have to work with JDK 1.4.X-based applications. ” What’s New in this Release: · The product has been fully translated into English and Polish.· The application can also generate new manifest files (for the main class and for all the classes and jars in the same JAR file). MyJarExplorer 2.4 – Explorer program for JAR files Oct 22, 2006 · Updated to work with JDK 1.4.x and JRE 6.0 · Added the ability to open the main class attribute of each JAR file· Updated Chinese translations MyJarExplorer 2.3 – Explorer program for JAR files Nov 20, 2005 · Updated to work with JDK 1.3.1 · Updated Chinese translations MyJarExplorer 2.2 – Explorer program for JAR files Mar 5, 2005 · Updated to work with JDK 1.3 · Added the ability to run the JAR file. MyJarExplorer 2.1 – Explorer program for JAR files Feb 25, 2005 · Updated to work with JDK 1.2.2 · Updated Chinese translations MyJarExplorer 2.0 – Explorer program for JAR files Jan 9, 2005 · Added the ability to open any JAR file, for both Windows and Unix systems. · Updated Chinese translations MyJarExplorer 1.5 – Explorer program for JAR files Jun 23, 2004 · Introduced a file search tool, which allows you to easily navigate and search for a JAR file. · Updated to work with JDK 1.2 · Updated Chinese translations MyJarExplorer 1.4 – Explorer program for JAR files Jun 23, 2004 · Updated to work with JD System Requirements For MyJarExplorer: Operating System: Windows 7 / 8 / 8.1 / 10 Windows 7 / 8 / 8.1 / 10 Processor: Intel Core i5-4670 (3.4 GHz) or better. Intel Core i5-4670 (3.4 GHz) or better. Memory: 2 GB RAM 2 GB RAM Graphics: NVIDIA GeForce GTX 660 2GB or better, AMD Radeon 7870 2GB or better NVIDIA GeForce GTX 660 2GB or better, AMD Radeon 7870 2GB or better DirectX: Version 11 Version 11
https://fitenvitaalfriesland.nl/myjarexplorer-crack-patch-with-serial-key-free-3264bit/
CC-MAIN-2022-40
refinedweb
1,822
65.62
I would like help with the proper python script to field calculate a fields value from two separate fields plus a sequential number when there is more than 1 record with the same facility number. Basically build a expression where the fields rpsuid + facilityNumber + sequential number if more than two records with the same facility number. The field waterUtilityNodeIDPK I populated manually to show what I would like the result to look like. Please notice the record number with the facility number named Comm where there is only one record. In the IDPK field the value is 6705Comm with no sequential number since there is only one. Have you got a start on anything? I have some python code that does sequential numbering which works and I would think I need some type of if then statement to only add a sequential number at the end if more than one value . I am a novice to python and have taken some course work but programming doesn't come natural to me. I really struggle. Willing to to learn just need help and hints. I'd say this might be easiest to accomplish in two passes over your feature class using data access cursors, at least to make it understandable when you're learning Python. The data type 'dictionary' in Python is a great way to deal with tabular data, the first pass on your table can use an arcpy.da.SearchCursor to create a dictionary where the keys are your rpsuid numbers, and the value is a list of the features' ObjectIDs that have the same rpsuid. Then on the second pass with an arcpy.da.UpdateCursor, you can check the rpsuid against the dictionary's keys, which will return the list of ObjectIDs. Then find the ObjectID's position in the list and append it to that Feature's facility to ultimately get the final value for the idpk field. I'll leave it as a learning exercise But feel free to ask if you want some help with the code. How about something like: layer = 'layerName' # your layer or shapefile field0 = "rpsuid" # first field to concatenate field1 = "facilityNumber" # second field to concatenate field2 = "waterUtilityNodeIDPK" # field to update d = {} # dictionary for counting with arcpy.da.UpdateCursor(layer, [field0, field1, field2]) as rows: for row in rows: dictValue = "{}{}".format(row[0],row[1]) if dictValue not in d.keys(): d[dictValue] = 0 # insert key into dictionary and set value to 0 or 1 # 0 will start appending with '_1' and 1 will start with '_2' row[2] = dictValue # value to update field # print dictValue else: d[dictValue] += 1 #increment value in dictionary row[2] = "{}_{}".format(dictValue,d[dictValue]) # value to update field # print "{}_{}".format(dictValue,d[dictValue]) rows.updateRow(row) # print d You may be able to add an sql_clause in the UpdateCursor if you want a certain order. The sequential numbering is not exactly as you desire; that would require two passes. But this produces results that may be acceptable: 6705Water 6705Water_1 6705Sewer 6705Sewer_1 6705Sewer_2 6705Comm In the perfect scenario, your field would be sorted as in your example, so that replicant case are sequential. In such rare situations you can use... old = "" cnt = 0 def seq_count(val): global old global cnt if old == val: cnt += 1 ret = "{} {:04.0f}".format(val, cnt) else: cnt = 0 ret = "{} {:04.0f}".format(val, cnt) old = val return ret __esri_field_calculator_splitter__ seq_count(!Test!) Replacing the !Test! field with your field's name. Of course this is python etc. However... unless the table is sorted physically (not just hitting the sort ascending/descending option), then the above won't work, and you would have to build a dictionary to save your counts. So this is just an example and probably not a solution unless you want to sort your table on that field Using two passes, this provides the sequential numbering desired. layer = 'layerName' # your layer or shapefile field0 = "rpsuid" # first field to concatenate field1 = "facilityNumber" # second field to concatenate field2 = "waterUtilityNodeIDPK" # field to update d0 = {} # dictionary for counting (first pass) d1 = {} # dictionary for counting (second pass) # first pass - just count with arcpy.da.SearchCursor(layer, [field0, field1, field2]) as rows: for row in rows: dictValue = "{}{}".format(row[0],row[1]) if dictValue not in d0.keys(): d0[dictValue] = 1 # insert key into dictionary and set value to 1 else: d0[dictValue] += 1 # increment value in dictionary # second pass - update with arcpy.da.UpdateCursor(layer, [field0, field1, field2]) as rows: for row in rows: dictValue = "{}{}".format(row[0],row[1]) if dictValue not in d1.keys(): d1[dictValue] = 1 # insert key into dictionary and set value to 1 # check value in d0 from first pass if d0[dictValue] > 1: row[2] = "{}_{}".format(dictValue,d1[dictValue]) # value to update field else: row[2] = dictValue # value to update field else: d1[dictValue] += 1 #increment value in dictionary row[2] = "{}_{}".format(dictValue,d1[dictValue]) # value to update field rows.updateRow(row) print "Done" Results: 6705Water_1 6705Water_2 6705Sewer_1 6705Sewer_2 6705Sewer_3 6705Comm Randy- Appreciate your help with this. Correct me if I am wrong but the way the code was written would be appropriate if one was running the code from say idle or the command line, right? If I plugged this into the field calculator I wouldn't need the first line of code layer = 'layerName' # your layer or shapefile as well as line 4 field2= "waterUtilityNodeIDPK" # field to update , right? Randy's code isn't meant to run in the field calculator. You would have to turn it into a 'def' as in my example You can run the code inside ArcMap's python window, or by adding a few additional lines ("import arpy", path to layer, etc) you can run it frome idle or command line. It is not meant to run in field calculator.
https://community.esri.com/t5/python-questions/field-calculating-with-python-script-using-multiple-fields-and/m-p/488766
CC-MAIN-2021-21
refinedweb
973
61.06
Working With Multiple Cloud Providers (Part 1): Azure Functions Working With Multiple Cloud Providers (Part 1): Azure Functions Dive into the multi-cloud world with this Christmastime-themed problem in need of a solution that GCP and Azure can work together to fix. Join the DZone community and get the full member experience.Join For Free Discover a centralized approach to monitor your virtual infrastructure, on-premise IT environment, and cloud infrastructure – all on a single platform. Regular readers may have noticed that I’ve recently been writing a lot about two main cloud providers. I won’t link to all the articles, but if you’re interested, a quick search for either Azure or Google Cloud Platform will yield several results. Since it’s Christmastime, I thought I’d do something a bit different and try to combine them. This isn’t completely frivolous; both have advantages and disadvantages: GCP is very geared towards big data, whereas the Azure Service Fabric provides a lot of functionality that might fit well with a much smaller LOB app. So, what if we had the following scenario: Santa has to deliver presents to every child in the world in one night. Santa is only one man* and Google tells me there are 1.9B children in the world, so he contracts out a series of delivery drivers. There needs to be around 79M deliveries every hour, and let’s assume that each delivery driver can work 24 hours**. Each driver can deliver, say 100 deliveries per hour, which means we need around 790,000 drivers. Every delivery driver has an app that links to their depot; recording deliveries, schedules, etc. That would be a good app to write in, say, Xamarin, and maybe have an Azure service running it; here’s the obligatory box diagram: The service might talk to the service bus, might control stock, send e-mails, all kinds of LOB jobs. Now, I’m not saying for a second that Azure can’t cope with this, but what if we suddenly want all of these instances to feed metrics into a single data store. There are 190*** countries in the world; if each has a depot, then there’s ~416K messages/hour going into each Azure service. But there’s 79M/hour going into a single DB. Because it’s Christmas, let assume that Azure can’t cope with this, or let’s say that GCP is a little cheaper at this scale; or that we have some Hadoop jobs that we’d like to use on the data. In theory, we can link these systems; which might look something like this: So, we have multiple instances of the Azure architecture, and they all feed into a single GCP service. Disclaimer: At no point during this post will I attempt to publish 79M records/hour to GCP BigQuery. Neither will any Xamarin code be written or demonstrated – you have to use your imagination for that bit. Proof of Concept Given the disclaimer I’ve just made, calling this a proof of concept seems a little disingenuous; but let’s imagine that we know that the volumes aren’t a problem and concentrate on how to link these together. Azure Service Let’s start with the Azure Service. We’ll create an Azure function that accepts a HTTP message, updates a DB and then posts a message to Google PubSub. Storage For the purpose of this post, let’s store our individual instance data in Azure Table Storage. I might come back at a later date and work out how and whether it would make sense to use CosmosDB instead. We’ll set-up a new table called Delivery: Azure Function Now we have somewhere to store the data, let’s create an Azure Function App that updates it. In this example, we’ll create a new Function App from VS: In order to test this locally, change local.settings.json to point to your storage location described above. And here’s the code to update the table: public static class DeliveryComplete { [FunctionName("DeliveryComplete")] public static HttpResponseMessage Run( [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequestMessage req, TraceWriter log, [Table("Delivery", Connection = "santa_azure_table_storage")] ICollector < TableItem > outputTable) { log.Info("C# HTTP trigger function processed a request."); // parse query parameter string childName = req.GetQueryNameValuePairs() .FirstOrDefault(q => string.Compare(q.Key, "childName", true) == 0) .Value; string present = req.GetQueryNameValuePairs() .FirstOrDefault(q => string.Compare(q.Key, "present", true) == 0) .Value; var item = new TableItem() { childName = childName, present = present, RowKey = childName, PartitionKey = childName.First().ToString() }; outputTable.Add(item); return req.CreateResponse(HttpStatusCode.OK); } public class TableItem: TableEntity { public string childName { get; set; } public string present { get; set; } } } Testing There are two ways to test this; the first is to just press F5; that will launch the function as a local service, and you can use PostMan or similar to test it; the alternative is to deploy to the cloud. If you choose the latter, then your local.settings.json will not come with you, so you’ll need to add an app setting: Remember to save this setting, otherwise, you’ll get an error saying that it can’t find your setting, and you won’t be able to work out why – ask me how I know! Now, if you run a test … You should be able to see your table updated (shown here using Storage Explorer): Summary We now have a working Azure function that updates a storage table with some basic information. In the next post, we’ll create a GCP service that pipes all this information into BigQuery and then link the two systems. Footnotes * Remember, all the guys in Santa suits are just helpers. ** That brandy you leave out really hits the spot! *** I just Googled this – it seems a bit low to me, too. Learn how to auto-discover your containers and monitor their performance, capture Docker host and container metrics to allocate host resources, and provision containers. Published at DZone with permission of Paul Michaels , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/working-with-multiple-cloud-providers-9-part-1-azu?fromrel=true
CC-MAIN-2018-51
refinedweb
1,032
60.55
At the moment, I'm working on a package to control a Sanyo security vcr via RS-232C. Not all controllable devices are VCRs, but it's all I'm interested in. The protocol is called SSSP (Sanyo Serial Security Protocol). These devices can also be controlled by RS-485. I've got no plans on implementing that functionality. I'd really like this to end up on CPAN when I'm done. Any suggestions for a namespace that makes sense giving the particulars? Logically something like the following seems ok... but ugly. is ::Serial necessary since SSSP is a Serial Protocol? or are you trying to insinuate that RS-232C is serial and rs-485 isn't? 485 is cool stuff too. :D so ... either drop the ::Serial or use ::RS232 or something. Thunder fish Shocky knifefish TBD Electric eels were invented at the same time as electricity Before electricity was invented, electric eels had to stun with gas Results (292 votes). Check out past polls.
http://www.perlmonks.org/index.pl?node=509611
CC-MAIN-2017-09
refinedweb
167
68.67
Hi to all of you, I will have a project base on the Mega Pro 2560 3.3 V (a clone of the Mega 2560 by Sparkfun) where a battery will be used. I need to measure that battery’s voltage. I came across the secretVolmeter and found out the proper code for the MEGA2560 here . The code is:) ; #else ADMUX = _BV(REFS0) | _BV(MUX3) | _BV(MUX2) | _BV(MUX1); #endif delay } That works fine… until I use analogRead(), then I get -1 (in fact the ADC value is 0) :~. Does some one know why readVcc() goes wrong after calling analogRead() ? Must be one of the registers that’s wrong afterwards, because if I reset the board, the readVcc() works fine. AnalgoRead() seems to set ADCSRB, but stragely readVcc() does not, and that’s strange because other versions of readVcc do. Thanks, Gilles Plante
https://forum.arduino.cc/t/the-secretvoltmeter-again-for-mega-2560/201612
CC-MAIN-2021-31
refinedweb
143
81.63
Updated to mid-August 2011 ?xml:namespace> Asian market review by Bee Lin Chow, ICIS pricing Polypropylene (PP) prices in ?xml:namespace> Concerns of further credit tightening measures in However, fears of a double-dip recession triggered a downward correction in PP prices in mid-August. The average weekly prices of injection and yarn grade PP, isotactic PP and biaxially oriented PP prices fell by 0.3–5.7% to $1,560–1,595/tonne CFR (cost & freight) China market review by Amy Yu, ICIS pricing Domestic polypropylene (PP) yarn prices dropped to yuan (CNY) 11,550-11,900/tonne at the end of June, hit their lowest level in early July, then rose to CNY12,550-12,900/tonne in early August. Prices fell to CNY12,200-12,500/tonne in mid-August because of sluggish transactions. Downstream users suffered from fund pressures as a result of the Chinese government’s tight monetary policy. This, together with end-users’ lower operating rates caused by a power shortage in coastal areas, led to falling PP yarn prices, with values reached their lowest level in late June. Domestic supply decreased in July as many units were undergoing maintenance. Petrochemical giants’ sales branches lifted their ex-works (EXW) prices. This prompted optimism among participants about the outlook, and values rose sharply until early August. However, when crude oil prices plunged in early August, participants retreated to the sidelines. Transactions decreased and prices progressively softened until mid-August. European market review by Stephanie Wilson, ICIS pricing Falling upstream propylene values, a wide gap between contract and spot values and hesitant buying interest enabled European polypropylene (PP) buyers to secure reductions of €190-210/tonne in the domestic market between mid-May and mid-August. Expectations of lower future prices drove consumers to the sidelines of the market. This, coupled with firm cracker margins, which made integrated producers reluctant to cut operating rates for much of May-July, created abundant supply. This was compounded by a surge of aggressively priced imported material, mostly originating from the US market review by Michelle Klump, ICIS pricing In the period from mid-May to mid-August, US polypropylene (PP) prices saw large swings along with the price of propylene, ultimately ending with a net decrease of 10 cents/lb ($220/tonne) over the three-month period. In all cases, PP prices followed monomer pricing, with a 9 cent/lb increase in May, followed by a 15 cent/lb decrease in June and a 4 cent/lb decrease in July. High prices, particularly compared with low Asia and Middle East prices, contributed to weak demand, and had some buyers considering importing material from Asia. The By August, sources were expecting between a rollover to an increase of 2 cents/lb, with market participants predicting small price fluctuations for the balance of the year. Latin American market review by George Martin, ICIS pricing The May-August period featured a soft and gradual decline in polypropylene (PP) prices. Currency fluctuations injected some volatility into otherwise steadily declining prices. Values followed crude oil prices, which came down from levels above $112/bbl to the low $80s/bbl in early August. The steep decline that started at the end of July had yet to be reflected in PP prices. In In In Prices in Venezuelan PP prices were steady during this quarter. But it was thought that imports entering the country might produce a sharp increase in the short term.
http://www.icis.com/resources/news/2007/11/06/9076429/polypropylene-pp-prices-and-pricing-information/
CC-MAIN-2014-10
refinedweb
578
50.67
Getting Started 1. Requirements 1.1 Web Browsers Ext JS 4 supports all major web browsers, from Internet Explorer 6 to the latest version of Google Chrome. During development, however, we recommend that you choose one of the following browsers for the best debugging experience: - Google Chrome 10+ - Apple Safari 5+ - Mozilla Firefox 4+ with the Firebug Web Development Plugin This tutorial assumes you are using the latest version of Google Chrome. If you don't already have Chrome take a moment to install it, and familiarize yourself with the Chrome Developer Tools. 1.2 Web Server Even though a local web server is not a requirement to use Ext JS 4, it is still highly recommended that you develop with one, since XHR over local file:// protocol has cross origin restriction on most browsers. If you don't already have a local web server it is recommended that you download and install Apache HTTP Server. - Instructions for installing Apache on Windows - Instructions for installing Apache on Linux - Mac OS X comes with a build in apache installation which you can enable by navigating to "System Preferences > Sharing" and checking the box next to "Web Sharing". Once you have installed or enabled Apache you can verify that it is running by navigating to localhost in your browser. You should see a startup page indicating that Apache HTTP Server was installed successfully and is running. 1.3. Ext JS 4 SDK Download Ext JS 4 SDK. Unzip the package to a new directory called "extjs" within your web root directory. If you aren't sure where your web root directory is, consult the docs for your web server. Your web root directory may vary depending on your operating system, but if you are using Apache it is typically located at: - Windows - "C:\Program Files\Apache Software Foundation\Apache2.2\htdocs" - Linux - "/var/www/" - Mac OS X - "/Library/WebServer/Documents/" Once you have finished installing Apache navigate to in your browser. If an Ext JS 4 welcome page appears, you are all set. 2. Application Structure 2.1 Basic Structure Although not mandatory, all suggestions listed below should be considered as best-practice guidelines to keep your application well organized, extensible and maintainable. The following is the recommended directory structure for an Ext JS application: - appname - app - namespace - Class1.js - Class2.js - ... - extjs - resources - css - images - ... - app.js - index.html appnameis a directory that contains all your application's source files appcontains all your classes, the naming style of which should follow the convention listed in the Class System guide extjscontains the Ext JS 4 SDK files resourcescontains additional CSS and image files which are responsible for the look and feel of the application, as well as other static resources (XML, JSON, etc.) index.htmlis the entry-point HTML document app.jscontains your application's logic Don't worry about creating all those directories at the moment. For now lets just focus on creating the minimum amount of code necessary to get an Ext JS application up and running. To do this we'll create a basic "hello world" Ext JS application called "Hello Ext". First, create the following directory and files in your web root directory. - helloext - app.js - index.html Then unzip the Ext JS 4 SDK to a directory named extjs in the helloext directory A typical Ext JS application is contained in a single HTML document - index.html. Open index.html and insert the following html code: <html> <head> <title>Hello Ext</title> <link rel="stylesheet" type="text/css" href="extjs/resources/css/ext-all.css"> <script type="text/javascript" src="extjs/ext-debug.js"></script> <script type="text/javascript" src="app.js"></script> </head> <body></body> </html> extjs/resources/css/ext-all.csscontains all styling information needed for the whole framework extjs/ext-debug.jscontains a minimal set of Ext JS 4 Core library classes app.jswill contain your application code Now you're ready to write your application code. Open app.js and insert the following JavaScript code: Ext.application({ name: 'HelloExt', launch: function() { Ext.create('Ext.container.Viewport', { layout: 'fit', items: [ { title: 'Hello Ext', html : 'Hello! Welcome to Ext JS.' } ] }); } }); Now open your browser and navigate to. You should see a panel with a title bar containing the text "Hello Ext" and the "welcome" message in the panel's body area. 2.2 Dynamic Loading Open the Chrome Developer Tools and click on the Console option. Now refresh the Hello Ext application. You should see a warning in the console that looks like this: Ext JS 4 comes with a system for dynamically loading only the JavaScript resources necessary to run your app. In our example Ext.create creates an instance of Ext.container.Viewport. When Ext.create is called the loader will first check to see if Ext.container.Viewport has been defined. If it is undefined the loader will try to load the JavaScript file that contains the code for Ext.container.Viewport before instantiating the viewport object. In our example the Viewport.js file gets loaded successfully, but the loader detects that files are being loaded in a less-than optimal manner. Since we are loading the Viewport.js file only when an instance of Ext.container.Viewport is requested, execution of the code is stopped until that file has been loaded successfully, causing a short delay. This delay would be compounded if we had several calls to Ext.create, because the application would wait for each file to load before requesting the next one. To fix this, we can add this one line of code above the call to Ext.application: Ext.require('Ext.container.Viewport'); This will ensure that the file containing the code for Ext.container.Viewport is loaded before the application runs. You should no longer see the Ext.Loader warning when you refresh the page. 2.3 Library Inclusion methods When you unzip the Ext JS 4 download, you will see the following files: ext-debug.js- This file is only for use during development. It provides the minimum number of core Ext JS classes needed to get up and running. Any additional classes should be dynamically loaded as separate files as demonstrated above. ext.js- same as ext-debug.jsbut minified for use in production. Meant to be used in combination with your application's app-all.jsfile. (see section 3) ext-all-debug.js- This file contains the entire Ext JS library. This can be helpful for shortening your initial learning curve, however ext-debug.jsis preferred in most cases for actual application development. ext-all.js- This is a minified version of ext-all-debug.jsthat can be used in production environments, however, it is not recommended since most applications will not make use of all the classes that it contains. Instead it is recommended that you create a custom build for your production environment as described in section 3. 3. Deployment The newly-introduced Sencha SDK Tools (download here) makes deployment of any Ext JS 4 application easier than ever. The tools allow you to generate a manifest of all JavaScript dependencies in the form of a JSB3 (JSBuilder file format) file, and create a custom build containing only the code that your application needs. Once you've installed the SDK Tools, open a terminal window and navigate into your application's directory. cd path/to/web/root/helloext From here you only need to run a couple of simple commands. The first one generates a JSB3 file: sencha create jsb -a index.html -p app.jsb3 For applications built on top of a dynamic server-side language like PHP, Ruby, ASP, etc., you can simply replace index.html with the actual URL of your application: sencha create jsb -a -p app.jsb3 This scans your index.html file for all framework and application files that are actually used by the app, and then creates a JSB file called app.jsb3. Generating the JSB3 first gives us a chance to modify the generated app.jsb3 before building - this can be helpful if you have custom resources to copy, but in most cases we can immediately proceed to build the application with the second command: sencha build -p app.jsb3 -d . This creates 2 files based on the JSB3 file: all-classes.js- This file contains all of your application's classes. It is not minified so is very useful for debugging problems with your built application. In our example this file is empty because our "Hello Ext" application does not contain any classes. app-all.js- This file is a minimized build of your application plus all of the Ext JS classes required to run it. It is the minified and production-ready version of all-classes.js + app.js. An Ext JS application will need a separate index.html for the production version of the app. You will typically handle this in your build process or server side logic, but for now let's just create a new file in the helloext directory called index-prod.html: <html> <head> <title>Hello Ext</title> <link rel="stylesheet" type="text/css" href="extjs/resources/css/ext-all.css"> <script type="text/javascript" src="extjs/ext.js"></script> <script type="text/javascript" src="app-all.js"></script> </head> <body></body> </html> Notice that ext-debug.js has been replaced with ext.js, and app.js has been replaced with app-all.js. If you navigate to in your browser, you should see the production version of the "Hello Ext" application.
http://docs.sencha.com/extjs/4.2.1/?_escaped_fragment_=/guide/getting_started
CC-MAIN-2017-30
refinedweb
1,595
57.57
Today I want to share with you some Java sample code to reset the value of a field when another field is modified. In the example below I have implemented a little piece of logic to reset the WORKORDER.SUPEVISOR field when the WORKORDER.OWNERGROUP is modified. Here is the Java code. package cust.psdi.app.workorder; import psdi.mbo.*; import psdi.util.*; import java.rmi.*; /** * Custom field class to reset the supervisor when the Owner Group is changed */ public class FldWOOwnerGroup extends psdi.app.workorder.FldWOOwnerGroup { public FldWOOwnerGroup(MboValue mbv) throws MXException { super(mbv); } public void action() throws MXException, RemoteException { if (!getMboValue().isNull() && !getMboValue().equals(getMboValue().getPreviousValue())) { getMboValue("SUPERVISOR").setValueNull(NOACCESSCHECK|NOVALIDATION_AND_NOACTION); } super.action(); } } Deploy the FldWOOwnerGroup.class file in your Maximo source tree and attach it on WORKORDER.OWNERGROUP field using the Database Configuration application. I didn't think you could name the extending class the same as the superclass. Thanks for sharing. Sure you can. That's my personal best practice. I like to use the same name as the superclass and the same package name with a 'cust' prefix. I would rather suggest to use a prefix before class name ex: "CustomFldWOOwnerGroup", this is the same strategy ISs use. why not choose automation script ? Maybe because you are on TPAE 7.1? Hi there, do you know how do do this using automation script? Bruno, thanks for your posts on MAXIMO customisation and Development. They are very useful for me as a beginner. Same as DuleyFacts, I would like to know how we can do that using automation script? This comment has been removed by the author. This comment has been removed by the author. Automation script. 1) Create attribute launchpoint on attribute WORKORDER.OWNERGROUP. 2) Add an OUT variable: supervisor 3) assign to it the value: supervisor 4) Jython script will be look like this: if ownergroup != None & ownergroup != '': supervisor=None Automation script will not start, while you not CHANGE the value of the OWNERGROUP field, so we don't need to check the previous value This comment has been removed by the author. This comment has been removed by the author.
http://maximodev.blogspot.com/2012/10/reset-value-field-when-another-changed-java.html
CC-MAIN-2018-09
refinedweb
357
51.55
08 February 2011 00:08 [Source: ICIS news] HOUSTON (ICIS)--FMC saw a net loss of $53.5m (€39.2m) in the 2010 fourth quarter, the US soda ash producer said on Monday, down from a $62.1m profit in the year-earlier period amid restructuring charges related to the shutdown of the company’s phosphates business in ?xml:namespace> Sales for FMC were $810.5m in the 2010 fourth quarter, up 12% from $722.1m in the 2009 fourth quarter. Revenue in FMC’s industrial chemicals segment increased 5% to $273.1m, as volumes grew within soda ash, particularly for export. Industrial chemicals segment earnings declined 13% to $28.6m, as the sales gain was offset by planned maintenance outages, including the successful fourth-quarter completion of boiler repair at the company’s Looking forward, however, the company said industrial earnings would likely increase 10% year on year in the 2011 first quarter, driven by volume growth and higher selling prices. In other segments, revenue in agricultural products surged 18% to $1.24bn, while revenue in specialty chemicals gained 10% to $824.5m because of “robust demand recovery” in lithium markets. For the full-year 2010, revenues increased 10% to $3.12bn, while net income fell 25% to $172.5m as a result of higher restructuring charges. ($1 = €0.74)
http://www.icis.com/Articles/2011/02/08/9433037/us-soda-ash-firm-fmc-swings-to-53.5m-q4-loss-on-restructuring.html
CC-MAIN-2013-48
refinedweb
221
58.79