Learn Haskell in 10 minutes

If you need multiple I/O actions in one expression, you can use a do block. Actions are separated by semicolons.

Prelude> do { putStr "2 + 2 = " ; print (2 + 2) }
2 + 2 = 4
Prelude> do { putStrLn "ABCDE" ; putStrLn "12345" }
ABCDE
12345

Reading can be done with getLine (which gives back a String) or readLn (which gives back whatever type of value you want). The <- symbol is used to assign a name to the result of an I/O action.

Prelude> do { n <- readLn ; print (n^2) }

The first non-space character after do is special. In this case, it's the p from putStrLn. Every line that starts in the same column as that p is another statement in the do block. If you indent more, it's part of the previous statement. If you indent less, it ends the do block. This is called "layout", and Haskell uses it to avoid making you put in statement terminators and braces all the time. (The then and else phrases have to be indented for this reason: if they started in the same column, they'd be separate statements, which is wrong.)

(Note: do not indent with tabs if you're using layout. It technically still works if your tabs are 8 spaces wide, but it's a bad idea. Also, don't use proportional fonts -- which apparently some people do, even when programming!)

4 Simple types

So far, not a single type declaration has been mentioned. That's because Haskell does type inference. You generally don't have to declare types unless you want to. If you do want to declare types, you use :: to do it.

Prelude> 5 :: Int
5
Prelude> 5 :: Double
5.0
Prelude> :t 'X'
'X' :: Char
Prelude> :t "Hello, Haskell"
"Hello, Haskell" :: [Char]

(In case you noticed, [Char] is another way of saying String. See the section on lists later.) Things get more interesting for numbers.

Prelude> :t 42
42 :: (Num t) => t
Prelude> :t 42.0
42.0 :: (Fractional t) => t
Prelude> :t gcd 15 20
gcd 15 20 :: (Integral t) => t

These types use "type classes."
They mean:

- 42 can be used as any numeric type. (This is why I was able to declare 5 as either an Int or a Double earlier.)
- 42.0 can be any fractional type, but not an integral type.
- gcd 15 20 (which is a function call, incidentally) can be any integral type, but not a fractional type.

There are five numeric types in the Haskell "prelude" (the part of the library you get without having to import anything):

- Int is an integer with at least 30 bits of precision.
- Integer is an integer with unlimited precision.
- Float is a single precision floating point number.
- Double is a double precision floating point number.
- Rational is a fraction type, with no rounding error.

All five are instances of the Num type class. The first two are instances of Integral, and the last three are instances of Fractional. Putting it all together,

Prelude> gcd 42 35 :: Int
7
Prelude> gcd 42 35 :: Double
<interactive>:1:0:
    No instance for (Integral Double)

The final type worth mentioning here is (), pronounced "unit." It only has one value, also written as () and pronounced "unit."

Prelude> ()
()
Prelude> :t ()
() :: ()

You can think of this as similar to the void keyword in C family languages. You can return () from an I/O action if you don't want to return anything.

The : operator appends an item to the beginning of a list. (It is Haskell's version of the cons function in the Lisp family of languages.) zip is a library function that turns two lists into a list of tuples.

Prelude> snd (1, 2)
2
Prelude> map fst [(1, 2), (3, 4), (5, 6)]
[1,3,5]

Also see how to work on lists.

6 Function definitions

We wrote a definition of an IO action earlier, called main:

main = do putStrLn "What is 2 + 2?"
          x <- readLn
          if x == 4
              then putStrLn "You're right!"
              else putStrLn "You're wrong!"

Now, let's supplement it by actually writing a function definition and call it factorial. I'm also adding a module header, which is good form. With the definition in place, you can call factorial 5 without needing parentheses. Now ask ghci for the type.
$ ghci Test.hs
<< GHCi banner >>
Ok, modules loaded: Main.
Prelude Main> :t factorial
factorial :: (Num a) => a -> a

Function types are written with the argument type, then ->, then the result type. (This also carries the Num type class constraint.)

The let expression defines temporary names. (This is using layout again. You could use {braces}, and separate the names with semicolons, if you prefer.)

classify age = case age of 0 -> "newborn"
                           1 -> "infant"
                           2 -> "toddler"
                           _ -> "senior citizen"

The case expression does a multi-way branch. The special label _ means "anything else". The import says to use code from Data.Map and that it will be prefixed by M. (That's necessary because some of the functions have the same names as functions from the prelude. Most libraries don't need the as part.)
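The factorial definition itself did not survive in this copy of the tutorial. A minimal sketch of what such a Test.hs might look like (the original's exact code may differ; note that modern GHC infers (Eq a, Num a) => a -> a for this, while the (Num a) type shown above reflects an older Prelude in which Eq was a superclass of Num):

```haskell
-- Test.hs -- a sketch; the original tutorial's exact definition may differ
module Main where

factorial n = if n == 0 then 1 else n * factorial (n - 1)

main = print (factorial 5)
```

Loading this in ghci lets you evaluate factorial 5 directly at the prompt, as described above.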
http://www.haskell.org/haskellwiki/Learn_Haskell_in_10_minutes
crawl-002
refinedweb
834
75.1
Recently I had a nasty worm/rootkit problem, and naturally I wanted to know what it changed in my system. So I started looking for a tool to detect registry changes: some simple tool to dump the complete registry content to a text file before and after infection, so that a simple text diff would show me the changes quickly. I was not very lucky, though. All the registry tools I found used the Win32 API to get their data, which that clever rootkit redirected to itself and thus stayed hidden. As I later found out, malware doesn't even need to be that clever to hide things in the registry from the standard API.

So now I had physically clean registry files from a system restore point and dirty ones from my infected system. And I didn't stop poking in the hives until I came up with a simple tool to dump and compare their real contents in a simple text format. I also needed the full registry path at each entry, so that when I run a text diff on the dumps I can see where each change happened.

NT/XP registry files (binary hives, not textual .reg files) are actually very simple. They are just a bunch of 4K blocks, where each block contains variable-sized records. Each record starts with the usual 4-byte size and a 2-byte type. And that's about it; that's the MS registry hive format. Oh, and I nearly forgot: the first 1K of the first block is the hive header, with no useful info as far as I know.

There are two basic record types: one for keys and one for values. What's nice is that MS decided to use human-readable 2-character strings in the type field I mentioned earlier, so if you open a hive in a hex viewer you can clearly see "nk" for a key record and "vk" for a value record. And here is the actual code to dump registry hives. I used portable C code, so it should be compilable on Unix too without much change.
#include <string.h>
#include <stdio.h>
#include <stdlib.h>

struct offsets {
    long block_size;
    char block_type[2]; // "lf" "il" "ri"
    short count;
    long first;
    long hash;
};

struct key_block {
    long block_size;
    char block_type[2]; // "nk"
    char dummya[17];
    int subkey_count;
    char dummyb[4];
    int subkeys;
    char dummyc[4];
    int value_count;
    int offsets;
    char dummyd[28];
    short len;
    short du;
    char name;
};

struct value_block {
    long block_size;
    char block_type[2]; // "vk"
    short name_len;
    long size;
    long offset;
    long value_type;
    short flags;
    short dummy;
    char name;
};

void walk(char* path, key_block* key)
{
    static char* root = (char*)key - 0x20, *full = path;

    // add current key name to printed path
    memcpy(path++, "/", 2);
    memcpy(path, &key->name, key->len);
    path += key->len;

    // print all contained values
    for (int o = 0; o < key->value_count; o++) {
        value_block* val = (value_block*)(((int*)(key->offsets + root + 4))[o] + root);

        // we skip nodes without values
        if (!val->offset) continue;

        // data are usually in separate blocks without types
        char* data = root + val->offset + 4;

        // but for small values MS added an optimization: if bit 31 is set,
        // the data are contained within the key itself to save space
        if (val->size & 1 << 31)
            data = (char*)&val->offset;

        // notice that we use memcpy for key/value names everywhere instead
        // of strcat; the reason is that malware/viruses often write
        // non-nul-terminated strings to hide from the Win32 API
        *path = '/';
        if (!val->name_len) *path = ' ';
        memcpy(path + 1, &val->name, val->name_len);
        path[val->name_len + 1] = 0;

        printf("%s [%d] = ", full, val->value_type);
        for (int i = 0; i < (val->size & 0xffff); i++) {
            // print types 1 and 7 as unicode strings
            if (val->value_type == 1 || val->value_type == 7) {
                if (data[i]) putchar(data[i]);
            // and dump the rest as binary data
            } else {
                printf("%02X", data[i]);
            }
        }
        printf("\n");
    }

    // for simplicity we can imagine keys as directories in a filesystem and
    // values as files; since we already dumped the values for this directory,
    // we now iterate through the subdirectories in the same way
    offsets* item = (offsets*)(root + key->subkeys);
    for (int i = 0; i < item->count; i++) {
        // in case of too many subkeys this list contains just other lists
        offsets* subitem = (offsets*)((&item->first)[i] + root);

        // usual directory traversal
        if (item->block_type[1] == 'f') {
            // for now we skip hash codes (used by regedit for faster search)
            walk(path, (key_block*)((&item->first)[i * 2] + root));
        } else for (int j = 0; j < subitem->count; j++) {
            // also MS chose to skip hashes altogether in this case
            walk(path, (key_block*)((&subitem->first)[j] + root));
        }
    }
}

int main(int argc, char** argv)
{
    char path[0x1000] = {0}, *data;
    FILE* f;
    int size;

    if (argc < 2 || !(f = fopen(argv[1], "rb"))) return printf("hive path err");
    fseek(f, 0, SEEK_END);
    if (!(size = ftell(f))) return printf("empty file");
    rewind(f);
    data = (char*)malloc(size);
    fread(data, size, 1, f);
    fclose(f);

    // we just skip the 1K header and start walking the root key tree
    walk(path, (key_block*)(data + 0x1020));
    free(data);
    return 0;
}

Remember that it will dump values you normally don't even have access to, so be careful. It's perfect for dumping the hives before and after a software installation and comparing the changes with a text diff (for example, the command-line version from UnixUtils is great).
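To make the record layout described earlier concrete, here is a tiny self-contained sketch (not the dumper itself; the buffer is fabricated for illustration) of reading the 4-byte size / 2-char type header that every hive record starts with:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Minimal illustration of the hive record header described above: each
   record starts with a 4-byte size followed by a 2-char type tag such as
   "nk" (key) or "vk" (value). This is a toy, not a hive parser. */

struct rec_header {
    int  size;     /* 4-byte record size field */
    char type[2];  /* "nk", "vk", "lf", ... */
};

/* Copy the 2-char tag of the record at offset `off` into `out`
   (nul-terminated), using memcpy rather than string functions, for the
   same reason the article gives: the tags are not nul-terminated. */
void record_type(const char *buf, int off, char out[3])
{
    out[0] = buf[off + 4];  /* tag follows the 4-byte size field */
    out[1] = buf[off + 5];
    out[2] = 0;
}
```

Opening a real hive in a hex viewer and looking just past each size field shows exactly these two-character tags.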
http://www.codeproject.com/KB/recipes/RegistryDumper.aspx
crawl-002
refinedweb
867
55.98
11 October 2012 11:09 [Source: ICIS news]

SINGAPORE (ICIS)--Chinese oil majors Sinopec and PetroChina are expected to refine a total of about 30.5m tonnes of crude in October, with daily throughput flat with September's level of about 984,000 tonnes, sources from both companies said on Thursday.

Sinopec's crude throughput target for October is 18.15m-18.25m tonnes, with its daily throughput up by 0.43% month on month to around 587,000 tonnes. The increase comes mainly from its 100,000 bbl/day … In addition, its 160,000 bbl/day Qilu refinery was shut on 8 October for a 25-day maintenance.

PetroChina plans to process 12.25m-12.35m tonnes of crude in October, with its daily throughput down by 0.6% to about 397,000 tonnes. The decrease is primarily because its 200,000 bbl/day joint venture refinery in Dalian and its 120,000 bbl/day Daqing refinery will both cut operating rates this month amid the weak market, said a company source.
http://www.icis.com/Articles/2012/10/11/9602966/sinopec-petrochina-keep-october-daily-crude-throughput-unchanged.html
CC-MAIN-2014-41
refinedweb
176
74.59
Linus Torvalds's Double Pointer Problem

In 2012, Linus Torvalds presented an 'intuitive' way to use double pointers to easily remove a node from a linked list. But how intuitive can double pointers really be? I present a way of visualizing pointers that I do not see presented often, but that works very well for me when trying to figure out the jungle of addresses, dereferences, and other operations a complex C program can use.

Explanation by video

This problem has been discussed by Philip Buuck in this YouTube video. I recommend you watch it if you need a more detailed explanation.

Explanation by example

If you like learning from examples, I have prepared one. Let's say that we have the following singly-linked list, represented as follows (click to enlarge):

We want to delete the node with the value 8.

Code

Here is the simple code that does this:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct node_t {
    int value;
    struct node_t *next;
} node_t;

node_t* create_list()
{
    int test_values[] = { 28, 1, 8, 70, 56 };
    node_t *new_node, *head = NULL;
    int i;

    for (i = 0; i < 5; i++) {
        new_node = (node_t*)malloc(sizeof(struct node_t));
        assert(new_node);
        new_node->value = test_values[i];
        new_node->next = head;
        head = new_node;
    }
    return head;
}

void print_list(const node_t *head)
{
    for (; head; head = head->next)
        printf("%d ", head->value);
    printf("\n");
}

void destroy_list(node_t **head)
{
    node_t *next;
    while (*head) {
        next = (*head)->next;
        free(*head);
        *head = next;
    }
}

void remove_from_list(int val, node_t **head)
{
    node_t *del, **p = head;

    while (*p && (**p).value != val)
        p = &(*p)->next;      // alternatively: p = &(**p).next

    if (*p) {                 // value was found (note: *p, not p -- p itself is never NULL here)
        del = *p;
        *p = del->next;
        del->next = NULL;     // not necessary in this case
        free(del);
    }
}

int main(int argc, char **argv)
{
    node_t *head;

    head = create_list();
    print_list(head);
    remove_from_list(8, &head);
    print_list(head);
    destroy_list(&head);
    assert(head == NULL);
    return EXIT_SUCCESS;
}

If you compile and run this code you'll get:

56 70 8 1 28
56 70 1 28

Explanation of the code

Let's create a 'double' pointer **p to the *head pointer. Now let's analyze how void remove_from_list(int val, node_t **head) works. It iterates over the list as long as *p && (**p).value != val. In this example the given list contains the value that we want to delete (which is 8). (**p).value is 8, so we stop iterating. Note that *p points to the node_t *next field within the node_t that comes before the node_t we want to delete (which is **p).

Now let's assign the address of the element that we want to remove (del->value == 8) to the *del pointer. We need to fix the *p pointer so that **p points to the element one after the *del element that we are going to delete.

In the code above we call free(del), so it's not necessary to set del->next to NULL; but if we would like to return a pointer to the element 'detached' from the list instead of removing it completely, we would set del->next = NULL.

As @R. Martinho Fernandes pointed out in his answer, using a pointer to pointer as an argument in void push(struct node** head, int data) allows you to change the head pointer directly from within the push function instead of returning the new pointer.

There is yet another good example which shows why using a pointer to pointer instead of a single pointer may shorten, simplify and speed up your code. You asked about adding a new node to the list, which typically doesn't need a pointer-to-pointer, in contrast to removing a node from a singly-linked list. You can implement removing a node from the list without a pointer-to-pointer, but it is suboptimal. I described the details here. I recommend you also watch this YouTube video, which addresses the problem.

BTW: If you count Linus Torvalds's opinion, you would do better to learn how to use pointer-to-pointer. ;-)

Linus Torvalds: (...)
Other resources that may be helpful:

I like this "real world" code example of pointer to pointer usage, in Git 2.0, commit 7b1004b.

Chris points out in the comments the 2016 video "Linus Torvalds's Double Pointer Problem" by Philip Buuck. kumar points out in the comments the blog post "Linus on Understanding Pointers", where Grisha Trubetskoy explains:
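For contrast with the pointer-to-pointer version above, here is a single-pointer removal sketch (my own addition, not from the original answers). It needs a special case for the head, which is exactly the asymmetry the double-pointer version avoids:

```c
#include <stdlib.h>

/* Same node type as in the article's example. */
typedef struct node_t {
    int value;
    struct node_t *next;
} node_t;

/* Single-pointer removal: the head must be handled as a special case.
   Returns the (possibly new) head of the list. */
node_t* remove_from_list_single(int val, node_t *head)
{
    node_t *prev = NULL, *cur = head;

    while (cur && cur->value != val) {
        prev = cur;
        cur = cur->next;
    }
    if (!cur)                  /* value not found: list unchanged */
        return head;
    if (!prev)                 /* removing the head itself */
        head = cur->next;
    else                       /* removing an interior/tail node */
        prev->next = cur->next;
    free(cur);
    return head;
}
```

Note the two branches (!prev vs. prev) that the pointer-to-pointer version collapses into the single statement *p = del->next.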
https://dev-videos.com/videos/GiAhUYCUDVc/Linus-Torvaldss-Double-Pointer-Problem
CC-MAIN-2018-26
refinedweb
730
56.49
/*
 * TreeStrategy.java
 */
import java.lang.reflect.Array;
import java.util.Map;

import org.simpleframework.xml.stream.Node;
import org.simpleframework.xml.stream.NodeMap;

/**
 * The <code>TreeStrategy</code> object is used to provide a simple
 * strategy for handling object graphs in a tree structure. This does
 * not resolve cycles in the object graph. This will make use of the
 * specified class attribute to resolve the class to use for a given
 * element during the deserialization process. For the serialization
 * process the "class" attribute will be added to the element specified.
 * If there is a need to use an attribute name other than "class" then
 * the name of the attribute to use can be specified.
 *
 * @author Niall Gallagher
 * @see org.simpleframework.xml.strategy.CycleStrategy
 */
public class TreeStrategy implements Strategy {

   /**
    * This is the loader that is used to load the specified class.
    */
   private final Loader loader;

   /**
    * This is the attribute that is used to determine an array size.
    */
   private final String length;

   /**
    * This is the attribute that is used to determine the real type.
    */
   private final String label;

   /**
    * Constructor for the <code>TreeStrategy</code> object. This
    * is used to create a strategy that can resolve and load class
    * objects for deserialization using a "class" attribute. Also
    * for serialization this will add the appropriate "class" value.
    */
   public TreeStrategy() {
      this(LABEL, LENGTH);
   }

   /**
    * Constructor for the <code>TreeStrategy</code> object. This
    * is used to create a strategy that can resolve and load class
    * objects for deserialization using the specified attribute.
    * The attribute value can be any legal XML attribute name.
    *
    * @param label this is the name of the attribute to use
    * @param length this is used to determine the array length
    */
   public TreeStrategy(String label, String length) {
      this.loader = new Loader();
      this.length = length;
      this.label = label;
   }

   /**
    * This is used to resolve and load a class for the given element.
    * Resolution of the class to use is done by inspecting the
    * XML element provided. If there is a "class" attribute on the
    * element then its value is used to resolve the class to use.
    * If no such attribute exists on the element this returns null.
    *
    * @param type this is the type of the XML element expected
    * @param node this is the element used to resolve an override
    * @param map this is used to maintain contextual information
    *
    * @return returns the class that should be used for the object
    *
    * @throws Exception thrown if the class cannot be resolved
    */
   public Value read(Type type, NodeMap node, Map map) throws Exception {
      Class actual = readValue(type, node);
      Class expect = type.getType();

      if(expect.isArray()) {
         return readArray(actual, node);
      }
      if(expect != actual) {
         return new ObjectValue(actual);
      }
      return null;
   }

   /**
    * This is used to resolve the value for an array element. This
    * also expects a "length" attribute for the array length.
    */
   private Value readArray(Class type, NodeMap node) throws Exception {
      Node entry = node.remove(length);
      int size = 0;

      if(entry != null) {
         String value = entry.getValue();
         size = Integer.parseInt(value);
      }
      return new ArrayValue(type, size);
   }

   /**
    * This is used to read the class from the "class" attribute.
    * If no such attribute exists the specified field type is returned,
    * or if the field type is an array then the component type.
    */
   private Class readValue(Type type, NodeMap node) throws Exception {
      Node entry = node.remove(label);
      Class expect = type.getType();

      if(expect.isArray()) {
         expect = expect.getComponentType();
      }
      if(entry != null) {
         String name = entry.getValue();
         expect = loader.load(name);
      }
      return expect;
   }

   /**
    * This is used to attach an attribute to the provided element
    * that is used to identify the class. The attribute name is
    * "class" and has the value of the fully qualified class
    * name for the object provided. This will only be invoked
    * if the object class is different from the field class.
    *
    * @param type this is the declared class for the field used
    * @param value this is the instance variable being serialized
    * @param node this is the element used to represent the value
    *
    * @return this returns true if serialization is complete
    */
   public boolean write(Type type, Object value, NodeMap node, Map map) {
      Class actual = value.getClass();
      Class expect = type.getType();
      Class real = actual;

      if(actual.isArray()) {
         real = writeArray(expect, value, node);
      }
      if(actual != expect) {
         node.put(label, real.getName());
      }
      return false;
   }

   /**
    * This is used to add a length attribute to the element due to
    * the fact that the serialized value is an array. The length
    * of the array is acquired and inserted in to the attributes.
    *
    * @param field this is the field type for the array to set
    * @param value this is the actual value for the array to set
    * @param node this is the map of attributes for the element
    *
    * @return returns the array component type that is set
    */
   private Class writeArray(Class field, Object value, NodeMap node) {
      int size = Array.getLength(value);

      if(length != null) {
         node.put(length, String.valueOf(size));
      }
      return field.getComponentType();
   }
}
http://simple.sourceforge.net/download/stream/report/cobertura/org.simpleframework.xml.strategy.TreeStrategy.html
CC-MAIN-2017-13
refinedweb
738
56.66
I'm trying to emulate Excel's Insert > Scatter > "Scatter with smooth lines and markers" command in Matplotlib.

The scipy interpolate module creates a similar effect, with some nice examples of how to simply implement this here: How to draw cubic spline in matplotlib

However, Excel's spline algorithm is also able to generate a smooth curve through just three points (e.g. x = [0,1,2], y = [4,2,1]), and it isn't possible to do this with cubic splines. I have seen discussions suggesting that the Excel algorithm uses Catmull-Rom splines, but I don't really understand these, or how they could be adapted to Matplotlib. Is there a simple way of modifying the above examples to achieve smooth curves through three or more points using the interpolate library? Many thanks

By now you may have found the Wikipedia page for the centripetal Catmull-Rom spline, but in case you haven't, it includes this sample code:

import numpy
import matplotlib.pyplot as plt

def CatmullRomSpline(P0, P1, P2, P3, nPoints=100):
    """
    P0, P1, P2, and P3 should be (x,y) point pairs that define the
    Catmull-Rom spline. nPoints is the number of points to include
    in this curve segment.
    """
    # Convert the points to numpy so that we can do array multiplication
    P0, P1, P2, P3 = map(numpy.array, [P0, P1, P2, P3])

    # Calculate t0 to t4
    alpha = 0.5
    def tj(ti, Pi, Pj):
        xi, yi = Pi
        xj, yj = Pj
        return ( ( (xj-xi)**2 + (yj-yi)**2 )**0.5 )**alpha + ti

    t0 = 0
    t1 = tj(t0, P0, P1)
    t2 = tj(t1, P1, P2)
    t3 = tj(t2, P2, P3)

    # Only calculate points between P1 and P2
    t = numpy.linspace(t1, t2, nPoints)

    # Reshape so that we can multiply by the points P0 to P3
    # and get a point for each value of t.
    t = t.reshape(len(t), 1)

    A1 = (t1-t)/(t1-t0)*P0 + (t-t0)/(t1-t0)*P1
    A2 = (t2-t)/(t2-t1)*P1 + (t-t1)/(t2-t1)*P2
    A3 = (t3-t)/(t3-t2)*P2 + (t-t2)/(t3-t2)*P3
    B1 = (t2-t)/(t2-t0)*A1 + (t-t0)/(t2-t0)*A2
    B2 = (t3-t)/(t3-t1)*A2 + (t-t1)/(t3-t1)*A3
    C  = (t2-t)/(t2-t1)*B1 + (t-t1)/(t2-t1)*B2
    return C

def CatmullRomChain(P):
    """
    Calculate Catmull-Rom for a chain of points and return the
    combined curve.
    """
    sz = len(P)
    # The curve C will contain an array of (x,y) points.
    C = []
    for i in range(sz-3):
        c = CatmullRomSpline(P[i], P[i+1], P[i+2], P[i+3])
        C.extend(c)
    return C

which nicely computes the interpolation for n >= 4 points like so:

points = [[0,1.5],[2,2],[3,1],[4,0.5],[5,1],[6,2],[7,3]]
c = CatmullRomChain(points)
px, py = zip(*points)
x, y = zip(*c)
plt.plot(x, y)
plt.plot(px, py, 'or')

resulting in this matplotlib image:

Alternatively, there is a scipy.interpolate class, BarycentricInterpolator, that appears to do what you're looking for. It is rather straightforward to use and works for cases in which you have only 3 data points.

from scipy.interpolate import BarycentricInterpolator
import numpy as np

# create some data points
points1 = [[0, 2], [1, 4], [2, -2], [3, 6], [4, 2]]
points2 = [[1, 1], [2, 5], [3, -1]]

# put data into x, y tuples
x1, y1 = zip(*points1)
x2, y2 = zip(*points2)

# create the interpolators
bci1 = BarycentricInterpolator(x1, y1)
bci2 = BarycentricInterpolator(x2, y2)

# define dense x-axes for interpolating over
x1_new = np.linspace(min(x1), max(x1), 1000)
x2_new = np.linspace(min(x2), max(x2), 1000)

# plot it all
plt.plot(x1, y1, 'o')
plt.plot(x2, y2, 'o')
plt.plot(x1_new, bci1(x1_new))
plt.plot(x2_new, bci2(x2_new))
plt.xlim(-1, 5)

Another option within scipy is Akima interpolation via Akima1DInterpolator. It is as easy to use as BarycentricInterpolator, but has the advantage that it avoids large oscillations at the edge of a data set. Here are a few test cases that exhibit all the criteria you've asked for so far.
from scipy.interpolate import Akima1DInterpolator
import numpy as np
import matplotlib.pyplot as plt

x1, y1 = np.arange(13), np.random.randint(-10, 10, 13)
x2, y2 = [0,2,3,6,12], [100,50,30,18,14]
x3, y3 = [4, 6, 8], [60, 80, 40]

akima1 = Akima1DInterpolator(x1, y1)
akima2 = Akima1DInterpolator(x2, y2)
akima3 = Akima1DInterpolator(x3, y3)

x1_new = np.linspace(min(x1), max(x1), 1000)
x2_new = np.linspace(min(x2), max(x2), 1000)
x3_new = np.linspace(min(x3), max(x3), 1000)

plt.plot(x1, y1, 'bo')
plt.plot(x2, y2, 'go')
plt.plot(x3, y3, 'ro')
plt.plot(x1_new, akima1(x1_new), 'b', label='random points')
plt.plot(x2_new, akima2(x2_new), 'g', label='exponential')
plt.plot(x3_new, akima3(x3_new), 'r', label='3 points')
plt.xlim(-1, 15)
plt.ylim(-10, 110)
plt.legend(loc='best')
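A stdlib-only aside (my own sketch, not from the original answers): one way to see why a smooth curve through just three points is always possible is that any three points with distinct x values are fitted exactly by a single quadratic, via Lagrange's interpolation formula:

```python
def quadratic_through(p0, p1, p2):
    """Return coefficients (a, b, c) of y = a*x^2 + b*x + c passing
    exactly through three points with distinct x values, using
    Lagrange's interpolation formula (standard library only)."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a = (y0 / ((x0 - x1) * (x0 - x2))
         + y1 / ((x1 - x0) * (x1 - x2))
         + y2 / ((x2 - x0) * (x2 - x1)))
    b = (-y0 * (x1 + x2) / ((x0 - x1) * (x0 - x2))
         - y1 * (x0 + x2) / ((x1 - x0) * (x1 - x2))
         - y2 * (x0 + x1) / ((x2 - x0) * (x2 - x1)))
    c = (y0 * x1 * x2 / ((x0 - x1) * (x0 - x2))
         + y1 * x0 * x2 / ((x1 - x0) * (x1 - x2))
         + y2 * x0 * x1 / ((x2 - x0) * (x2 - x1)))
    return a, b, c

# The question's example points: x = [0, 1, 2], y = [4, 2, 1]
a, b, c = quadratic_through((0, 4), (1, 2), (2, 1))
```

Evaluating a*x**2 + b*x + c on a dense x grid then gives a smooth curve through the three markers, which is essentially what BarycentricInterpolator does in the 3-point case.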
https://codedump.io/share/r4pxBW329V7k/1/emulating-excel39s-quotscatter-with-smooth-curvequot-spline-function-in-matplotlib-for-3-points
CC-MAIN-2017-09
refinedweb
803
65.93
One common programming question is how to randomly shuffle an array of numbers in-place. There are a few wrong answers to this question - some simple shuffles people tend to think of immediately turn out to be inadequate. In particular, the most common naive algorithm that comes up is [1]:

naive_shuffle(arr):
    if len(arr) > 1:
        for i in 0 .. len(arr)-1:
            s = random from inclusive range [0:len(arr)-1]
            swap arr[s] with arr[i]

This algorithm produces results that are badly skewed. For more information consult this post by Jeff Atwood, and this SO discussion.

The correct answer is to use the Fisher-Yates shuffle algorithm:

fisher_yates_shuffle(arr):
    if len(arr) > 1:
        i = len(arr) - 1
        while i > 0:
            s = random from inclusive range [0:i]
            swap arr[s] with arr[i]
            i--

It was first invented as a paper-and-pencil method back in 1938, and was later popularized by Donald Knuth in Volume II of TAOCP. For this reason it's also sometimes called the Fisher-Yates-Knuth algorithm.

In this article I don't aim to compare Fisher-Yates to the naive algorithm, nor do I plan to explain why the naive shuffle doesn't work. Others have done it before me; see the references to Jeff's post and the SO discussion above. What I do plan to do, however, is to explain why the Fisher-Yates algorithm works. To put it more formally: why, given a good random-number generator, the Fisher-Yates shuffle produces a uniform shuffle of an array in which every permutation is equally likely. And my plan is not to prove the shuffle's correctness mathematically, but rather to explain it intuitively. I personally find it much simpler to remember an algorithm once I understand the intuition behind it.

An analogy

Imagine a magician's hat:

And a bunch of distinct balls. Let's take pool balls for the example:

Suppose you place all those balls into the hat [2] and stir them really well. Now, you look away and start taking balls randomly out of the hat and placing them in a line.
Assuming the hat stir was random and you can't distinguish the balls by touch alone, once the hat is empty, the resulting line is a random permutation of the balls. No ball had a larger chance of being the first in line than any other ball. After that, all the remaining balls in the hat had an equal chance of being the second in line, and so on. Again, this isn't a rigorous proof, but the point of this article is intuition. If you understand why this procedure produces a random shuffle of the balls, you can understand Fisher-Yates, because it is just a variation on the same theme.

The intuition behind Fisher-Yates shuffling

The Fisher-Yates shuffle performs a procedure similar to pulling balls at random from a hat. Here's the algorithm once again, this time in my favorite pseudo-code format, Python [3]:

from random import randint

def fisher_yates(arr):
    if len(arr) > 1:
        i = len(arr) - 1
        while i > 0:
            s = randint(0, i)
            arr[i], arr[s] = arr[s], arr[i]
            i -= 1

The trick is doing it in-place with no extra memory. The following step-by-step illustration should explain what's going on. Let's start with an array of 4 elements:

The array contains the letters a, b, c, d at indices [0:3]. The red arrow shows where i points initially. Now, the first step in the loop picks a random index in the range [0:i], which is [0:3] in the first iteration. Suppose the index 1 was picked, and the code swaps element 1 with element 3 (which is the initial i). So after the first iteration the array looks like this:

Notice that I colored the part of the array to the right of i in another color. Here's a spoiler: the blue part of the array is the hat, and the orange part is the line where the random permutation is being built.

Let's make one more step of the loop. A random number in the range [0:2] has to be picked, so suppose 2 is picked. Therefore, the swap just leaves the element at index 2 in its original place:

We make one more step.
Suppose 0 is picked at random from [0:1], so the elements at indices 0 and 1 are swapped:

At this point we're done. There's only one ball left in the hat, so it will surely be picked next. This is why the loop of the algorithm runs while i > 0 - once i reaches 0, the algorithm finishes:

So, to understand why the Fisher-Yates shuffling algorithm works, keep in mind the following: the algorithm makes a "virtual" division of the array it shuffles into two parts. The part at indices [0:i] is the hat, from which elements will be picked at random. The part to the right of i (that is, [i+1:len(arr)-1]) is the final line where the random permutation is being formed. In each step of the algorithm, it picks one element from the hat and adds it to the line, removing it from the hat.

Some final notes:

- Since all the indices [0:i] are in the hat, the selection can pick i itself. In such a case there's no real swapping being done, but the element at index i moves from the hat to the line. Having the selection from range [0:i] is crucial to the correctness of the algorithm. A common implementation mistake is to make this range [0:i-1], which causes the shuffle to be non-uniform.
- The vast majority of implementations you'll see online run the algorithm from the end of the array down. But this isn't set in stone - it's just a convention. The algorithm will work equally well with i starting at 0 and running until the end of the array, picking items in the range [i:len(arr)-1] at each step.

Conclusion

Random shuffling is important for many applications. Although it's a seemingly simple operation, it's easy to do wrong. The Internet abounds with stories of gambling companies losing money because their shuffles weren't random enough. The Fisher-Yates algorithm produces a uniform shuffling of an array. It's optimally efficient both in runtime (running in O(len(arr))) and space (the shuffle is done in-place, using only O(1) extra memory).
In this article I aimed to explain the intuition behind the algorithm, firmly believing that a real, deep understanding of something [4] is both intellectually rewarding and useful.
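As a quick empirical sanity check (my own addition, not part of the original article), you can count how often each permutation of a 3-element array shows up under Fisher-Yates:

```python
from collections import Counter
from random import randint, seed

def fisher_yates(arr):
    # Same algorithm as in the article.
    if len(arr) > 1:
        i = len(arr) - 1
        while i > 0:
            s = randint(0, i)
            arr[i], arr[s] = arr[s], arr[i]
            i -= 1

seed(42)  # fixed seed so the experiment is repeatable
counts = Counter()
trials = 60000
for _ in range(trials):
    arr = [0, 1, 2]
    fisher_yates(arr)
    counts[tuple(arr)] += 1

# All 3! = 6 permutations should appear roughly 1/6 of the time each.
for perm, n in sorted(counts.items()):
    print(perm, n / trials)
```

Running the same experiment with the naive shuffle instead shows the skew the article alludes to: some permutations appear noticeably more often than others.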
http://eli.thegreenplace.net/2010/05/28/the-intuition-behind-fisher-yates-shuffling/
CC-MAIN-2015-48
refinedweb
1,128
59.94
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.

Hello,

This is on sparc-solaris. I was trying to reproduce another error that our testsuite occasionally reports (without much success :-(). I noticed while running the test repeatedly within the same debugger process (that is: repeatedly doing "run; [...]; kill" instead of starting a new debugger every time) that it would eventually cause another error in the debugger. The debugger would be telling us that it is unable to find a given thread in the procinfo list. In other words, the debugger received an event for LWP=1 and then failed to create the associated thread element in the procinfo list.

After some investigation, it turned out that this was because we were unable to open the associated lwp file in the /proc filesystem, and that was because we ran out of file descriptors! Digging further, I found that we leak the file descriptor created when opening the procfs map file: we create a cleanup routine to make sure that the associated file descriptor gets closed, but we never call the cleanup.

gdb/ChangeLog:

        * procfs.c (iterate_over_mappings): Call do_cleanups before
        returning.

Fixed thusly. Seems fairly straightforward, except that the cleanup is only really needed when NEW_PROC_API is defined (basically, Solaris except old versions). But I think that #ifdef code is ugly, so I made the cleanup run regardless...

I'll commit in a couple of days pending feedback...
---
 gdb/procfs.c | 7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/gdb/procfs.c b/gdb/procfs.c
index 871dd47..2a253a1 100644
--- a/gdb/procfs.c
+++ b/gdb/procfs.c
@@ -5217,6 +5217,7 @@ iterate_over_mappings (procinfo *pi, find_memory_region_ftype child_func,
   int funcstat;
   int map_fd;
   int nmap;
+  struct cleanup *cleanups = make_cleanup (null_cleanup, NULL);
 #ifdef NEW_PROC_API
   struct stat sbuf;
 #endif
@@ -5254,8 +5255,12 @@ iterate_over_mappings (procinfo *pi, find_memory_region_ftype child_func,
   for (prmap = prmaps; nmap > 0; prmap++, nmap--)
     if ((funcstat = (*func) (prmap, child_func, data)) != 0)
-      return funcstat;
+      {
+        do_cleanups (cleanups);
+        return funcstat;
+      }

+  do_cleanups (cleanups);
   return 0;
 }
--
1.7.1
https://sourceware.org/legacy-ml/gdb-patches/2011-11/msg00251.html
NAME
Error_handling - A set of routines for error handling in NCAR Graphics.

SYNOPSIS
ENTSR - Enters recovery mode.
EPRIN - Prints the current error message.
ERROF - Turns off the internal error flag.
FDUM - A dump routine - the default version just RETURNS.
ICFELL - Checks for an outstanding error condition.
ICLOEM - Computes the real length of its character-string argument (ignoring blanks on the end).
NERRO - Gets the current value of the internal error flag.
RETSR - Restores a previous value of the internal error flag.
SEMESS - Gets a specified portion of the current error message.
SETER - Called by NCAR Graphics routines to report error conditions.

C-BINDING SYNOPSIS
#include <ncarg/ncargC.h>
c_entsr c_eprin c_errof c_icfell c_icloem c_nerro c_retsr c_semess c_seter

USING SETER IN NCAR GRAPHICS
There are specific conventions for the use of SETER within NCAR Graphics, as follows:

- All detectable errors shall be recoverable, in the sense described above. (That is, in every call to SETER, the final argument shall be a 1, rather than a 2.) This is by request of the folks doing NCAR Interactive, who rightly consider STOPs in the utilities undesirable. The idea is to let the user decide what is to be done about the various error conditions.

- Whenever an NCAR Graphics routine calls a lower-level routine that might detect an error and call SETER, it should subsequently use ICFELL to check the error state; if a recoverable error has occurred, it should first do required clean-up chores, if possible, and then pass control back to the routine that called it. In all such uses of ICFELL, the first argument should be the name of the routine referencing ICFELL and the second argument should be a new number for the error, reflecting the position of the reference to the lower-level routine in the upper-level routine.
- Any NCAR Graphics routine that can be called by a user and that can potentially yield a call to SETER must immediately check the error state and, if that error state is non-zero, return control without doing anything else. This is most conveniently done using a reference to ICFELL; see the second example in the "Usage" section of the description of ICFELL. All such references should have a first argument of the form 'XXXXXX - UNCLEARED PRIOR ERROR', where "XXXXXX" is the name of the routine in which the reference occurs, and a second argument equal to "1".

- It is recommended that, within a given utility routine, the error numbers in references to SETER and ICFELL should start at 1 and increment by 1. These numbers generally have no intrinsic meaning in and of themselves: they are merely intended to allow a consultant to find the reference that generated a given error.

- NCAR Graphics routines are not required to turn recovery mode on before calling a lower-level routine that might call SETER (which was the convention in the PORT library, as described in the PORT document). Instead, the assumption is that it is the responsibility of the user of NCAR Graphics to set recovery mode if he/she desires to do recovery. Since, by default, recovery mode is turned off, all NCAR Graphics calls to SETER will be treated as fatal: the error message will be printed and execution will be terminated. Once the user turns recovery mode on, however, no NCAR Graphics error will be treated in this way except for one that the user fails to recover from.

Note: These conventions are being adopted as of December 2, 1993, and represent a goal for the future. The current situation is somewhat muddled: In some utilities, all SETER calls are fatal ones. In other utilities, some SETER calls are fatal and some are not. In other utilities, no SETER calls are fatal. In general, errors at a lower level are not detected and passed back up the call chain.
Users have complained (and rightly so) that error recovery is, in general, not possible; observance of these conventions should help to fix the situation. At least the following things might have been changed: 1) the current SET call; 2) the current polyline color index; 3) the current polymarker color index; 4) the current text color index; 5) the current fill area color index; 6) the current dash pattern.

ACCESS
To use the Error_handling C or Fortran routines, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.

SEE ALSO
Online: entsr, eprin, errof, fdum, icfell, icloem, nerro, retsr, semess, seter, ncarg_cbind

University Corporation for Atmospheric Research
The use of this Software is governed by a License Agreement.
https://manpages.debian.org/unstable/libncarg-dev/error_handling.3NCARG.en.html
Savitzky Golay Filtering

The Savitzky Golay filter is a particular type of low-pass filter, well adapted for data smoothing. For further information see: (or for a pre-numpy implementation).

Sample Code

 1 def savitzky_golay(y, window_size, order, deriv=0, rate=1):
 2     r"""Smooth (and optionally differentiate) data with a Savitzky-Golay filter.
 3     The Savitzky-Golay filter removes high frequency noise from data.
 4     It has the advantage of preserving the original shape and
 5     features of the signal better than other types of filtering
 6     approaches, such as moving averages techniques.
 7     Parameters
 8     ----------
 9     y : array_like, shape (N,)
10         the values of the time history of the signal.
11     window_size : int
12         the length of the window. Must be an odd integer number.
13     order : int
14         the order of the polynomial used in the filtering.
15         Must be less than `window_size` - 1.
16     deriv: int
17         the order of the derivative to compute (default = 0 means only smoothing)
18     Returns
19     -------
20     ys : ndarray, shape (N)
21         the smoothed signal (or its n-th derivative).
22     Notes
23     -----
24     The Savitzky-Golay is a type of low-pass filter, particularly
25     suited for smoothing noisy data. The main idea behind this
26     approach is to make for each point a least-square fit with a
27     polynomial of high order over an odd-sized window centered at
28     the point.
29     Examples
30     --------
31     t = np.linspace(-4, 4, 500)
32     y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
33     ysg = savitzky_golay(y, window_size=31, order=4)
34     import matplotlib.pyplot as plt
35     plt.plot(t, y, label='Noisy signal')
36     plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
37     plt.plot(t, ysg, 'r', label='Filtered signal')
38     plt.legend()
39     plt.show()
40     References
41     ----------
42     .. [1] A. Savitzky, M. J. E. Golay, Smoothing and Differentiation of
43        Data by Simplified Least Squares Procedures. Analytical
44        Chemistry, 1964, 36 (8), pp 1627-1639.
45     ..
    [2] Numerical Recipes 3rd Edition: The Art of Scientific Computing
46        W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery
47        Cambridge University Press ISBN-13: 9780521880688
48     """
49     import numpy as np
50     from math import factorial
51
52     try:
53         window_size = np.abs(int(window_size))
54         order = np.abs(int(order))
55     except ValueError as msg:
56         raise ValueError("window_size and order have to be of type int")
57     if window_size % 2 != 1 or window_size < 1:
58         raise TypeError("window_size size must be a positive odd number")
59     if window_size < order + 2:
60         raise TypeError("window_size is too small for the polynomials order")
61     order_range = range(order+1)
62     half_window = (window_size -1) // 2
63     # precompute coefficients
64     b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
65     m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
66     # pad the signal at the extremes with
67     # values taken from the signal itself
68     firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
69     lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
70     y = np.concatenate((firstvals, y, lastvals))
71     return np.convolve( m[::-1], y, mode='valid')

Code explanation

In lines 64-65 the coefficients of the local least-square polynomial fit are pre-computed. These are used later at line 71, where they are correlated with the signal. To prevent spurious results at the extremes of the data, the signal is padded at both ends with its mirror image (lines 68-70).

Figure: CD spectrum of a protein. Black: raw data. Red: filter applied.

A wrapper for cyclic voltammetry data

One of the most popular applications of the S-G filter, apart from smoothing UV-VIS and IR spectra, is smoothing of curves obtained in electroanalytical experiments. In cyclic voltammetry, voltage (being the abscissa) changes like a triangle wave, and the signal has cusps at the turning points (at the switching potentials) which should never be smoothed.
In this case, Savitzky-Golay smoothing should be done piecewise, i.e. separately on pieces monotonic in x:

def savitzky_golay_piecewise(xvals, data, kernel=11, order=4):
    turnpoint = 0
    last = len(xvals)
    if xvals[1] > xvals[0]:  # x is increasing?
        for i in range(1, last):  # yes
            if xvals[i] < xvals[i-1]:  # search where x starts to fall
                turnpoint = i
                break
    else:  # no, x is decreasing
        for i in range(1, last):  # search where it starts to rise
            if xvals[i] > xvals[i-1]:
                turnpoint = i
                break
    if turnpoint == 0:  # no change in direction of x
        return savitzky_golay(data, kernel, order)
    else:
        # smooth the first piece
        firstpart = savitzky_golay(data[0:turnpoint], kernel, order)
        # recursively smooth the rest
        rest = savitzky_golay_piecewise(xvals[turnpoint:], data[turnpoint:], kernel, order)
        return numpy.concatenate((firstpart, rest))

Two dimensional data smoothing and least-square gradient estimate

Savitsky-Golay filters can also be used to smooth two dimensional data affected by noise. The algorithm is exactly the same as for the one dimensional case, only the math is a bit more tricky. The basic algorithm is as follows:

- for each point of the two dimensional matrix extract a sub-matrix, centered at that point and with a size equal to an odd number "window_size".
- for this sub-matrix compute a least-square fit of a polynomial surface, defined as p(x,y) = a0 + a1*x + a2*y + a3*x^2 + a4*y^2 + a5*x*y + ... . Note that x and y are equal to zero at the central point.
- replace the initial central point with the value computed with the fit.

Note that because the fit coefficients are linear with respect to the data spacing, they can be pre-computed for efficiency. Moreover, it is important to appropriately pad the borders of the data, with a mirror image of the data itself, so that the evaluation of the fit at the borders of the data can happen smoothly.

Here is the code for two dimensional filtering.
 1 def sgolay2d ( z, window_size, order, derivative=None):
 2     """
 3     """
 4     # number of terms in the polynomial expression
 5     n_terms = ( order + 1 ) * ( order + 2) / 2.0
 6
 7     if window_size % 2 == 0:
 8         raise ValueError('window_size must be odd')
 9
10     if window_size**2 < n_terms:
11         raise ValueError('order is too high for the window size')
12
13     half_size = window_size // 2
14
15     # exponents of the polynomial.
16     # p(x,y) = a0 + a1*x + a2*y + a3*x^2 + a4*y^2 + a5*x*y + ...
17     # this line gives a list of two item tuples. Each tuple contains
18     # the exponents of the k-th term. First element of tuple is for x
19     # second element for y.
20     # Ex. exps = [(0,0), (1,0), (0,1), (2,0), (1,1), (0,2), ...]
21     exps = [ (k-n, n) for k in range(order+1) for n in range(k+1) ]
22
23     # coordinates of points
24     ind = np.arange(-half_size, half_size+1, dtype=np.float64)
25     dx = np.repeat( ind, window_size )
26     dy = np.tile( ind, [window_size, 1]).reshape(window_size**2, )
27
28     # build matrix of system of equation
29     A = np.empty( (window_size**2, len(exps)) )
30     for i, exp in enumerate( exps ):
31         A[:,i] = (dx**exp[0]) * (dy**exp[1])
32
33     # pad input array with appropriate values at the four borders
34     new_shape = z.shape[0] + 2*half_size, z.shape[1] + 2*half_size
35     Z = np.zeros( (new_shape) )
36     # top band
37     band = z[0, :]
38     Z[:half_size, half_size:-half_size] = band - np.abs( np.flipud( z[1:half_size+1, :] ) - band )
39     # bottom band
40     band = z[-1, :]
41     Z[-half_size:, half_size:-half_size] = band + np.abs( np.flipud( z[-half_size-1:-1, :] ) - band )
42     # left band
43     band = np.tile( z[:,0].reshape(-1,1), [1,half_size])
44     Z[half_size:-half_size, :half_size] = band - np.abs( np.fliplr( z[:, 1:half_size+1] ) - band )
45     # right band
46     band = np.tile( z[:,-1].reshape(-1,1), [1,half_size] )
47     Z[half_size:-half_size, -half_size:] = band + np.abs( np.fliplr( z[:, -half_size-1:-1] ) - band )
48     # central band
49     Z[half_size:-half_size, half_size:-half_size] = z
50
51     # top left corner
52     band = z[0,0]
53     Z[:half_size,:half_size] = band - np.abs( np.flipud(np.fliplr(z[1:half_size+1,1:half_size+1]) ) - band )
54     # bottom right corner
55     band = z[-1,-1]
56     Z[-half_size:,-half_size:] = band + np.abs( np.flipud(np.fliplr(z[-half_size-1:-1,-half_size-1:-1]) ) - band )
57
58     # top right corner
59     band = Z[half_size,-half_size:]
60     Z[:half_size,-half_size:] = band - np.abs( np.flipud(Z[half_size+1:2*half_size+1,-half_size:]) - band )
61     # bottom left corner
62     band = Z[-half_size:,half_size].reshape(-1,1)
63     Z[-half_size:,:half_size] = band - np.abs( np.fliplr(Z[-half_size:, half_size+1:2*half_size+1]) - band )
64
65     # solve system and convolve
66     if derivative is None:
67         m = np.linalg.pinv(A)[0].reshape((window_size, -1))
68         return scipy.signal.fftconvolve(Z, m, mode='valid')
69     elif derivative == 'col':
70         c = np.linalg.pinv(A)[1].reshape((window_size, -1))
71         return scipy.signal.fftconvolve(Z, -c, mode='valid')
72     elif derivative == 'row':
73         r = np.linalg.pinv(A)[2].reshape((window_size, -1))
74         return scipy.signal.fftconvolve(Z, -r, mode='valid')
75     elif derivative == 'both':
76         c = np.linalg.pinv(A)[1].reshape((window_size, -1))
77         r = np.linalg.pinv(A)[2].reshape((window_size, -1))
78         return scipy.signal.fftconvolve(Z, -r, mode='valid'), scipy.signal.fftconvolve(Z, -c, mode='valid')

Here is a demo

 1
 2 # create some sample twoD data
 3 x = np.linspace(-3,3,100)
 4 y = np.linspace(-3,3,100)
 5 X, Y = np.meshgrid(x,y)
 6 Z = np.exp( -(X**2+Y**2))
 7
 8 # add noise
 9 Zn = Z + np.random.normal( 0, 0.2, Z.shape )
10
11 # filter it
12 Zf = sgolay2d( Zn, window_size=29, order=4)
13
14 # do some plotting
15 matshow(Z)
16 matshow(Zn)
17 matshow(Zf)

Figures: original data; original data + noise; (original data + noise) filtered.

Gradient of a two-dimensional function

Since we have computed the best fitting interpolating polynomial surface it is easy to compute its gradient.
This method of computing the gradient of a two dimensional function is quite robust, and partially hides the noise in the data, which strongly affects the differentiation operation. The maximum order of the derivative that can be computed obviously depends on the order of the polynomial used in the fitting. The code provided above has an option derivative, which currently allows computing the first derivative of the 2D data. It can be "row" or "col", indicating the direction of the derivative, or "both", which returns the gradient.
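As a side note, SciPy later gained a maintained implementation of the same 1-D filter, scipy.signal.savgol_filter, which can stand in for the hand-rolled function above for both smoothing and differentiation. A minimal sketch, using the same parameters as the Examples section of the docstring above:

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(-4, 4, 500)
y = np.exp(-t**2) + np.random.normal(0, 0.05, t.shape)

# smoothing, equivalent in spirit to savitzky_golay(y, 31, 4)
ysg = savgol_filter(y, window_length=31, polyorder=4)

# first derivative; delta is the sample spacing of t
dysg = savgol_filter(y, window_length=31, polyorder=4, deriv=1, delta=t[1] - t[0])
```

The boundary handling differs slightly (savgol_filter defaults to polynomial interpolation at the edges rather than mirror padding), so results near the ends of the signal may not match the cookbook function exactly.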
http://wiki.scipy.org/Cookbook/SavitzkyGolay
Hi, I've been experimenting with "mixed world" IJ2 and IJ1 commands. It's working really well, but I've been having some memory build-up issues which eventually lead to "Out of Memory" errors if the command is run on a large number of datasets. I think I've narrowed the problem down to the conversion from IJ2 datasets to IJ1 ImagePlus objects. The following Groovy script should hopefully reproduce this problem; it's much more obvious if run on a large-ish multidimensional dataset. Any help or advice would be much appreciated.

Thank you,
Jeremy

// @ImageJ ij
// @Dataset currentData

import net.imagej.legacy.LegacyService;
import ij.ImagePlus;
import ij.IJ;
import java.lang.System;

// create legacy service for conversion between IJ and IJ2 classes
LegacyService legacy = ij.get(LegacyService.class);

// run conversion a few times to demonstrate memory build up
for (int i = 0; i < 25; i++) {
    println(i);
    // convert to float, this makes the issue more obvious
    dataFloat = ij.op().run("convert.float32", currentData);
    // convert to IJ1
    ImagePlus imp = legacy.getImageMap().registerDataset(dataFloat);
}

// My attempt to clean up
dataFloat = null;
imp = null;
legacy = null;
System.gc();

Hi Jeremy, I can reproduce the issue that you are describing on my machine. Could you try to not use the LegacyService directly but use a UIService instead? Does the following Groovy script still result in the described memory error?
// @UIService uiService
// @OpService opService
// @Dataset currentData

import net.imagej.legacy.LegacyService;
import ij.ImagePlus;
import ij.IJ;
import java.lang.System;

// run conversion a few times to demonstrate memory build up
for (int i = 0; i < 25; i++) {
    println(i);
    // convert to float, this makes the issue more obvious
    dataFloat = opService.run("convert.float32", currentData);
    uiService.show("test", dataFloat);
}

// My attempt to clean up
dataFloat = null;
imp = null;
legacy = null;
System.gc();

Best,
Stefan

Hi Stefan, thank you for the response; it's good to know the issue is reproducible. Your script doesn't result in memory errors; however, the problem is I don't want to display the data, which I think is what the UIService offers? I would like to use various ops commands and then retrieve an ij.ImagePlus object which can be used for IJ1 functionality.

Best,
Jeremy

For now, you could use ImgLib2's ImageJFunctions.wrap(RandomAccessibleInterval<T> img, String title). Although I will admit that this might not be best practice. Maybe @imagejan is aware of a better way?

Great, that seems to work with no memory build-up. Thank you very much.

Thanks for the report, @jpike. It is definitely unfortunate that the LegacyService causes this issue for you. One of my goals is to eliminate any need to use ImageJFunctions directly. I guess we should file an issue in imagej-legacy about this. Otherwise I will probably forget, and this thread will fade into the background.
http://forum.imagej.net/t/memory-issues-with-ij2-to-ij1-conversion-using-legacyservice/4710
please read the text from this link --> It-Doesnt-Work-Is-Useless

dcbm.addElement(res.getString("names"));

Thank you Michael, I have done some modifications, but it's not updating again... sorry, I am making a mistake somewhere again. Here are my two classes (save student and load student).

[LoadStudents() reference].loadStudents1(comboBox reference); // this is where you load the students

1) type a name into the textfield
2) click a button
3) the typed-in name is saved to the db
4) the newly saved name appears in the comboBox (indicating it has been saved to the db)

public class SearchByName { // this class needs to be scrapped, but you need to add the panel/combo elsewhere in the program
    ls.loadStudents1(studentNames); // here i am loading it ---------------------------------->
    JComboBox studentName = new JComboBox();
    JPanel coursePanel = new JPanel();
    coursePanel.add(studentName);
}

post back when you've modified your code to remove that class.
http://www.coderanch.com/t/579226/GUI/java/auto-update-jcombobox
#include <sys/types.h>
#include <sys/rman.h>

#define RF_FIRSTSHARE   0x0020 /* first in sharing list */
#define RF_PREFETCHABLE 0x0040 /* resource is prefetchable */
#define RF_UNMAPPED     0x0100 /* don't map resource when activating */

Bits 15:10 of the flag register. The rm_start and rm_end fields may be set to limit the range of acceptable resource addresses. If these fields are not set, rman_init() will initialize them to allow the entire range of resource addresses. It also initializes any mutexes associated with the structure. If rman_init() fails to initialize the mutex, it will return ENOMEM; otherwise it will return 0 and rm will be initialized.

The rman_manage_region() function is used to add a region of address space to be managed by rm. If any part of the region falls outside of the valid address range for rm, it will return EINVAL. ENOMEM will be returned when rman_manage_region() fails to allocate memory for the region.

The rman_init_from_resource() function is a wrapper routine to create a resource manager backed by an existing resource. It initializes rm using rman_init() and then adds a region to rm corresponding to the address range allocated to r via rman_manage_region().

The rman_first_free_region() and rman_last_free_region() functions can be used to query a resource manager for its first (or last) unallocated region. If rm contains no free region, these functions will return ENOENT. Otherwise, *start and *end are set to the bounds of the free region and zero is returned.

The rman_reserve_resource_bound() function reserves a resource segment, optionally subject to a boundary restriction and required alignment, and the code will attempt to find a free segment which fits. The start argument is the lowest acceptable starting value of the resource. The end argument is the highest acceptable ending value of the resource. Therefore, start + count - 1 must be <= end for any allocation to happen. The alignment requirement (if any) is specified in flags. The bound argument may be set to specify a boundary restriction such that an allocated region may not cross an address that is a multiple of the boundary.
The bound argument must be a power of two. It may be set to zero to specify no boundary restriction. A shared segment will be allocated if the RF_SHAREABLE flag is set; otherwise an exclusive segment will be allocated.

The rman_make_alignment_flags() function returns the flag mask corresponding to the desired alignment size. This should be used when calling rman_reserve_resource_bound().

The rman_is_region_manager() function returns true if the allocated resource r was allocated from rm. Otherwise, it returns false.

The rman_adjust_resource() function is used to adjust the reserved address range of an allocated resource to reserve start through end. It can be used to grow or shrink one or both ends of the resource range. The current implementation does not support entirely relocating the resource and will fail with EINVAL if the new resource range does not overlap the old resource range. If either end of the resource range grows and the new resource range would conflict with another allocated resource, the function will fail with EBUSY. The rman_adjust_resource() function does not support adjusting the resource range for shared resources and will fail such attempts with EINVAL. Upon success, the resource r will have a start address of start and an end address of end and the function will return zero. Note that none of the constraints of the original allocation request such as alignment or boundary restrictions are checked by rman_adjust_resource(). It is the caller's responsibility to enforce any such requirements.

The rman_set_mapping() function is used to associate a resource mapping with a resource r. The mapping must cover the entire resource. Setting a mapping sets the associated bus_space(9) handle and tag for r as well as the kernel virtual address if the mapping contains one. These individual values can be retrieved via rman_get_bushandle(), rman_get_bustag(), and rman_get_virtual(). The rman_get_mapping() function can be used to retrieve the associated resource mapping once set.
The rman_set_rid() function associates a resource identifier with a resource r. The rman_get_rid() function retrieves this RID. The rman_get_device() function returns a pointer to the device which reserved the resource r.
https://nxmnpg.lemoda.net/9/rman_is_region_manager
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed.

Update, July 25, 2015: I included some new plots suggested by my colleague Ariel Rokem. Scroll to the end!

Last year I wrote a blog post examining trends in Seattle bicycling and how they relate to weather, daylight, day of the week, and other factors. Where that post took a supervised machine learning approach for data modeling, this post will examine the data using an unsupervised learning approach for data exploration. Along the way, we'll see some examples of importing, transforming, visualizing, and analyzing data in the Python language, using mostly Pandas, Matplotlib, and Scikit-learn. We will also see some real-world examples of the use of unsupervised machine learning algorithms, such as Principal Component Analysis and Gaussian Mixture Models, in exploring and extracting meaning from data.

To spoil the punchline (and perhaps whet your appetite), what we will find is that from analysis of bicycle counts alone, we can make some definite statements about the aggregate work habits of Seattleites who commute by bicycle.

The data we will use here are the hourly bicycle counts on Seattle's Fremont Bridge. These data come from an automated bicycle counter, installed in late 2012, which has inductive sensors under the sidewalks on either side of the bridge. The daily or hourly bicycle counts can be downloaded from; here is the direct link to the hourly dataset.
To download the data directly, you can uncomment the following curl command:

# !curl -o FremontBridge.csv

import pandas as pd
data = pd.read_csv('FremontBridge.csv', index_col='Date', parse_dates=True)
data.head()

We'll do some quick data cleaning: we'll rename the columns to the shorter "West" and "East", set any missing values to zero, and add a "Total" column:

data.columns = ['West', 'East']
data.fillna(0, inplace=True)
data['Total'] = data.eval('East + West')

We can get a better idea of the dataset as a whole through a simple visualization; for example, we can resample the data to see the weekly trend in trips over the nearly three-year period:

# first some standard imports
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set()  # plot styling
import numpy as np

data.resample('W', how='sum').plot()
plt.ylabel('weekly trips');

The counts show both a strong seasonal variation, as well as a local structure that can be partially accounted for by temperature, time of year, precipitation, and other factors. From here, we could do a variety of other visualizations based on our intuition about what might affect bicycle counts. For example, we could look at the effect of the days of the week, the effect of the weather, and other factors that I explored previously. But we could also proceed by letting the dataset speak for itself, and use unsupervised machine learning techniques (that is, machine learning without reference to data labels) to learn what the data have to tell us.

We will consider each day in the dataset as its own separate entity (or sample, in usual machine learning parlance). For each day, we have 48 observations: two observations (east and west sidewalk sensors) for each of the 24 hour-long periods. By examining the days in light of these observations and doing some careful analysis, we should be able to extract meaningful quantitative statements from the data themselves, without the need to lean on any other assumptions.
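A note for anyone running this notebook with a current pandas: the how= argument to resample() has since been removed, and the equivalent is a method chain. A minimal sketch on synthetic hourly data (the series here is a stand-in for the bridge counts, not the real data):

```python
import numpy as np
import pandas as pd

# two weeks of hourly "counts", one trip per hour
idx = pd.date_range('2013-01-01', periods=24 * 14, freq='h')
counts = pd.Series(np.ones(len(idx)), index=idx)

# old API: counts.resample('W', how='sum')
# current API:
weekly = counts.resample('W').sum()
```

The resulting series has one row per week-ending-Sunday bin, and its values sum to the original total.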
The first step in this approach is to transform our data; essentially we will want a two-dimensional matrix, where each row of the matrix corresponds to a day, and each column of the matrix corresponds to one of the 48 observations. We can arrange the data this way using the pivot_table() function in Pandas. We want the "East" and "West" column values, indexed by date, and separated by hour of the day. Any missing values we will fill with zero:

pivoted = data.pivot_table(['East', 'West'],
                           index=data.index.date,
                           columns=data.index.hour,
                           fill_value=0)
pivoted.head()

5 rows × 48 columns

Next we extract the raw values and put them in a matrix:

X = pivoted.values
X.shape

(1001, 48)

Our data consists of just over 1000 days, each with the aforementioned 48 measurements. We can think of this data now as representing 1001 distinct objects which live in a 48-dimensional space: the value of each dimension is the number of bicycle trips measured on a particular side of the bridge at a particular hour. Visualizing 48-dimensional data is quite difficult, so instead we will use a standard dimensionality reduction technique to project this to a more manageable size. The technique we'll use is Principal Component Analysis (PCA), a fast linear projection which rotates the data such that the projection preserves the maximum variance. We can ask for components preserving 90% of the variance as follows:

from sklearn.decomposition import PCA
Xpca = PCA(0.9).fit_transform(X)
Xpca.shape

(1001, 2)

The output has two dimensions, which means that these two projected components describe at least 90% of the total variance in the dataset.
While 48-dimensional data is difficult to plot, we certainly know how to plot two-dimensional data: we'll do a simple scatter plot, and for reference we'll color each point according to the total number of trips taken that day:

total_trips = X.sum(1)
plt.scatter(Xpca[:, 0], Xpca[:, 1], c=total_trips, cmap='cubehelix')
plt.colorbar(label='total trips');

We see that the days lie in two quite distinct groups, and that the total number of trips increases along the length of each projected cluster. Further, the two groups begin to be less distinguishable when the number of trips during the day is very small. I find this extremely interesting: from the raw data, we can determine that there are basically two primary types of days for Seattle bicyclists. Let's model these clusters and try to figure out what these types-of-day are.

When you have groups of data you'd like to automatically separate, but no previously-determined labels for the groups, the type of algorithm you are looking at is a clustering algorithm. There are a number of clustering algorithms out there, but for nicely-defined oval-shaped blobs like we see above, Gaussian Mixture Models are a very good choice. We can compute the Gaussian Mixture Model of the data using, again, scikit-learn, and quickly plot the predicted labels for the points:

from sklearn.mixture import GMM
gmm = GMM(2, covariance_type='full', random_state=0)
gmm.fit(Xpca)
cluster_label = gmm.predict(Xpca)
plt.scatter(Xpca[:, 0], Xpca[:, 1], c=cluster_label);

This clustering seems to have done the job, and separated the two groups we are interested in.
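For readers reproducing this with a current scikit-learn: the GMM class used above was later removed, and its replacement is GaussianMixture (with an explicit n_components keyword). A sketch on synthetic two-blob data, since the bridge dataset itself isn't assumed here:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# two well-separated 2-D blobs standing in for the day clusters
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=(-5.0, 0.0), scale=1.0, size=(100, 2))
blob_b = rng.normal(loc=(5.0, 0.0), scale=1.0, size=(100, 2))
X2 = np.vstack([blob_a, blob_b])

gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
labels = gmm.fit_predict(X2)
```

Aside from the class name and keyword, the fit/predict workflow is the same as in the original post.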
Let's join these inferred cluster labels to the initial dataset:

pivoted['Cluster'] = cluster_label
data = data.join(pivoted['Cluster'], on=data.index.date)
data.head()

Now we can find the average trend by cluster and time using a GroupBy within this updated dataset:

by_hour = data.groupby(['Cluster', data.index.time]).mean()
by_hour.head()

Finally, we can plot the average hourly trend among the days within each cluster:

fig, ax = plt.subplots(1, 2, figsize=(14, 5))
hourly_ticks = 4 * 60 * 60 * np.arange(6)
for i in range(2):
    by_hour.ix[i].plot(ax=ax[i], xticks=hourly_ticks)
    ax[i].set_title('Cluster {0}'.format(i))
    ax[i].set_ylabel('average hourly trips')

These plots give us some insight into the interpretation of the two clusters: the first cluster shows a sharp bimodal traffic pattern, while the second shows a wide unimodal pattern. In the bimodal cluster, we see a peak around 8:00am which is dominated by cyclists on the west sidewalk, and another peak around 5:00pm which is dominated by cyclists on the east sidewalk. This is very clearly a commute pattern, with the majority of cyclists riding toward downtown Seattle in the morning, and away from downtown Seattle in the evening.

In the unimodal cluster, we see fairly steady traffic in each direction beginning early in the morning and going until late at night, with a peak around 2:00 in the afternoon. This is very clearly a recreational pattern of use, with people out riding through the entire day.

I find this is fascinating: from simple unsupervised dimensionality reduction and clustering, we've discovered two distinct classes of days in the data, and found that these classes have very intuitive explanations. Let's go one step deeper and figure out what we can learn about people (well, bicycle commuters) in Seattle from just this hourly commute data.
As a rough approximation, you might guess that these two classes of data might be largely reflective of workdays in the first cluster, and non-work days in the second. We can check this intuition by re-plotting our projected data, except labeling them by day of the week:

dayofweek = pd.to_datetime(pivoted.index).dayofweek
plt.scatter(Xpca[:, 0], Xpca[:, 1], c=dayofweek,
            cmap=plt.cm.get_cmap('jet', 7))
cb = plt.colorbar(ticks=range(7))
cb.set_ticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'])
plt.clim(-0.5, 6.5);

We see that the weekday/weekend intuition holds, but only to a point: in particular, it is clear that there are a handful of weekdays which follow the typical weekend pattern! Further, it's interesting to note that Fridays tend to be pulled closer to weekend days in this plot, though as a whole they still fall solidly in the work-day cluster. Let's take a closer look at the "special" weekdays that fall in the "wrong" cluster. We start by constructing a dataset listing the cluster id and the day of the week for each of the dates in our dataset:

results = pd.DataFrame({'cluster': cluster_label,
                        'is_weekend': (dayofweek > 4),
                        'weekday': pivoted.index.map(lambda x: x.strftime('%a'))},
                       index=pivoted.index)
results.head()

First, let's see how many weekend days fall in the first, commute-oriented cluster:

weekend_workdays = results.query('cluster == 0 and is_weekend')
len(weekend_workdays)

0

Zero! Apparently, there is not a single weekend during the year where Seattle cyclists as a whole decide to go to work. Similarly, we can see how many weekdays fall in the second, recreation-oriented cluster:

midweek_holidays = results.query('cluster == 1 and not is_weekend')
len(midweek_holidays)

23

There were 23 weekdays over the past several years in which Seattle cyclists as a whole did not go to work.
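The cluster/day-of-week bookkeeping above can be sketched on five made-up days, where cluster 0 is the commute-like pattern and cluster 1 the recreation-like one:

```python
# Toy version of the DataFrame.query bookkeeping: count weekend days
# that show the commute pattern, and weekdays that show the
# recreation pattern (the rows here are invented for illustration).
import pandas as pd

results = pd.DataFrame({'cluster':    [0, 0, 1, 1, 1],
                        'is_weekend': [False, False, True, True, False]})

# weekend days that nevertheless look like commute days
weekend_workdays = results.query('cluster == 0 and is_weekend')
# weekdays that nevertheless look like recreation days
midweek_holidays = results.query('cluster == 1 and not is_weekend')
print(len(weekend_workdays), len(midweek_holidays))
```

The query strings treat column names as variables, which keeps the boolean bookkeeping readable.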
To label these, let's load the US Federal holiday calendar available in Pandas:

from pandas.tseries.holiday import USFederalHolidayCalendar
cal = USFederalHolidayCalendar()
holidays = cal.holidays('2012', '2016', return_name=True)
holidays.head()

2012-01-02                 New Years Day
2012-01-16    Dr. Martin Luther King Jr.
2012-02-20                Presidents Day
2012-05-28                  Memorial Day
2012-07-04                      July 4th
dtype: object

Just for completeness, we will add to the list the day before and day after each of these holidays:

holidays_all = pd.concat([holidays,
                          "Day Before " + holidays.shift(-1, 'D'),
                          "Day After " + holidays.shift(1, 'D')])
holidays_all = holidays_all.sort_index()
holidays_all.head()

2012-01-01                 Day Before New Years Day
2012-01-02                            New Years Day
2012-01-03                  Day After New Years Day
2012-01-15    Day Before Dr. Martin Luther King Jr.
2012-01-16               Dr. Martin Luther King Jr.
dtype: object

Note that these are observed holidays, which is why New Years Day 2012 falls on January 2nd. With this ready to go, we can compute the complete list of non-weekend days on which Seattle bicycle commuters as a whole chose to stay home from work:

holidays_all.name = 'name'  # required for join
joined = midweek_holidays.join(holidays_all)
set(joined['name'])

{'Christmas', 'Day After Christmas', 'Day After Thanksgiving', 'Day Before Christmas', 'July 4th', 'Labor Day', 'Memorial Day', 'New Years Day', 'Thanksgiving'}

On the other side of things, here are the Federally recognized holidays where Seattle bicycle commuters chose to go to work anyway:

set(holidays) - set(joined.name)

{'Columbus Day', 'Dr. Martin Luther King Jr.', 'Presidents Day', 'Veterans Day'}

A colleague of mine, Ariel Rokem, saw the first version of this post and noticed something interesting. For the most part, Fridays tend to lie on the upper side of the weekday cluster, closer in this parameter space to the typical weekend pattern.
This pattern holds nearly universally for Fridays, all except for three strange outliers which lie far on the other side of the cluster. We can see these more clearly if we highlight the Friday points in the plot:

fridays = (dayofweek == 4)
plt.scatter(Xpca[:, 0], Xpca[:, 1], c='gray', alpha=0.2)
plt.scatter(Xpca[fridays, 0], Xpca[fridays, 1], c='yellow');

The yellow points in the bottom-left of the plot are unique – they're far different than other Fridays, and they even stand out in comparison to the other work days! Let's see what they represent:

weird_fridays = pivoted.index[fridays & (Xpca[:, 0] < -600)]
weird_fridays

Index([2013-05-17, 2014-05-16, 2015-05-15], dtype='object')

All three of these outlying Fridays fall in the middle of May. Curious! Let's quickly visualize the daily stats for these, along with the mean trend over all days. We can arrange the data this way with a pivot table operation:

all_days = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
all_days.loc[:, weird_fridays].plot();
all_days.mean(1).plot(color='gray', lw=5, alpha=0.3, xticks=hourly_ticks);

Apparently these three strange Fridays are days with extreme amounts of bicycle commuting. But what makes them so special? After some poking around on the internet, the answer becomes clear: we've discovered Seattle's annual bike to work day. Mystery solved! We have seen here that by taking a close look at raw bicycle counts and using some basic visualization and unsupervised machine learning, we can make some very definite statements about the overall work habits of people in Seattle who bicycle to work across the Fremont bridge. In summary, this is what we have learned: Thanks for reading! This post was written entirely in the IPython notebook. You can download this notebook, or see a static view here.
http://nbviewer.jupyter.org/url/jakevdp.github.io/downloads/notebooks/SeattleCycling2.ipynb
Re: C++ Template Overloading From: "Mathias Gaunard" <loufoque@gmail.com> Newsgroups: comp.std.c++ Date: Wed, 11 Apr 2007 23:55:54 CST Message-ID: <1176347696.543920.50880@o5g2000hsb.googlegroups.com> On Apr 11, 4:41 pm, "Emerson" <emerson.cla...@gmail.com> wrote: This is going to be a bit of a rant, but it's only because I'm passionate about software and the state of the industry. We seem to be ever more focused on creating new and esoteric language features when we should be trying to find ways to combine all languages at some higher level and solve the fundamental problems of software development, algorithm reuse, and binary interoperability. Actually, templates are certainly the most interesting and innovative aspect of C++. The fact that it is given more work than other 'problems' is proof that more people are simply more interested in it. I just finished watching a Google tech talk presentation on concepts in the upcoming C++0x standard and it left me wondering. Google is not really known for its modern usage of C++. To me it seems they use it "the old way". The concepts seem like a good idea, but it does concern me that such metaphor-specific notions are making it into the C++ language ahead of perhaps more basic and useful features. Concepts, technically, aren't really needed. Similar functionality can already be achieved by the language as it is. The whole idea of concepts is to allow simpler code and simpler error messages. The metaphor that I refer to is that of iterators. Without iterators, STL would not be what it is. Both STL and Boost are libraries which have been designed around a single idea, the use of C++ operators to provide non-type-specific generic algorithms and containers. I certainly wouldn't say that. Especially for boost, which contains various different things.
The aim of STL, as its name says, is to provide templates for data structures and related algorithms. I don't see the relation with operators. "non-type-specific" is a weird expression, and makes me think you misunderstand templates. With templates, everything is type specific, it's just that it can be generated for any type. It's not obfuscation. But it means that the algorithms and containers are restricted to things which behave like pointers. Huh? I fail to see the relation. Are you talking about the fact that the iterator design is purposely modeled after pointers? That's just a question of style, it could have used next(), etc. C++ operators are convenient because they don't require you to specify types in the definition of your code You always need to specify the type of the variable in C++. (There will be type inference in C++0x though) and they support primitive types which you cannot do directly with interfaces, but it isn't very OO Templates are just what they are, templates. A calibre of a class or function, that you can instantiate with types or integers to generate the given class or function. They have no relation with OO. Which is a good thing, by the way. With OO and subtyping polymorphism the real type of objects is not known until runtime, while templates work at compile-time and can use inference to instantiate themselves in the case of template functions. and no, I don't buy the notion of "parametric polymorphism". It's not whether you buy it or not. It's a notion that exists, and is heavily demonstrated by statically typed functional languages such as ML. Templates offer that kind of functionality for C++. The boundary between the opaque and the visible for the user only occurs when they dereference an iterator, at this point the user must know the underlying type and the generic metaphor falls away to expose the details of the implementation. If you don't know the types of your variables, maybe you shouldn't use them in the first place.
The type of your iterator contains all the necessary information to know the type of the dereferenced element. This is a clever workaround It's not especially new or clever. and it has allowed libraries like Boost and STL to flourish, but it is not the only way, and it has its drawbacks. The only drawback is that, since it happens at compile-time, it can't be dynamic. For instance, using the STL metaphor it is not possible to create a function which takes only an iterator of integers because there is no such thing as an iterator of integers, the underlying type is hidden. That's absolutely false. The iterator type contains the information of the type it iterates. While such information wouldn't be in the type using an OO design, it is with templates. Writing such a function can be done with SFINAE, which becomes easy using tools from boost:

template<typename It>
typename enable_if<is_same<typename It::value_type, int>, void>::type
some_function(It it);

Iterators deal only with operators, not with types or interfaces. It's more like passing around macros, with all of the details being obscured. There is nothing obscure. All information is in the type. So the STL metaphor necessarily does not integrate well with the underlying C++ type system It integrates very well. So well that people are still amazed by things you can do with it. and cannot be constrained by overloading They can, using SFINAE. or inheritance. Templates apply on types. Inheritance is a special property of a specific category of types. They're of course not related. I think we should not forget that C++ was an object oriented language well before generic programming came about That would be a very good thing to forget. The OO idiom in C++ is certainly not its good side. Generations of developers thought C++ was an OO language, but they were simply coding in C with Classes. That's very far from the functionality modern C++ gives.
and perhaps some of the ideas which generic programming brings with it are better suited to functional languages. There is nothing especially functional in genericity. And C++ can handle the functional idiom quite well. Generic code is unfortunately opaque, and whilst it can be amazingly useful, there is a high price to pay in terms of readability and debugability. It's not more opaque than non-generic code. Actually, it is often clearer, since it is generic and not specific to some kind of type. So before we take library specific ideas like begin() and end() and start building for loops and concept maps around them and adding support for requirements in generic code, can we pause and consider the alternatives? Looks like you don't understand concept maps. What about ensuring that C++ templates can be overloaded by interfaces rather than just concrete types?

template<typename Kind>
class Compare {
public:
    int Compare(Kind left, Kind right) { return left - right; }
};

Here, you could just write your comparison as left < right, and if operator<(decltype(left), decltype(right)) is not found, you get a compile-time error.

template<typename Kind>
class Compare<Comparable Kind> {
public:
    int Compare(Kind left, Kind right) { return left.Compare(right); }
};

That's a pretty useful feature when you're doing generic programming That's concepts. They're doable using library techniques as of today, and are being included in the next standard to be simpler. that does not use the STL or Boost metaphor It *is* the STL metaphor. For example, to instantiate std::set with a type, that type must satisfy the requirement of comparable. and it integrates tightly with the more OO aspects of C++, overloading and inheritance. I fail to see the OO integration. And how about standardising the representation of function pointers so that we can write better event handling and callbacks. I don't see how standardizing the representation of function pointers would help for callbacks.
What about closures and coroutines? Why do these ideas have to remain "undefined" whilst we focus on much more esoteric problems. Closures can certainly be very well done in C++, thanks to overloading. It's called functors. This is already highly used by the STL, by the way. Be it to indicate how comparison is to be done or what code to apply with an algorithm (for_each, transform, etc.) Functors are usually used through templates, but you can also use them through subtyping polymorphism, in case you need a single type (to store the thing, eventually). Of course, the latter introduces some overhead. For that there is a utility in boost and TR1 to unify all functors with the same signature to a single type, it's called boost::function. It could be implemented like this, but actually it uses other tricks for efficiency. Here is how for a function taking no argument. For other cases, the template will have to be "overloaded". Usage of variadic templates (a new C++0x feature) could also be considered.

template<typename R>
struct base {
    virtual R operator()() = 0;
};

template<typename R, typename F>
struct derived : base<R> {
    derived(const F& f) { f_ = f; }
    R operator()() { return f_(); }
    F f_;
};

template<typename R>
class function {
public:
    template<typename F>
    function(const F& f) { p_ = new derived<R, F>(f); }
    ~function() { delete p_; }
    R operator()() { return (*p_)(); }
    /* additional stuff */
private:
    base<R>* p_;
};

Notice how templates and inheritance can be used together to create that kind of generic, yet runtime-oriented, component. For every generic programming feature in the C++0x standard, I fear there will be 10 far more important and far more fundamental non-generic features which will be missed. And for all this we have to wait till the end of the decade. Whoopy do! Templates are one of the key features of C++, more important than subtyping polymorphism, which can simply be done by hand with function pointers anyway.
The only point where templates don't integrate well is with virtuality, because that would mean the vtable can't be generated until link-time. That's kinda problematic given the translation model of C++. That's too bad actually, that would allow very interesting things. If you're interested in some concrete examples of the alternative generic programming metaphors which are out there, check out the structures namespace from the C++ framework that I recently released called Reason. When reading your work it seems your definition of OO varies. I was assuming you meant subtyping polymorphism, but actually sometimes it seems you're just talking about the idea of object based programming, grouping state data in objects and attaching member functions to them. So basically, your reproach that STL is not OO is based on the fact that to advance an iterator, you have to do ++it and not it.next()? It frustrates me that so much of the code that we write has to be thrown away, but at least I can take pride in the fact that the code I write is readable and understandable. It can always be ported to another language when C++ eventually digs a hole so deep we can't climb out :( That's not gonna happen, only C++ has this awesome feature called templates. ;) --- [ comp.std.c++ is moderated. To submit articles, try just posting with ] [ your news-reader. If that fails, use mailto:std-c++@ncar.ucar.edu ] [ --- Please see the FAQ before posting. --- ]
This is the documentation for older versions of Odoo (formerly OpenERP). See the new Odoo user documentation. See the new Odoo technical documentation.

Mako Template

<%inherit file="base.html"/>
<%
    rows = [[v for v in range(0,10)] for row in range(0,10)]
%>
<table>
    % for row in rows:
        ${makerow(row)}
    % endfor
</table>

<%def name="makerow(row)">
    <tr>
    % for name in row:
        <td>${name}</td>
    % endfor
    </tr>
</%def>

Features

- Super-simple API. For basic usage, just one class, Template, is needed:

  from mako.template import Template
  print Template("hello ${data}!").render(data="world")

  For filesystem management and template caching, add the TemplateLookup class.

- Insanely fast. An included bench suite, adapted from a suite included with Genshi, has these results for a simple three-sectioned layout: Mako: 1.10 ms, Kid: 14.54 ms.

- Standard template features: control structures constructed from real Python code (i.e. loops, conditionals); straight Python blocks, inline or at the module level.

- Callable blocks: can access variables from their enclosing scope as well as the template's request context; can be nested arbitrarily; can specify regular Python argument signatures; outer-level callable blocks can be called by other templates or controller code (i.e. "method call"); calls to functions can define any number of sub-blocks of content which are accessible to the called function (i.e. "component-call-with-content"). This is the basis for nestable custom tags.

- Inheritance: supports "multi-zoned" inheritance, where you define any number of areas in the base template to be overridden; supports "chaining" style inheritance, where you call next.body() to call the "inner" content; the full inheritance hierarchy is navigable in both directions (i.e. parent and child) from anywhere in the chain; inheritance is dynamic! Specify a function instead of a filename to calculate inheritance on the fly for every request.
Examples

Basic Usage

from mako.template import Template
mytemplate = Template("hello world!")
print mytemplate.render()

The render() method can also be given data to substitute into the template:

mytemplate = Template("hello, ${name}!")
print mytemplate.render(name="openerp")

Using File-based Templates

A Template can also load its template source code from a file, using the filename keyword argument:

from mako.template import Template
mytemplate = Template(filename='/test.html')
print mytemplate.render()

Using TemplateLookup

from mako.lookup import TemplateLookup
from mako.template import Template
mylookup = TemplateLookup(directories=[''])
mytemplate = Template('<%include file="header.txt"/> Hello!', lookup=mylookup)

Above, we created a textual template which includes the file "header.txt". In order for it to have somewhere to look for "header.txt", we passed a TemplateLookup object to it, which will search in the current directory for the file "header.txt".

Syntax

Expression Substitution

The simplest expression is just a variable substitution. The syntax for this is the ${} construct, which is inspired by Perl, Genshi, JSP EL, and others:

${x}
${5%5}
${7*2}
${pow(x,2) + pow(y,2)}

Control Structures

Conditionals (i.e. if/else), loops (for and while) as well as try/except control structures are written using the % marker followed by a regular Python control expression, and are "closed" by using another % marker with the tag "end<name>", where "<name>" is the keyword of the expression:

% if user_name == 'openerp':
    valid user
% endif

% if a > 1:
    a is positive number
% elif a == 0:
    a is 0
% else:
    a is negative number
% endif

<table>
% for a in [1,2,3,4,5]:
    <tr>
        <td> ${a} </td>
    </tr>
% endfor
</table>

Python Blocks

Any arbitrary block of python can be dropped in using the <% %> tags:

<%
    a = {'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
    b = a.values()
%>
% for x in b:
    ${x}
% endfor

Module-level Blocks

A variant on <% %> is the module-level code block, denoted by <%! %>. Code within these tags is executed at the module level of the template, and not within the rendering function of the template.

<%!
import cherrypy

def get_user_from_session():
    return cherrypy.session['current_user']
%>

Therefore, this code does not have access to the template's context and is only executed when the template is loaded into memory (which can be only once per application, or more, depending on the runtime environment).

Mako Tags

<%page>

This tag defines general characteristics of the template, including caching arguments, and optional lists of arguments which the template expects when invoked. It also defines caching characteristics:

<%page args="x, y, z='default'"/>
<%page cached="True" cache_type="memory"/>

<%include>

This tag just accepts a file argument and calls in the rendered result of that file:

<%include file="header.html"/>
    Welcome to OpenERP
<%include file="footer.html"/>

It also accepts arguments which are available as <%page> arguments in the receiving template:

<%include file="toolbar.html" args="current_section='members'"/>

<%inherit>

Inherit allows templates to arrange themselves in inheritance chains. When using the %inherit tag, control is passed to the topmost inherited template first, which then decides how to handle calling areas of content from its inheriting templates.

<%inherit file="base.html"/>

<%def>

The %def tag defines a Python function which contains a set of content, that can be called at some other point in the template:

<%def name="myfunc(x)">
    this is function ${x}
</%def>

<%namespace>

%namespace is Mako's equivalent of Python's import statement. It allows access to all the rendering functions and metadata of other template files, plain Python modules, as well as locally defined "packages" of functions.

<%namespace file="functions.html" import="*"/>

<%doc>

The %doc tag handles multiline comments:

<%doc>
    Multi line comments
    Using doc tag
</%doc>

For More Details visit the documentation:
https://doc.odoo.com/6.1/id/developer/Web_client_v6/mako_template/
This tutorial explains how to use regular expressions (regex) in Java. The search pattern can be a simple character or a substring, or it may be a complex string or expression that defines a particular pattern to be searched in the string. Further, the pattern may have to match one or more times against the string.

Regular Expression: Why We Need It

A regular expression is mainly used to search for a pattern in a string. Why do we search for a pattern in a string? We might want to find a particular pattern in a string and then manipulate it or edit it. So in a computer application, we may have a continuous requirement of manipulating various patterns. Hence, we always require regex to facilitate searching for the pattern.

Now given a pattern to search for, how exactly does the regex work? When we analyze and alter the text using a regex, we say that 'we have applied regex to the string or text'. What we do is we apply the pattern to the text in a 'left to right' direction and the source string is matched with the pattern.

For example, consider a string "ababababab". Let's assume that a regex 'aba' is defined. So now we have to apply this regex to the string. Applying the regex from left to right, the regex will match the string "aba_aba___", at two places. Thus once a source character is used in a match, we cannot reuse it. So after finding the first match 'aba', the third character 'a' was not reused.

java.util.regex

The Java language does not provide any built-in class for regex. But we can work with regular expressions by importing the "java.util.regex" package. The package java.util.regex provides one interface and three classes as shown below:

Pattern Class: A Pattern object represents the compiled regex. The Pattern class does not have any public constructors, but it provides static compile() methods that return Pattern objects and can be used to create a pattern.
Matcher Class: The Matcher class object matches the regex pattern to the string. Like the Pattern class, this class also does not provide any public constructors. The Pattern class provides the matcher() method that returns a Matcher object. PatternSyntaxException: This class defines an unchecked exception. An object of type PatternSyntaxException indicates a syntax error in a regex pattern. MatchResult Interface: The MatchResult interface determines the regex pattern matching result.

Java Regex Example

Let's implement a simple example of regex in Java. In the below program we have a simple string as a pattern and then we match it to a string. The output prints the start and end position in the string where the pattern is found.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Main {
    public static void main(String args[]) {
        // define a pattern to be searched
        Pattern pattern = Pattern.compile("Help.");

        // search the above pattern in "softwareTestingHelp.com"
        Matcher m = pattern.matcher("softwareTestingHelp.com");

        // print the start and end position of the pattern found
        while (m.find())
            System.out.println("Pattern found from position " + m.start() + " to " + (m.end() - 1));
    }
}

Output:

Pattern found from position 15 to 19

Regex Matcher In Java

The Matcher class implements the MatchResult interface. Matcher acts as a regex engine and is used to perform the exact matching of a character sequence. Given below are the common methods of the Matcher class. It has more methods, but we have listed only the important ones below.

Regular Expression Implementation Example

Let's see an example of the usage of some of these methods.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MatcherDemo {
    public static void main(String[] args) {
        String inputString = "She sells sea shells on the sea shore with shells";

        // obtain a Pattern object
        Pattern pattern = Pattern.compile("shells");

        // obtain a matcher object
        System.out.println("input string: " + inputString);
        Matcher matcher = pattern.matcher(inputString);

        // use replaceFirst to replace only the first occurrence of the pattern
        inputString = matcher.replaceFirst("pearls");
        System.out.println("\nreplaceFirst method:" + inputString);

        // use replaceAll method to replace all occurrences of the pattern
        inputString = matcher.replaceAll("pearls");
        System.out.println("\nreplaceAll method:" + inputString);
    }
}

Output:

input string: She sells sea shells on the sea shore with shells
replaceFirst method:She sells sea pearls on the sea shore with shells
replaceAll method:She sells sea pearls on the sea shore with pearls

Regex Pattern Class In Java

The Pattern class defines the pattern for the regex engine, which can then be used to match against an input string. The following table shows the methods provided by the Pattern class that are commonly used. The below example uses some of the above methods of the Pattern class.

import java.util.regex.*;

public class Main {
    public static void main(String[] args) {
        // define a REGEX string
        String REGEX = "Test";

        // string to be searched for the given pattern
        String actualString = "Welcome to SoftwareTestingHelp portal";

        // generate a pattern for the given regex using the compile method
        Pattern pattern = Pattern.compile(REGEX);

        // set limit to 2
        int limit = 2;

        // use the split method to split the string
        String[] array = pattern.split(actualString, limit);

        // print the generated array
        for (int i = 0; i < array.length; i++) {
            System.out.println("array[" + i + "]=" + array[i]);
        }
    }
}

Output:

array[0]=Welcome to Software
array[1]=ingHelp portal

In the above program, we use the compile method to generate a pattern. Then we split the input string about this pattern and read it into an array.
Finally, we display the array that was generated as a result of splitting the input string.

Regex String Matches Method

We have seen the String.contains() method in our string tutorials. This method returns a boolean value true or false depending on if the string contains a specified character in it or not. Similarly, we have a method "matches()" to check if the string matches with a regular expression or regex. If the string matches the specified regex then a true value is returned or else false is returned.

The general syntax of the matches() method:

public boolean matches(String regex)

If the regex specified is not valid, then the "PatternSyntaxException" is thrown. Let's implement a program to demonstrate the usage of the matches() method.

public class MatchesExample {
    public static void main(String args[]) {
        String str = new String("Java Series Tutorials");
        System.out.println("Input String: " + str);

        // use the matches() method to check if a particular regex matches the given input
        System.out.print("Regex: (.*)Java(.*) matches string? ");
        System.out.println(str.matches("(.*)Java(.*)"));

        System.out.print("Regex: (.*)Series(.*) matches string? ");
        System.out.println(str.matches("(.*)Series(.*)"));

        System.out.print("Regex: (.*)String(.*) matches string? ");
        System.out.println(str.matches("(.*)String(.*)"));

        System.out.print("Regex: (.*)Tutorials matches string? ");
        System.out.println(str.matches("(.*)Tutorials"));
    }
}

Output:

Input String: Java Series Tutorials
Regex: (.*)Java(.*) matches string? true
Regex: (.*)Series(.*) matches string? true
Regex: (.*)String(.*) matches string? false
Regex: (.*)Tutorials matches string? true

We use lots of special characters and Metacharacters with regular expressions in Java. We also use many character classes for pattern matching. In this section, we will provide the tables containing character classes, Meta characters, and Quantifiers that can be used with regex.
Regex Character Classes

Regex Quantifiers

Quantifiers are used to specify the number of times the character will occur in the regex. The following table shows the common regex quantifiers used in Java.

Regex Meta Characters

The Metacharacters in regex work as shorthand codes. These codes include whitespace and non-whitespace characters along with other shortcodes. The following table lists the regex Meta characters.

Given below is a Java program that uses the above special characters in the Regex.

import java.util.regex.*;

public class RegexExample {
    public static void main(String args[]) {
        // returns true if the string exactly matches "Jim"
        System.out.print("Jim (jim):" + Pattern.matches("Jim", "jim"));

        // returns true if the input string is Peter or peter
        System.out.println("\n[Pp]eter (Peter):" + Pattern.matches("[Pp]eter", "Peter"));

        // true if the string contains abc
        System.out.println("\n.*abc.* (pqabcqp):" + Pattern.matches(".*abc.*", "pqabcqp"));

        // true if the string doesn't start with a digit
        System.out.println("\n^[^\\d].* (abc123):" + Pattern.matches("^[^\\d].*", "abc123"));

        // returns true if the string contains exactly three letters
        System.out.println("\n[a-zA-Z][a-zA-Z][a-zA-Z] (aQz):" + Pattern.matches("[a-zA-Z][a-zA-Z][a-zA-Z]", "aQz"));

        // false: the input string length is 4, and '1' and '0' are not letters
        System.out.println("\n[a-zA-Z][a-zA-Z][a-zA-Z] (a10z):" + Pattern.matches("[a-zA-Z][a-zA-Z][a-zA-Z]", "a10z"));

        // true if the string contains 0 or more non-digits
        System.out.println("\n\\D* (abcde):" + Pattern.matches("\\D*", "abcde"));

        // true if the line contains only the word This; ^ - start of the line, $ - end of the line
        System.out.println("\n^This$ (This is Java):" + Pattern.matches("^This$", "This is Java"));
        System.out.println("\n^This$ (This):" + Pattern.matches("^This$", "This"));
        System.out.println("\n^This$ (Is This Java?):" + Pattern.matches("^This$", "Is This Java?"));
    }
}

Output:

Jim (jim):false
[Pp]eter (Peter):true
.*abc.* (pqabcqp):true
^[^\d].* (abc123):true
[a-zA-Z][a-zA-Z][a-zA-Z] (aQz):true
[a-zA-Z][a-zA-Z][a-zA-Z] (a10z):false
\D* (abcde):true
^This$ (This is Java):false
^This$ (This):true
^This$ (Is This Java?):false

In the above program, we have provided various regexes that are matched with the input string. Readers are advised to read the comments in the program for each regex to better understand the concept.

Regex Logical or (|) Operator

We can use the logical or (| operator) in a regex, which gives us the choice to select either operand of the | operator. We can use this operator in a regex to give a choice of character or string. For example, if we want to match both the words, 'test' and 'Test', then we will include these words in the logical or operator as Test|test. Let's see the following example to understand this operator.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexOR {
    public static void main(String[] args) {
        // regex string to search for patterns Test or test
        String regex = "(Test|test)";

        // compile the pattern and obtain the matcher object from the input string
        Pattern pattern = Pattern.compile(regex);
        String input = "Software Testing Help";
        Matcher matcher = pattern.matcher(input);

        // print every match
        while (matcher.find()) {
            System.out.format("Text \"%s\" found at %d to %d.%n",
                matcher.group(), matcher.start(), matcher.end());
        }

        // define another input string and obtain the matcher object
        input = "SoftwaretestingHelp";
        matcher = pattern.matcher(input);

        // print every match
        while (matcher.find()) {
            System.out.format("Text \"%s\" found at %d to %d.%n",
                matcher.group(), matcher.start(), matcher.end());
        }
    }
}

Output:

Text "Test" found at 9 to 13.
Text "test" found at 8 to 12.

In this program, we have provided the regex "(Test|test)". Then first we give the input string as "Software Testing Help" and match the pattern. We see that the match is found and the position is printed. Next, we give the input string as "SoftwaretestingHelp". This time also the match is found.
This is because the regex uses the or operator, and hence the pattern on either side of the | operator is matched against the string.

We can also validate an email id (address) with regex using the java.util.regex.Pattern.matches() method. It matches the given email id with the regex and returns true if the email is valid.

The following program demonstrates the validation of email using regex.

public class EmailDemo {
    static boolean isValidemail(String email) {
        String regex = "^[\\w-_\\.+]*[\\w-_\\.]\\@([\\w]+\\.)+[\\w]+[\\w]$"; // regex to validate email
        return email.matches(regex); // match the email id with the regex and return the result
    }

    public static void main(String[] args) {
        String email = "ssthva@gmail.com";
        System.out.println("The Email ID is: " + email);
        System.out.println("Email ID valid? " + isValidemail(email));

        email = "@sth@gmail.com";
        System.out.println("The Email ID is: " + email);
        System.out.println("Email ID valid? " + isValidemail(email));
    }
}

Output:

The Email ID is: ssthva@gmail.com
Email ID valid? true
The Email ID is: @sth@gmail.com
Email ID valid? false

As we can see from the above output, the first email id is valid. The second id starts directly with @, and hence the regex does not validate it. Hence it is an invalid id.

Frequently Asked Questions

Q #1) What is in a Regular Expression?

Answer: A Regular Expression, commonly called a regex, is a pattern or a sequence of characters (normal, special, or metacharacters) that is used to validate an input string.

Q #2) What is the significance of the Matcher class for a regular expression in Java?

Answer: The Matcher class (java.util.regex.Matcher) acts as a regex engine. It performs the matching operations by interpreting the Pattern.

Q #3) What is a pattern in Java?

Answer: The package java.util.regex provides a Pattern class that is used to compile a regex into a pattern, which is the standard representation of the regex. This pattern is then used to validate strings by matching them against it.

Q #4) What is \b in a regular expression?
Answer: The \b in regex is an anchor character that is used to match a position called a word boundary (its counterpart \B matches a non-word-boundary position). By comparison, the start of the line is denoted with a caret (^) and the end of the line is denoted by a dollar ($) sign.

Q #5) Is Pattern thread-safe in Java?

Answer: Yes. Instances of the Pattern class are immutable and safe for use by multiple concurrent threads. Instances of the Matcher class, however, are not thread-safe.

Conclusion

In this tutorial, we explored regular expressions in Java, along with the various special character classes and metacharacters that provide shorthand codes for pattern matching. We also explored email validation using regex.

=> Explore The Simple Java Training Series Here.
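The pattern syntax used throughout this tutorial (character classes, quantifiers, alternation, word boundaries) is not Java-specific; most regex engines behave identically. As a quick portability check, here is the same set of patterns exercised from Python's re module (shown only as an illustrative aside; note that Java's Pattern.matches() implicitly anchors at both ends, which corresponds to re.fullmatch in Python):

```python
import re

# Pattern.matches() anchors the whole string, like re.fullmatch
assert re.fullmatch(r"[Pp]eter", "Peter") is not None   # character class
assert re.fullmatch(r".*abc.*", "pqabcqp") is not None  # quantifier on .
assert re.fullmatch(r"\D*", "abcde") is not None        # metacharacter \D
assert re.fullmatch(r"Jim", "jim") is None              # matching is case-sensitive

# alternation with positions, mirroring the Matcher.find() loop
hits = [(m.group(), m.start(), m.end())
        for m in re.finditer(r"Test|test", "Software Testing Help")]
assert hits == [("Test", 9, 13)]

# \b matches whole words only; ^ and $ anchor the line instead
text = "Is This Java?"
assert re.search(r"\bThis\b", text) is not None
assert re.search(r"\bhis\b", text) is None   # "his" occurs only inside "This"
assert re.search(r"Java\?$", text) is not None
```

The only behavioral difference to keep in mind is the implicit anchoring of matches(); Java's Matcher.find() corresponds to Python's re.search/re.finditer.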
by Zoran Horvat
Dec 14, 2013

Given a positive number N, write a function which returns the sum of squares of the numbers between 1 and N, i.e. 1^2 + 2^2 + ... + N^2.

Example: If N is 5, then the return value should be 55 (1 + 4 + 9 + 16 + 25 = 55).

It is easy to produce the sum of squares of a sequence by simply iterating through the numbers 1 to N. Here is the function which does precisely that:

function Sum(n)
begin
    sum = 0
    for i = 1 to n
        sum = sum + i * i
    return sum
end

This function runs in O(N) time and O(1) space. In order to improve the running time of the function, we need to find a definite (closed) form of the sum of squares. That is possible to do, and the result is:

1^2 + 2^2 + ... + N^2 = N(N + 1)(2N + 1) / 6

This derivation depends on the equation for calculating the simple sum of the numbers 1 to N, which is explained in the exercise Sum of First N Numbers. Now the function that runs in O(1) time and O(1) space looks like this:

function Sum(n)
begin
    return n * (n + 1) * (2 * n + 1) / 6
end

Below is the full listing of a console application in C# which lets the user enter the value N and then prints the sum of squares of the sequence.

using System;

namespace SumOfSquares
{
    public class Program
    {
        static int Sum(int n)
        {
            return n * (n + 1) * (2 * n + 1) / 6;
        }

        static void Main(string[] args)
        {
            while (true)
            {
                Console.Write("Enter sequence length (zero to exit): ");
                int n = int.Parse(Console.ReadLine());
                if (n <= 0)
                    break;
                Console.WriteLine("Sum of the sequence is {0}\n", Sum(n));
            }
        }
    }
}

When the application is run, it produces output like this:

Enter sequence length (zero to exit): 4
Sum of the sequence is 30

Enter sequence length (zero to exit): 5
Sum of the sequence is 55

Enter sequence length (zero to exit): 217
Sum of the sequence is 3429685
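The closed form can be checked against the brute-force loop over a range of inputs. A quick verification sketch (Python used here purely for convenience):

```python
def sum_squares_loop(n):
    # O(N) brute force: 1^2 + 2^2 + ... + n^2
    return sum(i * i for i in range(1, n + 1))

def sum_squares_closed(n):
    # O(1) closed form: n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

# spot-check the examples from the article
assert sum_squares_closed(4) == 30
assert sum_squares_closed(5) == 55
assert sum_squares_closed(217) == 3429685

# the two implementations agree on a range of inputs
assert all(sum_squares_loop(n) == sum_squares_closed(n) for n in range(1, 200))
```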
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.

On 2/20/19 3:12 AM, Florian Weimer wrote:
> * Carlos O'Donell:
>
>> On 2/19/19 8:26 AM, Tone Kastlunger wrote:
>>> Hi;
>>> the sem_open implementation in the pthread library seems to refer to
>>> privately defined aliases of some functions of the glibc
>>> (__libc_open/close, __libc_write for instance).
>>>
>>> On the other hand, in the same implementation
>>> other glibc function calls are addressed via the (usual) public counterparts
>>> (for example unlink).
>>>
>>> I'm puzzled about this "dissonance". Any reason behind it?
>>
>> It's complicated.
>>
>> In this case it's libpthread calling to libc for unlink and munmap.
>>
>> So it's easiest to call the public symbol because we know in the
>> given standard, POSIX, which defines sem_open, we must also have
>> a working unlink and munmap.
>>
>> So there is no namespace issue, and there is no PLT avoidance needed.
>
> But that's true for open and write as well. I don't know for certain,
> but I suspect it's related to the interceptors for open and write in
> libpthread (which are still there today, but are not actually needed).

Yes, that's probably the case.

--
Cheers,
Carlos.
Understand use of Pointers (*) in C and C++

Reading time: 35 minutes | Coding time: 10 minutes

We know that every instruction we give to the computer, every variable we define, and every piece of data we input must be stored in the memory of the computer so that it can be processed. This storage allocation is done in two ways:

1. Static
2. Dynamic

Static Allocation

In this, the amount of memory to be allocated is known beforehand and is reserved during compilation itself.

Eg: int arr[10] - defines an array of 10 elements and allocates 10 * sizeof(int) space.

Dynamic Allocation

In this, the amount of memory is not known beforehand and is allocated during run time, that is, when the program is actually running.

Eg: In the car parking database of a shopping mall, we do not know the exact number of cars that will be visiting the mall throughout the day.

Pointers facilitate this dynamic allocation.

What is a Pointer

A pointer in C and C++ is a variable that contains a memory address, which is usually the address/location of another variable in memory. So, for example, if I say the pointer variable 'ptr' points to variable x, I mean that 'ptr' holds the memory location or the exact address where x is stored in memory. The address is usually written in hexadecimal. Pointers are one of the most unique, useful, and powerful features of C++. They allow us to directly access the memory of the computer.

Significance of Pointers in C++

- Pointers provide us a way in which a memory location can be directly accessed and thus manipulated in the way we require.
- Pointers support C++'s dynamic allocation routines.
- Pointers can improve the efficiency of various inbuilt routines/functions.

Word of Caution

Although pointers provide us this powerful feature of accessing memory, we must be careful while using them. Incorrect use of pointers can affect the program in inexplicable and complex ways.
Declaration and Initialisation of Pointers

Pointers are declared in pretty much the same way we declare variables in C++. However, in order to differentiate, we make use of the character '*'.

General form of declaration:

type* pointer_name;

Description:

1. 'type' indicates the data type to which this pointer will point. Eg: an integer pointer can only point to an int variable, a char pointer can only point to a char variable, and so on. This means that pointers are data type specific. An int pointer cannot point to a float variable and vice-versa. The same applies to all data types.

2. 'pointer_name' is any valid pointer name, just like we name variables. All rules used for naming identifiers in C++ apply here as well.

Eg:

int* ipter;    // creates an integer pointer
char* cpter;   // creates a char pointer
float* fpter;  // creates a float pointer

Initialisation of Pointers

As already stated, pointer variables hold memory addresses. But how do we make a pointer point to a variable, that is, how can we access its memory address to store it in the pointer variable? This is where the unary operator & comes in.

Pointers revolve around two operators: * and &

& - This is the operator which, when placed before any variable, returns its memory address, which we can then store in our pointer.

* - While this is also used to declare a pointer, when it is used with a pointer that has already been declared, it returns the value of the variable stored at the location to which the pointer is pointing.

To understand the above two operators, consider this example. Let us say we have a variable i, whose value = 25 and whose address = 10500 (assume). Let us see what happens when we write the following lines of code:

int i = 25;   // normal variable declaration

int* iptr;    // declaration of a pointer, as discussed above.
              // Note that here, * is used for declaration purposes.

iptr = &i;

Here, the & operator is placed before i. Therefore, it returns the address of i, which is 10500.
This address gets stored in the pointer variable iptr. This is the initialisation step of the pointer iptr.

cout << *iptr;

Here, * is being used with a pointer that has already been declared and initialised. iptr holds 10500, which is the address of variable i. i holds the value 25. Hence, this statement outputs 25.

Important:

- The operand of the & operator is an ordinary variable.
- The operand of * will always be a pointer variable.
- The operator * is used for dereferencing the pointer, that is, accessing the original value.
- The operator & is used for referencing, i.e. taking the address of a variable.

Pointer Arithmetic

For normal variables like int/float, any type of mathematical operation can be applied on them. For pointers, only addition and subtraction can be used.

Rule of pointer arithmetic: in pointer arithmetic, all pointers increase/decrease by the length (size) of the data type they point to. This means that if we write:

int* iptr;
iptr += 2;

Here, we want to add two to iptr. However, we will not literally add 2. Writing the above will make iptr point to the element two positions ahead, of iptr's type. That is, literally, we add the size of int twice to the address stored in iptr.

Eg: if iptr held the address 1050, then iptr + 2 becomes:

iptr + (size of two integers), or iptr + 2 * sizeof(int),

which is equal to 1050 + 2(4) = 1058. // the size of int is generally 4 bytes

1058 is the new address we get. By writing *(iptr + 2), we can access the value at 1058. As we keep adding 1 to an integer pointer, the address it holds advances by 4 each time, because it is an integer pointer.

Similarly, by decrementing the pointer value, the same operation is applied, only this time we subtract the size of the data type.

Eg: iptr -= 2; gives 1050 - 2(4) = 1042. 1042 is the new memory address.

Dynamic Allocation and Pointers

As already stated, pointers are used for dynamic allocation.
The operators that facilitate this process are new and delete.

new - The operator new is used for creating objects/variables of any type that are to be allocated at run time.

The general syntax:

pointer_variableName = new int; // or any other data type; the type of the pointer
                                // and the type on the RHS must be the same

Eg:

int* iptr;
iptr = new int;

or

int* iptr = new int;

Both the above statements behave in exactly the same way. The second form is just shorter to write.

delete - This operator is used to delete or deallocate dynamically allocated memory at the end of the program.

Word of Caution: It is very important to deallocate all dynamically allocated memory before the program ends. Failing to do so causes memory leaks. This means that each time the program runs, it will keep on allocating memory until we run out of storage space. Hence, deallocating is important. To deallocate, write:

delete pointer_variableName;

Eg: delete iptr;

Pointers and Arrays

Pointers are quite closely connected to arrays. C++ treats the name of an array as a pointer. That is, if we write:

int arr[10];

arr is basically treated as a pointer. arr stores the address of the first element of the array, that is, arr[0].

Code description:

int *a;
int age[10];
cout << "Enter values for age:";
for (int i = 0; i < 10; i++)
{
    cin >> age[i];
}
a = age;
// check if both *a and *age have the same values or not
cout << "a points to:" << *a << endl;
cout << "age points to:" << *age;

Input: 3 7 4 9 7 10 6 7 2 1

Output:
a points to:3
age points to:3

Both a and age point to age[0]. Thus, age, which is the array name, is essentially a pointer pointing to age[0]. Because age is a pointer, writing cout << age[3]; and writing cout << *(age + 3); mean the same thing. Both produce the output 9. Thus, essentially, all pointer rules apply to the array name.

Applications of Pointers

- Pointers are used for passing function arguments by reference.
Passing by reference means that no copy of the arguments is created within the function, and the actual values are operated upon.

For eg:

#include <iostream>
using namespace std;

void swap(int* x, int* y)
{
    int temp = *x;
    *x = *y;
    *y = temp;
    cout << *x << " " << *y << endl;
}

int main()
{
    int x = 10, y = 20;
    swap(&x, &y);
    cout << x << " " << y << endl;
    return 0;
}

Output:

20 10
20 10

Here, the values of x and y are swapped totally, and not just within the function.

Implementing data structures: Data structures such as linked lists, trees, etc. are implemented using pointers in C and C++.

System level access and programming: Various internal routines are implemented using pointers for speedy access. For eg: the system essentially accesses array elements using pointers.

With this, you will have a complete basic idea of using pointers in C and C++.
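The referencing/dereferencing and pass-by-reference ideas above can even be mimicked outside C++ with Python's standard ctypes module, which exposes C-style pointers. This is only an illustrative aside (the variable names mirror the article's examples), not part of the original tutorial:

```python
import ctypes

# int i = 25; int* iptr = &i;
i = ctypes.c_int(25)
iptr = ctypes.pointer(i)

# *iptr dereferences the pointer
assert iptr.contents.value == 25

# writing through the pointer changes the original variable
iptr.contents.value = 30
assert i.value == 30

# pass-by-reference swap, like the C++ swap(int* x, int* y) example
def swap(px, py):
    px.contents.value, py.contents.value = py.contents.value, px.contents.value

x, y = ctypes.c_int(10), ctypes.c_int(20)
swap(ctypes.pointer(x), ctypes.pointer(y))
assert (x.value, y.value) == (20, 10)
```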
I tried to make 11..10..0 (a binary number with n consecutive ones in the high bits, followed by 32-n zeros in the low digits).

// Can assume that 0 <= n <= 31
int masking(int n) {
    return (~0) << (~n + 33);
}

However, when I put 0 in input n, I expected 0, but I got -1 (0xffffffff). Without using the input, (~0) << (~0 + 33) gives 0. (-1) << 32 also gives 0. I don't know why I got different results.

You might want to consider forcing 64 bit math. According to the "C" standard, the result of shifting a variable with N bits is only defined when the number of shifts is less than the size of the variable (0..N-1).

Performing the shift on (~0) (an integer, usually 32 bit) will result in undefined behavior for ~n+33 with n=0, since ~n+33 = 32, above the limit of 31. Changing the code to use (~0UL) produces the requested result masking(0) = 0.

Assuming that you run on generic Linux - gcc will default to a 32 bit integer, 64 bit long, and 64 bit pointer.

#include <stdio.h>

int masking(int n) {
    return (~0UL) << (~n + 33);
}

void main(void) {
    for (int i = 0; i < 4; i++) {
        printf("M(%d)=%x\n", i, masking(i));
    }
}

Output:

M(0)=0
M(1)=80000000
M(2)=c0000000
M(3)=e0000000
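Because Python integers are arbitrary-precision, the intended 32-bit behavior can be modeled without any undefined-behavior pitfalls, which makes it easy to see what values the C function should produce. A small sketch (the function name mirrors the C code above):

```python
def masking(n):
    # n high ones followed by (32 - n) zeros, with explicit 32-bit masking;
    # valid for 0 <= n <= 31, and n = 0 needs no special case because
    # Python shifts are well-defined for any shift amount
    return (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF

# matches the C program's output for n = 0..3
assert masking(0) == 0x00000000
assert masking(1) == 0x80000000
assert masking(2) == 0xC0000000
assert masking(3) == 0xE0000000
assert masking(31) == 0xFFFFFFFE
```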
const char * strrchr ( const char * str, int character );
      char * strrchr (       char * str, int character );

<cstring>

Locate last occurrence of character in string

Returns a pointer to the last occurrence of character in the C string str. The terminating null-character is considered part of the C string. Therefore, it can also be located in order to retrieve a pointer to the end of a string.

/* strrchr example */
#include <stdio.h>
#include <string.h>

int main ()
{
    char str[] = "This is a sample string";
    char * pch;
    pch = strrchr (str, 's');
    printf ("Last occurrence of 's' found at %d \n", pch - str + 1);
    return 0;
}

Output:

Last occurrence of 's' found at 18
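For comparison, a reverse character search like strrchr exists in most standard libraries. In Python, for instance, str.rfind returns the 0-based index of the last occurrence (the C example above prints a 1-based position):

```python
s = "This is a sample string"

# last occurrence of 's' (0-based index)
idx = s.rfind('s')
assert idx == 17            # the C example prints 18, i.e. 1-based

# rfind returns -1 when the character is absent,
# where strrchr would return a null pointer
assert s.rfind('z') == -1
```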
Design patterns have been one of the main topics in software engineering since the book "Design Patterns: Elements of Reusable Object-Oriented Software" (1995). Since then there have also been other books on these "GoF patterns", like the top-rated "Head First Design Patterns". Most of the patterns are still valid, but some of them have become a bit useless in a practical Microsoft .NET environment. One example is the Iterator pattern (for handling managed collections), because of .NET 2.0 generics.

The book Head First Design Patterns describes the Template Method design pattern as one that defines the skeleton of an algorithm in a method, deferring some steps to subclasses. Define a template function and give it functions as parameters? Well, in Microsoft .NET 3.5 we can, so there is an alternative way to achieve the same thing as the Template Method design pattern.

OK, say we would like to have software for, let's say, making recipes for hot drinks. There are some generic steps like: 1. Boil water, 2. Brew, 3. Pour in cup, 4. Add condiments. So let that be the generic template algorithm. First, what we need is a unit test. A VS2008 unit test for the Template Method design pattern will look like this:

Here the custom hot drinks (like Coffee, Tea, etc.) override the needed special functionality from the template algorithm class (an abstract class called HotDrink). The source code will look like this:

The current class diagram:

So, what's wrong with Template Method? Well, one of the generally accepted fundamental principles of object-oriented design is "favor composition over inheritance". If the HotDrink class changes there will be trouble, as this is not the most dynamic approach. A better design would look something like this:

But this will add more complexity. Could we keep the number of our classes and lines of code low? Let's try another solution, with the Microsoft .NET 3.5 Framework's new functional programming features. We already have the unit test, and it can be exactly the same as in the Template Method design pattern unit test!
Let's see the template algorithm, the class HotDrink:

It will also look much like the classic version. And now an explanation of what "List<Func<string>>" is: it's just a list of functions. :-) That Func<string> means code that we can execute at runtime, and the return type of the function is string. If it also had input parameters, they would come before the result, so the syntax would be Func<T, T2>, where T would be some input type and T2 some return type.

Then those special drinks:

//Making tea recipe
public class Tea
{
    public List<string> MakeRecipe()
    {
        List<Func<string>> CustomMethods = new List<Func<string>>()
        {
            (() => "Steeping the tea"),  // same as delegate() { return "Steeping the tea"; }
            ...
        };
    }
}

The new List<Func<string>>() { ..., ..., ... } initialisation is also a new Microsoft .NET 3.5 enhancement called a collection initializer, just an easier way to fill the list. The syntax (() => "") is a lambda expression and means the same as the anonymous function delegate() { return "..."; }, which is like a normal function, just typed inline. If a method had much more functionality than just a return, then it would be clearer to use a (named) delegate. Anonymous delegates won't be visible in the class diagram.

Let's see if functional programming is the way of the future. The real best practices always come after some time has passed.

2007-11-07: The first version and my first article. :-)
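The article's point - replacing an inheritance-based template method with a list of passed-in functions - is language-agnostic. Here is a minimal sketch of the same idea using first-class functions (Python used for brevity; the step names mirror the article's hot-drink example, and the helper names are my own):

```python
def make_recipe(brew, add_condiments):
    # the fixed "template" steps live in one place;
    # the variable steps are supplied as plain functions
    return ["Boil water", brew(), "Pour in cup", add_condiments()]

# each drink supplies only its custom steps - no subclassing required
tea = make_recipe(lambda: "Steeping the tea",
                  lambda: "Adding lemon")
coffee = make_recipe(lambda: "Dripping coffee through filter",
                     lambda: "Adding sugar and milk")

assert tea == ["Boil water", "Steeping the tea", "Pour in cup", "Adding lemon"]
assert coffee[1] == "Dripping coffee through filter"
```

Compared with the inheritance version, a change to the template touches one function instead of an abstract base class and all of its subclasses, which is exactly the "favor composition over inheritance" argument the article makes.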
Vojtech Pavlik <vojtech@suse.cz> writes:

> Btw, what I don't completely understand is why you need linear
> regression, when you're not trying to detect motion or something like
> that. Basic floating average, or even simpler filtering like the input
> core uses for fuzz could work well enough I believe.

Indeed, this function doesn't make much sense:

+static inline int smooth_history(int x0, int x1, int x2, int x3)
+{
+	return x0 - ( x0 * 3 + x1 - x2 - x3 * 3 ) / 10;
+}

In the X driver, a derivative estimate is computed from the last 4
absolute positions, and in that case the least squares estimate is
given by the factors [.3 .1 -.1 -.3]. However, in this case you want
to compute an absolute position estimate from the last 4 absolute
positions, and in this case the least squares estimate is given by the
factors [.25 .25 .25 .25], ie a floating average. If the function is
changed to this:

+static inline int smooth_history(int x0, int x1, int x2, int x3)
+{
+	return (x0 + x1 + x2 + x3) / 4;
+}

the standard deviation of the noise will be reduced by a factor of 2
compared to the unfiltered values. With the old smooth_history()
function, the noise reduction will only be a factor of 1.29.

--
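The two noise-reduction factors quoted in the mail can be checked directly: for a weighted sum of independent, equal-variance samples, the output standard deviation scales with the square root of the sum of squared weights. A quick verification sketch (the old filter's effective weights come from expanding x0 - (3*x0 + x1 - x2 - 3*x3)/10):

```python
import math

def noise_gain(weights):
    # std-dev scaling factor for a weighted sum of iid samples
    return math.sqrt(sum(w * w for w in weights))

# floating average [.25 .25 .25 .25]: noise halved, i.e. reduction factor 2
avg = [0.25, 0.25, 0.25, 0.25]
assert math.isclose(1 / noise_gain(avg), 2.0)

# old filter: x0 - (3*x0 + x1 - x2 - 3*x3)/10 = .7*x0 - .1*x1 + .1*x2 + .3*x3
old = [0.7, -0.1, 0.1, 0.3]
assert math.isclose(1 / noise_gain(old), 1.29, rel_tol=1e-2)

# both filters are unbiased: the weights sum to 1
assert math.isclose(sum(avg), 1.0) and math.isclose(sum(old), 1.0)
```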
How to Create an App in Django?

Prerequisite – How to Create a Basic Project using MVT in Django?

Django is famous for its unique and fully managed app structure. For every functionality, an app can be created as a completely independent module. This article will take you through how to create a basic app and add functionalities using that app.

For example, if you are creating a blog, separate modules should be created for comments, posts, login/logout, etc. In Django, these modules are known as apps. There is a different app for each task.

Benefits of using Django apps –

- Django apps are reusable, i.e. a Django app can be used with multiple projects.
- We have loose coupling, i.e. almost independent components.
- Multiple developers can work on different components.
- Debugging and code organization are easy. Django has an excellent debugger tool.
- It has in-built features like admin pages, etc., which reduce the effort of building the same from scratch.

Pre-installed apps –

Django provides some pre-installed apps for users. To see the pre-installed apps, navigate to projectName –> projectName –> settings.py. In your settings.py file, you will find INSTALLED_APPS. The apps listed in INSTALLED_APPS are provided by Django for the developer's comfort.

Method-1 - To create a basic app in your Django project, go to the directory containing manage.py and from there enter the command:

python manage.py startapp projectApp

Method-2 - To create a basic app in your Django project, go to the directory containing manage.py and from there enter the command:

django-admin startapp projectApp

Now you can see your directory structure as under:

- To consider the app in your project, you need to specify your app name in the INSTALLED_APPS list as follows in settings.py (Python3).
- So, we have finally created an app, but to render the app using URLs we need to include the app in our main project so that URLs redirected to that app can be rendered. Let us explore it.
Move to projectName -> projectName -> urls.py and add the below import in the header:

from django.urls import include

- Now, in the list of URL patterns, you need to specify the app name for including your app's URLs (Python3).
- Now you can use the default MVT model to create URLs, models, views, etc. in your app, and they will be automatically included in your main project.

The main feature of Django apps is independence; every app functions as an independent unit in supporting the main project. At this point the urls.py in the project file will not yet access the app's URLs. To run your Django web application properly, the following actions must be taken:

1. Create a file in the app's directory called urls.py.
2. Include the URL-pattern code in it (Python3). That code will call or invoke the function which is defined in the views.py file so that it can be seen properly in the web browser. Here it is assumed that views.py contains the corresponding view function (Python3).
3. After adding the above code, go to the settings.py file, which is in the project directory, and change the value of ROOT_URLCONF from 'project.urls' to 'app.urls'.

And then you can run the server (127.0.0.1:8000) and you will get the desired output.
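The code snippets referenced in the steps above (marked "Python3") did not survive in this copy of the tutorial. As a hedged sketch, the standard Django boilerplate they typically contain looks like the following; the app, project, and view names here follow the tutorial's projectApp example and should be treated as assumptions:

```python
# projectName/settings.py - register the app
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    # ... other default apps ...
    'projectApp',            # the app created with startapp
]

# projectApp/views.py - a minimal view (assumed name: index)
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello from projectApp")

# projectApp/urls.py - route the app's root URL to the view
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
]

# projectName/urls.py - include the app's URLs in the project
from django.urls import path, include

urlpatterns = [
    path('', include('projectApp.urls')),
]
```

These are configuration fragments, so they only run inside a Django project created with startproject/startapp, not as a standalone script.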
Common Type Library (CTL)

- CT definitions and FQNs
- CT schema versioning and dependencies
- CT scopes and visibility
- CT management
- Further reading

The Common type library (CTL) is a repository of data type schemas used for all Kaa modules. As more schema types and versions are created, they are recorded in the CTL for future use. A common type (CT) is a CTL unit representing a set of data type schema versions. Using the CTL allows for consistent schema management within a Kaa instance.

CT definitions and FQNs

Schemas used in Kaa are based on the Apache Avro format. Every CT is identified by a fully qualified name (FQN). The FQN is a combination of the namespace and name attributes defined in the root Avro record of a CT schema. Each CT contains a set of CT schemas with the same FQN and different versions. A CT schema ID is the combination of the schema FQN and version. Any CT schema ID is unique across the CT. After a CT schema is created, it becomes unmodifiable and can only be deleted.

CT schema versioning and dependencies

A CT schema version must be explicitly defined in the CT schema, as shown below.

{
    "type":"record",
    "name":"SampleCT",
    "namespace":"org.kaaproject.sample",
    "version":1,
    "dependencies":[
        {
            "fqn":"org.kaaproject.sample.ReferencedCT",
            "version":2
        }
    ],
    "fields":[
        ...
    ]
}

Attempting to load a CT schema with no version, or with an already used version, will result in an error. A CT schema can have dependencies on other CT schemas. CT schema dependencies are defined as an array of CT schema IDs (for example, see the org.kaaproject.sample.ReferencedCT schema in the code snippet above). Deleting a CT schema is only permitted if its ID is not referenced in any other CT schema. Cyclic dependencies are not permitted. Thus, CT schemas are nodes in a directed acyclic graph of dependencies.

CT scopes and visibility

CTs can be defined within these scopes: system, tenant, and application. Scopes impact the visibility of CTs.
For example, a CT defined for application A is not visible for application B. Any FQN is unique within its CT scope. Attempting to create a CT with an FQN that already exists in the same scope will result in an error.

IMPORTANT: You can create a CT with an FQN that already exists in another scope, but this is not recommended. Attempting to do so will result in a warning message. The expected outcomes of an attempt to create a CT with a non-unique FQN are summarized in the following table.

CT management

You can manage CTs using the Server REST API or the Administration UI.

Get the list of CTs

As Kaa administrator, you can get the list of available system CTs using the REST API call or by clicking System CTL on the Administration UI page.

As Tenant administrator, you can get the list of available tenant CTs and system CTs using the REST API call or by clicking Tenant CTL on the Administration UI page. Use the Display higher scopes checkbox to toggle visibility of the system CTs.

As Tenant developer, you can get the list of available tenant CTs and system CTs in the same way as Tenant administrator. In addition, Tenant developer can get the list of available application CTs using the REST API call or by clicking Application CTL on the Administration UI page. Use the Display higher scopes checkbox to toggle visibility of the system and tenant CTs.

View CT details

To view the CT details, use the REST API call, OR open the Administration UI page, select the corresponding CTL, and click on the CT in the list. The Common type details page will open. To view another version of the CT, select it from the Version drop-down list. To create a new version of the CT, click Create new version.

Create a new CT

To create a new CT, use the REST API call, OR open the Administration UI page, select the corresponding CTL, and click the Add new type button. If you want to import a schema file, click Browse, select a .json file containing your schema, click Upload, then click Add.

NOTE: Kaa administrator creates new system CTs.
Tenant administrator creates new tenant CTs. Tenant developer creates new application CTs.

In the Add new type window, fill in all the required fields and click Add.

Delete CT schemas

To delete a schema, use the REST API call, OR open the Common type details page and click Delete.

NOTE: Kaa administrator can delete system CT schemas. Tenant administrator can delete tenant CT schemas. Tenant developer can delete application CT schemas.

Promote CTs

If you want some of your application-scope CTs to be available in the tenant scope, you need to promote them. To do this, use the REST API call or the Promotion feature: open the Administration UI page and unfold the Applications list. In the sub-list of a chosen application, click Application CTL. Click the CT to open the Common type details page and click Promote. The CT (including all its versions) is now available in the tenant scope.

NOTE: You cannot promote a CT from the application scope if there is a CT in the tenant scope with the same FQN, or if the CT in question has dependencies on other CTs in the application scope.

CT schema export

To export a CT schema, use the REST API call, OR on the Common type details page of the CT, click Export and select the exporting option. There are four options for CT schema export:

- shallow: exports the CT schema file.
- deep: exports the CT schema file and a file with all referenced CTs recursively.
- flat: exports the CT schema file and a file with all referenced CTs inline.
- library: exports a .jar archive containing the CT schema and all referenced CTs as compiled Java classes. The Java library provides all the necessary Java structures, including the nested types, in compliance with the CT schema. You can use these Java classes in external applications; for example, you can serialize binary log records generated by the data collection process.
https://kaaproject.github.io/kaa/docs/v0.10.0/Programming-guide/Key-platform-features/Common-Type-Library/
This is the first in a series of posts I plan to do as an introduction to the basics of Appium. In this post I will walk through how to use tools that come with Appium to record the basic steps of a functional test for a sample iOS application and then how to clean that code up and translate it into a JUnit test that can be run as part of an automated test suite. Future posts will demonstrate how to record similar tests with an Android app and then how to leverage Appium in the cloud to run tests across multiple devices and OS versions.

In my mobile application testing training classes we do a demonstration of how to record Selenium tests using the Firefox Selenium IDE. This is a great mechanism for getting a base functional test recorded for a web application or mobile web application. However, it doesn't do much good for native mobile applications. So I wanted to get a very simple example built for doing exactly that.

First, a few pre-requisites:

- The Apple Xcode iOS Simulator is required to create the recording, so a Mac running the latest version of Xcode will be needed.
- This should go without saying, but you will need to have Appium installed locally. Get it here:
- The app I am using for this example is the UICatalog app used in many other Appium examples on the web. However, feel free to use your own app or any other one you have available. It is available here:

Configuring and Starting Appium

In order to record a test we need to configure Appium to run the iOS Simulator and launch the UICatalog app. To do this:

- Launch Appium.
- Click the Apple button to configure the iOS Simulator.
- Edit the properties to use appropriate iOS versions and launch the UICatalog app. Make sure to check "Full Reset". This is necessary to ensure that the app environment is cleaned up prior to running the Simulator. If unchecked, Appium will do this by starting and killing the Simulator once prior to starting it again and launching the app.
This is fine but will cause your tests to take a lot longer to run (since the Simulator is booted twice).

- Launch the Appium server for iOS by selecting the Apple icon and clicking "Launch". If everything launches properly, the Appium window should look something like this.
- Next open the Inspector by clicking the magnifying glass icon in the Appium server. This will start the iOS Simulator, load the UICatalog app, and launch the Inspector.
- The Inspector is the mechanism we will use to determine locators for elements in the UI and to record the test.

The Appium Inspector

Prior to recording a test it is a good idea to get familiar with the Appium Inspector interface. There are three important areas of the Inspector.

On the left side is a series of vertical boxes. These are the navigators that show the path through the current UI to a particular element (very similar in concept to the OS X Finder interface). The right-most vertical box contains the details of the element being selected. This contains the information needed to locate the element. We will use this later when we write our test case code.

On the far right is a window that mirrors the current screen being displayed in the iOS Simulator. Selecting an element here will highlight it and show the navigator path to get to it as well as the details of the element. Two important things to note:

- Clicking on the element does not interact with it (i.e. this does not send a "click" or "tap" to the iOS Simulator), it simply highlights the element and shows its details. How to interact with elements will be explained in more detail below and when we actually record the test.
- In certain cases elements are nested and are not directly selectable. In this case it may be necessary to click on the parent element and use the navigator to get to the desired element.

On the bottom left is a set of buttons used to interact with the elements of the UI.
There are several tabs that perform different interactions:

- Touch — contains controls that allow physical input via the touch interface (tap, swipe, etc.)
- Text — contains controls that allow text to be entered into forms
- Locator — allows for searching the UI to find an element by a particular locator
- Misc — allows interactions with alert windows

This is a good time to take a minute to get familiar with the interface. Note that whenever an element is clicked it will take several seconds for the mirror window to refresh. Be patient and wait for it, and for the navigator windows, to refresh between taps or other interactions. Also note that the window will not automatically refresh if you interact with the iOS Simulator directly. Clicking the "Refresh" button will refresh the Inspector with the current screen displayed on the Simulator.

Recording a Test Case

Now that we have Appium up and running and are familiar with the Inspector interface, it is time to do something interesting and actually record a test. The following steps refer to the figure below.

- To get started, make sure the app is on the main start page and click on the "Record" button. It will turn red to indicate recording is on. Also, another window will drop from the bottom of the Inspector. This is where the code generated by the recorder will appear.
- For those familiar with the Selenium IDE, this recorder is very similar in concept but has one significant difference. Instead of recording into Selenese and then requiring you to export into another language, it records directly into the language of choice. In my case, I chose Java.
- Note also the "Add Boilerplate" check box. When checked this includes boilerplate setup code. For Java, this means that appropriate library imports are included as well as a skeleton class definition, and a default DesiredCapabilities object definition based on the settings we configured earlier. I'm going to leave this checked.
- Now simply use the interaction controls to navigate through the application. For example, select elements in the mirror window or use the navigator to find them and then click the "Tap" button. Code will start to appear in the proper place in the window below the Inspector.
- When finished, click "Save" to save the code to a file.

The generated code can now be used as the basis for an automated WebDriver test in JUnit. As an example, I clicked around through a couple of menus and this was the code that was generated. In the next section we will clean this up so it will actually run.

import io.appium.java_client.AppiumDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import java.net.URL;

public class {scriptName} {

    public static void main(String[] args) {
        DesiredCapabilities capabilities = DesiredCapabilities.iphone();
        capabilities.setCapability("appium-version", "1.5.2");
        capabilities.setCapability("platformName", "iOS");
        capabilities.setCapability("platformVersion", "9.2");
        capabilities.setCapability("deviceName", "iPhone 6");
        wd = new AppiumDriver(new URL(""), capabilities);
        wd.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
        wd.findElement(By.xpath("//UIAApplication[1]/UIAWindow[2]/UIATableView[1]/UIATableCell[1]")).click();
        wd.findElement(By.name("Gray")).click();
        wd.close();
    }
}

Cleaning up the Test

Now that the basic test steps have been recorded into a skeleton Java class we want to do three things:

- Clean up the code so that it will actually run
- Translate the code into a JUnit test
- Add in an assertion so that we have a proper test case

In order to get the code to run, we need to do a few things:

- Add a package statement and any appropriate imports (a good IDE like Eclipse can help out a lot with this)
- Replace {scriptName} with the name of the class. In Java this should be the same as the filename.
- Add a private static field named wd of type AppiumDriver.
- Change the statement that instantiates the driver to instantiate IOSDriver instead of AppiumDriver. AppiumDriver is actually an abstract class and can't be instantiated directly. IOSDriver is a concrete subclass of AppiumDriver appropriate for this iOS test case.
- Surround the driver instantiation with a try/catch block to catch the potential MalformedURLException that could be thrown
- Change the final wd.close() to wd.quit() because wd.close() is not supported by the IOSDriver.

The resulting code should look something like this. To try it out, ensure that the Inspector is not running, nor is the iOS Simulator. Also, it is probably a good idea to just stop and restart the iOS Appium server. Then just run this class.

package com.coveros.training.map.appium;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.By;
import org.openqa.selenium.remote.DesiredCapabilities;
import io.appium.java_client.AppiumDriver;
import io.appium.java_client.ios.IOSDriver;
import io.appium.java_client.ios.IOSElement;

public class CodeMakerTest {

    private static AppiumDriver<IOSElement> wd;

    public static void main(String[] args) {
        DesiredCapabilities capabilities = DesiredCapabilities.iphone();
        capabilities.setCapability("appium-version", "1.5.2");
        capabilities.setCapability("platformName", "iOS");
        capabilities.setCapability("platformVersion", "9.2");
        capabilities.setCapability("deviceName", "iPhone 6");
        try {
            wd = new IOSDriver<>(new URL(""), capabilities);
        } catch (MalformedURLException e) {
            e.printStackTrace();
        }
        wd.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
        wd.findElement(By.xpath("//UIAApplication[1]/UIAWindow[2]/UIATableView[1]/UIATableCell[1]")).click();
        wd.findElement(By.name("Gray")).click();
        wd.quit();
    }
}

The next step is to make this a true JUnit test. In order to do this we want to make sure to follow a good setup-execute-teardown pattern. This means we need to get rid of the static main method and break the test into three parts:

- setup — define desired capabilities, instantiate the driver and connect to the Appium Server
- execute — locate elements, interact with them and assert that the results are what we expect
- teardown — close the app and shut down the iOS Simulator

The biggest thing that is missing here is the assertion. The Inspector does not have a mechanism for recording asserts directly so we will have to write this code ourselves. For my simple example I navigated to the "Pickers" menu and assert that the text displayed for the middle picker button on the bottom of the screen is "UIDatePicker". To find an appropriate identifier, use the Inspector and look at the details for the appropriate element.
The code to click on the picker menu and assert the button text looks like this:

wd.findElement(By.name("Pickers")).click();
// locator found via the Inspector
assertEquals("UIDatePicker", wd.findElement(By.name("UIDatePicker")).getText());

The next step is to write the new setup method. This is essentially the same code that we used previously to set up the DesiredCapabilities object and the AppiumDriver with a few changes:

- Refactored into a class method with the @Before JUnit tag
- Added the "app" capability to explicitly declare the app to be loaded. This will allow us to remove the App Path from the iOS Settings in the Appium server and use the server to run tests against more than one app.

The updated code is as follows:

private AppiumDriver<IOSElement> wd;

@Before
public void setup() {
    File classpathRoot = new File(System.getenv("HOME"));
    File appDir = new File(classpathRoot, "Development/appium");
    File app = new File(appDir, "UICatalog.app");
    DesiredCapabilities capabilities = DesiredCapabilities.iphone();
    capabilities.setCapability("appium-version", "1.5.2");
    capabilities.setCapability("platformName", "iOS");
    capabilities.setCapability("platformVersion", "9.2");
    capabilities.setCapability("deviceName", "iPhone 6");
    capabilities.setCapability("app", app.getAbsolutePath());
    // Don't need a browser for a native app test
    capabilities.setBrowserName("");
    try {
        wd = new IOSDriver<>(new URL(""), capabilities);
        wd.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
        wd.launchApp();
    } catch (MalformedURLException e) {
        e.printStackTrace();
        fail();
    }
}

The final step is to write the test execution method and the teardown method. The code for these is essentially unchanged. It has just been moved into two properly annotated methods as follows:

@Test
public void testSampleIOSApp() {
    wd.findElement(By.name("Pickers")).click();
    assertEquals("UIDatePicker", wd.findElement(By.name("UIDatePicker")).getText());
}

@After
public void tearDown() {
    wd.closeApp();
    wd.quit();
}

Hopefully this helped to get a first test written using Appium. Check back here soon for more information on how to improve your Appium tests and how to move them up into the cloud.
https://www.coveros.com/recording-a-functional-ios-app-test-with-appium/
Part of having a great developer experience is having great documentation. A lot goes into creating good docs - the ideal documentation is concise, helpful, accurate, complete, and delightful. Recently we've been working hard to make the docs better based on your feedback, and we wanted to share some of the improvements we've made.

Inline Examples

When you learn a new library, a new programming language, or a new framework, there's a beautiful moment when you first write a bit of code, try it out, see if it works... and it does work. You created something real. We wanted to put that visceral experience right into our docs. Like this:

import React, { Component } from 'react';
import { AppRegistry, Text, View } from 'react-native';

class ScratchPad extends Component {
  render() {
    return (
      <View style={{flex: 1}}>
        <Text style={{fontSize: 30, flex: 1, textAlign: 'center'}}>
          Isn't this cool?
        </Text>
        <Text style={{fontSize: 100, flex: 1, textAlign: 'center'}}>
          👍
        </Text>
      </View>
    );
  }
}

AppRegistry.registerComponent('ScratchPad', () => ScratchPad);

We think these inline examples, using the react-native-web-player module with help from Devin Abbott, are a great way to learn the basics of React Native, and we have updated our tutorial for new React Native developers to use these wherever possible. Check it out - if you have ever been curious to see what would happen if you modified just one little bit of sample code, this is a really nice way to poke around. Also, if you're building developer tools and you want to show a live React Native sample on your own site, react-native-web-player can make that straightforward. The core simulation engine is provided by Nicolas Gallagher's react-native-web project, which provides a way to display React Native components like Text and View on the web. Check out react-native-web if you're interested in building mobile and web experiences that share a large chunk of the codebase.
Better Guides In some parts of React Native, there are multiple ways to do things, and we've heard feedback that we could provide better guidance. We have a new guide to Navigation that compares the different approaches and advises on what you should use - Navigator, NavigatorIOS, NavigationExperimental. In the medium term, we're working towards improving and consolidating those interfaces. In the short term, we hope that a better guide will make your life easier. We also have a new guide to handling touches that explains some of the basics of making button-like interfaces, and a brief summary of the different ways to handle touch events. Another area we worked on is Flexbox. This includes tutorials on how to handle layout with Flexbox and how to control the size of components. It also includes an unsexy but hopefully-useful list of all the props that control layout in React Native. Getting Started When you start getting a React Native development environment set up on your machine, you do have to do a bunch of installing and configuring things. It's hard to make installation a really fun and exciting experience, but we can at least make it as quick and painless as possible. We built a new Getting Started workflow that lets you select your development operating system and your mobile operating system up front, to provide one concise place with all the setup instructions. We also went through the installation process to make sure everything worked and to make sure that every decision point had a clear recommendation. After testing it out on our innocent coworkers, we're pretty sure this is an improvement. We also worked on the guide to integrating React Native into an existing app. Many of the largest apps that use React Native, like the Facebook app itself, actually build part of the app in React Native, and part of it using regular development tools. We hope this guide makes it easier for more people to build apps this way. 
We Need Your Help Your feedback lets us know what we should prioritize. I know some people will read this blog post and think "Better docs? Pffft. The documentation for X is still garbage!". That's great - we need that energy. The best way to give us feedback depends on the sort of feedback. If you find a mistake in the documentation, like inaccurate descriptions or code that doesn't actually work, file an issue. Tag it with "Documentation", so that it's easier to route it to the right people. If there isn't a specific mistake, but something in the documentation is fundamentally confusing, it's not a great fit for a GitHub issue. Instead, post on Canny about the area of the docs that could use help. This helps us prioritize when we are doing more general work like guide-writing. Thanks for reading this far, and thanks for using React Native!
https://reactnative.dev/blog/2016/07/06/toward-better-documentation
im a NEWBIE!!! to C# and following a book... and the code isnt doing as the book says.. the book is telling me to use the Unity 3d engine. plz keep it as newbie friendly as possible, thank you. the code and the output isnt matching up right... what am i doing wrong...

I recommend posting over in the Unity Forum.

My Code Guru Articles

thx u.. but I thought about that... but unity has its own language and figured if I posted there no one would use C#

Unfortunately, I can't read the text in the picture you've posted so I can't be much help.

well im at work atm, when I get home ill put the block of code in the post

here is my code...

using UnityEngine;
using System.Collections;

public class Leaningscript : MonoBehaviour {

    public int myNumber = 10;

    // Use this for initialization
    void Start () {
        Debug.Log(2 + 9); // no variable, should show 11
        Debug.Log(11 + myNumber); // using a variable, should show 21
    }

    // Update is called once per frame
    void Update () {

    }
}

now the line that should appear as 21 outputs as 20... i found out that if i change the variable in the code itself it doesnt change... but if i change it in the inspector panel it will change... so i guess i should prolly take this to the unity forums site.... thx for ur help tho
http://forums.codeguru.com/showthread.php?544523-alittle-C-help-plz&p=2151755
Originally posted by Bernardus Irmanto:

Hi there. It's a little bit hard to read yer code... so I'll try to formulate my own solution to yer problem (based on yer code). Below is how I will do it:

1. Create a bean (well, you can name it "loveBean" if you want). The bean has the following private members:
- id
- title
- name
- content
- time
- fatherid

2. I will define a method, say getData, which returns an ArrayList. In this method I will query the database, iterate through the resultset, wrap each row of the resultset into a bean object, and put the bean into the arraylist. (Suggestion: separating the code which queries the data from the code which wraps yer data will make yer code easier to maintain.)

public ArrayList getData(...) throws SQLException {
    ...
}

3. I will define a class which extends ActionForm (say, loveForm). In the loveForm, I will define an ArrayList variable.

public class loveForm extends ActionForm {
    ...
    private ArrayList loves;
    ... // other vars

    public void setLoves(ArrayList arr) {
        loves = arr;
    }

    public ArrayList getLoves() {
        return loves;
    }

    public void setLove(int index, loveBean love) {
        loves.add(index, love);
    }

    public loveBean getLove(int index) {
        return (loveBean) loves.get(index);
    }
    ...
}

4. In the "execute" method of the Action class, I will call the getData method, set the ArrayList returned by the method into the loveForm, and put the form in the request object:

myForm.setLoves(getData(...));
obRequest.setAttribute(Constant.LOVE_FORM, myForm);

5. In my jsp I will use the logic iterate to output the data:

<logic:iterate

and then I can output the data using the html:text tag/bean:write tag, e.g. (to output title):

<html:text

welll.. that's how i will do it..

rgds
beN
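The wrap-each-row-into-a-bean pattern from step 2 can be sketched language-neutrally in Python (the LoveBean/get_data names mirror the post's hypothetical loveBean/getData; this is an illustration of the pattern, not Struts code):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LoveBean:
    # Same private members the post lists for the bean.
    id: int
    title: str
    name: str
    content: str
    time: str
    fatherid: int

def get_data(rows: List[Tuple]) -> List[LoveBean]:
    # Iterate the "result set", wrap each row in a bean, collect into a list.
    return [LoveBean(*row) for row in rows]

rows = [(1, "hi", "ben", "some text", "2004-01-01", 0)]
beans = get_data(rows)
print(beans[0].title)  # hi
```

Keeping the query code and the wrapping code in separate functions, as the post suggests, means either one can change without touching the other.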
http://www.coderanch.com/t/47017/Struts/put-arraylist-session
Hi, I have the following C++ code:

#include <string>
#include <iostream>

//maybe std::string
using namespace std;

int main()
{
    size_t r = 134480;
    size_t c = 268960;
    size_t **opt;

    opt = (size_t **)malloc(r * sizeof(size_t *));
    if(opt != NULL)
    {
        opt[0] = (size_t *)malloc(r * c * sizeof(size_t));
        if(opt[0] != NULL)
        {
            for (int i = 1; i < c; i++)
            {
                opt[i] = opt[0] + i * c;
            }
        }
        else
        {
            cout << "line:189:r:" << r << endl;
            cout << "line:190:c:" << c << endl;
            cout << "ERR:diff:185:Memory allocation failed." << endl;
        }
    }
    else
    {
        cout << "ERR:diff:183:Memory allocation failed." << endl;
    }

    ..... free code ....
}

I am compiling it on CentOS using the command: g++ test.cpp -o test

When I run it I get the following error:

line:189:r:134480
line:190:c:268960
ERR:diff:185:Memory allocation failed.

Can someone explain me why my first C++ code is failing?
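A quick back-of-the-envelope check of the sizes being requested makes the failure visible (a sketch in Python; it assumes sizeof(size_t) is 8 bytes, which is what g++ uses on 64-bit CentOS):

```python
# Values taken from the question above.
r = 134480
c = 268960

row_pointers = r * 8   # first malloc: about 1 MB, succeeds
big_block = r * c * 8  # second malloc: one contiguous block

print(big_block)                 # 289357926400
print(round(big_block / 2**30))  # 269 (GiB) in a single allocation
```

In other words, the first malloc asks for roughly 1 MB and succeeds, but the second asks for one contiguous block of about 269 GiB, far beyond what a typical machine can satisfy, so it returns NULL and the inner error branch prints.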
https://www.daniweb.com/programming/software-development/threads/129820/why-is-malloc-failing
Following my previous post on Optimizing Javascript, I thought I'd write a similar post regarding Python optimization. Before going on to the more interesting stuff, there are a few issues that need to be addressed:

0. Basics

Know the basics – especially profiling! Just by looking at the profiling output, you can tell where the computing time goes. To get that information, I like to sort on cumulative time (i.e., time taken by a given function and all functions called from it, over all of its calls).

0.5. Knowing your goal, and your enemy

The kind of optimizations you do, and how far you're willing to go, is dependent on your code's users. If you're writing batch processing software, your required time for running might be a minute, an hour, or a day. So far, I had to optimize various cases of weeks to minutes for batch processing and also seconds to milliseconds for web-application UI. Your timing should apply to typical input, and probably to your biggest probable input as well. Create some simple benchmark you can test your code against. It's important that your benchmark be typical 'complexity-wise', but smaller in size, so that running it and getting profiling results takes no more than a few seconds. You may even want multiple benchmarks, each one for a different size. That way, once you are more sure of yourself, you can run your code against the larger benchmark. If your benchmarks are real inputs – all the better.

1. Python vs. C and similar considerations

In my line of work, I usually do research oriented development. That means that it's harder to know upfront where the bottlenecks will be. As a result, the prevailing attitude is usually "let's write it in Python, and later, when the need arises, convert the critical code to C". So far, I haven't had the chance to do that. Usually what happens is we write the code, it works well enough, and we figure that the flexibility of writing it in Python is more important than the Python to C conversion gains.
Also, Python is not always the bottleneck – sometimes it's a database, or some 3rd party API. Usually "import psyco", and changing the code to allow parallel processing, is cheaper and simpler than the conversion to C.

2. The small time-eater

A common problem is when a relatively trivial function is taking a lot of cumulative time. That's usually a sign you're doing something wrong. I had this issue when I used my symbolic constants for a new project. Consider the following:

def SymbolInt(value, name):
    class _SymbolInt(int):
        def __str__(self):
            return name
        def __repr__(self):
            return 'SymbolInt(%d, "%s")' % (value, name)
        def __eq__(self, other):
            if isinstance(other, str):
                other = other.lower()
            return int(self) == other or name.lower() == other
        def __ne__(self, other):
            return not self == other
    return _SymbolInt(value)

This one is very nice for interactive interfaces. However, in the new project, I found out that __eq__ was taking *a lot* of time. Way more than it should, even when I wasn't comparing SymbolInt-s to strings! It turned out that 'or name.lower() == other' was very bad speed-wise. So for that project, I removed this subcondition, and voila! My code was fast!

3. The algorithm is critical

In many cases I've worked on, the greatest reductions in running time were due to algorithm changes. That means that playing with issues such as variable lookups and so on should come after you're mostly settled on your algorithm. The latest example that I can think of is the set counting problem, where using my solution got me down from two weeks to 20-something minutes on my real input. Later I did some simpler optimizations that chopped off a few more minutes.

4. Avoiding loops

That one is easy. Everyone knows you should avoid loops, especially nested ones. Still, there are some cases where your code just has to have these loops – because that's the essence of what your code is doing.
To make loop avoidance possible, and specifically cartesian product kind of loops, consider refactoring your code to use set intersections and unions. As a simple illustration, instead of this:

for x in a:
    for y in b:
        if x == y:
            yield (x, y)

do this:

return set(a) & set(b)

Sometimes, applying this change doesn't quite fit your algorithm. In that case, try to change your algorithm to accommodate. For example, it might yield less accurate results. In that case, aim for returning more than you need, and then do a second pass to filter the bad ones. The time gains from avoiding the extra loops should still be worth it. (Note: this is similar to doing your computations in your database queries instead of in your code. Similar ideas apply.)

5. Lookups

If you spend time looking for something, use a dict. If that is not feasible, use any other data-structure that fits your problem. For example, let's say you're looking for given strings in a lot of files. You can build a small index beforehand, and instead of looking at the files each time, just look at this index. (Note: this is similar to creating an index on the database column you are searching on.)

6. Memory

When dealing with large inputs, you'll usually want to reduce your memory requirements. Consider an algorithm that requires O(n) memory, for n-sized inputs. All you need is a factor of 4, and 500 megs of input, and your code will choke on many current machines. Also, I've found out that writing your code in such a way as to use drastically less memory will sometimes force me to write it more time-efficiently as well. There are a few techniques to dealing with the memory issue. The central idea is to have as little of your data as you need available at any time.

7. Generators

Generator expressions are usually preferable to list comprehensions. Similarly, consider replacing this kind of function:

def myfunc(some_input):
    ...
    result = []
    for bla in foo:
        ...
        result.append(bar)
    return result

with the following idiom:

def myfunc(some_input):
    ...
    for bla in foo:
        ...
        yield bar

This has the added advantage of simplifying myfunc, as its state is kept for you. On really big inputs and outputs, this one could save you from keeping all of your output in memory. If you are not familiar with generators, I suggest reading David Beazley's presentation on the subject, it's an excellent read, regardless of optimizations.

8. Outputs

If your goal is to generate output, dump it to a file as soon as possible. This is made simple by the previous idiom:

for bar in myfunc():
    # process bar
    ...
    dump(foobar)

Just make sure that dump doesn't keep your data around for too long. For example, I once had to insert a lot of data into a database. After I finished processing each record, I would insert it. The bottleneck was the database. I tried flushing only after several inserts (which meant inserts in chunks of N for various N), until I was introduced to the solution: bulk inserts. Then, my extraction script just dumped to a text file, which was lightning fast, and later I did the bulk insert.

9. Summing up

Do:

sum(x for x in some_generator)

Instead of:

for x in some_list:
    my_sum += x

Kidding! Use your profiler, your head, psyco, and more experienced advice in the best order that suits you. As I've come to learn, getting advice from friends is an excellent way to avoid bashing your head against some mad bugger's O(n^2) wall.

Really nice info, I agree with everything. I've met lazy people (as programmers should be) that skip the profiling step and just guess where they need to optimize, so I'd like to +1 how important it is. Not to mention it's easy:

import profile
profile.run("any_python_expression")

where usually you'd replace any_python_expression with a call to your main function…
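The profiling one-liner in the comment above can be fleshed out with the standard-library cProfile and pstats modules; `work` here is just a stand-in for whatever main function you want to profile:

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for the code you actually want to profile.
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
result = work()
profiler.disable()

# Sort by cumulative time, as recommended in the post, and show the top 5.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
report = buf.getvalue()
print("function calls" in report)  # True
```

The report's "cumulative" column is the per-function total the post suggests sorting on: time spent in a function plus everything it called, summed over all its calls.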
It may be easier to optimise a cooperating program rather than the current one, to gain the performance gain needed for the whole system. - Paddy. Thanks for posting this. I found it to be very practical and helpful. One cheap trick you can also use is to cache a method call in a tight loop. Method lookups can be expensive, so for example in: > for i in xrange(1000): > myobj.compute(i) you can eliminate the lookup: > compute = myobj.compute > for i in xrange(1000): > compute(i) Great post, thanks. I guess the bottom line is that an algorithm should be designed for performance and later use profiling to improve implementation speed. Benny, It might be best to design first for correctness and then address any performance issues. Pingback: Luke Maurits » Blog Archive » Python vs C for performance
http://www.algorithm.co.il/blogs/computer-science/10-python-optimization-tips-and-issues/
# Enumerable: How to yield a business value

This article is a brief explanation of how using a common language keyword might influence the IT-infrastructure budget of a project, help to meet limitations/restrictions of the hosting infrastructure, and, moreover, be a good sign of the quality and maturity of the source code. For the demonstration of the ideas, the article uses the C# language, but most of the ideas can be translated into other languages.

Of all the language's features, from my point of view, 'yield' is the most undervalued keyword. You can read the documentation and find a huge bunch of examples on the Internet. To be short, let's say that 'yield' allows creating 'iterators' implicitly. By design, an iterator should expose an IEnumerable source for 'public' usage. And here the tricky part starts, because we have a lot of implementations of IEnumerable in the language: list, dictionary, hashset, queue, etc. And in my experience, the choice of one of them to satisfy the requirements of some business task is often wrong. Moreover, all of this is aggravated by the fact that whatever implementation is chosen, the program 'just works' — and this is what business really needs, isn't it? Commonly, it works, but only until the service is deployed into a production environment.

For a demonstration of the problem, I suggest choosing a very common business case/flow for most enterprise projects, which we can extend during the article, substituting some parts of this flow to understand the scale of influence of this approach on enterprise projects. It should also help you to find your own case in this set and fix it.

Example of the task:

1. Load, line by line, a set of records from a file or DB into memory.
2. For each column of the record, change the value to some other value.
3. Save the results of the transformation into a file or DB.

Let's assume several cases where this logic may be applicable. At this moment, I see two cases:

1.
It may be a part of the flow of some console ETL application.
2. It may be logic inside an action of a Controller in an MVC application.

If we paraphrase the task in a more technical manner, it may sound like this: "(1) Allocate an amount of memory, (2) load information into memory from persistent storage, (3) modify it and (4) flush the records' changes in memory to the persistent storage."

Here the first phrase in the description, "(1) Allocate an amount of memory", may have a real correlation with your non-functional requirements, because your job/service should 'live' in some hosting environment which may have limitations/restrictions (for instance, 150 MB per micro-service), and to predict the spending on your service in the budget, we should predict the amount of memory which the service will use (commonly we talk about the maximum amount of memory). In other words, we should determine a memory 'footprint' for your service.

Let's consider the memory footprint of a really common implementation which I observe from time to time in different codebases of enterprise projects. You can also try to find it in your projects, for example, 'under the hood' of a 'repository' pattern implementation; just try to find such words: 'ToList', 'ToArray', 'ToReadonlyCollection', etc. All such implementations mean that:

1. For each line/record in the file/DB, memory is allocated to hold the properties of the record from the file/DB (i.e. var user = new User() { FirstName = "Test", LastName = "Test2" }).
2. Next, with the help of, for example, 'ToArray', or manually, the objects' references are held in some collection (i.e. var users = new List<User>(); users.Add(user)).

So, some amount of memory is allocated for each record from the file and, so as not to forget about it, the reference is stored in some collection.
Here is an example:

```
private static IEnumerable<User> LoadUsers2()
{
    var list = new List<User>();
    foreach (var line in File.ReadLines("text.txt"))
    {
        var splittedLine = line.Split(';');
        list.Add(new User() { FirstName = splittedLine[0], LastName = splittedLine[1] });
    }
    return list;

    // or:
    // return File.ReadLines("text.txt")
    //     .Select(line => line.Split(';'))
    //     .Select(splittedLine => new User() { FirstName = splittedLine[0], LastName = splittedLine[1] })
    //     .ToArray();
}
```

Memory profiler results:

![image](https://habrastorage.org/r/w780q1/webt/du/i2/bx/dui2bxcfqb1tvgzfeyvwsj6mfba.jpeg)

I saw exactly this picture in the production environment every time before a container stopped or reloaded due to the hosting's resource limit per container.

So the footprint in this case depends, roughly, on the number of records in the file, because memory is allocated per record. The sum of these small pieces of memory gives us the maximum amount of memory that may be consumed by our service: that sum is the footprint of the service. But is this footprint predictable? Apparently not, because we cannot predict the number of records in the file, and in most cases the file size exceeds the amount of memory allowed by the hosting several times over. That makes such an implementation hard to use in a production environment, so it looks like the moment to rethink it.

The next assumption gives us a better chance of calculating a footprint for the service: "the footprint should depend on the size of only ONE record in the file." Roughly, in this case we can calculate the maximum size of each column of a single record and sum them. It is much easier to predict the size of one record than to predict the number of records in the file.
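As a cross-language illustration of the same point (the article uses C#, but the effect is easiest to demonstrate in a quick Python session), a materialized collection grows with the record count while a lazy iterator object stays constant-size no matter how many records it will produce:

```python
import sys

# eager: one million items resident in memory at once
eager = list(range(1_000_000))

# lazy: a constant-size generator object; items are produced one at a time
lazy = (x for x in range(1_000_000))

print(sys.getsizeof(eager))  # several megabytes of pointers
print(sys.getsizeof(lazy))   # a few hundred bytes, independent of the count
```

The generator's size is the footprint of the iteration machinery, not of the data, which is exactly the property the article relies on.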
**And it is genuinely surprising that we can implement a service that handles an unpredictable number of records while constantly consuming only a couple of megabytes, with the help of a single keyword: 'yield'.**

Time for an example:

```
class Program
{
    static void Main(string[] args)
    {
        // 1. Load a set of records line by line from a file into memory.
        var users = LoadUsers();

        // 2. For each column of the record, change the value to some other value.
        users = ModifyFirstName(users);

        // 3. Save the results of the transformation to a file.
        SaveUsers(users);
    }

    private static IEnumerable<User> LoadUsers()
    {
        foreach (var line in File.ReadLines("text.txt"))
        {
            var splittedLine = line.Split(';');
            yield return new User() { FirstName = splittedLine[0], LastName = splittedLine[1] };
        }
    }

    private static IEnumerable<User> ModifyFirstName(IEnumerable<User> users)
    {
        foreach (var user in users)
        {
            user.FirstName += "_1";
            yield return user;
        }
    }

    private static void SaveUsers(IEnumerable<User> users)
    {
        foreach (var user in users)
        {
            File.AppendAllLines("results.txt", new string[] { user.FirstName + ';' + user.LastName });
        }
    }

    private class User
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}
```

As you can see in the example above, memory is allocated for only one object at a time ('yield return new User()') instead of a collection being created and filled with objects. This is the main point of the optimization, and it lets us calculate a much more predictable memory footprint for the service: we only need to know the size of two fields, in our case FirstName and LastName. Once a modified user has been saved to the file (see File.AppendAllLines), the user instance becomes available for garbage collection, the memory it occupied is freed, and the next instance can be created (on the next iteration of the 'foreach' statement in LoadUsers).
In other words, roughly the same amount of memory is replaced by the same amount of memory on each iteration. That is why we never need more memory than the size of a single record in the file.

Memory profiler results after the optimization:

![image](https://habrastorage.org/r/w780q1/webt/6k/gy/y6/6kgyy6wiwpafktukdwq2lsh_vx0.jpeg)

From another perspective, if we slightly rename a couple of methods in the implementation above, you can recognize the typical logic of a controller in an MVC application:

```
private static void GetUsersAction()
{
    // 1. Load a set of records line by line from a file or DB into memory.
    var users = LoadUsers();

    // 2. Map each record to a DTO.
    var usersDTOs = MapToDTO(users);

    // 3. Return the results.
    OkResult(usersDTOs);
}
```

One important note before the code listing: most of the important libraries, such as EntityFramework, ASP.NET MVC, AutoMapper, Dapper, NHibernate and ADO.NET, expose or consume IEnumerable sources. For the example above this means that LoadUsers may be replaced by an implementation that uses EntityFramework, loading data row by row from a DB table instead of a file; MapToDTO may be replaced by AutoMapper; and OkResult may be replaced by a 'real' implementation of IActionResult in some MVC framework, or by our own implementation based on a network stream, for example:

```
private static void OkResult(IEnumerable<User> users)
{
    // you could use a network stream implementation here
    using (StreamWriter sw = new StreamWriter("result.txt"))
    {
        foreach (var user in users)
        {
            sw.WriteLine(user.FirstName + ';' + user.LastName);
        }
    }
}
```

This 'MVC-like' example shows that we are still able to predict and calculate a memory footprint for a web application as well, but in that case it also depends on the request count.
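The same streaming shape translates directly into other languages. Here is a sketch of the three-step pipeline using Python generators (Python's analogue of C# iterator methods); the file names and the "First;Last" record format are taken from the article's example, and a tiny input file is created up front so the sketch is self-contained:

```python
# create a tiny sample input in the article's "First;Last" format
with open('text.txt', 'w') as f:
    f.write('Ada;Lovelace\nAlan;Turing\n')

def load_users(path):
    # yields one record at a time; no list is ever materialized
    with open(path) as f:
        for line in f:
            first, last = line.rstrip('\n').split(';')
            yield first, last

def modify_first_name(users):
    # transforms records lazily, one at a time
    for first, last in users:
        yield first + '_1', last

def save_users(users, path):
    # drains the pipeline, writing each record as it arrives
    with open(path, 'w') as f:
        for first, last in users:
            f.write(first + ';' + last + '\n')

# only one record is alive at any moment while this runs
save_users(modify_first_name(load_users('text.txt')), 'results.txt')
```

Chaining the generators gives the same constant-footprint behaviour as chaining C# iterator methods: nothing is read until `save_users` starts pulling records through the pipeline.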
For example, a non-functional requirement might read: "the maximum memory for 1000 requests is no more than 200 KB per user object x 1000 requests ~ 200 MB." Such calculations are very useful for performance planning when scaling the web application. For instance, if you need to scale your web application across 100 containers/VMs and must decide how many resources to request from the hosting provider, you can adjust the formula accordingly: 200 KB per user object x 1000 requests x 100 VMs ~ 20 GB. Moreover, this is the maximum amount of memory, and that maximum is now under the control of your project's budget.

I hope the information in this article proves helpful and saves a lot of money and time in your projects.
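The budgeting arithmetic can be checked in a few lines; the 200 KB per user object and the request/VM counts are the article's assumed figures:

```python
kb_per_request = 200   # assumed size of one user object, from the article
requests = 1000
vms = 100

per_vm_mb = kb_per_request * requests / 1024   # per-instance maximum, the "~200 MB" estimate
total_gb = per_vm_mb * vms / 1024              # fleet-wide maximum, the "~20 GB" estimate
print(per_vm_mb, total_gb)
```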
https://habr.com/ru/post/444358/
The docker environment accompanying this guide provides mitmweb to inspect requests sent to the NIS instance running in the container. This section shows how to use it.

We will send requests to our NIS from the Ruby language, using the rest-client gem. We will try to get the block at the current height which, as we have already seen, is obtained by sending a POST request to /block/at/public with a JSON payload giving the height of the block we want to retrieve. According to the documentation of rest-client this is done easily. We will do it in the interactive Ruby interpreter, which can be started with the command irb in the tools container. Once in the interpreter, we type:

```
require 'rest-client'
RestClient.post '', {'height': '243'}
```

This however returns an error message:

```
:2 from /usr/bin/irb:11:in `<main>'
```

This error message is not very helpful, so we might want to take a look at the mitmweb page, available once you start the containers with the ndev script as explained earlier. Opening mitmweb in your browser (use Firefox if you get a blank page in Google Chrome) and selecting the last request in the list, you should see a page similar to this:

The left pane lists the requests that have been intercepted (2 in this case), and the right pane gives a detailed view of the request and response headers. The User-Agent header shows this request was sent by rest-client to the host nemdevnis (the container running our NIS instance) with a content type header of application/x-www-form-urlencoded. Clicking on the Response link of the right tab shows this:

This is already much more informative. It means our request sent its data in an encoding not supported by the server.
We can fix this by setting the content type header to json and ensuring we send a JSON payload:

```
require 'rest-client'
require 'json'
RestClient.post '', {'height': '243'}.to_json, {content_type: :json, accept: :json}
```

This still yields an error in Ruby, which is not much clearer:

```
:10 from /usr/bin/irb:11:in `<main>'
```

It reports an incompatible value for the property height. Looking at the request details, we have:

We see at the bottom that the text passed as value is valid JSON. If we don't see what's wrong here, we can send the same request with a tool that gave a successful response. In our case, this is a request we already sent with httpie. We know that issuing the command

```
http :7890/block/at/public height:=243
```

yields a successful result. Let's just issue that command and look at the request headers. Here are the details of the request sent by httpie:

Did you spot the difference? httpie sent

```
{ "height": 243 }
```

while with rest-client we sent:

```
{ "height": "243" }
```

We sent the value of the height property as a string, whereas it should be an integer. Let's see if this fixes our problem:

```
require 'rest-client'
require 'json'
resp = RestClient.post '', {'height': 243}.to_json, {content_type: :json, accept: :json}
=> <RestClient::Response 200 "{\"timeStamp...">
```

Sure enough! We have now successfully debugged and sent a request from Ruby.

Debugging websockets is not as accessible as debugging HTTP requests. There is no perfect solution; we will debug websocket connections with the Google Chrome console as well as with Wireshark. Neither of these is great for debugging websockets on its own, but combining both tools should help you move forward. In this section we will not open websocket connections ourselves; instead we will observe the traffic on the websocket connections opened by the NanoWallet. You can open the Google Chrome inspector by pressing CTRL-SHIFT-I.
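The root cause generalizes beyond Ruby: JSON distinguishes the number 243 from the string "243", and a server expecting an integer will reject the quoted form. The same two payloads sketched in Python:

```python
import json

wrong = json.dumps({'height': '243'})  # string value: the form the server rejected
right = json.dumps({'height': 243})    # integer value: the form httpie sent

print(wrong)  # {"height": "243"}
print(right)  # {"height": 243}
```

Whatever HTTP client you use, inspecting the serialized body (here via `json.dumps`) is the quickest way to spot this class of type mismatch.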
In the Inspector, select the Network tab (outlined in the screenshot), and choose to display only websockets by clicking on WS (indicated by the red arrow in the screenshot). If you open the inspector on a page that is already loaded, you will have to reload the page as indicated:

When you log in to the NanoWallet with the Inspector open and the filter displaying only websocket connections, you will see a websocket connection established:

By selecting the connection in the list, you can take a closer look at it. By default, the detailed view of the connection opens on the Headers tab, showing all headers of the connection, both request and response:

The websocket connection is a persistent connection over which multiple data frames are exchanged. Luckily, the inspector lets us take a close look at the frames exchanged. Selecting the Frames tab gives you a list of the frames exchanged over the connection. This list is updated automatically as new frames are exchanged:

Let's focus a bit on the frames list after opening a wallet in the NanoWallet client:

Outgoing messages have a light-green background, incoming messages have a white background. WebSocket opcodes are light-yellow and errors are light-red (from the Google Chrome documentation).

We see that we send a CONNECT frame, and we get confirmation that we have successfully connected. After that, the NanoWallet client subscribes to multiple notifications. We see a total of 9 subscriptions (sub-0 to sub-9) for information regarding the account TA6XFSJYZYAIYP7FL7X2RL63647FRMB65YC6CO3G, such as transactions, mosaics and namespaces. There are also global subscriptions for errors and new blocks. Subscription-related information has been highlighted in red in the screenshot.

In addition to the subscriptions, some other frames are sent to request information about the account.
Subscriptions only receive messages when an update is available, but the NanoWallet client needs to retrieve information up front to display the current status of the account. This is done by sending SEND command messages. The information regarding these requests is highlighted in blue in the screenshot. We see that the information requested covers the account details, the transactions of the account, as well as its mosaics and namespaces.

Following that are some frames received in relation to the 9 subscriptions just created. The first MESSAGE received is part of sub-0. Selecting a frame in the list displays its content:

Here, however, the result is far from great. The frame's content is displayed on a single line, which does not ease reading and analysing the frame. Advanced socket information can be gathered from Google Chrome's internal socket details available at chrome://net-internals/#sockets, but this is out of scope for this guide.

The nis container provided with this guide automatically collects traces of the network traffic to and from the NIS instance it runs. The traces are stored in the traces subdirectory of the directory you configured as the location for persistent data (see the variable persistent_location in the file settings.sh, in the same location as the ndev executable used to control the containers). Two traces are captured: nis.pcap for traffic on port 7890 (HTTP requests) and ws.pcap for traffic on port 7778 (websockets). We will use the ws.pcap file to observe websocket traffic.

A great tool for analysing the captured traces is Wireshark, and this is what we will use. It is free software, available for Linux, Mac and Windows. When you start Wireshark, you will not start a live capture but open the existing capture file available in the persistent location you configured.
But as that file is continuously updated, you will get this error:

This is not a problem for our analysis; it just means that we have opened a file in which the last packet was not completely written to disk. All packets in the trace are usable though, and will give us a detailed view of the websocket traffic. When you open the traces file, you get something like this:

Each line is a TCP packet, and aggregated views for supported protocols (like HTTP, websocket, etc.) also get a line. We are only interested in websocket traffic, so we can filter to display only the lines of interest. Enter websocket in the text field at the top of the list and press enter:

This results in the list displaying only websocket protocol information, offering a higher-level view than the raw TCP packets, which we are not interested in for this analysis. As in our analysis with Google Chrome, we see the CONNECT packet sent by our browser:

We know that this frame is sent by our client, so we deduce that the server address is 172.18.0.3 and the client address is 172.18.0.2. Following the CONNECT request, we receive a CONNECTED message confirming that our websocket connection was successfully established:

We also see the subscription requests being sent. The first request is displayed individually, but the other subscription requests are grouped into one entry:

After that come the SEND commands sent by the client:

and the MESSAGEs received from the server:

Here again the display is not perfect, but this description should help you get up to speed debugging your websocket requests.
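As a small aid for the one-line frame display problem noted in both tools: copy a frame body out and pretty-print it. A hedged Python sketch, with a made-up frame body standing in for real NanoWallet traffic:

```python
import json

# hypothetical one-line frame body, as copied from the inspector
frame = '{"account":{"address":"TA6XFSJYZYAIYP7FL7X2RL63647FRMB65YC6CO3G","balance":100}}'

# re-indent the JSON so nested structures become readable
pretty = json.dumps(json.loads(frame), indent=2)
print(pretty)
```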
http://docs.nem.io/en/nem-dev-basics-docker/debug-nis-websockets
Line Detection [closed]

After increasing minLineLength and decreasing maxLineGap, the program is still showing a number of lines, and the gap is also still visible. Can someone tell me the problem in the program?

Image is:

Code is:

```
#include "stdafx.h"
#include <cv.h>
#include <highgui.h>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    src = imread("line2.jpg", 0);
    Canny(src, dst, 50, 200, 3);
    cvtColor(dst, color_dst, CV_GRAY2BGR);

    vector<Vec4i> lines;
    HoughLinesP(dst, lines, 1, CV_PI / 180, 80, 600, 300);
    for (size_t i = 0; i < lines.size(); i++)
    {
        line(color_dst, Point(lines[i][0], lines[i][1]),
             Point(lines[i][2], lines[i][3]), Scalar(0, 0, 255), 3, 8);
    }

    namedWindow("Source", 1);
    imshow("Source", src);
    namedWindow("Detected Lines", 1);
    imshow("Detected Lines", color_dst);
    waitKey(0);
    return 0;
}
```

It is due to the fact that you are using a Canny filter. Of course this will generate double edges if you increase the line thickness. Why not apply dilate and erode operations? First dilate enough to merge them, then erode the same amount, and only a single line will remain.

Yes, it's good to use morphological operations, but the HoughP transform already has parameters to do this. Can you tell me the purpose and use of the last two parameters, to which I have given the values 600 and 300?

Just go for manual erosion and dilation, which give you far more control than the embedded options. That is what I prefer in my software.

Actually, if I use morphological operations, the edges I have to work on will be lost, since they are all of the same thickness!

I think you are using the opposite operation; try the other way around (open <-> close, dilate <-> erode, top hat <-> black hat).

Since all the edges are of the same thickness, it doesn't matter whether I use opening or closing first. I want to use the parameters of the HoughP transform to eliminate the redundant lines.
I do not know more than the docs, but you can test it and verify what happens when you change the parameters. Do this one at a time, so you do not change more than one parameter at once.

@Vivek Goel: that is not correct! If you apply dilate first and then erode, the lines will NOT disappear! Only the thickest line will end up a bit thicker...

Will Canny give edges of different thickness? If the Canny output gives edges of different thickness then this would be the correct solution, but if the detected edges are all of the same thickness then the morphological operations will be much harder to calibrate, and I will have to design a much better and more sophisticated structuring element.

At first all edges are equal, but at a certain moment the two closest edges merge under dilation into a single element. Applying the same amount of erosion will then not remove the others, NOR will it return the merged element to two edges. I do not see the problem; I think you are missing something.
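The dilate-then-erode suggestion from the thread (morphological closing) can be sketched without OpenCV. Below is a minimal NumPy illustration: two 1-pixel Canny-style edges with a 1-pixel gap merge into a single band after closing. The image and kernel size are made up for the demo; in real code you would use cv2.dilate and cv2.erode (or cv2.morphologyEx with MORPH_CLOSE) instead of these naive loops:

```python
import numpy as np

def dilate(img, k=3):
    # naive binary dilation with a k x k square structuring element
    pad = k // 2
    padded = np.pad(img, pad)
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

def erode(img, k=3):
    # naive binary erosion (background-padded)
    pad = k // 2
    padded = np.pad(img, pad)
    h, w = img.shape
    out = np.ones_like(img)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

# two parallel one-pixel edges with a one-pixel gap, like Canny double edges
img = np.zeros((7, 7), dtype=np.uint8)
img[2, 1:6] = 1
img[4, 1:6] = 1

closed = erode(dilate(img))  # closing: the gap between the edges is filled
```

After closing, the pixel in the gap between the two edges is set, so the double edge has become one solid band that Hough would detect as a single line.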
https://answers.opencv.org/question/44760/line-detection/
How to use "characteristic.write(value)"

What I want is: when the LoPy's BLE receives a string, it replies with a string. So I think I need to use "characteristic.read()" and "characteristic.write(value)" in the GATTCCharacteristic class. How do I use them?

@jmarcelino Thank you, it works now. LightBlue gets the reply in its reads.

- jmarcelino: @DongYin There you set the reply value using characteristic.value(b'Your reply'), so the next time your client (the other device) reads, it will get your reply.

@jmarcelino Thank you for the explanation. This is my code:

```
from network import Bluetooth
#from machine import Pin
#TestButton = Pin('P23', mode=Pin.IN, pull=Pin.PULL_UP)

bluetooth = Bluetooth()
bluetooth.set_advertisement(name='LoPy', service_uuid=b'1234567890123456')
srv1 = bluetooth.service(uuid=b'1234567890123456', isprimary=True)
chr1 = srv1.characteristic(uuid=b'2234567890123456', value=6)

def char1_cb(chr):
    print("Write request with value = {}".format(chr.value()))
    if chr.value() == b'\xdd':
        print("reply here")
        # reply here

char1_cb = chr1.callback(trigger=Bluetooth.CHAR_WRITE_EVENT, handler=char1_cb)
```

So, how do I use CHAR_READ_EVENT at the "reply here" point?

- jmarcelino: That is one way to do it, where the LoPy connects to a service and characteristic on the other device. In that case you'd usually set up a NOTIFY callback on the LoPy with:

```
characteristic.callback(trigger=Bluetooth.CHAR_NOTIFY_EVENT, handler=YourFunctionHere)
```

When the other device writes to the characteristic, you'll get a notification that the value has changed, and you can then characteristic.read() it. After that you can characteristic.write(b'Your Reply') if you're reusing the same characteristic.

As I said, this is just one of the ways to do it. Another would be to do the opposite: have the other device connect to the LoPy, accept a write from the other device (via a Bluetooth.CHAR_WRITE_EVENT event callback) and then reply to a read from it (a Bluetooth.CHAR_READ_EVENT event). For that you could use something like the GATTSCharacteristic example.
https://forum.pycom.io/topic/859/how-to-use-characteristic-write-value
Fix a Pod's Service Account That Has Too Many Permissions

- The buffy Pod in the sunnydale namespace has a buffy-sa ServiceAccount with permissions the Pod doesn't need. Modify the attached Role so that it only has the ability to list pods.
- Then, create a new Role called watch-services-secrets that grants permission to watch services and secrets.
- Bind the Role to the ServiceAccount with a RoleBinding called buffy-sa-watch-rb.

Note: You can find skeleton manifests for the new Role and RoleBinding on the CLI server at:

/home/cloud_user/watch-services-secrets.yml
/home/cloud_user/buffy-sa-watch-rb.yml

Fix a Pod Configured to Use an Incorrect Service Account

A Pod in the cluster cannot be created properly because its ServiceAccount is misconfigured.

- The Pod is meant to live in the bespin namespace. Delete the existing ServiceAccount from this namespace.
- Create a new ServiceAccount called lando-sa.
- Configure the Pod to use the lando-sa ServiceAccount.
- Create the Pod.

Note: You can find the Pod manifest at /home/cloud_user/lando.yml on the CLI server.
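As an illustration only (the lab's skeleton files are authoritative), the Role and RoleBinding requested in the first task might look like this. The names come from the task itself; the sunnydale namespace and the apiGroups values are assumptions to be checked against the skeletons:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: watch-services-secrets
  namespace: sunnydale
rules:
- apiGroups: [""]
  resources: ["services", "secrets"]
  verbs: ["watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: buffy-sa-watch-rb
  namespace: sunnydale
subjects:
- kind: ServiceAccount
  name: buffy-sa
  namespace: sunnydale
roleRef:
  kind: Role
  name: watch-services-secrets
  apiGroup: rbac.authorization.k8s.io
```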
https://acloudguru.com/hands-on-labs/certified-kubernetes-security-specialist-cks-practice-exam-part-1
Kirkula (Member)

Content count: 18. Community Reputation: 141, Neutral.

Help me figure out what I'm doing wrong, please! Kirkula replied to Kirkula's topic in For Beginners:

Oh man, what a dope I am! lol, thanks matt!

Experienced programmer, where do I start? Kirkula replied to fligex's topic in For Beginners:

Instead of "playing" video games, try analyzing them. Look at them like a programmer, like you would any other program. Think about how every frame would be coded. For instance, say you're making Asteroids, and think down to just one frame: you have the ship in the middle, the asteroids flying around, the score and lives at the top. There you go, you already have a few objects. Now unpause that, and you have animation. All that is is a continuous loop with changes in every iteration, depending on what happens in the game. You also have the controller, so there's input to worry about, and of course the screen, so you have to draw to that. When you destroy an asteroid, it splits or is destroyed, and your score goes up. Destroy them all and you get to the next level, or a new frame/scene. When you die, your lives are reduced by one, and when you run out of lives, the game loop ends. You can either restart it or close it altogether. So basically, you just have to scrutinize every aspect of a game. Think about it like a programmer, not a gamer, and you should be all set. I'm sure there are plenty of books on the subject as well. Have fun!

CryEngine 3 or Unity 3d? Kirkula replied to Emmet Cooper's topic in For Beginners:

Hah, I like how UDK is free for schools to use, but it's 2500 a year for a business to use it to train safety to its employees... lol

C++, should I switch? Kirkula replied to superman3275's topic in For Beginners:

I don't see why there's a need to "switch" to a language from another one.
From my limited knowledge, it's my understanding that once you've mastered a lower-level language such as C++, you're able to accomplish MUCH more than you ever can with higher-level ones such as C# or Java (which both require a virtual machine). Instead of "switching" you should be "adding". The good thing about Java is that almost EVERYONE has the JVM, so almost everyone will be able to run your program, and you don't have to worry about optimizing for every machine on the market. The bad thing is that, being a higher-level language, it runs slower, and you have less control over what the program can do. The good thing about C# is that, with XNA, you can make games for the Xbox. The bad thing is the craziness you have to deal with when trying to monetize said game; from what I understand, it's pretty rough dealing with them. With Java, you can make games and sell them on countless websites, even Steam. Then again, you can do the same with C#, I would imagine. The moral is: don't SWITCH your language, ADD to your repertoire! Don't stop using C++ while you learn other languages; keep it fresh in your mind and fingers, and when you start to feel like you've brought your knowledge in another language up to par with your C++, either learn more C++ or add another language! From what I understand, every language can do everything, but some languages do some things better than others. If you know all these languages, you'll know what's good for what, and what you should avoid using for such and such!

Help me figure out what I'm doing wrong, please! Kirkula posted a topic in For Beginners:

So I've read a good bit of a couple of books on Java: "Java: The Complete Reference", 8th edition, by Oracle Press, and "Java: How to Program", 8th edition, by Deitel. Mind you, I never finished either book, only got halfway through each, so maybe that's why I can't figure this out.
Or perhaps I'm just being dense after working all day on my first real class and have brain freeze, lol. Anyway, here's the code:

[source lang="java"]
import java.util.Random;

public class Planet {
    private Random rand;
    private double mass;
    private double xPos, yPos;
    private double xVel, yVel;

    /** constructors */
    Planet() {
        setMass();
        setPos();
        setVelocity();
    }

    Planet(double m) {
        mass = m;
        setPos();
        setVelocity();
    }

    Planet(double x, double y) {
        setMass();
        xPos = x;
        yPos = y;
        setVelocity();
    }

    Planet(double x, double y, double m) {
        mass = m;
        xPos = x;
        yPos = y;
        setVelocity();
    }

    Planet(double x, double y, double i, double j, double m) {
        mass = m;
        xPos = x;
        yPos = y;
        xVel = i;
        yVel = j;
    }

    /**
     * Returns the distance from the calling object.
     * @param x the xPos of the calling object
     * @param y the yPos of the calling object
     */
    double distance(double x, double y) {
        double distX = Math.abs(xPos - x);
        double distY = Math.abs(yPos - y);
        return Math.sqrt(distX * distX + distY * distY);
    }

    /**
     * Returns the unit-length x vector pointing from this to the calling object.
     * @param x    the xPos of the calling object
     * @param dist the value returned from distance()
     */
    double dirX(double x, double dist) {
        return (x - xPos) / dist;
    }

    /**
     * Returns the unit-length y vector pointing from this to the calling object.
     * @param y    the yPos of the calling object
     * @param dist the value returned from distance()
     */
    double dirY(double y, double dist) {
        return (y - yPos) / dist;
    }

    /**
     * Calculates the acceleration scalar for the calling object.
     * @param m    the mass of the calling object
     * @param dist the value returned from distance()
     */
    double accelScalar(double m, double dist) {
        return (6.674e-11 * m) / (dist * dist);
    }

    /** Translates this object on the XY plane. Should be called before accelerate. */
    void translate() {
        xPos += xVel;
        yPos += yVel;
    }

    /**
     * Determines delta V for the calling object. Should be called after translate.
     * @param as the value returned from accelScalar()
     * @param dX the value returned from dirX()
     * @param dY the value returned from dirY()
     */
    void accelerate(double as, double dX, double dY) {
        xVel += as * dX;
        yVel += as * dY;
    }

    void setMass() {
        double d = rand.nextDouble() * 10.0;
        int e = (Math.abs(rand.nextInt()) % 6) + 22;
        setMass(d, e);
    }

    void setMass(double d, int e) {
        mass = Math.pow(d, e);
    }

    void setPos() {
        int w = 800;
        int h = 600;
        double x = rand.nextDouble() * w;
        double y = rand.nextDouble() * h;
        setPos(x, y);
    }

    void setPos(double x, double y) {
        xPos = x;
        yPos = y;
    }

    void setVelocity() {
        double x = rand.nextDouble() * 50000.0;
        double y = rand.nextDouble() * 50000.0;
        setVelocity(x, y);
    }

    void setVelocity(double x, double y) {
        xVel = x;
        yVel = y;
    }

    double[] getPos() {
        double[] pos = {xPos, yPos};
        return pos;
    }

    double[] getVelocity() {
        double[] vel = {xVel, yVel};
        return vel;
    }

    double getXPos() { return xPos; }
    double getYpos() { return yPos; }
    double getXVelocity() { return xVel; }
    double getYVelocity() { return yVel; }

    public String toString() {
        return "X: " + xPos + "\tY: " + yPos;
    }
}
[/source]

I'm getting a NullPointerException at line 129, in the setMass() method. It works perfectly fine when I build the object using the full set of constructor arguments. Can anyone tell me where my problem is?

- Wow, I never expected this kind of feedback, nor this much. Thanks everyone for helping me along my path :-D.

Need opinion on game directional control keys. Kirkula replied to wyattbiker's topic in For Beginners:

This type of control is nothing new, really. In many older games like this, you would usually control in one of two ways: 1) the way you were suggesting, with W = accelerate, A = counterclockwise, S = decelerate/reverse, D = clockwise; or 2) W = up, A = left, S = down, D = right. Either way is just as good; it depends on the end user, I suppose (I was always partial to type 1 myself).
Type 1 may take a bit longer to get used to, but it is more impressive, with the idea of changing speeds and the ability to reverse instead of just up, down, left, right. Map rotation would be nice, but I don't think it suits top-down views; it might get people dizzy :-P hehe.

VC# '05 or VC# '08... which one should I learn on? Kirkula replied to Kirkula's topic in For Beginners:

Thanks for the input, guys. I ended up buying "Beginning C# 2008" by Christian Gross; I'll let you know how it turns out for me :-D.

Ok, thanks c0uchm0nster, that was what I was thinking while I was in the shower. (My favorite place to think! Get clean while you get smart!) Off to Borders!

VC# '05 or VC# '08... which one should I learn on? Kirkula posted a topic in For Beginners:

I downloaded both some time ago, and now I'm starting to get more serious about learning. I downloaded '05 because XNA was only available for that one, but I downloaded '08 as well, if only because, hey, it's there. Unfortunately, though, I don't know which one I should spend more time learning. I'm leaning towards '08, because by the time I'm ready for my first games, the full XNA 3 will be available. I appreciate your thoughts on the matter, and if this was already posted by someone else, or answered in a FAQ, I apologize for the redundancy, but I'm on my way to the shower and then off to Borders for a book to start learning. I'll be purchasing whichever one you guys recommend, based on '05 or '08, or I'll just think about which version to go with while I'm there :-D. Thanks for your time.

- Quote: Original post by Tom Sloper: "You're not too old. But you appear to be too insecure. I was older than you when I started, and it never occurred to me to ask if I was 'too old.'"
Although I appreciate the constructive criticism, and the "get your ass off the floor" attitude you were trying to instill in me, I see from glancing at your web page (btw, nice page, I'm going to add it to my favorites and check it out later :-D) that when you started, the video game industry was still a frontier for programmers to jump into and take root in. Unfortunately, with today's high standards and the fact that games are becoming a better investment for entertainment companies than the movie industry (IMHO, anyway), the space for my foot to get in the door is shrinking by the minute, it seems. My question wasn't about insecurity, as you guessed; well, not fully anyway. I asked this because I can't afford to spend the next 4+ years investing my time and money into something that won't give me a career. If the majority had posted that I AM too old (which most certainly wasn't the case, e.g. you, for instance), then I would have either taken a few courses on the subject or done what I usually do and self-taught. Thank you all for your helpful posts, and I'll be posting updates on my progress in the wonderful world of programming :-D. But for now, it's off to Borders! (Yay! Paycheck cleared!)

Mentors? Kirkula replied to natebuckley's topic in For Beginners:

Quote: Original post by ibebrett: "at school i practically live in a certain math professor's office. also, once you get to know them you get all kinds of inside perks."

Hmm... I think this goes back to the dating-ad theme... what school do you go to? I think I may have made up my mind as to which to pick ;-)

- Quote: Original post by Nik02: "If you have a wireless keyboard and a large enough monitor, you can even code from the bed, so sitting in a chair is not actually a pre-requisite for programming."
But anyways, thank you all for the help, especially you, Maveryck, making me realize that I'm not to old to get into the biz (uh...and I appologize for the subtle way for calling you old...and if you didn't catch that till now, then I appologize for appologizing, hah!) seriously though, thanks guys :-D - You hit the nail right on the head, breakin. To be quite honest, though, reading up on XNA is what got me re-interested in programming after 20 some odd years. Mainly the monetary gains you can supposedly achieve through it $-) haha. But I kid (somewhat), and I can't wait till I see my name in the credits of Halo 9!!!! (yes I can, Halo sucks...) - Sounds promising so far, but by the looks of it, you all started learning at a much younger age. I just got out of 'Hello, world!' last week, so by the time I'm entry level, I'll be 35 most likely, juggling school and work. If you know of anyone else that has achieved this feat at this age, or any future poster happens to have started at the same time as me, bring my hopes up over the edge!
https://www.gamedev.net/profile/143843-kirkula/?tab=issues
The prime motivation for me to go through the Qt licensing documentation and install the Qt Creator IDE was to explore the new UI infrastructure introduced in Qt 5 under the umbrella of "Qt Quick". As far as I can tell, this is an entirely different system for creating the user interface of a Qt application, built with modern ideas such as OpenGL graphics acceleration for animation effects, and UI layout declared with a text-based markup language, QML (which probably stands for Qt Markup Language).

Up to this point, my experience with building graphical user interfaces in Qt was with the QWidget-based infrastructure, which has a long lineage in past editions of Qt. Qt Quick is new for Qt 5 and seems to share nothing in common with QWidget other than both being part of Qt 5. Now that I've had a bit of QWidget UI work under my belt, I wanted to see what Qt Quick has to offer. This starts with a smoke test to make sure I could run Qt Quick in the environments I care about: Python and Raspberry Pi.

Step 1: Qt Creator IDE Default Boilerplate

Once the Qt Creator IDE was up and running, I followed the Qt Quick tutorial to create a bare-bones boilerplate Qt Quick application. Even without any changes to the startup boilerplate, it reported error messages complaining of missing modules. Reading the error messages, I looked at the output of apt list qml-module-qtquick* and installed the ones that sounded right. (From memory: qml-module-qtquick2, qml-module-qtquick-controls2, qml-module-qtquick-templates2, and qml-module-qtquick-layouts.) Once the boilerplate successfully launched, I switched languages.

Step 2: PyQt5

The next goal was to get it up and running on Python via PyQt5. The PyQt5 documentation claimed support for QML, but the example on the introductory page doesn't quite line up with the Qt Creator boilerplate code. Looking at the Qt Creator boilerplate main.cpp for reference, I translated the application launch code into main.py.
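The post does not reproduce the main.py in question, so here is a minimal sketch of what such a translation might look like. This is my illustration, not the author's actual file: the QML content and file layout are stand-ins, while the PyQt5 classes used (QGuiApplication, QQmlApplicationEngine) are the standard ones this approach relies on. The PyQt5 imports are deferred into main() so the helper can be inspected on a machine without PyQt5 installed.

```python
# Minimal PyQt5 + Qt Quick smoke test, similar in spirit to the main.py
# described in the post. Illustrative only.
import os
import sys
import tempfile

# A stand-in QML document; the real Qt Creator boilerplate is longer.
QML_SOURCE = """\
import QtQuick 2.7
import QtQuick.Controls 2.0

ApplicationWindow {
    visible: true
    width: 320
    height: 240
    title: "Qt Quick smoke test"
}
"""

def write_qml(directory):
    """Write the QML document to disk and return its path."""
    path = os.path.join(directory, "main.qml")
    with open(path, "w") as f:
        f.write(QML_SOURCE)
    return path

def main():
    # Deferred imports: only needed when actually launching the GUI.
    from PyQt5.QtCore import QUrl
    from PyQt5.QtGui import QGuiApplication
    from PyQt5.QtQml import QQmlApplicationEngine

    app = QGuiApplication(sys.argv)
    engine = QQmlApplicationEngine()
    with tempfile.TemporaryDirectory() as tmp:
        engine.load(QUrl.fromLocalFile(write_qml(tmp)))
        if not engine.rootObjects():
            sys.exit(1)  # QML failed to load, e.g. missing qml-module packages
        sys.exit(app.exec_())

if __name__ == "__main__":
    main()
```

The rootObjects() check is the usual way to detect the "module QtQuick is not installed" class of failure described later in the post: a missing QML module leaves the engine with no root objects rather than raising a Python exception.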
This required sudo apt install python3-pyqt5.qtquick in addition to the python3-pyqt5 I already had. (If there are additional dependencies I forgot about, look for them in the output of apt list python3-pyqt5*.) Once that was done, the application launched successfully on my Ubuntu desktop machine, albeit with a visual appearance very different from the C++ version. That's good enough for now, so I pushed these changes up to GitHub and switched platforms.

Step 3: Raspberry Pi (Ubuntu Mate)

I pulled the project git repository to my Raspberry Pi running Ubuntu Mate and tried to run the project. After installing the required packages, I got stuck. My QML's import QtQuick 2.7 failed with the error: module "QtQuick" version 2.7 is not installed. The obvious implication is that the version of QtQuick in qml-module-qtquick2 was too old, but I couldn't figure out how to verify that the version number is indeed the problem, or whether it's a configuration issue elsewhere in the system. Searching on the web, I found somebody on stackoverflow.com stuck in the same place. As of this writing, no solution had been posted. I wish I was good enough to figure out what's going on and contribute intelligently to the discussion!

I don't have a full grasp of what goes on in the world of repositories run by various Debian-based distributions, but I could see URLs flying by on-screen, and I remembered that Ubuntu Mate pulled from different repositories than Raspbian. I switched to Raspbian to give that a shot.

Step 4: Raspberry Pi (Raspbian Stretch)

After repeating the process on the latest Raspbian, the Qt Quick QML test application launched. Hooray! Whether it was a configuration issue or out-of-date binaries, we don't know yet for sure, but it does run. That's the good news. Now the bad news: it launches with the error:

JIT is disabled for QML. Property bindings and animations will be very slow. Visit to learn about possible solutions for your platform.

And indeed, the transition between "First" and "Second" tabs was slow. Looking at the page that it pointed to, it appears the V4 JavaScript engine used by Qt for QML applications does not have JIT compilation for the Raspberry Pi's ARM chip. That's a shame. For now, this excludes Qt Quick as a candidate for writing modern responsive user interfaces for Raspberry Pi applications. If I want to stick with Qt and Python, I'm better off writing Qt interfaces in the old-school QWidget style. We'll keep an eye on this; maybe they'll add JIT support for Raspberry Pi in the future.

(The source code related to this blog post is publicly available on GitHub.)

2 thoughts on "Qt Quick with PyQt5 on Raspberry Pi"

It matters much more whether you have 3D acceleration enabled on your Raspberry Pi, or you go with software rendering. The Raspberry Pi is one of the best platforms for Qt, including QML, so I think you should look more closely at your performance issues on the platform. Building a proper embedded Qt version often involves something like Yocto; I'm not sure how Stretch works here.

3D acceleration is important for sure, but the biggest problem right now isn't rendering speed. It's V4 JavaScript running interpreted instead of compiled, which slows everything down even when graphics rendering is fast.
https://newscrewdriver.com/2017/10/17/a-brief-and-unsuccessful-try-at-qt-quick/
Toolbox Reference for ISA Server 2006

Microsoft® Internet Security and Acceleration (ISA) Server 2006 includes a Toolbox containing the set of rule elements that you can use when creating ISA Server policies and rules. This document provides background information and descriptions for each Toolbox element. The Toolbox is accessed from the Firewall Policy node of ISA Server Management.

The Toolbox includes these types of rule elements: Protocols, Users, Content Types, Schedules, and Network Objects. ISA Server Network Objects include Networks, Enterprise Networks (Enterprise Edition only), Network Sets, Computers, Address Ranges, Subnets, Computer Sets, URL Sets, Domain Name Sets, Web Listeners, and Server Farms.

In ISA Server 2006 Enterprise Edition, enterprise administrators can create and modify Toolbox rule elements at both the enterprise level (from the Enterprise Policies node in ISA Server Management) and the array level (from the Firewall Policy node in ISA Server Management). Enterprise-level rule elements can be used in both enterprise and array-level policies.

Protocols

ISA Server 2006 includes a variety of preconfigured protocols that you can use when you create access rules and server publishing rules. You can further expand the set of protocols by using ISA Server Management to create your own.

Protocol Categories

In the Toolbox, protocols are categorized in functional groups. These categories were created to help facilitate selection of the appropriate protocol for your specific scenario. Note that some protocols are listed in more than one category. For a complete list of protocols used by Microsoft Windows Server System™ products and subcomponents, see "Service overview and network port requirements for the Windows Server System" at Microsoft Help and Support.

Protocol Properties

Predefined and user-defined protocols consist of a Protocol Type, Direction, Port Range, Protocol Number, ICMP Properties (ICMP protocols only), and Secondary Connections (optional).
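The protocol properties listed above map naturally onto a small data structure. The following is a hypothetical sketch to make the property list concrete; the field names and validation are mine, not ISA Server's:

```python
# Hypothetical model of an ISA Server protocol definition, illustrating
# the properties described above. Names are illustrative only.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SecondaryConnection:
    port_range: Tuple[int, int]
    protocol_type: str          # "TCP" or "UDP"
    direction: str              # e.g. "Inbound", "Outbound"

@dataclass
class ProtocolDefinition:
    name: str
    protocol_type: str          # "TCP", "UDP", "ICMP", or "IP-level"
    direction: str
    port_range: Optional[Tuple[int, int]] = None   # TCP/UDP only, 1-65535
    protocol_number: Optional[int] = None          # IP-level only
    icmp_type: Optional[int] = None                # ICMP only
    icmp_code: Optional[int] = None                # ICMP only
    secondary_connections: List[SecondaryConnection] = field(default_factory=list)

    def __post_init__(self):
        # Per the documentation: secondary connections cannot be defined
        # for IP-level primary protocols.
        if self.protocol_type == "IP-level" and self.secondary_connections:
            raise ValueError("IP-level protocols cannot have secondary connections")
        if self.port_range is not None:
            lo, hi = self.port_range
            if not (1 <= lo <= hi <= 65535):
                raise ValueError("port range must lie within 1-65535")

# Example: an HTTP-style protocol definition (outbound TCP on port 80).
http = ProtocolDefinition("HTTP", "TCP", "Outbound", port_range=(80, 80))
```

The constraint checks in __post_init__ encode the two rules stated in the text: the 1 to 65,535 port range for TCP and UDP, and the prohibition on secondary connections for IP-level protocols.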
Predefined protocols included with ISA Server cannot be modified or deleted.

Protocol Type

Specifies which low-level protocol is used for the protocol definition: TCP, UDP, ICMP, or IP-level.

Direction

Specifies the direction of the protocol. For example, external clients use ISA Server Domain Name System (DNS) services to reach the published DNS server. When you define protocols for server publishing, you are not required to add the suffix. However, you must define the protocol as inbound.

Port Range

For TCP and UDP, this is a range of ports between 1 and 65,535 that is used for the initial connection. More than one protocol can be associated with the same port. If you create a rule denying access to a specific protocol, be sure to include all protocols that use the same port in the exception list. Alternatively, you can create a rule denying any one of the protocols that use the port, and place the deny rule before the access rule in the rules order. For example, if you create a protocol to be used in a rule that denies access to a virus, do not create an access rule that allows access to everything except the new protocol. Instead, create a rule that denies access to the new protocol. Place this rule before any other access rules that allow protocols on the same ports as the new protocol.

Secondary Connections

Secondary connections are an optional property. You cannot define secondary connections for IP-level primary protocols.

Application Filters and Protocols

Web Proxy Filter applies to the Hypertext Transfer Protocol (HTTP). When you disable Web Proxy Filter, Web filters will not apply to traffic that matches this rule. In addition, you can configure a protocol so that an application filter does not apply to the protocol. The following describes the process:

- The client opens a primary connection to a server on the Internet.
- The ISA Server computer notifies the filter about the connection.
- The filter examines the data that is flowing through the primary connection and determines which secondary connection the client is going to use.
- The filter informs the ISA Server computer to allow that particular secondary connection.
- The ISA Server computer opens the specific port, as indicated by the application filter.

RPC Protocols

When you install ISA Server, incoming and outgoing remote procedure call (RPC) protocol definitions are provided.

Incoming RPC Protocols

When you install ISA Server, two default RPC protocol definitions are provided for incoming requests:

- RPC Server (all interfaces). If this protocol definition is allowed in a server publishing rule, ISA Server will map any inbound RPC requests to the published RPC server. If the universally unique identifier (UUID) is registered on the RPC server, access to the procedure is given. If the UUID is not registered on the RPC server, the request is dropped.
- Exchange RPC Server. A list of UUID interfaces used for Microsoft Exchange Server is defined as an RPC protocol definition. You can use this protocol definition in server publishing rules to deny or allow access to specific Exchange functions.

You can create additional RPC protocol definitions. Using the New RPC Protocol Wizard, you can either select UUID interfaces from a list of interfaces available on the RPC server, or you can define the interfaces manually. If you do not specify any interfaces for the incoming RPC protocol definition, server publishing rules that allow this protocol definition do not allow any traffic.

Outgoing RPC Protocols

When you install ISA Server, an outbound RPC protocol is defined for outgoing requests.

Users

ISA Server rules can apply to user sets, which can combine users from different namespaces, for example, a Microsoft Windows® user, a user from a RADIUS namespace, and another user from the SecurID namespace.

Authentication

You can create Web publishing rules, allowing or denying access to a set of computers or to a group of users. If the rule applies specifically to users, ISA Server checks the incoming Web request properties to determine how the user will be authenticated.
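The rule-ordering advice under Port Range above reflects first-match evaluation: the first rule whose criteria match decides the outcome, which is why a deny rule for a protocol sharing a port must come before the allow rule. A simplified sketch of that behavior (an illustration, not ISA Server's actual engine):

```python
# First-match policy evaluation: the first rule that matches the request
# decides the outcome, which is why deny rules for a shared port must be
# ordered before allow rules. Simplified illustration only.

def evaluate(rules, protocol):
    """rules: ordered list of (action, protocol_set); returns 'allow' or 'deny'."""
    for action, protocols in rules:
        if protocol in protocols:
            return action
    return "deny"  # traffic not matched by any rule is denied by default

# A hypothetical virus protocol and HTTP both use TCP port 80. Denying
# the virus protocol first means the later allow rule never sees it.
rules = [
    ("deny", {"virus-over-80"}),
    ("allow", {"HTTP", "virus-over-80"}),  # both protocols share port 80
]
```

Reversing the two rules would allow the virus traffic, because the allow rule would match first; this is exactly the ordering pitfall the documentation warns about.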
For example, a Web publishing rule might allow access only to specific users. ISA Server will authenticate the user requesting the object, to determine if the Web publishing rule allows the requesting user access. The user must authenticate, using one of the authentication methods specified for the incoming Web requests. ISA Server provides a secure, encrypted logon environment for browsers that support Microsoft Windows NT® Challenge/Response authentication, and for other browsers that use Basic authentication. Authentication methods can be set for all IP addresses on the server, or separately for each IP address.

Content Types

Content types apply to Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP) traffic, which passes through ISA Server 2006. When a client requests HTTP content, ISA Server sends the request to the Web server. When the Web server returns the object, ISA Server checks the object's MIME type or its file name extension, depending on the header information returned by the Web server. ISA Server determines if a rule applies to a content type that includes the requested object.

Preconfigured Content Types

ISA Server is preconfigured with a set of content types that can be used in access rules. Preconfigured content types cannot be modified or deleted.

Creating Content Type Sets

In addition to the ISA Server preconfigured content types, you can create your own content type rule element, called a content type set. When you create a content type set, we recommend that you specify the content's MIME type and file name extension. For example, to include all Director files in a content type, select the following file name extensions and MIME types:

- .dir
- .dxr
- .dcr
- application/x-director

When you configure a content type set and specify the MIME type, you can use an asterisk (*) as a wildcard character. For example, to include all application types, type application/*. The asterisk wildcard character can be used only with MIME types (and not with file extensions). The asterisk can be specified only once, at the end of the MIME type after the slash mark (/). For a complete list of Internet Information Services (IIS) default associations, see Appendix A: MIME Types and File Name Extensions.

Link Translation and Content Types

Some published Web sites use specific files and MIME types.

Schedules

When you create rules, you can apply a schedule to the rule to determine when it is in effect. ISA Server 2006 is preconfigured with the following two schedules:

- Weekends. Permits access at all times on Saturday and Sunday.
- Work hours. Permits access during work hours, Monday through Friday.

Network Objects

Network objects are used to categorize IP addresses into different types of network entities, which are used to specify network traffic sources and destinations in the access rules, publishing rules, cache rules, traffic chaining rules, and HTTP compression settings that make up your firewall policy. Note that network rules determine whether there is a relationship between two network entities, and define the type of relationship. Network relationships can be configured for a network address translation (NAT) or route relationship.

The following network objects are created in the Toolbox:

- Networks. A network entity typically corresponds to a physical network. A network always has a network adapter associated with it, and represents one or more IP address ranges that can be reached from the associated network adapter.
- Web Listeners. Web listener objects are used to enable an ISA Server network to listen for Web requests on a specific IP address and port. Web listeners can also be enabled to require client authentication for Web requests.
- Server Farms. The server farms object allows you to publish a farm of Web servers, rather than a single Web server. For more information, see "Web Publishing Concepts in ISA Server 2006" at the Microsoft TechNet Web site.

For details about configuring network objects and network rules, see "Network Concepts in ISA Server 2006" at the Microsoft TechNet Web site.
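The content type wildcard rules described earlier (an asterisk is allowed only in MIME types, at most once, and only at the end after the slash mark) can be sketched as a small matcher. This is an illustration of the documented rules, not ISA Server code, and the strict "only directly after the slash" reading is my interpretation of the application/* example:

```python
# Illustrative matcher for content type set entries, per the rules above:
# '*' is valid only in MIME types (never in file name extensions), at
# most once, and only directly after the '/' (i.e. the 'type/*' form).

def is_valid_entry(entry):
    """Validate a content type set entry (extension or MIME type)."""
    if entry.startswith("."):          # file name extension, e.g. ".dcr"
        return "*" not in entry        # wildcards are not allowed here
    if "/" not in entry:               # MIME types must contain a slash
        return False
    type_part, subtype = entry.split("/", 1)
    if "*" in type_part:               # '*' may appear only after the slash
        return False
    return "*" not in subtype or subtype == "*"

def matches(entry, mime_type):
    """Does a MIME-type entry (possibly 'type/*') match a concrete type?"""
    if entry.endswith("/*"):
        return mime_type.startswith(entry[:-1])  # keep the trailing '/'
    return entry == mime_type
```

With these rules, application/* matches application/x-director but not text/html, and an entry like .d*r is rejected because extensions cannot carry wildcards.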
Enterprise-Level Network Objects

In ISA Server 2006 Enterprise Edition, an enterprise-level network is a network defined for the enterprise, rather than for a specific array. Such a network can be used when defining enterprise-level access rules, or included in the definition of an array-level network. The following network objects can also be created at the enterprise level:

- Enterprise Networks
- Network Sets
- Computers
- Address Ranges
- Subnets
- Computer Sets
- URL Sets
- Domain Name Sets

Networks

Networks describe a range of IP addresses. Networks, however, are different from other network objects, in that they also describe physical boundaries. Within these physical boundaries that the network describes, traffic can flow freely. ISA Server policy is not applied within an ISA Server network. A network must always have a network adapter associated with it, and networks cannot have overlapping IP addresses. When you install ISA Server, the networks described in the following table are created.

Enterprise Networks (Enterprise Edition Only)

When you install ISA Server, the Enterprise networks described in the following table are created at the enterprise level. These networks can be used in enterprise-level and array-level rules.

Network Sets

Network sets are used to define several networks as a single set. This set can be used in firewall policy rules to apply rules to all the networks in the set. ISA Server 2006 is preconfigured with the following network sets:

- All Networks (including Local Host). This predefined network set includes all the currently defined ISA Server networks (user-defined and built-in networks).
- All Protected Networks. This predefined network set includes all currently defined ISA Server networks (user-defined and built-in networks), except for the built-in External network.

There are two types of network sets, Exclude and Include. Exclude network sets are defined by selecting a set of networks excluded from the network set. The network set is actually comprised of all the networks that are not selected. Include network sets are defined by selecting the networks that are included in the network set.

Computers

A computer network object defines a single computer IP address as a network element that can be used in access and policy rules. Note that a computer name cannot be used.

Address Ranges

Subnets

Computer Sets

Computer sets define a collection of computers, IP address ranges, or subnets as a single network object that can be used in access and policy rules. When you install ISA Server, the following computer sets are created:

- Anywhere. A predefined computer set of all IP address ranges.
- IPsec Remote Gateways. A predefined computer set that includes the IP addresses of Internet Protocol security (IPsec) remote VPN gateways that are configured using the Site-to-Site VPN Wizard.
- Remote Management Computers. A predefined computer set that includes computers allowed to manage ISA Server remotely. It should be modified to include IP addresses of all computers that can manage ISA Server remotely. If ISA Server is installed remotely within an active Remote Desktop session, the IP address of the remote computer is added automatically to this computer set.
- Array Servers. (Enterprise Edition only.) A predefined computer set used in a system policy rule that allows traffic between array members. For each array, this computer set includes the IP addresses of array members. Computers are added during installation. If you subsequently change the address of an array member, be sure to update this computer set accordingly.
- Managed ISA Server Computers. (Enterprise Edition only.) A predefined computer set that includes computers allowed to connect to this array's Configuration Storage server. It should be modified to include IP addresses of all computers that will connect to the Configuration Storage server.
When you install ISA Server Enterprise Edition, the following enterprise-level computer sets are created:

- Anywhere. A predefined computer set of all IP address ranges.
- Enterprise Remote Management Computers. A computer set that includes computers allowed to remotely manage all ISA Server computers in the enterprise. The Enterprise Remote Management Computers computer set can also be used when creating array-level rules.
- Replicate Configuration Storage Servers. A predefined computer set that includes all Configuration Storage server computers that are replicated with the local Configuration Storage server.

URL Sets

Uniform Resource Locator (URL) sets specify one or more URLs grouped together to form a set. URL sets can be used in access rules to allow or deny access to specified Web sites. You can create a URL set, and then use it in access rules to allow or deny access to Web sites specified in the set. When ISA Server processes a rule that applies to a URL set, the URL set element of the rule is processed only for Web traffic requests. Protocols include HTTP, HTTPS, or FTP over HTTP. If a client request uses another protocol, ISA Server ignores the URL set when processing the rule. For example, if a rule has both a computer set and a URL set specified as destination criteria, only the computer set will be evaluated in the rule. The URL set will be ignored.

You can specify one or more URLs in URL format. In the host part of the name, you can use an asterisk (*) wildcard character to specify a set of computers. For example, to specify all computers in the Microsoft.com domain, specify *.microsoft.com. In the path part of the name, you can specify an asterisk wildcard character as part of the path, but only at the end. For example, a path ending in * is acceptable; a path containing */sales is not acceptable. You cannot specify a URL set as an IP address.

Processing URL Sets

ISA Server processes rules that apply to URL sets only for Web traffic (for client requests for HTTP or FTP over HTTP). When a client uses any other protocol, ISA Server does not process rules that apply only to a URL set. Note the following behavior in matching requests with rules containing URL sets:

- Only the host name and path are considered in a request.
- The protocol part of the URL is stripped from requests and ignored.
- You can also specify a path. Wildcard characters can be used in the path, but only at the end.
- Although the URL can include a specific port number, ISA Server ignores that port number when processing the rule. Any port number specified is stripped from requests and ignored.
- If a request includes a question mark (?), the question mark and everything following it are stripped from the request before matching.
- When matching, the host and path names are not case-sensitive. For example, this means that folderA and foldera would be considered the same path.
- For HTTP or FTP over HTTP, when the URL is specified in a request without a path, it will match any path. For example, a.com is equivalent to a.com/*.
- For HTTPS traffic, URL sets are processed only if the URL does not have a path specified, for example, a.com. If the URL has a path specified, for example a slash mark (/), it is ignored for HTTPS traffic.
- When ISA Server checks the URL sets configured for a rule, text after a question mark (?) is ignored. URLs with a ?, which are included in a URL set, are ignored.

Possible protocols are HTTP, HTTPS, and FTP. However, when ISA Server processes a rule that applies to a URL set, the protocol specified is ignored; only the host name and path are considered.

Name Resolution (URL Sets)

URL Set Mapping

Some URL set mapping examples are as follows:

- For a URL set that includes the URL, requests for and for will be matched, because protocol and port are stripped.
- For a URL set that includes the URL, requests for will be matched, as will requests for. For example, is the equivalent of *. The exception is for HTTPS requests, which will not be processed because a path is specified.
- For a URL set that includes the URL, requests for will be matched. But requests for will not be matched. In such an entry, requests are not matched to the tree following "a."
- For a URL set that includes the URL, the question mark and everything following will be stripped from requests. If this URL set was specified in a deny rule, and a request arrives for, the request is stripped to. It would be allowed because it does not match the URL set specified in the deny rule. To block such a request, you should specify in the URL set.
- For a URL set that includes the URL a.com, HTTPS requests will be matched, because no path is specified.
- For a URL set that includes the URL b.com/, HTTPS requests will not be matched, because a path (/) is specified.

Domain Name Sets

Domain name sets define one or more domain names as a single set, so that you can apply firewall policy to the specified domains. When you install ISA Server, the following domain name sets are created:

- Microsoft Error Reporting Sites. A predefined domain name set used to allow error reporting.
- System Policy Allowed Sites. A predefined domain name set used to allow access to trusted sites for maintenance and management.
- Enterprise Configuration Storage Servers. (Enterprise Edition only.) A predefined domain name set for the Configuration Storage server used by the ISA Server computer.
- Microsoft Update Domain Name Set. A predefined domain name set of all Microsoft update servers. This domain name set is used in the ISA Server Microsoft Update Cache Rule properties.

Specifying Domain Names

When you apply a rule to a domain name set, ISA Server checks whether the request matches the specified domain name set. ISA Server checks the exact name that you specified, including port numbers.
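The URL set normalization rules described above (protocol, port, and everything after a question mark stripped; case-insensitive matching; a leading host wildcard and a trailing path wildcard) and the domain name set rule (a leading *. covers hosts under a domain, but not the bare domain) can be sketched as follows. This is an illustration of the documented behavior, not ISA Server code:

```python
# Illustrative matchers for URL sets and domain name sets, following the
# normalization rules documented above.
from urllib.parse import urlsplit

def normalize(url):
    """Strip protocol, port, and query; lowercase host and path."""
    parts = urlsplit(url if "://" in url else "//" + url)
    host = (parts.hostname or "").lower()   # hostname drops any port
    path = parts.path.lower()               # query ('?...') is separated out
    return host, path

def url_entry_matches(entry, url):
    e_host, e_path = normalize(entry)
    host, path = normalize(url)
    # Host: a leading '*' matches any prefix, e.g. '*.microsoft.com'.
    if e_host.startswith("*"):
        if not host.endswith(e_host[1:]):
            return False
    elif host != e_host:
        return False
    if e_path == "":           # no path in the entry: matches any path
        return True
    if e_path.endswith("*"):   # trailing wildcard: matches the subtree
        return path.startswith(e_path[:-1])
    return path == e_path      # otherwise: exact path, not the subtree

def domain_entry_matches(entry, host):
    """Domain name set matching: '*.example.com' covers hosts under it,
    but not the bare 'example.com' itself."""
    entry, host = entry.lower(), host.lower()
    if entry.startswith("*."):
        return host.endswith(entry[1:])   # keep the leading dot
    return host == entry                  # exact match otherwise
```

Note the asymmetry the documentation calls out: URL sets strip port numbers before matching, while domain name sets compare the exact name including any port, which is why the next section warns against putting port numbers in domain name sets.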
For example, consider a Web publishing rule that allows access to a domain name set that includes fabrikam.com:1111. Requests to fabrikam.com will be denied. Requests to fabrikam.com will be allowed only if the domain name set is changed to include fabrikam.com. For this reason, do not specify a port number in a domain name set.

When creating a domain name set, note the following:

- When you specify a domain name, specify the computer name using the fully qualified domain name (FQDN). For example, computer_name.microsoft.com, and not \\computer_name. You cannot specify a domain name set as an IP address.
- When specifying the domain name, you can use an asterisk (*) to specify a set of computers. For example, to specify all computers in the Microsoft.com domain, type the domain name as *.microsoft.com. Note that the asterisk can appear only at the start of the domain name, and can be specified only once in the name.
- When you create a domain with a wildcard character, such as *.microsoft.com, this only includes host computers at the domain. Note that if the domain name points to a host, *.microsoft.com will have no effect on the URL.
- We recommend that you enter the domain name as it is returned by the Domain Name System (DNS). If you specify a dot at the end of a domain name, a request for the domain name (without a dot) may not be matched as required.
- When matching rules, the domain name is not case-sensitive.

Name Resolution (Domain Name Sets)

Web Listeners

When you create a Web publishing rule, you specify a Web listener to be used when applying the rule. The Web listener properties determine:

- The type of connections the Web listener will establish with clients, either Secure Sockets Layer (SSL) or HTTP.
- The ISA Server networks, and which IP addresses and ports on the specified networks, will listen for Web requests.
- The SSL server certificate that will be used to authenticate the client connection (if SSL is selected).
- Which authentication method will be used, and when authentication is required.
- The method used by clients to authenticate to ISA Server.
- The method used by ISA Server to validate client credentials.

Web listeners can be used by more than one Web publishing rule.

Web Listener Network (IP Address) Selection

The Web listener network, or networks, that you select depend on which network clients will use to connect to the published Web server. For example, if the Web site you are publishing allows client requests from the Internet (External network), you should select the External network for the Web listener. By selecting the External network, you are selecting the IP addresses on the ISA Server computer that are associated with the external network adapter. If you do not limit the IP addresses, all IP addresses associated with the selected network adapter will be included in the listener configuration.

Web listeners are used by a Web publishing rule. The rule specifies source network objects in addition to specifying a Web listener. The network objects specified for the Web publishing rule must also be specified for the Web listener.

Selecting SSL Server Certificates

Each Web listener can be used for one or more Web sites. However, ISA Server limits one certificate per IP address. If all the Web sites use the same certificate, you can publish using the same Web listener. However, if different certificates are required, you must do one of the following:

- Install a wildcard certificate.
- Add an IP address to the listening network adapter on the ISA Server computer (or array in ISA Server Enterprise Edition) for each SSL-enabled Web site.

For example, suppose you want to publish three SSL Web sites. All three sites are registered in a public DNS, and resolve to the same IP address. You must install a wildcard certificate for *.treyresearch.net to publish these sites. Alternatively, you could add more IP addresses.

In ISA Server 2004, wildcard certificates are supported only on the ISA Server computer. In HTTPS-to-HTTPS bridging, you cannot use wildcard certificates to authenticate the back-end Web server to ISA Server. In ISA Server 2006, wildcard certificates are supported on both the ISA Server computer and the back-end Web server. For more information about wildcard certificates, see "Publishing Multiple Web Sites using a Wildcard Certificate in ISA Server 2006" at the Microsoft TechNet Web site. Most of the procedures in that document are applicable to ISA Server 2006.

Limiting Concurrent Connections

By limiting the number of connections allowed simultaneously to the ISA Server computer, you can prevent attacks that overwhelm the system's resources. This is particularly useful when publishing servers. You can limit the number of computers that connect, while allowing specific clients to continue connecting, even when the limit is surpassed.

Port Specification

By default, ISA Server listens on port 80 for HTTP requests. If, however, connecting clients are expected to use a different port, you should change the port number accordingly. You can also enable the Web listener to listen for SSL requests on the default port 443. If you choose SSL, an appropriate certificate must first be installed on the ISA Server computer. You must select a server certificate to be used by the Web listener so that the ISA Server computer can authenticate itself to the client.

Server Farms

Web applications and sites are often hosted by a Web farm, consisting of two or more mirrored Web servers. A server farm (also referred to as a Web farm) defines an existing load-balanced cluster as an ISA Server server farm that can be used for publishing load-balanced Web applications. When you create a server farm, you specify the computer name or IP address of each server in the farm.
Server Farm Load Balancing Mechanism

In many scenarios, for load balancing to be effective, affinity must be maintained between the client and the Web server that receives and responds to the client's request. Otherwise, a series of requests from the client and responses from the Web farm will be handled by different Web servers, ignoring the context of the requests. For example, Outlook Web Access is an application that requires client affinity, because the Outlook Web Access server maintains a context for the connected client. The need for affinity is also demonstrated in a Web shopping scenario, where a client sets up a shopping cart on a Web server. If affinity is not maintained, at some point in the client's session, the client's requests may be directed to another Web server that is unaware of the shopping cart.

When you apply a rule to a server farm, you can also configure whether the load balancing mechanism should be cookie based or source-IP based. We recommend that you use session affinity when possible, because it provides more reliable client affinity when a Web server is restarted. This is sometimes referred to as client stickiness. It also works better in a situation where you are draining a Web server. Session affinity should be used for load balancing Outlook Web Access or Microsoft Windows SharePoint® Services access, both of which use Microsoft Internet Explorer®, and therefore support HTTP cookies.

IP address-based affinity has an advantage over session affinity in that it supports clients that are not fully compliant with HTTP 1.1 (clients that do not support HTTP cookies), such as some mobile devices. IP address-based affinity must also be used in a scenario where you are load balancing RPC over HTTP Outlook access. Outlook does not work with HTTP cookies, and therefore cannot use session affinity.
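IP address-based affinity, as described above, only requires a deterministic mapping from client IP to farm member, so that repeated requests from one client reach the same server. A minimal hash-based sketch follows; this is an illustration of the concept, not ISA Server's actual algorithm, and the server names are hypothetical:

```python
# Sketch of source-IP affinity and draining for a Web farm. Illustrative
# only; ISA Server's internal mechanism is not documented here.
import hashlib

def pick_server(client_ip, farm):
    """Deterministically map a client IP to one farm member, so repeated
    requests from the same client always reach the same Web server."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return farm[int.from_bytes(digest[:4], "big") % len(farm)]

def drain(farm, server):
    """Draining: stop directing *new* connections to a server. Existing
    connections (not modeled here) would be allowed to finish."""
    return [s for s in farm if s != server]

# Hypothetical farm members.
farm = ["web1.fabrikam.com", "web2.fabrikam.com", "web3.fabrikam.com"]
```

Note that simple modulo hashing remaps many clients whenever the farm membership changes (for example, after a drain), which illustrates why the documentation recommends cookie-based session affinity where clients support it.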
Note that if you are publishing a server farm with ISA Server 2006 located behind another firewall, you must either use session affinity, or use IP address affinity and verify that the front-end firewall is configured to pass the original client's IP address to ISA Server.

Server Farm Connectivity Verifiers

When you create a server farm, ISA Server creates a connectivity verifier for each Web server. Note that if your Web servers use a port other than port 80, specify that port on the server farm properties Connectivity Verification tab. You can view connectivity verifier status for the server farm in the Monitoring node, on the Connectivity Verifiers tab.

Draining a Server Farm

ISA Server provides a Drain option allowing you to specify that a server in the farm should temporarily stop accepting new connections. When you are ready for the server to begin accepting connections again, the Resume option is available. Note that Drain and Resume are only active after you click the Apply button on the Apply Changes bar. When you drain a server, it stops accepting new connections. However, existing connections are not dropped.

The following resources provide additional information when configuring ISA Server network objects:
- "Network Concepts
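The Drain/Resume semantics described above (new connections refused, existing ones kept until they close naturally) can be modeled in a few lines. This is a generic sketch, not ISA Server code; the class and method names are invented for illustration.

```python
# Generic model of drain semantics: a draining farm member refuses new
# connections but keeps the ones already established.
class FarmMember:
    def __init__(self, name: str):
        self.name = name
        self.draining = False
        self.connections = set()

    def drain(self) -> None:
        self.draining = True       # stop accepting new connections

    def resume(self) -> None:
        self.draining = False      # start accepting again

    def accept(self, conn_id: str) -> bool:
        if self.draining:
            return False           # new connection refused while draining
        self.connections.add(conn_id)
        return True

web1 = FarmMember("web1")
web1.accept("c1")   # established before the drain
web1.drain()        # "c1" is NOT dropped; only new connections are refused
```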
https://technet.microsoft.com/en-us/library/bb794907(d=printer).aspx
synaptic touchpad not recognized on dell latitude e6510

Bug Description

It is wrongly recognized as a PS/2 Generic Mouse. As a result, scrolling does not work, but tapping does.

ProblemType: Bug
AplayDevices:
 **** List of PLAYBACK Hardware Devices ****
 card 0: Intel [HDA Intel], device 0: STAC92xx Analog [STAC92xx Analog]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
Architecture: i386
xe9660000 irq 22'
Mixer name : 'IDT 92HD81B1C5'
Components : 'HDA:111d76d5,
Controls : 26
Simple ctrls : 16
Date: Fri Jul 16 13:36:04 2010
DistroRelease: Ubuntu 9.10
HibernationDevice: RESUME=
InstallationMedia: Ubuntu 9.10 "Karmic Koala" - Release i386 (20091028.2)
MachineType: Dell Inc. Latitude E6510
NonfreeKernelMo
Package: linux-image-
PccardctlIdent:
 Socket 0: no product info available
PccardctlStatus:
 Socket 0: no card
ProcCmdLine: BOOT_IMAGE=
ProcEnviron:
 PATH=(custom, user)
 LANG=pl_PL.UTF-8
 SHELL=/bin/bash
ProcVersionSign
RelatedPackageV
 linux-
 linux-firmware 1.26
SourcePackage: linux
Uname: Linux 2.6.31-
dmi.bios.date: 05/28/2010
dmi.bios.vendor: Dell Inc.
dmi.bios.version: A03
dmi.board.name: 0N5KHN
dmi.board.vendor: Dell Inc.
dmi.board.version: A00
dmi.chassis.type: 9
dmi.chassis.vendor: Dell Inc.
dmi.modalias: dmi:bvnDellInc.
dmi.product.name: Latitude E6510
dmi.product.
dmi.sys.vendor: Dell Inc.

Hi Jeremy! I've tried those packages:
http://
http://
http://
and I've installed them successfully through dpkg -i *.deb. After installing, the nvidia drivers recompiled successfully. But after reboot, during booting in text mode the screen blinks several times, and after switching to graphics mode it probably sets the wrong graphics mode, because the whole screen is messed up and the X session does not appear properly... :-( It doesn't hang; I can switch to text mode and back (via Ctrl-Alt-F1 / Ctrl-Alt-F7), but I can't log in to the system. So I couldn't check the touchpad without X.

Hi Øyvind, actually I don't think it is.
The touchpad on the Dell Latitude 6510 is not recognized as such but as a PS/2 Generic Mouse, and I do not have a TouchPad section at all (see screenshot attached).
BR, Steven

I've attached as much information as I could think of. Please tell me if you need more/other information and how to retrieve it.
Thanks, Steven

Is i386 a relevant tag? I'm having the same issue on a 64-bit Ubuntu install.

Mine is 64-bit too.

I have the same problem on a Hercules EC-800 (very cheap 8" notebook). Kernel 2.6.32-24-generic #39-Ubuntu (installed from xubuntu "lucid" 10.04). Resources on this "machine" are low (20 GB hard drive, etc.), so testing is hard.

Same problem here, touchpad is detected as PS/2 Generic Mouse. Biggest symptom is that it moves the cursor all the time when typing, which makes it almost unusable to type on. 10.04 64-bit.

uname -s -r -v -i -o
Linux 2.6.32-24-generic #39-Ubuntu SMP Wed Jul 28 05:14:15 UTC 2010 unknown GNU/Linux

cat /proc/bus/
[...]
I: Bus=0011 Vendor=0002 Product=0001 Version=0000
N: Name="PS/2 Generic Mouse"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse2 event15
B: EV=7
B: KEY=70000 0 0 0 0
B: REL=3

xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ PS/2 Generic Mouse id=13 [slave pointer (2)]
⎜ ↳ Macintosh mouse button emulation=12 [slave keyboard (3)]
↳ Dell WMI hotkeys id=15 [slave keyboard (3)]

Doug, I noticed similar problems on my Latitude e6510 with irritating cursor moves while pressing the keyboard... Ubuntu 9.10 32-bit. For a fix one may be interested in: https:/

Hi Remco, the suggested fix did have the touchpad recognized as such in "System>

~$ uname -s -r -v -i -o -m
Linux 2.6.32-24-generic #41-Ubuntu SMP Thu Aug 19 01:38:40 UTC 2010 x86_64 unknown GNU/Linux
~$ cat /proc/bus/
[...]
I: Bus=0011 Vendor=0002 Product=0008 Version=0000
N: Name="DualPoint Stick"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse2 event14
B: EV=7
B: KEY=70000 0 0 0 0
B: REL=3

I: Bus=0011 Vendor=0002 Product=0008 Version=7326
N: Name="AlpsPS/2 ALPS DualPoint TouchPad"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse3 event15
B: EV=f
B: KEY=420 70000 0 0 0 0
B: REL=3
B: ABS=1000003

~$ xinput --list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ MLK Trust Mouse 16536 id=10 [slave pointer (2)]
⎜ ↳ DualPoint Stick id=12 [slave pointer (2)]
⎜ ↳ Macintosh mouse button emulation id=14 [slave pointer (2)]
⎜ ↳ AlpsPS/2 ALPS DualPoint=15 [slave keyboard (3)]

The same as Steven said... after applying Remco's psmouse patch the touchpad is recognized:

cat /proc/bus/
I: Bus=0011 Vendor=0002 Product=0008 Version=0000
N: Name="DualPoint Stick"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse1 event14
B: EV=7
B: KEY=70000 0 0 0 0 0 0 0 0
B: REL=3
I: Bus=0011 Vendor=0002 Product=0008 Version=7326
N: Name="AlpsPS/2 ALPS DualPoint TouchPad"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse2 event15
B: EV=f
B: KEY=420 0 70000 0 0 0 0 0 0 0 0
B: REL=3
B: ABS=1000003

and
$ uname -a
Linux karolszk-lap 2.6.31-

I've got the same problem with lucid amd64, on a Latitude E6510. Both the default kernel (2.6.32-24.41) and 2.6.36- The link to the fix is currently down, I'll try again later.

Same problem with me too. I have a Latitude E6510 and an E6500; both have identical issues.
uname -a
Linux rgururaj 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686 GNU/Linux
cat /proc/bus/
I: Bus=0011 Vendor=0002 Product=0001 Version=0000
N: Name="PS/2 Generic Mouse"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse1 event13
B: EV=7
B: KEY=70000 0 0 0 0 0 0 0 0
B: REL=3

The touchpad is working on my e6510 except for the scrolling fields. The message "name of the touchpad device not..." shows up in the KDE System Settings.
Kubuntu 10.4. 2.6.32-24-generic #39-Ubuntu

Same problem for me (E6510):
uname -a
Linux chrisj-laptop-linny 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:21:58 UTC 2010 x86_64 GNU/Linux
cat /proc/bus/
I: Bus=0011 Vendor=0002 Product=0001 Version=0000
N: Name="PS/2 Generic Mouse"
P: Phys=isa0060/
S: Sysfs=/
U: Uniq=
H: Handlers=mouse1 event14
B: EV=7
B: KEY=70000 0 0 0 0
B: REL=3

Hi, I actually want to *deactivate* the touchpad. I prefer working with the pointer in the middle of the keyboard. But even though Remco's workaround gets the touchpad recognized, I cannot deactivate it and it still interferes with typing.
Steven
uname -s -r -v -i -o -m
Linux 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:21:58 UTC 2010 x86_64 unknown GNU/Linux

if you want to enable/disable the touchpad

Thanks Jordi. But that also disables the pointer in the middle of the keyboard... Is there a way to identify the touchpad and the keyboard pointer as two different devices and then only disable the touchpad?
Steven
uname -s -r -v -i -o -m
Linux 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:58:24 UTC 2010 x86_64 unknown GNU/Linux

Steven, no, both are physically on the same cable.

Well Jordi, it actually helps me a lot already: when I have an external mouse plugged in, I simply run your script (using bash though ;) ) and voilà! no more typing freakness :D It remains of course when I don't have an external mouse. So: thanks :)
Steven

Steven, I have Win+F7 bound to that script; that way I can easily enable/disable the touchpad.

#550625 addresses a slightly different type of hardware; the touchpad in question there is identified as Product=0005 at /proc/bus/
However, the root cause appears to be described at https:/. The hardware detection used by the DELL patch in alps.c is:
{ { 0x73, 0x02, 0x64 }, 0x00, 0x00, ALPS_EC_PROTO }, /* Dell E2 series multitouch */
ALPS_EC_PROTO denotes a device memory access protocol used by the pads for initialization.
It seems that this issue might affect all DELL E2 notebooks.

Thanks Simon! At least we know someone's working on this bug. My primary issue is that the touchpad does not turn off during typing. Hope to see a solution/workaround soon.

Thanks Jordi for the script. I will use your script as a workaround till a fix is found. Just tested it; it works great.

Jordi, thanks for the script! After small modifications to xinput for Ubuntu 9.10 the script works well. I don't know why xinput has such different syntax between 9.10 and 10.04... I attached a script to enable/disable the touchpad for 9.10.

I can confirm that this problem also appears on Dell E5510 laptops. I agree with Simon Dierl that this is not the same issue as #550625 (I also get Product 0x1), even though some people with this problem have posted in that thread. Many thanks for the sh script!!! It is really useful!!!

I have a Dell E5510 with Ubuntu 10.04 and I can confirm that I have exactly the same issues with my touchpad. Regarding the bug: it is interesting that the touchpad is actually recognised by old kernel versions, see http://
It would be really lovely if this touchpad problem were solved in new kernel versions. Meanwhile, I am using an external mouse and toggleps2.sh with the shortcut (I also changed zsh to bash in the script). Another question - do you know how to make the SD card reader work on the E5510? Many thanks!

Hi, I have an E6410 and Ubuntu 10.10, kernel 2.6.35-24-generic #42-Ubuntu SMP Thu Dec 2 02:41:37 UTC 2010 x86_64 GNU/Linux. So far I have found this: https:/
I assume it is the best option so far. I am totally new to patching. Could anyone tell me if this patch will work in Ubuntu? If yes, how do I apply it (a step-by-step guide is preferred ;-) ?
Cheers, Grzesiek

I've attached a sloppy python program inspired by Jordi's handy script.
It disables the touchpad as soon as it detects keyboard activity; then, a fraction of a second after the end of the keyboard activity, it re-enables the touchpad. I have an E6510 running Ubuntu 10.10 on x64, and I run the script like this:
$ sudo ./keyboarddetec
If you're running on x86, you might need to tweak the line containing "calcsize". It could be cleaned up to figure out automatically which of the event* entries to use.
Graham

Here's a refined version of the script.
- Auto detect keyboard and trackpoint devices.
- Auto detect 32-bit/64-bit processors.
- Don't disable the trackpoint for modifier keys.
The last one is important to allow modifier keys to be used in conjunction with the mouse, e.g. "control select".
Kevin

Responses inline.

On 02/18/2013 05:07 PM, Kevin Cernekee wrote:
> Just so we're on the same page - the "input-next" tree [1] (input.git, branch "next") is Dmitry's staging area for proposed input subsystem changes to send to Linus for the next merge window - currently targeting Linux 3.9. My patches 01-13 are in there now. This includes the code refactoring + Rushmore support, but no Dolphin support.

Good to know; I wondered how the roll-up works.

> This past weekend I submitted three more patches [2] to be applied on top of input-next:
>
> 1) Remove unused argument to alps_enter_
>
> 2) Dolphin V1 support, credited to Dave/florin. This is believed to be in working order, and ready to merge.
>
> 3) Dolphin V2 support, consisting of my new init + detection sequence. This was marked as WIP because the pressure readings at the edge of the touchpad are still "off." I am hoping somebody with access to this hardware can help figure out why. Maybe switching to the V2-native report format (and writing a new decoder for it) would help, since that is what the ALPS drivers seem to prefer.

Great! I dump a lot of email lists, including linux-input, on my server and only periodically check them. I just saw your three-part submission.
The code looks good. I'll post a new comment on the 606238 bug thread about my understanding of your progress - just to avoid confusion.

> "It doesn't make sense to me for ALPS to create different touchpad layout using the same signature; more likely is the laptop exposes an area of the touchpad based on the available real estate."
>
> It appears that the driver can query the Dolphin V2 touchpads for their specs, and adjust the operating parameters accordingly.
>
> Unfortunately I don't know much about how this works, or what range of values we can expect to see in the wild.
>
> As for Rushmore, I can confirm that the touchpad dimensions and trackstick/buttons differ between Dell E6230/E6430, even though the IDs are the same. I haven't actually taken the laptops apart to see how they contrast physically. Doing so might shed light on how ALPS manages product variants.

Good to know, but I'm unclear how to proceed. Back when, I put in a sysfs debug interface that a user could run to capture the physical coordinates of the edges in order to tune the X edge properties. I could take your patches, add the debug and release a new dkms.

> "Update ./Documentation
>
> That's a great idea, and it's something I've been neglecting. I'll update alps.txt tonight and submit to linux-input
>
> One other thing that might be worthwhile is to see how much (if any) of the refactored V3 code can be used for the V4 touchpads. There are many similarities between the two protocols.

I agree there are a lot of similarities, but some differences as well. For example, V3 uses byte 4 for x/y coordinates and byte 3 for buttons; V4 uses byte 3 for x/y coordinates and byte 4 for buttons. This could take a little bit of time to refactor. I'm backed up on several other projects so I need to move on. ...

Just for the record, I'm on a Dell Latitude e6430u running Ubuntu 12.10 (Quantal) and this issue affects me.
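Several of the workaround scripts in this thread toggle the touchpad by passing its numeric id to xinput. The helper below sketches just the lookup step: parsing the id out of `xinput --list` output. The sample listing is trimmed from the outputs posted earlier in the thread, and the actual `xinput --disable <id>` / `xinput --enable <id>` calls are omitted.

```python
# Find a device's numeric id in `xinput --list` output, in the spirit of
# the toggle scripts discussed in this thread. The id can then be passed
# to `xinput --disable <id>` / `xinput --enable <id>`.
import re

def find_device_id(listing: str, name: str):
    """Return the id= number on the first line containing `name`, or None."""
    for line in listing.splitlines():
        if name in line:
            m = re.search(r"id=(\d+)", line)
            if m:
                return int(m.group(1))
    return None

# Sample trimmed from listings posted above.
SAMPLE = """\
Virtual core pointer                id=2  [master pointer (3)]
  Virtual core XTEST pointer        id=4  [slave pointer (2)]
  PS/2 Generic Mouse                id=13 [slave pointer (2)]
"""
```

With the touchpad and the trackpoint exposed as separate devices (as happens after the patch), this kind of lookup is what lets a script disable only one of them.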
Hi, what's the lead time from fixes getting committed until they arrive via the usual package updates? I have an old Dell Inspiron 8200 that I'm trying to set up and the touchpad doesn't work at all; I think it's detected as a PS2 Mouse. I think this will probably fix it, or should I submit a separate bug report with all the info? Thanks for all the work on this.

@matt I noticed that Kevin's patches were accepted for the next 3.7 kernel release. I also noticed that a Fedora maintainer backported Kevin's patches to the next Fedora release. See https:/
I have not seen any activity by Ubuntu maintainers to backport the patches, so I doubt they will be integrated into the Ubuntu train any time soon. You can test it yourself by installing the psmouse-

I can confirm that the patch contained in the psmouse- Also, I would like to note that the alps-1.3 version at http://
Regards.

Hi Dave, thanks also for all of your work. How would, or is it ever likely, that this fix will make it into Quantal? There's got to be a huge number of Ubuntu users out there with Dell laptops that could do with this functionality. Is there anything we can do to petition the Ubuntu maintainers on this? Thanks again.

I also confirmed that, like Miguel, the patch psmouse-

I'm on a Dell Latitude E5530 running Ubuntu 13.04 (Raring Ringtail, Beta 1) and this issue affects me on kernel 3.8, too. "Linux Dell-Latitude-E5530 3.8.0-13-generic #22-Ubuntu SMP Fri Mar 15 17:51:30 UTC 2013 i686 i686 i686 GNU/Linux."
I tried to fix it with #299: I took the PPA from quantal, ran apt-get update and got this error:
Traceback (most recent call last):
 File "/usr/share/
 import apport
ImportError: No module named apport
Error! Bad return status for module build on kernel: 3.8.0-13-generic (i686)
Consult /var/lib/
....
root@Dell-

"ImportError: No module named apport" Try: sudo apt-get install python-apport

It is possible that some of the DKMS packages posted in this thread will need tweaking to build against Linux 3.8.
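The keyboard-activity approach from the "sloppy python program" posted earlier in the thread (disable the touchpad on a keypress, re-enable shortly after typing stops) boils down to a small piece of timing logic. This sketch models only that logic; reading /dev/input and calling xinput are left out, and the class name and idle interval are assumptions, not taken from the attached script.

```python
# Pure-logic model of "disable touchpad while typing": the touchpad is
# considered disabled from a keypress until IDLE_SECS after the last
# keypress. Hardware I/O (evdev reads, xinput calls) is omitted.
IDLE_SECS = 0.3   # illustrative re-enable delay

class TypingGuard:
    def __init__(self):
        self.last_key_time = None

    def on_key(self, t: float) -> None:
        """Record a (non-modifier) keypress at time t."""
        self.last_key_time = t

    def touchpad_enabled(self, t: float) -> bool:
        """True when enough idle time has passed since the last keypress."""
        if self.last_key_time is None:
            return True
        return (t - self.last_key_time) >= IDLE_SECS

guard = TypingGuard()
guard.on_key(10.0)   # user types at t=10.0s
```

Kevin's refinement of skipping modifier keys corresponds to simply not calling `on_key` for them, so Ctrl-click combinations keep working.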
Now the touchpad works! I think it was not the
****** sudo apt-get install python-apport ******
because I got an error. I think the fix was a newer kernel, which I got with the Ubuntu Raring update:
****** Linux Dell-Latitude-E5530 3.8.0-14-generic #24-Ubuntu SMP Fri Mar 22 19:21:28 UTC 2013 i686 i686 i686 GNU/Linux ******
Now I can see the Alps Touchpad and scrolling does work!
wolfgang@ GlidePoint id=13 [slave pointer (2)]

First of all: thanks for the work, guys.
I am on a Dell Latitude XT. Alps hardware: touchpad, 4 buttons, 1 stick.
The touchpad is detected as a PS2 mouse; synaptics is not loaded. The touchpad works (tapping/moving) but is not very sensitive, i.e. there are delays; the buttons and stick work.
The insensitivity/delays made me look into changing the setup. And that made me aware of the synaptics issue. (I never use scrolling so had not noticed anything missing...)
Installed OSes: wheezy 3.2.0-4-amd64 / xubuntu 3.2.0-40-generic / XP.
Downloaded psmouse-alps-1.3 from http:// Unfortunately, there is no support for the Dell Latitude XT. Found an old mail exchange (2009) from Dmitry Torokhov. This gave me a clue as to what to try. Added the following line to alps_model_data[]:
{ { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V2, 0xf8, 0xf8, ALPS_DUALPOINT | ALPS_FW_BK_2 }, /* Dell Latitude XT */
resulting dmesg:
F5 report: 73 00 14
input: DualPoint Stick as /devices/
input: AlpsPS/2 ALPS DualPoint TouchPad as /devices/
Touchpad recognized, tapping works, scrolling works, buttons work. The stick, however, goes haywire, sending random click events when touched. This is in both xubuntu and wheezy. In wheezy I had to enable tapping; in xubuntu tapping worked 'out of the box'.
My questions: What is the status of Latitude XT support in alps.c? Can I help? Where should I put these questions? I am pretty comfortable with Linux and C. I also have Win XP with the latest Dell Alps driver on this laptop (must say, I am not a happy windows hacker but can do ...)
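The alps_model_data[] line added in the comment above maps a three-byte E7/F5 signature to a protocol. The matching step the driver performs can be sketched like this; the table below contains only the two entries quoted in this thread, and the function name is an illustrative stand-in for the C lookup, not the driver's actual code.

```python
# Sketch of the alps.c model match: the three bytes returned by the
# E7/F5 query select a protocol entry. Only the two signatures quoted
# in this thread are included; the real table is much longer.
MODEL_TABLE = {
    (0x73, 0x02, 0x64): "ALPS_EC_PROTO",   # Dell E2 series multitouch
    (0x73, 0x00, 0x14): "ALPS_PROTO_V2",   # Dell Latitude XT (entry added above)
}

def match_protocol(signature):
    """Return the protocol name, or None (driver falls back to bare PS/2)."""
    return MODEL_TABLE.get(tuple(signature))
```

A `None` result is exactly the "PS/2 Generic Mouse" situation reported throughout this bug: an unknown signature leaves the pad in plain 3-byte mouse mode.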
Lee

@libondom-0 My guess is this is another mutation of the ALPS touchpad. It clearly is a new signature, which you added, which indicates new behavior: the trackstick. There have been several significant new behaviors added to the alps driver ("Rushmore" and "Dolphin"). The best I can recommend, not having this touchpad (and it's an n:m mapping between Dell system and Alps touchpad), is to add code to the driver to dump the trackstick changes and then try to reverse engineer what the movement codes actually mean. See the alps_process_

BTW, I experienced similar upheaval in the late 1980s as a customer of a company called Newbridge. Its staff was turning over so quickly that relative newbies were the sole support for some of their hardware and just hacked it up to get it to work, regardless of documentation or compatibility. They released M$ drivers to support the new firmware, but Unix boxes (we were a SUN shop) were left hanging. Newbridge and SUN no longer exist; I think this bodes poorly for ALPS.
Dave

Hi Dave, glad to see you are monitoring. What/where is the best way to communicate about this? The problem at hand: it comes down to reverse engineering then. The mail exchange from Dmitry Torokhov I mentioned was about the Latitude XT, so some (reverse) engineering has been going on. I have read some posts from people doing this with Virtualbox. Must go backtrack a bit here. Do you have any idea if anything of the sort is going on upstream? I do not want to reinvent the (mouse)wheel, but if nothing is happening, I will have a go. Looks a bit like a can of worms to me, but hey, I believe I like to make things work. Would appreciate suggestions/tips as to how to go about it. Like using real or virtual XP: I use kvm/qemu here, and touchpad support would have to be added, I think. And then, how much do I have to know about Win XP? Will keep posting here for now.
Lee

On second thought, do I need XP? The touchpad is already recognized and handled by synaptics.
So all of it can be done in alps.c, no? Reverse engineer the event readings from 'cat /dev/input/mouseN'. On the right track here?
Lee

On third thought:
>> add code to the driver to dump trackstick changes
I am there now. Bit slow in picking up... laters, lee

@libondam-0 Response to comments 348, 349, 350: The easiest, and I mean *easiest*, way is to hack alps.c for the raw input from the touchpad and then use "xinput setprop" to tune the X11 cooked input. For brand-new alps touchpads that don't adhere to any of the known protocols, this is not sufficient and one needs to reverse engineer the Windows driver behavior. Seth Forshee showed us the way and then Ben Garami figured out the new extensions. I seriously doubt this will be the case for you.

Use Virtualbox or Qemu to create a guest OS. I used Vista. Seth showed how to patch the I/O layer to dump the bytes going between the guest driver and the hardware. The catch is that the new alps drivers check the BIOS ACPI DSDT tables to make sure it's an ALPS hardware module; if not, it drops into 3-byte PS2 mode. Therefore the virtual ACPI DSDT table must be updated to use the Hardware ID (HID) for the alps hardware model (taken from the real ACPI DSDT table). If this sounds a little complicated, it is. Make sure you install the Alps driver into the guest OS! In the alps.sh from the 1.3 DLKM, there are some helper routines to get the real DSDT and patch the qemu acpi-dsdt.dsl table for the correct HID.

There is another way to reverse engineer an ALPS touchpad, discovered by Kevin Cernekee, but it's not totally reliable. It worked for him, and cleaned up the E6430 code a good deal. Email Kevin directly for how to do it.
Dave

Installing http:// Download into /usr/src, run ./alps.sh dkms_install_ and then ./alps.sh dkms_build_alps
Dell latitude e5430, cat /proc/bus/
This is 12.10

@Dave response to #351: Sorry, I missed your documentation in the alps-1.3 directory; I had just compiled the psmouse module without any script.
After reading, I'll be a while I guess. Complicated indeed. Will keep you posted. Thanks.
Lee

Greetings - I have received a number of emails about running our dlkms on a 3.5+ kernel. Kevin Cernekee made the required API changes and added it as an attachment to this issue. I have copied his tarball to my public area at: https:/
Dave

Sorry - Kevin did a lot of work on the driver and then uploaded it as an issue attachment. I just uploaded his tarball to http://
No one has reported anything negative about the new driver, and it has been accepted into the linux kernel.
Dave

On 04/19/2013 11:26 AM, Richard Merren wrote:
>:// > /public-download but the touchpad is still not recognized.
>
> The code version from comment #356 fixed my problems mentioned in comment #352. It works with Raring on the 3.8 kernel. I'm very happy because reverting to the supersensitive, nonscrolling touchpad I had before you wrote this driver was pretty unbearable. Thanks again for all of your work on this. When you say it has been accepted into the linux kernel, do you mean to say that at some point this will work out-of-the-box without installing the driver with DKMS?

On a Fujitsu LB AH532, after installing the driver I have both psmouse and the ALPS touchpad active. I have only a touchpad. Vertical scroll works in a very narrow area on the right of the touchpad. Mouse and Touchpad have duplicate settings in the "Mouse and Touchpad" dialog: pointer acceleration and sensitivity. Is that intentional, or is anything wrong with my system?
xinput --list:
⎡ FJ Camera id=9 [slave keyboard (3)]
↳ AT Translated Set 2 keyboard id=10 [slave keyboard (3)]

Hardware info for the previous comment (comment #358) can be found in Bug #1041916 ("Touchpad of Fujitsu LifeBook AH532 not recognized" : Bugs : "xserver-
I have Ubuntu 12.04 with a 3.5.0 kernel and xserver-

I am running Ubuntu 13.04 on a Dell Latitude E6430u, and my touchpad is recognized by default. Two-finger scrolling works. Pinch-to-zoom also works in e.g.
Eye of GNOME (but not in Firefox/Chromium). It's great that the driver has been backported.

Now that Linux 3.9 is making its way into circulation, let's summarize the reported issues to date:

1) No Dolphin V2 support. Still need to borrow hardware to fully understand the report format and make edge scrolling work without excessive pressure. I believe we have a good init sequence.

2) Resync errors:
[1766509.702598] psmouse serio1: DualPoint TouchPad at isa0060/
[1766509.712794] psmouse serio1: DualPoint TouchPad at isa0060/
[1766509.722987] psmouse serio1: DualPoint TouchPad at isa0060/
[1766509.733151] psmouse serio1: DualPoint TouchPad at isa0060/
[1766509.743293] psmouse serio1: DualPoint TouchPad at isa0060/
[1766509.753533] psmouse serio1: DualPoint TouchPad at isa0060/
I see these pop up in dmesg every week or two; they run for maybe a minute or so and then vanish, with no obvious ill effects. Not sure how to reproduce them.

3) Click-and-drag (e.g. selecting text in an xterm) suddenly quits working. I've only seen this happen once. Unloading and reloading psmouse.ko fixed it. This problem mystifies me because when I ran xev, I still saw all of the proper events coming from the input device. So maybe it was caused by something higher in the stack.

4) Tap-to-click is broken on Rushmore[1]. Root cause: when transitioning from Linux 3.8 (touchpad detected as generic PS/2 mouse) to 3.9 (touchpad detected as an ALPS touchpad), tap-to-click in the pointer settings may need to be enabled by hand. If the touchpad is detected as a generic PS/2 mouse, tap-to-click will work regardless of this setting.

5) Pointer jumps all over the screen after suspend/resume on a Rushmore touchpad. Seen once, cannot reproduce.

6) "Noisy" X/Y values on Rushmore[2]. Reporter is investigating whether this shows up on other drivers.
Three possibilities include: i) it's noisy everywhere, even in Windows; ii) the input data is noisy, and the driver needs to clean it up; or iii) the other drivers get "clean" report data but we're using a bad init sequence, so our report data is sketchy.

Any hints on reproducing #2, #3, or #5 would be appreciated.
[1] http://
[2] http://

Hi all, I have a Dell Vostro 3360 with a 'Dolphin V2' touchpad, running linux 3.10-rc1 (latest from git). I have been using the psmouse- I had to make a few changes to build it against the latest kernel tree though, and instead of doing the DKMS thing I just copied alps.c into drivers/
Is it possible to get this V2 support mainlined? The patch works fine for me, but I'd be happy to provide any data or have a go at the mild hacking required to fix other issues (edge scrolling) if that is needed to get the driver upstream. The work so far is great, thanks to everyone involved. Cheers! Chris

On Dells this has been worked on, and at bug 1089413 you can find the fix status for the different Ubuntu versions.

Hi, it has been fixed for 'Dolphin V1' touchpads - the driver is upstream so they are fine. The problem is that 'Dolphin V2' touchpads don't yet work - although a driver has been written (it's in the DKMS module and works great), it hasn't yet been pushed to the mainline kernel. It would be great if 'V2' support was mainlined too. Apologies, I didn't really say this very clearly in my first comment. Cheers!

Also affects: 201205-11042 Dell Precision M6700 (AlpsPS/2 ALPS DualPoint Touchpad). And it could be solved by updating the system.

Affects me: Dell Vostro 3460, following http:// Is a package for this being prepared?

With a Vostro 3360 I have the same problem. http://

Bráulio, which release are you using?

I installed the package on my Fujitsu Lifebook and it is almost unusable - the touchpad fails to detect small touches (i.e.
detects only a flat finger, not the fingertip); the x/y values are way off (y movement is much faster than x, which is very sluggish). Multitouch features work though, and Ubuntu detects the touchpad as such.
Karol Szkudlare.

Thank you, Christopher, for the reminder! I had the same problem (see above). For me it has been solved since Ubuntu 13.10. I now use kernel 3.11.0-15 and the touchpad works fine. In settings I can choose between:
- Scroll with 2 fingers, and
- natural scroll
Both work! For me the bug is fixed!

I'm using the psmouse driver on a Dell Inspiron 17R SE and it makes the touchpad work as an actual touchpad and not just a mouse. However, I'm experiencing a significant amount of backlash. By this, I mean that when reversing the direction of movement, the cursor will remain still for some time before it actually moves again. The behavior is very similar to the mechanical concept of backlash, which is nicely explained in this Wikipedia article: http://
Could this be a flaw in the driver or in some other part of the software system? Is it just a property of my hardware? I have noticed the exact same behavior on a Fujitsu Lifebook with an ALPS touchpad that also works with this driver.
Kalle Elmér,:/

This bug was nominated against a series that is no longer supported, i.e. quantal. The bug task representing the quantal nomination is being closed as Won't Fix. This change has been made by an automated script, maintained by the Ubuntu Kernel Team.

Hi Karol,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/606238
This extension provides a validator class that lets you require an AR class attribute to contain a valid class name. This is useful when you have a class that refers to other models in your system, for example a 'views counter', 'like' functionality, and more.

Usage

- This extension is designed as an external validation class. To use it, insert a snippet like the one demonstrated below into the relevant AR class's 'rules' method:

public function rules() {
    // NOTE: you should only define rules for those attributes that
    // will receive user inputs.
    return array(
        // more rules here...
        array('model_name', 'PcClassExistsValidator'),
        // more rules here...
    );
}

- In order to change its configuration options, pass them as parameters in the array shown above (in rules()).
- You have to make sure that each class whose name is supposed to be the value of the validated property has been 'imported' by Yii before the validation takes place. This typically means updating the 'import' attribute of CWebApplication in the config/main.php config file in advance. I didn't find any slicker solution that will achieve "presence" of the class names in PHP's global namespace prior to running the validation process (if you have an idea, please let me know).
- The validator will output log messages (using Yii::log) in case of errors, for easy debugging (if you're using the log facility).

Configuration options:

The following are the class public attributes via which the configuration is passed, along with their default values:

/* @var bool $allowEmpty Whether the attribute is allowed to be empty. */
public $allowEmpty = false;

/* @var string $emptyMessage The message to be displayed if an empty value is validated while 'allowEmpty' is false. */
public $emptyMessage = "{attribute} cannot be blank";

public $errorMessage = " is not a valid class name. Make sure to configure config/main.php 'import' section to load all relevant classes.";

Resources

Change Log

- v1.0 (23 Dec 2012): Initial version release.

If you have any questions, please ask in the forum instead.
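For readers who want the gist of the check without the Yii context: the validator's core logic is "the attribute value must name a class that is already loaded, otherwise fail with a message". The sketch below is a language-neutral illustration in Python, not part of the extension (which is PHP/Yii); the function name and the namespace argument are assumptions, with the namespace playing the role of Yii's 'import' map.

```python
# Illustrative-only sketch of PcClassExistsValidator's logic.
# `namespace` stands in for the set of classes already loaded, just as
# the extension requires classes to be imported via config/main.php.
def validate_class_name(value, namespace, allow_empty=False):
    """Return (ok, message), mirroring allowEmpty/emptyMessage/errorMessage."""
    if not value:
        # empty value: allowed only when allow_empty is set
        return (allow_empty, "" if allow_empty else "attribute cannot be blank")
    candidate = namespace.get(value)
    if isinstance(candidate, type):
        return (True, "")
    return (False, f"{value} is not a valid class name.")

class PostModel:   # hypothetical stand-in for an application model class
    pass

LOADED = {"PostModel": PostModel}
```

Note how a class that exists but was never loaded into `LOADED` still fails, which is exactly why the extension insists on configuring the 'import' section in advance.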
https://www.yiiframework.com/extension/pcclassexistsvalidator
This Tutorial

I don't know why in this tutorial we extend the JTextField class. I made this program without extending the class or adding an ActionListener to the text fields text1 and text2, and the program ran very well. Hala, Java Programmer

Lovely Tutorials

Hi, I am new to programming and I wish to be a Java programmer. I found this site effective and helpful; especially the way in which each topic is explained is really amazing. I love these tutorials. Also if someone could tell me where shou

Java swing

Write a Java Swing program to calculate the age from a given date of birth.

java swing

Hi, how is setBounds used in Java programs? The values in setBounds refer to what? For example, in setBounds(30, 30, 30, 30), what do the four 30's refer to?
http://roseindia.net/tutorialhelp/allcomments/155547
Blogs

More readable code with less typing

How do you get back into your code after a week of doing something else?

- Start reading and put comments on what you don't understand.
- Add the categories you left out last week.

Both tasks involve a lot of typing, but that can be alleviated by using features of your favorite text editor, vim in my case.

GtkLauncher is dead, long live VisualGST

Hi, I am working on the debugger for VisualGST: now you can step, step into, run and inspect the stack. Cheers

reboot required - a short tale

So here I am, happily hacking along at my first Iliad web application, when I get all fancy and want to set it up on a public server. Heh ... a bit of fiddling required, since neither "my" Iliad nor "my" gst are out of the box anymore, but applying two patches is not too hard work. Now I have everything ready: the patched gst starts the REPL, gst-package builds OnlineTester.star, gst-load -viI ot.im OnlineTester ... throws an error, while starting Swazoo. Ah, yes, I think quickly, that port is already in use here. So I change, rinse and repeat ... and the error is still there. A FileError? When opening a socket. What?

DRY package description ... in Smalltalk

So here we go again ... this time using native Smalltalk code to describe package contents in a DRY way:

Eval [
    PackageBuilder new
        name: 'MyPackage';
        namespace: 'MyNamespace';
        prereq: 'Package1';
        prereq: 'Package2';
        ...
        testsBelow: 'Tests' matching: '*.st';
        filein: 'File1.st';
        filein: 'File2.st';
        ...
        buildXml
]

Iliad examples explained part II

In the previous part, I explained the basic concepts behind Iliad Applications and Widgets, through the simple counter example. This part will show how to use Magritte to automatically build views and editors with data validation.
http://smalltalk.gnu.org/blog/
Today I released an updated ScalaTest-1.5-SNAPSHOT that includes several enhancements, including two new style traits, PropSpec and FreeSpec. The snapshot works with Scala 2.8. You can download the snapshot release via the scala-tools.org Maven repository, or just grab the jar file directly. I put the Scaladoc for this snapshot release online as well.

PropSpec allows you to create a suite of property-based tests. It works the same as a FunSuite, but instead of test you write property, and instead of testsFor you write propertiesFor. PropSpec supports both ScalaCheck and ScalaTest property styles (as well as any other property style you may wish to use it with). If you want to write properties in the ScalaCheck style, mix Checkers into your PropSpec. If you want to write them in the ScalaTest style, mix in PropertyChecks. Here's an example that uses both generator- and table-driven property checks:

import org.scalatest.PropSpec
import org.scalatest.prop.PropertyChecks
import org.scalatest.matchers.ShouldMatchers

class FractionSpec extends PropSpec with PropertyChecks with ShouldMatchers {

  property("Fraction constructor normalizes numerator and denominator") {
    forAll { (n: Int, d: Int) =>
      // ...
    }
  }

  property("Fraction constructor throws IAE on bad data.") {
    val invalidCombos =
      Table(
        ("n",               "d"),
        (Integer.MIN_VALUE, Integer.MIN_VALUE),
        (1,                 Integer.MIN_VALUE),
        (Integer.MIN_VALUE, 1),
        (Integer.MIN_VALUE, 0),
        (1,                 0)
      )

    forAll (invalidCombos) { (n: Int, d: Int) =>
      evaluating {
        new Fraction(n, d)
      } should produce [IllegalArgumentException]
    }
  }
}

For more information, see the Scaladoc documentation for PropSpec and my earlier post, ScalaTest Property Checks Preview. Trait PropSpec is the last piece of ScalaTest's new support for property-based testing described in that previous post.
Note: Trait PropSpec is in part inspired by class org.scalacheck.Properties, designed by Rickard Nilsson for the ScalaCheck test framework.

The other new style trait introduced in today's snapshot release is FreeSpec. Whereas ScalaTest's other specification-style traits facilitate writing text with certain grammatical structures using words like "when," "should," and "can," FreeSpec allows you to structure the text of your specification however you wish. You write a test in a FreeSpec with a string followed by in and a block of test code, just as you do in WordSpec and FlatSpec:

"should pop values in last-in-first-out order" in {
  // ...
}

You can surround tests with description clauses composed of a string, a dash character (-), and a block. Here's an example:

"A Stack" - {
  "should pop values in last-in-first-out order" in {
    // ...
  }
}

You can nest description clauses inside description clauses to any number of levels. Here's an example:

import org.scalatest.FreeSpec

class StackSpec extends FreeSpec {
  "A Stack" - {
    "whenever it is empty" - {
      "certainly ought to" - {
        "be empty" in {
          // ...
        }
        "complain on peek" in {
          // ...
        }
        "complain on pop" in {
          // ...
        }
      }
    }
    "but when full, by contrast, must" - {
      "be full" in {
        // ...
      }
      "complain on push" in {
        // ...
      }
    }
  }
}

When run in the interpreter, you'd see:

scala> (new StackSpec).execute()
StackSpec:
A Stack
  whenever it is empty
    certainly ought to
    - be empty
    - complain on peek
    - complain on pop
  but when full, by contrast, must
  - be full
  - complain on push

Another use case for FreeSpec is writing specification-style test suites in a language other than English, as demonstrated here:

import org.scalatest.FreeSpec

class ComputerRoomRulesSpec extends FreeSpec {
  "Achtung!" - {
    "Alle touristen und non-technischen lookenpeepers!" - {
      "Das machine is nicht fuer fingerpoken und mittengrabben." in {
        // ...
      }
      "Is easy" - {
        "schnappen der springenwerk" in {
          // ...
        }
        "blowenfusen" in {
          // ...
        }
        "und poppencorken mit spitzen sparken." in {
          // ...
        }
      }
      "Das machine is diggen by experten only." in {
        // ...
      }
      "Is nicht fuer gerwerken by das dummkopfen." in {
        // ...
      }
      "Das rubbernecken sightseeren keepen das cottenpicken hands in das pockets." in {
        // ...
      }
      "Relaxen und watchen das blinkenlights." in {
        // ...
      }
    }
  }
}

Run in the interpreter, this gives:

scala> (new ComputerRoomRulesSpec).execute()
ComputerRoomRulesSpec:
Achtung!
  Alle touristen und non-technischen lookenpeepers!
  - Das machine is nicht fuer fingerpoken und mittengrabben.
    Is easy
    - schnappen der springenwerk
    - blowenfusen
    - und poppencorken mit spitzen sparken.
  - Das machine is diggen by experten only.
  - Is nicht fuer gerwerken by das dummkopfen.
  - Das rubbernecken sightseeren keepen das cottenpicken hands in das pockets.
  - Relaxen und watchen das blinkenlights.

The FreeSpec concept first saw the light of day way back in ScalaTest 0.9.4, under the name SpecDasher. I released it already deprecated, with a warning that I would remove it in 0.9.5, which I did. I only released it because I'd shown it in Programming in Scala, which had already gone to the printer, and I wanted all the code in the book to work for the version of ScalaTest mentioned in the book. I removed it in 0.9.5 because I wasn't convinced I had figured out the best way to do it, and even if I figured that out, I only planned to add it if users actually convinced me it would be useful to them.

Eric Torreborre's specs framework always had a >> operator that provided a similar nesting ability to FreeSpec's dash character, but in the specs case >> was an alias for in, which meant tests (called "examples" in specs) were being nested, not specification text (and you still could put a should on top). The specsy project by Esko Luontola came closer to the concept, and in fact Esko wrote a blog post about the troubles with pre-defined words, Choice of Words in Testing Frameworks. Over time I did get input from users that they would find value in this kind of trait, for example, in this scalatest-users discussion started by Sukant Hajra.
And I got the opposite feedback, in a way, from seeing someone use trait Spec and completely ignore the guiding structure. So now FreeSpec is here.

Another piece of user feedback I got over time is that users really liked the formatted output of the BDD-style (Behavior-Driven-Development-style) traits, and wanted to see the same thing in the TDD-style (Test-Driven-Development-style) traits. I did things that way originally because making the output of running a test suite a more useful artifact was a push from the BDD folks. But the users gave me this feedback, so now even when you run a Suite, FunSuite, JUnitSuite, JUnit3Suite, or a TestNGSuite, you get nicely formatted output. For example, given this FunSuite:

import org.scalatest.FunSuite

class MySuite extends FunSuite {

  test("addition") {
    val sum = 1 + 1
    assert(sum === 2)
    assert(sum + 2 === 4)
  }

  test("subtraction") {
    val diff = 4 - 1
    assert(diff === 3)
    assert(diff - 2 === 1)
  }
}

Running from the interpreter in ScalaTest 1.3 gives you:

scala> (new MySuite).execute()
Test Starting - MySuite: addition
Test Succeeded - MySuite: addition
Test Starting - MySuite: subtraction
Test Succeeded - MySuite: subtraction

But the latest ScalaTest 1.5 snapshot release will give you:

scala> (new MySuite).execute()
MySuite:
- addition
- subtraction

Another bit of user feedback was that people preferred to see the nested levels in the test class echoed in indentation levels in the output. ScalaTest has supported arbitrary levels of indentation since 1.0 in its event hierarchy, but didn't fire indentation of more than one level from any ScalaTest trait, not even from Spec, which allowed arbitrarily deep levels of nesting. My thought was that most often people wouldn't actually nest more deeply than two levels, and I felt that it was more readable to flatten two levels to one.
Also, I had observed that Ruby's RSpec tool, which inspired Spec's describe/it syntax, at the time flattened everything to one level. I wasn't sure which way to go, so I followed RSpec initially. Meanwhile Eric Torreborre indented specs output to match the input, and I observed that this led some specs users to nest text more deeply; even RSpec now indents its output this way. The ScalaTest users I asked about this for the most part indicated they would like the indented output. So now, given this Spec:

import org.scalatest.Spec

class MySpec extends Spec {
  describe("A Stack") {
    describe("(when empty)") {
      it("should be empty") (pending)
      it("should complain on peek") (pending)
      it("should complain on pop") (pending)
    }
    describe("(when full)") {
      it("should be full") (pending)
      it("should complain on a push") (pending)
    }
  }
}

Instead of getting the output you get with ScalaTest 1.3:

scala> (new MySpec).execute()
A Stack (when empty)
- should be empty (pending)
- should complain on peek (pending)
- should complain on pop (pending)
A Stack (when full)
- should be full (pending)
- should complain on a push (pending)

As of the latest ScalaTest 1.5 snapshot, you will get:

scala> (new MySpec).execute()
MySpec:
A Stack
  (when empty)
  - should be empty (pending)
  - should complain on peek (pending)
  - should complain on pop (pending)
  (when full)
  - should be full (pending)
  - should complain on a push (pending)

For two levels of indentation this isn't necessarily an improvement in readability, but at more levels it is more clearly a readability win.
For example, given this WordSpec:

import org.scalatest.WordSpec

class ScalaTestGUISpec extends WordSpec {

  def theUser = afterWord("the user")
  def display = afterWord("display")
  def is = afterWord("is")

  "The ScalaTest GUI" when theUser {
    "clicks on an event report in the list box" should display {
      "a blue background in the clicked-on row in the list box" in {}
      "the details for the event in the details area" in {}
      "a rerun button" that is {
        "enabled if the clicked-on event is rerunnable" in {}
        "disabled if the clicked-on event is not rerunnable" in {}
      }
    }
  }
}

Instead of the ScalaTest 1.3 output of:

scala> (new ScalaTestGUISpec).execute()
The ScalaTest GUI (when the user clicks on an event report in the list box)
- should display a blue background in the clicked-on row in the list box
- should display the details for the event in the details area
- should display a rerun button that is enabled if the clicked-on event is rerunnable
- should display a rerun button that is disabled if the clicked-on event is not rerunnable

As of the latest ScalaTest 1.5 snapshot, you'll get:

scala> (new ScalaTestGUISpec).execute()
ScalaTestGUISpec:
The ScalaTest GUI
  when the user clicks on an event report in the list box
    should display
    - a blue background in the clicked-on row in the list box
    - the details for the event in the details area
    a rerun button that is
    - enabled if the clicked-on event is rerunnable
    - disabled if the clicked-on event is not rerunnable

These enhancements will be released as part of ScalaTest 1.5 within the next few weeks. I'm posting this preview now because I want to get feedback on the API in general, and to find out whether there are any bugs to fix or any code breakages. (I expect no source code to break with any of these enhancements, so let me know if you have a problem.) So please give it a try, and either post feedback in the discussion forum for this blog post, or email the scalatest-users mailing list.
https://www.artima.com/forums/flat.jsp?forum=106&thread=325537
This is the documentation for older versions of Odoo (formerly OpenERP). See the new Odoo user documentation. See the new Odoo technical documentation.

Driving your Marketing Campaigns¶

Lead Automation with Marketing Campaigns¶

OpenERP offers a set of modules allowing you to easily create and track your marketing campaigns. With the Marketing application, you define your direct marketing campaigns, allowing you to automate your lead communication. You can install the module through the Reconfigure wizard, then select Marketing. Campaigns can be displayed in List or Diagram view. The Diagram view allows you to clearly see the marketing actions (represented by a node) and the applied conditions (represented by an arrow).

A marketing campaign is an event or an activity that helps you manage and reach your partners with specific messages. A campaign can have many activities that will be triggered by a specific situation, for instance a response from a contact to an email you sent. The result of such a response (action) could be the sending of an email, for which a template has previously been created in OpenERP. To use the email functionality, you have to configure your email account. This is explained in the chapter ch-crm-fetchmail-install.

Example of a Complete Marketing Campaign¶

Suppose we are an insurance company that wants to launch a marketing campaign to generate new leads. The company launches a campaign on its website and invites potential customers to get a free offer for their car insurance. Each time a customer registers through the contact form, a lead is created in OpenERP. For further information about web contact forms, please refer to the chapter Automating your Lead Acquisition. The salesperson responsible for Car Insurances triggers the marketing campaign by sending an introductory email presenting all the insurance services we offer and thanking the lead for subscribing to the free Car Insurance Offer.
Based on the response, the insurance company determines whether the lead is interested in:

- buying a car insurance,
- information about other insurance policies,
- buying the book about Keeping your Children Safe.

According to the replies we receive from the leads, we send an email catering to their respective needs. If they respond to such an email, the lead is converted into an opportunity. When the lead buys a car insurance, the lead becomes our partner and is created as a customer in OpenERP. If we do not receive an answer, they get a reminder regarding the offer a week later. If they still do not answer, our salesperson gives them a call to ask about their needs.

See it as a flowchart allowing us to trigger a respective activity for every possible cue. The chances of leads going unattended become very low, and for every lead, we have a predefined method of handling it. Moreover, we can measure the method according to our goals. Based on the goals, we can evaluate the effectiveness of our campaign and analyze whether there is room for improvement.

Tip: Campaign Example

To get an example of a complete campaign in OpenERP, you can install the marketing_campaign_crm_demo module.

Designing your Campaigns¶

Designing a marketing campaign is mostly a long-term process, and the success of any campaign depends on the research and the effectiveness in selecting your target audience. There are certain questions that every marketeer asks while designing a campaign:

- What would be our marketing campaign?
- Who would be the target audience?
- How would we measure the effectiveness of our campaign?

The OpenERP campaign is based on the principle of lead automation. A lead is created according to a specific response by a customer towards a stimulus. An example: filling in the car insurance calculator on your website may create a lead in OpenERP. The first step is to define the campaign, i.e. the sequence of steps to be performed.
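To make the branching of the insurance example above explicit, here is a small Python sketch of the decision flow. It is purely illustrative: the field names and the 7/14-day thresholds are invented for this sketch (the text only says "a week later"), and real campaigns express this as activities and transitions instead.

```python
# Illustrative decision flow for the insurance campaign described above.
# Field names and the 7/14-day thresholds are invented for this sketch.
def next_action(lead):
    """Return the next campaign step for a lead record (a plain dict)."""
    if lead.get("bought_insurance"):
        return "create customer"            # lead becomes a partner
    if lead.get("replied"):
        # A reply arrived: answer the stated interest and
        # convert the lead into an opportunity.
        return "send email for: " + lead["interest"]
    days = lead.get("days_since_offer", 0)
    if days >= 14:
        return "salesperson calls"          # still silent after the reminder
    if days >= 7:
        return "send reminder"              # no answer after one week
    return "wait"

print(next_action({"replied": True, "interest": "car insurance"}))
# -> send email for: car insurance
```

In OpenERP itself, each branch of such a function corresponds to a campaign activity, and each condition to a transition between activities.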
By defining the campaign, we trigger a set of activities in the Marketing Campaign application of OpenERP. Through lead automation, we define the sequence of steps we ought to follow, the modes of creating and processing these activities, and the cost involved in this campaign. After each activity, and based on its respective stimuli, we can trigger the next event of the campaign.

Segmenting your Campaigns¶

The two most important points for any successful campaign are the adoption of a concrete methodology of execution and choosing the right segment: a target group of customers to whom our campaign will be directed (i.e. your target audience). Focusing on the wrong segment would result in the campaign misfiring, and our efforts would fall on deaf ears. Through the Segment tab of the Campaign module, we can define our segment for each campaign activity. Indeed, it is perfectly possible that with every step downwards, the segment narrows in size. You can also synchronize the entire campaign steps according to the defined segments.

Our insurance company wants to attack the Spanish market, and will define a segment called Spanish Leads. Of course you want your segment to be valid for leads coming from Spain only. To achieve this, go to the Leads list view. Filter all the leads for Spain (type Spain in the Country field), and make sure to clear the sales team so that all leads coming from Spain are selected. Then click Save Filter and call it, for instance, Spanish Leads. Now return to the Campaigns menu and open the Segment, then click the Filter field to select Spanish Leads. The segment will now only apply to Spanish leads.

As you can see, the Marketing Campaign module is closely synchronized with the Customer Relationship Management business application. Let us consider the segment we cater to in the campaign as Leads in OpenERP. Goals are set for each campaign, which are considered as a desired state.
Once a lead meets our criteria of goals, we change the lead status by converting it into an Opportunity, meaning that we should give it focused attention. Once the lead satisfies our final objective, we consider it as a partner/customer and close that lead.

OpenERP allows you to create your own email templates. You can use the Expression Builder to have the variables created for you. Suppose you would like to add the Contact Name in the email; of course, this will be a different name for each email. In the Expression Builder, in Field, select Contact Name. Automatically, the Expression will be filled. Copy the value from the expression and paste it into your email, e.g. Dear ${object.contact_name}. Your email will then start with Dear followed by the name of the contact. This way you automatically create personalized emails.

For each email template, you can have OpenERP generate a Wizard Action / Button that will be related to the object. So if you choose to do marketing campaigns for leads, the action will be added to the right side panel of the Lead form.

Tip: Configuring Marketing Campaigns

Please note that it requires some technical knowledge to configure marketing campaigns. To be able to see, create and edit campaigns, users need to be in the Marketing / User group.

Setting up your Marketing Campaigns¶

Introduction

A campaign defines a workflow of activities that items/objects entering the campaign will go through. Items are selected by segments. Segments are automatically processed every few hours and inject new items into the campaign, according to a given set of criteria. It is possible to watch the campaign as it is running, by following the campaign "workitems". A workitem represents a given object/item passing through a given campaign activity. See it as a step that can still go either way. Workitems are left behind when the item proceeds to the next activities. This allows easy analysis of and reporting on the running campaign.
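To see how a placeholder such as ${object.contact_name} ends up as a personalized line, here is a minimal Python sketch. It only mimics the ${...} syntax; it is not OpenERP's actual template engine, and the Lead class is invented for illustration.

```python
# Minimal placeholder expansion in the style of ${object.attribute}.
# Illustration only, not OpenERP's template engine; Lead is invented.
import re

class Lead(object):
    contact_name = "Maria Gonzalez"

def render(template, obj):
    # Replace each ${...} with the evaluated expression, where
    # 'object' is bound to the record being mailed.
    return re.sub(r"\$\{([^}]+)\}",
                  lambda m: str(eval(m.group(1), {}, {"object": obj})),
                  template)

print(render("Dear ${object.contact_name},", Lead()))
# -> Dear Maria Gonzalez,
```

The same mechanism is why one template can greet every contact by name: the expression is re-evaluated for each record the campaign mails.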
Each activity may execute an action upon activation depending on a dynamic condition. When the condition is not met, the workitem is cancelled/deleted; if the condition is met, the action is executed, the workitem is marked as Done, and propagated to the next activities. Campaigns (Marketing ‣ Campaigns ‣ Campaigns) - Campaign Each campaign is made of activities and transitions, and must be defined on any specific object the system knows about (e.g. Leads, Opportunities, Employees, Partners). - Mode A campaign can be in one of 4 modes: Test Directly: processes the whole campaign in one go, ignoring any delay put on transitions, and does not actually execute the actions, so the result is simply the set of corresponding campaign workitems (see below). Any time a segment adds new items in the campaign they will be processed in the same manner. Test in Real time: processes the campaign but does not actually execute the actions, so the result is simply the set of corresponding campaign workitems. Any time a segment adds new items in the campaign they will be processed in the same manner. Manual confirmation: No action will be executed automatically, a human intervention is needed to let workitems proceed into the flow. It is like a step-by-step manual process using the Campaign Followup menu. You can ignore the time delays and force any step of the campaign, implementing the campaign at your pace i.e. (you have a test email and want to see if the steps and templates do exactly what you want them to do). You will see that the actions set are defined as To Do and Done and the page has to be refreshed to see the next activities defined by the campaign node: the campaign sends real messages to the actual targets, be warned. Normal: the campaign is processed normally, all actions are executed automatically at the scheduled date. Pay attention that in this status, the campaign sends real messages to the actual target audience. 
Regardless of the current mode of the campaign, any workitem can be manually executed or cancelled at any time (even if it is scheduled in the future) through Campaign Followup. - Resource Specifies where the campaign will get the information from, i.e. the OpenERP object linked (e.g. Leads, Opportunities, Employees, Partners). - Activities Activities are steps in the campaign. Each activity is optionally linked to previous and next activities through transitions. Each activity has: - one optional condition that stops the campaign, - one action to be executed when the activity is activated and the condition is True (could be a 'do nothing' action), - one optional signal (ignore it), - a start flag. Start Activity Activities that have the Start checkbox set, will receive a new workitem corresponding to each new resource/object entering the campaign. It is possible to have more than one Start Activity, but not less than one. Activity Conditions [a Boolean expression, made of clauses combined using boolean operators: AND, OR, NOT] Each condition is the criterion that decides whether the activity is going to be activated for a given workitem, or just cancelled. It is an arbitrary expression composed of simple tests on attributes of the object, possibly combined using or, and & not operators. See section 6.1 for more information on Comparators. The individual tests can use the "object" name to refer to the object/resource it originates from (e.g the lead), using a "dot notation" to refer to its attributes. Some examples on a CRM Lead resource: - object.name == 'Insurance Offer Lead' would select only leads whose title is exactly "Insurance Offer Lead", - object.state == 'pending' would select Pending leads only, - object.country_id.code == 'be' would select leads whose country field is set to Belgium, - object.country_id.name == 'Belgium' would select leads whose country field is set to Belgium. 
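The condition examples above are ordinary Python expressions evaluated with "object" bound to the record. The following sketch shows that evaluation against a stand-in record (illustration only, not Odoo internals; the lead data is invented):

```python
# Evaluating campaign-style condition strings against a stand-in record.
from types import SimpleNamespace

def condition_holds(expr, obj):
    """True if the condition expression holds for the given record."""
    return bool(eval(expr, {}, {"object": obj}))

# A fake lead with the attributes used in the examples above.
lead = SimpleNamespace(
    name="Insurance Offer Lead",
    state="pending",
    country_id=SimpleNamespace(code="be", name="Belgium"),
)

print(condition_holds("object.state == 'pending'", lead))        # True
print(condition_holds("object.country_id.code == 'be'", lead))   # True
print(condition_holds("object.name == 'Something else'", lead))  # False
```

Note how the dot notation (object.country_id.name) simply follows attribute access on the record, which is why it also reaches fields of related records.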
Tests can also use a 'workitem' name to refer to the actual item denoting the position of the object in the campaign. This can be useful to access some specific attributes, such as the segment that selected this item. Some examples:

- workitem.segment_id.name == 'Insurance Offer EU Zone1 - Industry Consulting/Technology' would select leads that entered this campaign through the "Insurance Offer Lead EU Zone1 - Industry Consulting/Technology" segment,
- 'EU Zone1' in workitem.segment_id.name would select only leads that entered the campaign through a segment that has "EU Zone1" in its name.

Tip: In the GTK client you can use "Help > Enable Debug mode tooltips" to see the attribute name of every field in a form. These are the same names that you can use during import/export with CSV files.

You can also use the special formula re.search(PATTERN_TO_SEARCH, ATTRIBUTE_TO_SEARCH), where PATTERN_TO_SEARCH is a character string delimited with quotes, and ATTRIBUTE_TO_SEARCH uses the dot notation above to refer to a field of the object. An example for CRM leads:

- re.search('Plan to buy: True', object.description) would be true if the Notes on a Lead contain this text: "Plan to buy: True". Be careful: all spaces etc. do matter, so you may use the special pattern characters detailed at the bottom to account for small variations,
- re.search('Plan to.*True', object.description) would be true if the Notes on a Lead contain this text: "Plan to" followed later on by "True".

You can combine individual tests using boolean operators and parentheses. An example on a CRM Lead resource:

- object.state != 'pending' and ( re.search('Plan to buy:.*True', object.description) and not re.search('Plan to use:.*True', object.description) ) would be true if the lead is NOT in Pending state and it contains "Plan to buy", but not "Plan to use".
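The re.search examples above can be tried directly in Python; the sample description text below is invented for illustration:

```python
# Trying the re.search patterns from the examples above.
import re

description = "Contact notes\nPlan to buy: True\nPlan to use: False"

print(bool(re.search('Plan to buy: True', description)))   # True: exact text present
print(bool(re.search('Plan to.*True', description)))       # True: "Plan to" ... "True" on one line
print(bool(re.search('Plan to use:.*True', description)))  # False: that line says False
```

Note that '.' does not match a newline by default, so 'Plan to.*True' only matches when "Plan to" and "True" appear on the same line of the notes.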
Guidelines for Creating a Campaign¶

- It is a good idea to have an initial activity that changes some fields on the objects entering the campaign to mark them as such, to avoid mixing them up in other processes (e.g. set a specific state and Sales Team on a CRM lead being processed by a campaign). You can also define a time delay so that the campaign seems more human (if the answer comes within seconds or minutes, it is clearly computer generated).
- Put a stop condition on each subsequent activity in the campaign to get items out of the campaign as soon as the goal is achieved (e.g. every activity has a partial condition on the state of the item; if a CRM Lead stops being Pending, the campaign ends for that case).

Email templates consist of:

- The email headers: to, from, cc, bcc, subject
- The raw HTML body, with the low-level markup and formatting
- The plaintext body

Headers and bodies can contain placeholders for dynamic contents that will be replaced in the final email with the actual content.

Campaign Segments

Segments are processed automatically according to a predefined schedule set in the menu Administration ‣ Configuration ‣ Scheduled Actions. It could be set to process every 4 hours or every minute, for example. This is the only entry point into a campaign at the moment.

Segment filters

Segments select resources via filters, exactly the same kind of filter that can be used in advanced search views on any list in OpenERP. You can actually create them easily from any OpenERP screen allowing you to save filters. Save your advanced search criteria as a new filter and add it to the segment in the Filter field. Filters mainly consist of a domain expressing the criteria of selection on a model (the resource). See section 10.3 for more information on the syntax of these filters.
- For Leads, the following filter would select draft Leads from any European country with "Plan for use: True" or "Plan for buy: False" specified in the body:

[ ('type', '=', 'lead'),
  ('state', '=', 'draft'),
  ('country_id.name', 'in', ['Belgium', 'Netherlands', 'Luxembourg',
      'United Kingdom', 'France', 'Germany', 'Finland', 'Denmark',
      'Norway', 'Austria', 'Switzerland', 'Italy', 'Spain',
      'Portugal', 'Ireland' ]),
  '|',
  ('description', 'ilike', 'Plan for use: True'),
  ('description', 'ilike', 'Plan for buy: False') ]

Miscellaneous References, Examples

6.1 Reference of Comparison Operators:

- ==: equal
- !=: not equal
- <: smaller than
- >: bigger than
- <=: smaller than or equal to
- >=: bigger than or equal to
- in: checks that a given text is included somewhere in another text, e.g. "a" in "dabc" is True

6.2 Reference of Pattern/Wildcard Characters

- . (dot) represents any character (but just one)
- * means that the previous pattern can be repeated 0 or more times
- + means that the previous pattern can be repeated 1 or more times
- ? means that the previous pattern is optional (0 or 1 times)
- .* would represent any character, repeated 0 or more times
- .+ would represent at least 1 character (but any)
- 5? would represent an optional 5 character

6.3 Reference of Filter Domains

The generic format is: [ (criterion_1), (criterion_2) ] to filter for resources matching both criteria. It is possible to combine criteria differently with the following operators:

- '&' is the boolean AND operator and makes a new criterion by combining the next 2 criteria (always 2). This is also the implicit operator when no operator is specified.
- for example: [ (criterion_1), '&', (criterion_2), (criterion_3) ] means criterion_1 AND (criterion_2 AND criterion_3) - '|' is the boolean OR operator and will make a new criterion by combining the next 2 criteria (always 2) - for example: [ (criterion_1), '|', (criterion_2), (criterion_3) ] means criterion_1 AND (criterion_2 OR criterion_3) - '!' is the boolean NOT operator and will make a new criterion by reversing the value of the next criterion (always only 1) - for example: [ (criterion_1), '!', (criterion_2), (criterion_3) ] means criterion_1 AND (NOT criterion_2) AND criterion_3 Criterion format is: ( 'field_path_operand', 'operator', value ) Where: - field_path_operand specifies the name of an attribute, or a path starting with an attribute, to reach the value we want to compare - operator is one of the possible operators: - '=' , '!=' : equal and not equal - '<', '>', '>=', '<=' : smaller than, bigger than, bigger than or equal to, smaller than or equal to - 'in', 'not in' : present or absent in a list of values. Values must be specified as [ value1, value2 ], e.g. [ 'Belgium', 'Croatia' ] - 'ilike' : search for a string value in the operand, case-insensitively - value is the text, number or list value to compare with field_path_operand using the operator Pushing your Campaign Results further¶ Of course, marketing campaigns can only be effective when you also do something with the results. OpenERP offers analysis features to help you better manage future campaigns based on the outcome of past campaigns; in other words, learning from your results. The Marketing ‣ Reporting ‣ Campaign Analysis report allows you to analyse your campaigns in detail, both ongoing and completed campaigns. Segments allow you to keep good track of the results of a marketing campaign. You can see from which segment you get the most demands, for instance. Thanks to good insight into the way your respondents answer your campaign, you can continuously improve your marketing results!
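To make the prefix notation of domain filters concrete, here is a small sketch that evaluates such a domain against plain JavaScript objects. This is an illustration only: the evaluator, its function names and the sample records are assumptions and not part of OpenERP; only the domain layout ('&', '|' and '!' in prefix position, with an implicit AND between top-level terms) follows the reference above.

```javascript
// Hypothetical sketch: evaluating an OpenERP-style domain (section 6.3)
// against plain objects. Illustration only, not OpenERP's implementation.
function matches(record, [field, op, value]) {
  const actual = record[field];
  switch (op) {
    case "=":     return actual === value;
    case "!=":    return actual !== value;
    case "in":    return value.includes(actual);
    case "ilike": return String(actual).toLowerCase()
                        .includes(String(value).toLowerCase());
    default: throw new Error("unsupported operator: " + op);
  }
}

function evalDomain(record, domain) {
  // Prefix notation: '&' and '|' consume two operands, '!' consumes one;
  // top-level terms are implicitly AND-ed together.
  function parse(i) {
    const tok = domain[i];
    if (tok === "&" || tok === "|") {
      const [a, j] = parse(i + 1);
      const [b, k] = parse(j);
      return [tok === "&" ? a && b : a || b, k];
    }
    if (tok === "!") {
      const [a, j] = parse(i + 1);
      return [!a, j];
    }
    return [matches(record, tok), i + 1];
  }
  let result = true, i = 0;
  while (i < domain.length) {
    const [term, j] = parse(i);
    result = result && term;
    i = j;
  }
  return result;
}

const lead = { type: "lead", state: "draft", country: "Belgium" };
const domain = [["type", "=", "lead"],
                "|", ["state", "=", "draft"], ["country", "=", "France"]];
console.log(evalDomain(lead, domain)); // true
```

The key design point is that each operator consumes a fixed number of following operands, which is what makes a domain unambiguous without any parentheses.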
Automating your Lead Acquisition¶ Through your website, your company wants to get as much information as possible about the people who visit it. But how can you make sure that every person who wants to know more about your company is actually registered somewhere? Well, you could use a Contact form for this. And precisely such a form allows you to register contacts automatically in OpenERP. By creating a link from your website's Contact form to OpenERP, your contact data will automatically be created in the CRM (or any other application of your choice, such as HR). Let us show you an example of how this can be achieved. The figure below shows a Contact form on a website. All data entered in this form is linked to the Lead form in the CRM. Each time someone fills in this contact form, a new lead is automatically created in OpenERP. Such a system is a very easy yet flexible way of keeping track of your leads and automatically launching your marketing campaigns. How to Link a Web Contact Form to OpenERP?¶ OpenERP is accessible through XML-RPC interfaces, for which libraries exist in many languages. Python example

import xmlrpclib

# ... define HOST, PORT, DB, USER, PASS
url = 'http://%s:%s/xmlrpc/common' % (HOST, PORT)
sock = xmlrpclib.ServerProxy(url)
uid = sock.login(DB, USER, PASS)
print "Logged in as %s (uid:%d)" % (USER, uid)

# Create a new lead
url = 'http://%s:%s/xmlrpc/object' % (HOST, PORT)
sock = xmlrpclib.ServerProxy(url)
args = {
    'name': 'A New Lead',
    'description': 'This is a new lead from the web contact form',
    'inventor_id': uid,
}
lead_id = sock.execute(DB, uid, PASS, 'crm.lead', 'create', args)

PHP Example

<?
include('xmlrpc.inc'); // Use phpxmlrpc library, available on sourceforge
// ...
define $HOST, $PORT, $DB, $USER, $PASS

$client = new xmlrpc_client("");
$msg = new xmlrpcmsg("login");
$msg->addParam(new xmlrpcval($DB, "string"));
$msg->addParam(new xmlrpcval($USER, "string"));
$msg->addParam(new xmlrpcval($PASS, "string"));
$resp = $client->send($msg);
$uid = $resp->value()->scalarval();
echo "Logged in as $USER (uid:$uid)";

// Create a new lead
$arrayVal = array(
    'name' => new xmlrpcval("A New Lead", "string"),
    'description' => new xmlrpcval("This is a new lead from the web contact form", "string"),
    'inventor_id' => new xmlrpcval($uid, "int"),
);
$msg = new xmlrpcmsg('execute');
$msg->addParam(new xmlrpcval($DB, "string"));
$msg->addParam(new xmlrpcval($uid, "int"));
$msg->addParam(new xmlrpcval($PASS, "string"));
$msg->addParam(new xmlrpcval("crm.lead", "string"));
$msg->addParam(new xmlrpcval("create", "string"));
$msg->addParam(new xmlrpcval($arrayVal, "struct"));
$resp = $client->send($msg);
?>

Tip How to Link a Web Contact Form to OpenERP? For technical information about how to link a web contact form to OpenERP, please also refer to the Technical Memento, in the chapter about Web Services – XML-RPC. Profiling your Customers¶ The segmentation tools let you create partner groups (or categories) and act on each segment differently according to questionnaires. For example, you could create pricelists for each of the segments, or start phone marketing campaigns by segment. To allow you to work with segments in OpenERP, you should install the crm_profiling module, which can also be done from the Configuration Wizard (Marketing - Profiling). Profiling can be used to qualify your customers according to a questionnaire you define. When you establish a good customer profile, this will surely help you to close your deals. Customer profiles might even help you beat your competitors! Establishing the Profiles of Prospects¶ During presales activities it is useful to qualify your prospects quickly.
You can ask a series of questions to find out what product / service to offer to the customer, or how quickly you should handle the request. Tip Profiling This method of rapidly qualifying prospects is often used by companies who carry out presales by phone. A prospect list is imported into the OpenERP system as a set of partners and the operators then ask each prospect a series of questions by phone. The answers to these questions allow each prospect to be classified quickly, leading to an offer of specific services and products based on those answers. As an illustration, take the case of a software company which offers a service based on the OpenERP software. The company goes to several exhibitions and encounters dozens of prospects over a few days. It is important to handle each request quickly and efficiently. The products offered at these exhibitions are: training on OpenERP – for independent people or small companies, partner contract – for IT companies that intend to offer an OpenERP service, OpenERP as SaaS – for small companies, a joint meeting with a partner for a demonstration aimed at software integration – for larger companies. The IT company has therefore put a decision tree in place based on the answers to several questions put to prospects. These are given in the following figure, Example of Profiling Customer Prospects by the OpenERP Company: The sales person starts by asking the questions mentioned above, and after only a couple of minutes of work he can decide what to propose to the prospective customer simply by analysing the prospect's answers. At the end of the exhibition, prospects' details and their responses to the questionnaire are entered into OpenERP. The profiling system automatically classifies the prospects into appropriate partner categories.
This enables your sales people to efficiently follow up prospects and adapt their approach according to each prospect's profile. For example, they can send a letter based on a template developed for a specific partner category. They would use OpenERP's report editor and generator for their sales proposition, such as an invitation to a training session a week after the show. Using Profiles effectively¶ To use the profiling system, you have to install OpenERP's crm_profiling module. You can also use the Reconfigure Wizard and add Marketing / Profiling. Once the module is installed, you can create several questionnaires through the menu Sales ‣ Configuration ‣ Leads & Opportunities ‣ Questionnaires. For each questionnaire, OpenERP allows you to create a list of questions and the possible responses through the menu Sales ‣ Configuration ‣ Leads & Opportunities ‣ Questions. To reproduce the decision tree shown earlier, you can create the following questions and answers. For example, a sales person specializing in large customers in the services sector might have a profile defined along these lines: Budget for integration: Unknown, 100k-300k or >300k; Already created a specification for the work? Yes, No; Industry Sector? Services. When entering the details of a specific prospect, the prospect's answers to the various questions can be entered in the Profiling tab of the Partner form. All you have to do is click the Use a Questionnaire button on the Profiling tab of the Partner form. OpenERP will automatically assign prospects to the appropriate partner category based on these answers. Customers corresponding to a specific search profile can be treated as a priority. The sales person can access the profile of the large active accounts easily.
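The exhibition decision tree described above can be sketched as a simple classifier. The question fields and branch order here are assumptions, since the figure's exact questions are not reproduced in the text; only the four offers (training, partner contract, SaaS, joint demo) come from the example itself.

```javascript
// Hypothetical sketch of the profiling decision tree: classify a prospect
// from questionnaire answers. Field names and branch order are illustrative
// assumptions, not OpenERP's data model.
function classifyProspect(answers) {
  // IT companies intending to offer an OpenERP service get a partner contract.
  if (answers.sector === "IT" && answers.offersOpenERPService) {
    return "Partner contract";
  }
  // Larger companies get a joint demo with a partner, aimed at integration.
  if (answers.size === "large") {
    return "Joint demo with a partner";
  }
  // Small companies wanting hosting get OpenERP as SaaS.
  if (answers.size === "small" && answers.wantsSaaS) {
    return "OpenERP as SaaS";
  }
  // Independents and remaining small companies get training.
  return "Training on OpenERP";
}

console.log(classifyProspect({ sector: "IT", offersOpenERPService: true }));
// -> Partner contract
```

In OpenERP itself this classification is driven by the questionnaire answers on the Profiling tab rather than hand-written code; the sketch only illustrates the shape of the decision tree.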
Introduction The phase of the application lifecycle we are going to focus on today is the testing phase. Tests can take place on the developer's machine, and by leveraging these tests in an automated way on DevCS, we ensure the quality of the code throughout its lifecycle. Main Article In this article we will take a closer look at using Node.JS in combination with Jasmine to test our code, and configure an automated test script that will run every time a developer pushes code to a specific branch in the code repository. Why testing? To many developers it is clear that testing can be advantageous, however many feel that testing adds an overhead to their already busy schedule. This is mainly a misconception, as proper testing will increase the quality of the code. If you don't test, you will spend more time debugging your code later on, so you could say that testing is a way of being lazy by spending some more time in the beginning. In addition to this, testing is not just a tool to make sure your code works, it can also be used as a design tool. This idea comes from the Behavior-driven development paradigm: define your unit tests before writing any code. By doing this, you will have a clear understanding of the requirements of the code and, as such, your code will be aligned with the requirements. This also increases the re-usability of the code, because a nice side effect of designing your code this way is that your code will be very modular and loosely coupled. When we talk about Node.JS and JavaScript in general, the side effect of a "test-first" approach is that it will be much easier to reuse your code no matter if it's client side JavaScript or server side JavaScript. This will become clear in the example we will build in this article.
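As a concrete taste of the test-first style in plain JavaScript: a small, dependency-free unit together with the input/output pairs that specify it. The Luhn credit-card check used here is an illustrative assumption and not part of the converter project this article builds.

```javascript
// Illustrative, dependency-free unit: a Luhn credit-card checksum validator.
// (An assumption for demonstration; not code from this article's project.)
function isValidCardNumber(number) {
  const digits = String(number).replace(/\D/g, "").split("").map(Number);
  if (digits.length < 2) return false;
  let total = 0;
  digits.reverse().forEach(function (d, i) {
    if (i % 2 === 1) {        // double every second digit from the right
      d *= 2;
      if (d > 9) d -= 9;      // same as summing the two digits of d
    }
    total += d;
  });
  return total % 10 === 0;
}

// The unit needs no context: input in, boolean out, no I/O, no dependencies.
console.log(isValidCardNumber("79927398713")); // true  (valid checksum)
console.log(isValidCardNumber("79927398710")); // false (corrupted last digit)
```

Because the function knows nothing about where the number comes from, the same code and the same assertions work unchanged on the client, on the server, or inside a test runner such as Jasmine.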
Different types of test When we talk about writing tests, it is important to understand that there are different types of tests, each testing a specific area of your application and serving its own purpose: Unit Tests Unit tests are your first level of defense. These are the tests run on your core business logic. A good unit test does not need to know the context it is running in and has no outside dependencies. The purpose of a unit test is, like the name says, to test a unit of work. A typical example of a unit test is to test a function that checks if a credit card number is valid. That function doesn't need to understand where the credit card number is coming from, nor does it need to understand anything around security or encryption. All it does is take a credit card number as input and return a true or false value depending on the validity of the number. Integration Tests The next level of tests is the integration tests. These will test if all your different modules integrate well and if the data coming from external sources is accurate. They group the different modules and check if these work well together, checking for data integrity when you pass information from one module to another and making sure that the values passed through are accurate. End 2 End Tests An end 2 end test typically requires a tool that allows you to record a user session after which that session is replayed. In a web application, Selenium is a popular tool to perform these E2E tests. In such a scenario, you will define certain areas on the page that you know should have a specific value. When the HTML of these areas is different from what you define, the test will fail. This is the highest level of testing you can have. In this post we will focus on unit testing. Creating a new project on Developer Cloud Service Before we can start writing code, we need to define a project in Developer Cloud Service (DevCS). A project in DevCS is much more than a code repository.
It also allows us to manage the development lifecycle by creating tasks and assigning them to people, and it provides a scrum board so we can manage the project in an agile way. In this post, we will create a microservice that does temperature conversion. It will be able to convert Celsius and Fahrenheit temperatures to each other and to Kelvin. In DevCS we define a new project called "Converter": As template we select the "Initial Repository" as this will create the code repository we will be using to check in our code. In the next step, we define the properties and we initialize a repository with a readme file: Now we can continue and create our project. Once the project is created, you will see your project dashboard: As you can see, the system created a repository called converter.git. On the right hand side you can find the HTTP and SSH links to the repo. We will need the HTTP link in order to clone the initial repository before we can start coding. Once you have copied the HTTP link to your GIT repo, you can open a command line to clone the repo. At the location where you want the repo to be created, we simply execute the following command:

D:\projects\Oracle\testing>git clone https://<yourRepo>
Cloning into 'converter'...
Password for '':
remote: Counting objects: 3, done
remote: Finding sources: 100% (3/3)
remote: Getting sizes: 100% (2/2)
remote: Compressing objects: 100% (37/37)
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

This will clone the repository into a folder called "converter". At the moment that folder will only contain a README.md file. The next step is to initialize that folder as a node.js project.
This can easily be done by using the npm init command:

D:\projects\Oracle\testing>cd converter
D:\projects\Oracle\testing\converter>npm init
name: (converter)
version: (1.0.0)
description:
entry point: (index.js) app.js
test command:
git repository: (https://<yourURL>)
keywords:
author:
license: (ISC)
About to write to D:\projects\Oracle\testing\converter\package.json:

{
  "name": "converter",
  "version": "1.0.0",
  "description": "converter.git",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": "<yourURL>"
  },
  "author": "",
  "license": "ISC"
}

Is this ok? (yes)

This will have created the package.json file. The next thing we need to do is install the required modules. For this example we will use express and the body-parser module to give us the basic middleware to start building the application. For testing purposes we will use jasmine, which is a popular framework for behavior driven testing. Jasmine will be configured as a development dependency. We will add the following content to the package.json:

"dependencies": {
  "body-parser": "^1.15.2",
  "express": "^4.14.0"
},
"devDependencies": {
  "jasmine": "^2.5.2"
}

Now we can simply install these modules by executing the npm install command from within the application's folder.
- Convert Celsius to Fahrenheit - Convert Celsius to Kelvin - Convert Fahrenheit to Celsius - Convert Fahrenheit to Kelvin Each of these behaviors will have its own piece of implementation that can be mapped to some testing code. Before we can write our tests, we need to initialize the project for Jasmine. This can be done by executing the jasmine init command from within your application root: node node_modules/jasmine/bin/jasmine.js init This command will create a spec folder in which we need to write the specifications of our tests. In that folder we create a new file converterSpec.js It is important to end the filename with Spec because Jasmine has been configured to search for files that end with Spec. You can of course, change this behavior by changing the spec_files regex in the jasmine.json file in the support folder but by default Jasmine will look for every file ending in Spec in the spec folder. The contents of the converterSpec.js will look like this: describe("Converter ",function(){ it("converts celsius to fahrenheit", function() { expect(converter.celsiusToFahrenheit(0)).toBeCloseTo(32); expect(converter.celsiusToFahrenheit(-10)).toBeCloseTo(14); expect(converter.celsiusToFahrenheit(23)).toBeCloseTo(73.4); expect(converter.celsiusToFahrenheit(100)).toBeCloseTo(212); }); it("converts fahrenheit to celsius", function() { expect(converter.fahrenheitToCelsius(32)).toBeCloseTo(0); expect(converter.fahrenheitToCelsius(14)).toBeCloseTo(-10); expect(converter.fahrenheitToCelsius(73.4)).toBeCloseTo(23); expect(converter.fahrenheitToCelsius(212)).toBeCloseTo(100); }); it("converts celsius to kelvin", function() { expect(converter.celsiusToKelvin(0)).toBeCloseTo(273.15); expect(converter.celsiusToKelvin(-20)).toBeCloseTo(253.15); expect(converter.celsiusToKelvin(23)).toBeCloseTo(296.15); expect(converter.celsiusToKelvin(100)).toBeCloseTo(373.15); }); it("converts fahrenheit to kelvin", function() { expect(converter.fahrenheitToKelvin(32)).toBeCloseTo(273.15); 
    expect(converter.fahrenheitToKelvin(14)).toBeCloseTo(263.15);
    expect(converter.fahrenheitToKelvin(73.4)).toBeCloseTo(296.15);
    expect(converter.fahrenheitToKelvin(212)).toBeCloseTo(373.15);
  });
});

These tests will fail because we haven't written a converter yet. We can execute this test suite by calling Jasmine from the root directory of the application:

node node_modules/jasmine/bin/jasmine.js

The output will contain some errors and a message saying that 4 out of 4 specs have failed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
FFFF
Failures:
1) Converter converts celsius to fahrenheit
  Message: ReferenceError: converter is not defined
  Stack: ReferenceError: converter is not defined
    at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:9:16)
2) Converter converts fahrenheit to celsius
  Message: ReferenceError: converter is not defined
  Stack: ReferenceError: converter is not defined
    at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:16:16)
3) Converter converts celsius to kelvin
  Message: ReferenceError: converter is not defined
  Stack: ReferenceError: converter is not defined
    at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:23:16)
4) Converter converts fahrenheit to kelvin
  Message: ReferenceError: converter is not defined
  Stack: ReferenceError: converter is not defined
    at Object.<anonymous> (D:\projects\Oracle\testing\converter\spec\converterSpec.js:30:16)
4 specs, 4 failures
Finished in 0.01 seconds

By writing these tests, we established that our converter should have the following methods: - celsiusToFahrenheit - fahrenheitToCelsius - celsiusToKelvin - fahrenheitToKelvin Implementing the converter Once the signature of our code has been established, we can start implementing the code. In our case, we need to create an object for the converter with the required functions.
Therefore we create a new file converter.js with the following content:

var Converter = function() {
  var self = this;
};

Converter.prototype.celsiusToFahrenheit = function(temp) {
  return temp * 9 / 5 + 32;
};

Converter.prototype.fahrenheitToCelsius = function(temp) {
  return (temp - 32) / 1.8;
};

Converter.prototype.celsiusToKelvin = function(temp) {
  return temp + 273.15;
};

Converter.prototype.fahrenheitToKelvin = function(temp) {
  var cel = this.fahrenheitToCelsius(temp);
  return this.celsiusToKelvin(cel);
};

if (typeof exports == 'object' && exports)
  exports.Converter = Converter;

Now that the implementation is done, we can include this file in our converterSpec.js so the tests will use this object. At the top of converterSpec.js add the following lines:

var Converter = require("../converter").Converter;
var converter = new Converter();

If we rerun the jasmine tests, we will notice that they succeed:

D:\projects\Oracle\testing\converter>node node_modules/jasmine/bin/jasmine.js
Started
....
4 specs, 0 failures
Finished in 0.005 seconds

So far, we wrote some tests and implemented a plain old JavaScript object. We haven't written any server specific code, but our core business logic is already done and tested. Notice how we wrote this code without worrying about things like request body, response objects, get, post and other server specific logic. This is a very powerful feature of writing tests in this way, because now the exact same code can be used in any project that uses JavaScript. No matter if it's Node.JS, Oracle JET, Angular, Ionic,… it should work in any of these frameworks, and we didn't even spend additional time optimizing the code for this. It's just a by-product of a test-first approach! Implementing the server The last step is to write our server that consumes the converter. Our server will expose a single endpoint where we can specify an object with a temperature value and a units value.
Based upon the units value, the converter will make all the required conversions and send the result back to the user. Create a new file app.js with the following contents:

var express = require("express");
var parser = require("body-parser");
var app = express();
var http = require('http').Server(app);
app.use(parser.json());

var Converter = require("./converter").Converter;
var converter = new Converter();

app.post("/convert", function(req, res) {
  var temp = req.body.temp;
  var units = req.body.units;
  var result = {};
  if (units.toLowerCase() == "f") {
    result.fahrenheit = temp;
    result.celsius = converter.fahrenheitToCelsius(temp);
    result.kelvin = converter.fahrenheitToKelvin(temp);
  } else if (units.toLowerCase() == "c") {
    result.celsius = temp;
    result.fahrenheit = converter.celsiusToFahrenheit(temp);
    result.kelvin = converter.celsiusToKelvin(temp);
  }
  res.send(result);
  res.end();
});

http.listen(3000, function() {
  console.log('listening on *:3000');
});

Setting up automated testing on Developer Cloud Service Once we have a first finished version of the code, it's a good time to commit our code to the code repository. At the same time, we want to set up a build process on DevCS so that every time we commit code to the repository, it will fire off the tests we created so far. In order to do this, we first need to modify the package.json so that we can make use of the npm test command to start the tests. This is fairly simple, as npm test is just a shortcut to a script you define in the package.json. This should be the same command as we use when starting the tests from our command line. Modify package.json so the scripts part looks like this:

"scripts": {
  "test": "node node_modules/jasmine/bin/jasmine.js"
},

When you save the file and execute npm test from a command line in the root folder of your application, it should start the tests. Adding a .gitignore file The next step we have to do before committing the code is to add a gitignore file.
This file will tell GIT which files and folders to ignore. The reason why we want this is because it's a bad practice to include the node_modules folder in your code repository. The code in that folder isn't written by us, and a new consumer of the repo can simply initialize it by executing npm install. This way the modules don't take additional space in the repository and it will be much faster to upload the code. The .gitignore file needs to be put in the root of your application. For this application we only need to ignore the node_modules folder, so the file will look like this:

# Dependency directories
node_modules

Creating a build configuration in DevCS Before we commit the code, we need to set up a build configuration in DevCS. A build configuration is a sequence of actions that can be configured depending on the type of application. For example, when you are developing a J2EE application, the build configuration can execute a maven build, build the JAR/EAR file and pass it on to a deployment script so it can be deployed automatically to Java Cloud Services. In our case, we are working with Node.JS, so technically we don't have anything to build. However, a build config can still be useful because it allows us to execute certain commands to test the integrity of the code. If everything passes, we are able to hand it over to a deployment profile for Application Container Cloud Service to deploy it on the cloud. In this step, we will focus on the build step. In DevCS, select your project and go to the Build page. At the moment only a sample maven_build has been created, which doesn't do us any good, so we will go ahead and create a new job. Once we save the job we will be redirected to the configuration. The Main and Build Parameters tabs can remain unchanged. In the Source Control tab we specify that the build system integrates with a GIT repository. From the Repository drop down, we select our converter repo.
In the Branch section we click the Add button and select master. This way we can specify to which branch of the code this build applies. It is a common practice to use something like GitFlow to develop features. Each feature will be represented by a branch and once the feature is finished, that branch is merged into a development branch. In these cases, it makes a lot of sense to only initiate the build when a commit is done towards the development branch, and that's why we specify a certain branch in this step. If we don't specify a branch, the build will start on every single commit. In the next tab, Triggers, we specify what triggers the build. Because we're relying on a commit to the source control system, we have to select "Based on SCM polling schedule". This links the configuration from the Source Control tab to the Trigger. The next tab, Environment, isn't required in this step, so we can go ahead and open the Build Steps tab. This is where we configure the actions that are done when the build starts. In our case we want to execute the npm test command, which is a shell script. From the add button we select the Execute Shell step. This will add a text area in which we can specify shell commands to execute. In this box we can add multiple lines of code. Add the following code in the command box:

git config --global url."https://github.com/".insteadOf git://github.com/
npm install
npm test

Because we have added the node_modules folder to our gitignore file, we need to install the modules from our package.json. On your machine a simple npm install would be sufficient; however, because DevCS is behind a firewall that only accepts traffic on port 80 (HTTP) and 443 (HTTPS), we need to make sure that we force git to use HTTPS and not the git protocol. The git config command makes sure that we download all the modules over regular HTTPS traffic, even if the git repository of a module has been configured using the git protocol.
After that we can install the modules using the npm install command, and once this is done, npm test will start the Jasmine tests. Our build config is now complete. Committing the code Now that our build config has been set up, we can commit and push our code, after which the build should start. Commit the code using your favorite GIT client or from within your IDE. After the commit, push the changes to the master branch. Once you have pushed the code, go back to the Jobs Overview page on DevCS and you will notice that our new build config has been queued; after a few seconds or a minute it will start: After about half a minute, the build should complete and you should see the status: On the right hand side you have a button that can take you to the Console Output. This gives you a good overview of what the build actually did. In our case, everything went fine and it ended in success; however, when your tests fail, the build will fail and the console output will be crucial to identify what test failed. This is the output from my build (I omitted the npm install output):

Started by an SCM change
Building remotely on Builder 22
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.Channel@2564c81d:Builder 22
Using strategy: Default
Checkout:<account>.Converter Unit test / /home/c2c/hudson/workspace/developer85310.Converter Unit test - hudson.remoting.LocalChannel@ebc21da
Cloning the remote Git repository
Cloning repository origin
Fetching upstream changes from <account>/converter.git
Commencing build of Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
Checking out Revision af98f72759ebdfc7a88b0a1f49d70b278bdcbab4 (origin/master)
No change to record in branch origin/master
[developer85310-chatbotdev1_converter_12171.Converter Unit test] $ /bin/sh -xe /home/builder/tmp/hudson2474779517119792441.sh
+ git config --global url."https://github.com/".insteadOf git://github.com/
+ npm install
<npm install output>
+ npm test

> converter@1.0.0 test /home/builder/hudson/workspace/<account>.Converter Unit test
> node node_modules/jasmine/bin/jasmine.js

Started
....
4 specs, 0 failures
Finished in 0.01 seconds
Finished: SUCCESS

Conclusion In this post we have shown how we can leverage the power of Developer Cloud Service to set up an automated test build for your Node.JS code. By doing this, you not only get the benefit of instant feedback when your code is pushed to the repository, but you also get better quality and re-usability of your code. All site content is the property of Oracle Corp. Redistribution not allowed without written permission
/*
 * ntfs_usnjrnl.h - Defines for transaction log ($UsnJrnl) handling.
 */

#ifndef _OSX_NTFS_USNJRNL_H
#define _OSX_NTFS_USNJRNL_H

#include <sys/errno.h>

#include "ntfs_types.h"
#include "ntfs_endian.h"
#include "ntfs_layout.h"
#include "ntfs_volume.h"

/*
 * Transaction log ($UsnJrnl) organization:
 *
 * The transaction log records whenever a file is modified in any way.  So for
 * example it will record that file "blah" was written to at a particular time
 * but not what was written.  It will record that a file was deleted or
 * created, that a file was truncated, etc.  See below for all the reason
 * codes used.
 *
 * The transaction log is in the $Extend directory which is in the root
 * directory of each volume.  If it is not present it means transaction
 * logging is disabled.  If it is present it means transaction logging is
 * either enabled or in the process of being disabled, in which case we can
 * ignore it as it will go away as soon as Windows gets its hands on it.
 *
 * To determine whether transaction logging is enabled or in the process of
 * being disabled, we need to check the volume flags in the
 * $VOLUME_INFORMATION attribute in the $Volume system file (which is present
 * in the root directory and has a fixed mft record number, see layout.h).
 * If the flag VOLUME_DELETE_USN_UNDERWAY is set it means the transaction log
 * is in the process of being disabled and if this flag is clear it means the
 * transaction log is enabled.
 *
 * The transaction log consists of two parts: the $DATA/$Max attribute as
 * well as the $DATA/$J attribute.  $Max is a header describing the
 * transaction log whilst $J is the transaction log data itself as a sequence
 * of variable sized USN_RECORDs (see below for all the structures).
 *
 * We do not care about transaction logging at this point in time but we
 * still need to let Windows know that the transaction log is out of date.
 * To do this we need to stamp the transaction log.  This involves setting
 * the lowest_valid_usn field in the $DATA/$Max attribute to the usn to be
 * used for the next added USN_RECORD in the $DATA/$J attribute as well as
 * generating a new journal_id in $DATA/$Max.
 *
 * The journal_id is, as of the current version (2.0) of the transaction log,
 * simply the 64-bit timestamp of when the journal was either created or last
 * stamped.
 *
 * To determine the next usn there are two ways.  The first is to parse
 * $DATA/$J, find the last USN_RECORD in it and add its record_length to its
 * usn (which is the byte offset in the $DATA/$J attribute).  The second is
 * simply to take the data size of the attribute.  Since the usns are simply
 * byte offsets into $DATA/$J, this is exactly the next usn.  For obvious
 * reasons we use the second method as it is much simpler and faster.
 *
 * As an aside, note that to actually disable the transaction log, one would
 * need to set the VOLUME_DELETE_USN_UNDERWAY flag (see above), then go
 * through all the mft records on the volume and set the usn field in their
 * $STANDARD_INFORMATION attribute to zero.  Once that is done, one would
 * need to delete the transaction log file, i.e. \$Extend\$UsnJrnl, and
 * finally, one would need to clear the VOLUME_DELETE_USN_UNDERWAY flag.
 *
 * Note that if a volume is unmounted whilst the transaction log is being
 * disabled, the process will continue the next time the volume is mounted.
 * This is why we can safely mount read-write when we see a transaction log
 * in the process of being deleted.
 */

/* Some $UsnJrnl related constants. */
#define UsnJrnlMajorVer 2
#define UsnJrnlMinorVer 0

/*
 * $DATA/$Max attribute.  This is (always?) resident and has a fixed size of
 * 32 bytes.  It contains the header describing the transaction log.
 */
typedef struct {
/*Ofs*/
/*   0*/sle64 maximum_size;     /* The maximum on-disk size of the $DATA/$J
                                   attribute. */
/*   8*/sle64 allocation_delta; /* Number of bytes by which to increase the
                                   size of the $DATA/$J attribute. */
/*0x10*/sle64 journal_id;       /* Current id of the transaction log. */
/*0x18*/leUSN lowest_valid_usn; /* Lowest valid usn in $DATA/$J for the
                                   current journal_id. */
/* sizeof() = 32 (0x20) bytes */
} __attribute__((__packed__)) USN_HEADER;

/*
 * Reason flags (32-bit).  Cumulative flags describing the change(s) to the
 * file since it was last opened.  I think the names speak for themselves but
 * if you disagree check out the descriptions in the Linux NTFS project NTFS
 * documentation:
 */
enum {
    USN_REASON_DATA_OVERWRITE       = const_cpu_to_le32(0x00000001),
    USN_REASON_DATA_EXTEND          = const_cpu_to_le32(0x00000002),
    USN_REASON_DATA_TRUNCATION      = const_cpu_to_le32(0x00000004),
    USN_REASON_NAMED_DATA_OVERWRITE = const_cpu_to_le32(0x00000010),
    USN_REASON_NAMED_DATA_EXTEND    = const_cpu_to_le32(0x00000020),
    USN_REASON_NAMED_DATA_TRUNCATION= const_cpu_to_le32(0x00000040),
    USN_REASON_FILE_CREATE          = const_cpu_to_le32(0x00000100),
    USN_REASON_FILE_DELETE          = const_cpu_to_le32(0x00000200),
    USN_REASON_EA_CHANGE            = const_cpu_to_le32(0x00000400),
    USN_REASON_SECURITY_CHANGE      = const_cpu_to_le32(0x00000800),
    USN_REASON_RENAME_OLD_NAME      = const_cpu_to_le32(0x00001000),
    USN_REASON_RENAME_NEW_NAME      = const_cpu_to_le32(0x00002000),
    USN_REASON_INDEXABLE_CHANGE     = const_cpu_to_le32(0x00004000),
    USN_REASON_BASIC_INFO_CHANGE    = const_cpu_to_le32(0x00008000),
    USN_REASON_HARD_LINK_CHANGE     = const_cpu_to_le32(0x00010000),
    USN_REASON_COMPRESSION_CHANGE   = const_cpu_to_le32(0x00020000),
    USN_REASON_ENCRYPTION_CHANGE    = const_cpu_to_le32(0x00040000),
    USN_REASON_OBJECT_ID_CHANGE     = const_cpu_to_le32(0x00080000),
    USN_REASON_REPARSE_POINT_CHANGE = const_cpu_to_le32(0x00100000),
    USN_REASON_STREAM_CHANGE        = const_cpu_to_le32(0x00200000),
    USN_REASON_CLOSE                = const_cpu_to_le32(0x80000000),
};
typedef le32 USN_REASON_FLAGS;

/*
 * Source info flags (32-bit).  Information about the source of the change(s)
 * to the file.  For detailed descriptions of what these mean, see the Linux
 * NTFS project NTFS documentation:
 */
enum {
    USN_SOURCE_DATA_MANAGEMENT        = const_cpu_to_le32(0x00000001),
    USN_SOURCE_AUXILIARY_DATA         = const_cpu_to_le32(0x00000002),
    USN_SOURCE_REPLICATION_MANAGEMENT = const_cpu_to_le32(0x00000004),
};
typedef le32 USN_SOURCE_INFO_FLAGS;

/*
 * $DATA/$J attribute.  This is always non-resident, is marked as sparse, and
 * is of variable size.  It consists of a sequence of variable size
 * USN_RECORDs.  The minimum allocated_size is allocation_delta as specified
 * in $DATA/$Max.  When the maximum_size specified in $DATA/$Max is exceeded
 * by more than allocation_delta bytes, allocation_delta bytes are allocated
 * and appended to the $DATA/$J attribute and an equal number of bytes at the
 * beginning of the attribute are freed and made sparse.  Note the making
 * sparse only happens at volume checkpoints and hence the actual $DATA/$J
 * size can exceed maximum_size + allocation_delta temporarily.
 */
typedef struct {
/*Ofs*/
/*   0*/le32 length;            /* Byte size of this record (8-byte
                                   aligned). */
/*   4*/le16 major_ver;         /* Major version of the transaction log used
                                   for this record. */
/*   6*/le16 minor_ver;         /* Minor version of the transaction log used
                                   for this record. */
/*   8*/leMFT_REF mft_reference;/* The mft reference of the file (or
                                   directory) described by this record. */
/*0x10*/leMFT_REF parent_directory;/* The mft reference of the parent
                                   directory of the file described by this
                                   record. */
/*0x18*/leUSN usn;              /* The usn of this record.  Equals the offset
                                   within the $DATA/$J attribute. */
/*0x20*/sle64 time;             /* Time when this record was created. */
/*0x28*/USN_REASON_FLAGS reason;/* Reason flags (see above). */
/*0x2c*/USN_SOURCE_INFO_FLAGS source_info;/* Source info flags (see
                                   above). */
/*0x30*/le32 security_id;       /* File security_id copied from
                                   $STANDARD_INFORMATION. */
/*0x34*/FILE_ATTR_FLAGS file_attributes;/* File attributes copied from
                                   $STANDARD_INFORMATION or $FILE_NAME (not
                                   sure which). */
/*0x38*/le16 filename_size;     /* Size of the filename in bytes. */
/*0x3a*/le16 filename_offset;   /* Offset to the filename in bytes from the
                                   start of this record. */
/*0x3c*/ntfschar filename[0];   /* Use when creating only.  When reading use
                                   filename_offset to determine the location
                                   of the name. */
/* sizeof() = 60 (0x3c) bytes */
} __attribute__((__packed__)) USN_RECORD;

__private_extern__ errno_t ntfs_usnjrnl_stamp(ntfs_volume *vol);

#endif /* _OSX_NTFS_USNJRNL_H */
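The fixed 32-byte $DATA/$Max layout above maps directly onto four little-endian 64-bit fields, so it can be unpacked outside the driver for inspection. A rough illustration in ordinary Python (the field names follow the typedef; the sample byte values are made up):

import struct

# $DATA/$Max header: four little-endian signed 64-bit fields, 32 bytes total
# (maximum_size, allocation_delta, journal_id, lowest_valid_usn).
USN_HEADER = struct.Struct("<4q")

def parse_usn_header(buf: bytes) -> dict:
    """Unpack a 32-byte $DATA/$Max attribute value into its fields."""
    max_size, alloc_delta, journal_id, lowest_valid_usn = USN_HEADER.unpack(buf[:32])
    return {
        "maximum_size": max_size,
        "allocation_delta": alloc_delta,
        "journal_id": journal_id,
        "lowest_valid_usn": lowest_valid_usn,
    }

# Fabricated example: 32 MiB maximum size, 8 MiB allocation delta.
sample = struct.pack("<4q", 32 * 1024 * 1024, 8 * 1024 * 1024,
                     0x01C9000000000000, 0)
hdr = parse_usn_header(sample)

As the comment block explains, the next usn to hand out is simply the data size of $DATA/$J, since usns are byte offsets into that attribute.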
http://opensource.apple.com//source/ntfs/ntfs-65.1/kext/ntfs_usnjrnl.h
In our fast-paced modern society, who has time to click through pages and pages of content? “Not I,” said the web developer. In a world full of shortcuts, swipes and other gestures, the most efficient way to get through pages of content is the infinite scroll. While not a new concept, the idea of infinite scroll is still somewhat controversial. Like most things, it has a time and a place in modern web design when properly implemented.

For anybody unfamiliar, infinite scroll is the concept of new content being loaded as you scroll down a page. When you get to the bottom of the content, the site automatically loads new content and appends it to the bottom. While infinite scroll may not be ideal for all types of content, it is especially useful for feeds of data that an end-user would most probably want to page through quickly. A perfect use case, and one you may already be familiar with, is Instagram. You are presented with a feed of images and as you scroll down, more images keep showing up. Over and over and over until they run out of content to give you.

This article will teach you how to implement the infinite scroll concept in a React component, using the Random User Generator to load a list of users.

Before we begin, we need to make sure we have the proper dependencies in our project. To load data from the Random User Generator we are going to use superagent. To add superagent to your project via npm run:

$ npm install --save superagent

Or via yarn:

$ yarn add superagent

The crux of our infinite scroll component is going to be an onscroll event that will check to see if the user has scrolled to the bottom of the page. Upon reaching the bottom of the page, our event will attempt to load additional content:

window.onscroll = () => {
  if (
    window.innerHeight + document.documentElement.scrollTop
    === document.documentElement.offsetHeight
  ) {
    // Do awesome stuff like loading more content!
  }
};

The data that has been loaded will be appended to an array in the component’s state and will be iterated through in the component’s render method.

All good things come to an end. For demonstration purposes our component will eventually stop loading new content and display a message that it's reached the end and there is no additional content.

Now that we understand the logic flow that’s necessary to implement infinite scroll, let’s dive into our component:

import React, { Component, Fragment } from "react";
import { render } from "react-dom";
import request from "superagent";

class InfiniteUsers extends Component {
  constructor(props) {
    super(props);

    // Sets up our initial state
    this.state = {
      error: false,
      hasMore: true,
      isLoading: false,
      users: [],
    };

    // Binds our scroll event handler
    window.onscroll = () => {
      const {
        loadUsers,
        state: { error, isLoading, hasMore },
      } = this;

      // Bails early if:
      // * there's an error
      // * it's already loading
      // * there's nothing left to load
      if (error || isLoading || !hasMore) return;

      // Checks that the page has scrolled to the bottom
      if (
        window.innerHeight + document.documentElement.scrollTop
        === document.documentElement.offsetHeight
      ) {
        loadUsers();
      }
    };
  }

  componentWillMount() {
    // Loads some users on initial load
    this.loadUsers();
  }

  loadUsers = () => {
    this.setState({ isLoading: true }, () => {
      request
        .get('')
        .then((results) => {
          // Creates a massaged array of user data
          const nextUsers = results.body.results.map(user => ({
            email: user.email,
            name: Object.values(user.name).join(' '),
            photo: user.picture.medium,
            username: user.login.username,
            uuid: user.login.uuid,
          }));

          // Merges the next users into our existing users
          this.setState({
            // Note: Depending on the API you're using, this value may
            // be returned as part of the payload to indicate that there
            // is no additional data to be loaded
            hasMore: (this.state.users.length < 100),
            isLoading: false,
            users: [
              ...this.state.users,
              ...nextUsers,
            ],
          });
        })
        .catch((err) => {
          this.setState({
            error: err.message,
            isLoading: false,
          });
        })
    });
  }

  render() {
    const { error, hasMore, isLoading, users } = this.state;

    return (
      <div>
        <h1>Infinite Users!</h1>
        <p>Scroll down to load more!!</p>
        {users.map(user => (
          <Fragment key={user.username}>
            <hr />
            <div style={{ display: 'flex' }}>
              <img
                alt={user.username}
                src={user.photo}
                style={{
                  borderRadius: '50%',
                  height: 72,
                  marginRight: 20,
                  width: 72,
                }}
              />
              <div>
                <h2 style={{ marginTop: 0 }}>
                  @{user.username}
                </h2>
                <p>Name: {user.name}</p>
                <p>Email: {user.email}</p>
              </div>
            </div>
          </Fragment>
        ))}
        <hr />
        {error &&
          <div style={{ color: '#900' }}>
            {error}
          </div>
        }
        {isLoading &&
          <div>Loading...</div>
        }
        {!hasMore &&
          <div>You did it! You reached the end!</div>
        }
      </div>
    );
  }
}

const container = document.createElement("div");
document.body.appendChild(container);
render(<InfiniteUsers />, container);

There’s really not much to it! The component manages a few status flags and a list of data in its state, the onscroll event does most of the heavy lifting, and the render method brings it to life on your screen!

We’re also making use of setState with a callback function passed in as the second argument. The initial call to setState in our loadUsers method sets the value of loading to true, and then in the callback function we load some users and call setState again to append our new users to the users already in the state.

I hope you’ve found this article on implementing infinite scroll in React informative. If interested, you can find a working demo of this component over on CodeSandbox. Enjoy! 💥
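The state-merging logic in loadUsers is not React-specific: append the next page, then recompute hasMore against a cap, and bail early while a load is in flight. A language-agnostic sketch of that same flow in plain Python (the page size of 10 and cap of 100 mirror the component above; fetch_page stands in for the API call):

def make_feed(fetch_page, cap=100):
    """Return a state dict plus a load_more() that mimics loadUsers:
    append the next page, then recompute has_more against the cap."""
    state = {"users": [], "has_more": True, "is_loading": False}

    def load_more():
        # Bail early, like the onscroll guard in the component.
        if state["is_loading"] or not state["has_more"]:
            return state
        state["is_loading"] = True
        next_users = fetch_page()
        state["users"] = state["users"] + next_users  # merge, don't mutate
        state["has_more"] = len(state["users"]) < cap
        state["is_loading"] = False
        return state

    return state, load_more

# Stand-in for the Random User API: each call yields 10 fake users.
state, load_more = make_feed(lambda: ["user%d" % i for i in range(10)])
for _ in range(12):  # "scroll" past the end a couple of times
    load_more()
# users stops growing once has_more flips to False at 100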
https://alligator.io/react/react-infinite-scroll/
Receive data on lopy through serial

I am trying to send string data from Python over serial to my LoPy. Could anyone tell me in steps how I can receive it and test if it is received on my LoPy? Thanks in advance!

@monersss At the moment, with my example, that happens when you get a timeout. But again, that would require some "out-of-band" signaling, meaning: in addition to the "payload", the jpg file, some control information must be exchanged. And that is part of a protocol. That could be pretty simple. If I grab one out of the air, it could be: define record types, where the first letter defines the type of the record, followed by 3 bytes of record length, followed by the payload, like:

'fnnn': filename (of length nnn)
'd128': data
'x000': done

You would first receive 4 bytes, decode type and length, and then receive a further 'length' bytes. And for every record received, your LoPy would send a confirmation, like the three bytes "ok!" or "bad" in case of errors (or just a single byte). That would implement flow control and the EOF signal. It does not have other elements like checksums and sequence control. But you might not need that.

@monersss by protocol implementing we mean a simple thing: you send a command to the destination, e.g. "START_JPG"; it sends back e.g. "OK" or "UNKNOWN"; you send e.g. 200 bytes of jpg + CRC (or MD5 or whatever you need to verify); the other side reads that data, calculates the CRC, compares, and sends back "OK" or "retry" or "cancel"... it's all in your hands (look in Google for e.g. "handshake protocol")

@livius tried with lower speed, result is the same. I have never written a protocol, so I am not really sure how to do it. Another thing is that I would like my LoPy to stop reading data when the whole file, for example 20 kB, was received. How can I do that?

@monersss You can also decrease the speed of transmission, e.g. to 9600, but the best is what we suggested at the start: you must of course have some protocol implemented. In particular you can send back info with e.g. an MD5 hash of a portion of the data, and if it is ok, send more; if not, repeat...

@monersss Flash is way slower than the SD, so you are simply losing data, because there is no flow control. For SD that works, because the firmware buffers some bytes, and that grants a time window for storing. AFAIK, the receive buffer is 256 bytes. Besides that, your code is highly inefficient, because you open and close the file for every chunk of data. You could try it this way around:

from machine import UART
import pycom
import time
from machine import SD
import os

#sd = SD()
#os.mount(sd,'/sd')
#os.listdir('/sd')

uart = UART(1, baudrate=115200, pins = ('P21', 'P20'), timeout_chars=2000)

with open('/flash/pliki/mon','ab') as f:
    while True:
        data = uart.read(25)
        print(".")
        if data is not None:
            f.write(data)
        else:
            break
f.close()

The f.close() is still there, but not needed due to the with... clause. The file system has an internal 4k buffer; unless that is full or the file is closed, nothing will be written. You should also not print the data, because that also consumes time. If that does not work, you'll need a handshake protocol between sender and receiver.

@robert-hh I have managed to read and write a jpg file using the LoPy and store it on SD after assigning different pins for UART. When I try to do the same with the same jpg and store it on flash, the file gets corrupted: the number of bytes is different, and somewhere in the middle of the file the binary data are totally different from those of the original file. I am using this code:

from machine import UART
import pycom
import time
from machine import SD
import os

#sd = SD()
#os.mount(sd,'/sd')
#os.listdir('/sd')

uart = UART(1, baudrate=115200, pins = ('P21', 'P20'), timeout_chars=2000)

while True:
    data = uart.read(25)
    print(data)
    if data != None:
        with open('/flash/pliki/mon','ab') as f:
            f.write(data)
        f.close()

What may be causing file corruption?
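The record framing robert-hh sketches above (a type character plus a 3-digit length, e.g. 'f007' followed by the filename) can be prototyped off-device before touching a UART. A minimal sketch in ordinary desktop Python, with an in-memory stream standing in for the serial port (nothing here is Pycom API):

import io

def frame(kind: str, payload: bytes) -> bytes:
    """Build one record: 1-byte type + 3-digit ASCII length + payload (<=999)."""
    assert len(kind) == 1 and len(payload) <= 999
    return kind.encode() + b"%03d" % len(payload) + payload

def read_record(stream):
    """Read one record back from a file-like object (e.g. a UART)."""
    head = stream.read(4)                    # type byte + 3 length digits
    kind, length = chr(head[0]), int(head[1:4])
    return kind, stream.read(length)

# Round-trip a filename record, a data record and the "done" marker.
buf = io.BytesIO(frame("f", b"mon.jpg") +
                 frame("d", b"\x00" * 128) +
                 frame("x", b""))
records = [read_record(buf) for _ in range(3)]

On the device side the same read_record loop would run against the UART object, with an acknowledgement byte written back after each record, which is the flow control the thread is asking for.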
@crumble unfortunately it was a pin problem; when connected to pins 21 and 20 as serial TXD and RXD, data is stored on SD without any problem

@crumble If a pin conflict is the issue, one can reassign the UART Tx/Rx to other pins, using the uart.init() method

- Have you switched it off and on again? Or better, have you checked that the SD card is formatted in FAT 16 or 32 and can be written by other devices?
- Have a look at the pinout PDF. UART 1 and SD card handling seem to share the same pins. Maybe you cannot use UART 1 and the SD card at the same time. Try to close the UART before writing to the SD card.

Uh, I hope 2) is not the problem. Otherwise I will run with P10 into the same trap.

@monersss Try to get Atom out of the way, since it may introduce problems of its own. Use a simple terminal emulator like putty or screen, copy your script to the target device using ftp and try it there.

@crumble no, this is not the problem. I was actually trying to write anything to SD and it does not work; even when I try to write (data) I get 20 bytes once and then OSError Errno 5 EIO, so I am unable to write anything to my SD running the script from Atom

@monersss said in Receive data on lopy through serial:

f.write('data')

remove the ' around data. You write the string 'data' multiple times into your file. You will see this if you open the file in a text editor. Remove the ' and you will write the content of your data variable.
@robert-hh I have updated my code and I am sending a jpg in chunks of data to my LoPy. Transmission works if I do it to flash; unfortunately the SD card write does not work. Here's the code:

import pycom
import time
from machine import SD
import os

sd = SD()
os.mount(sd,'/sd')
os.listdir('/sd')

uart = UART(1, baudrate=9600, timeout_chars=2000)

while True:
    data = uart.read(20)
    print(data)
    if data != None:
        with open('/sd/mydatas','ab') as f:
            f.write('data')
        f.close()

I have updated my firmware and Atom release hoping that it would solve the problem, but that is not the case. When I open the file in 'ab' on SD using the console, everything is ok. Are you aware of any bugs that can cause this?

@monersss That is most likely because the indentation is wrong. If you see data in the print statement, then it should work with:

import pycom
import time
from machine import SD
import os

sd = SD()
os.mount(sd, '/sd')

uart = UART(1, baudrate=9600, timeout_chars=2000)

while True:
    data = uart.read(2000)
    print(data)
    if data != None:
        with open('/sd/mydat','ab') as f:
            f.write(data)
        f.close()

Using 'ab' causes data to be appended to the file.

@monersss said in Receive data on lopy through serial:

@robert-hh sorry for the typo, I cannot save the file on SD, the card is empty after running the code:

while True:
    data = uart.read(2000)
    print(data)

if data != None:
    with open('/sd/mydat','wb') as f:
        f.write(data)
    f.close()

Yes, because all you do inside the loop is reading from the uart and printing the content. Because your loop will never end, you will not reach the part where you write it to disk. There will be multiple errors:

- You will never write your data to the SD card. You read it only in the while loop. The other part will never be reached.
- After it has booted, your LoPy will have around 30-64 KB of free memory. But not in one chunk. It will be scattered in small parts, depending on what you have done before.
So you have either

- to split up your image into smaller chunks,
- to request a fixed data block in the booting process as soon as possible, and then read from the uart into this fixed buffer, or
- to use a *Py with 3 MB of memory.

At best you send the image in smaller chunks, so your LoPy can handle the amount of data. Therefore you will need some sort of protocol around it. Like: start sending image 'FileName' 23123 bytes. So you first read a line with a string. You parse the name of the file and its length. You create the file. You read 90 times a small chunk of 255 bytes and append it to this file. At last you do this for the remaining 173 bytes and close the file. At the beginning it will be easier to send the controlling line as 3 lines: something like a first line "send image" followed by two lines for the file name and file length. That way you work around string parsing, and you will only need read, compare and convert to integer.

@robert-hh sorry for the typo, I cannot save the file on SD, the card is empty after running the code:

import pycom
import time
from machine import SD
import os

sd = SD()
os.mount(sd, '/sd')

uart = UART(1, baudrate=9600, timeout_chars=2000)

while True:
    data = uart.read(2000)
    print(data)

if data != None:
    with open('/sd/mydat','wb') as f:
        f.write(data)
    f.close()

@monersss First of all, there is a typo in the uart = UART(... statement. It should be 115200, not 11520. But, if you are using a LoPy 1, the total available heap space is about 50k, which may be fragmented. It is highly unlikely that you can allocate a 30k buffer. Try using a smaller buffer, like 2k bytes, and write these to SD (or just count the number of received bytes).

@crumble I have updated my code:

import pycom
import time
from machine import SD
import os

uart = UART(1, baudrate=11520, timeout_chars=2000)

while True:
    data = uart.read(30000)
    if data != None:
        with open('/flash/mydati', 'wb') as f:
            f.write(data)
        f.close()

I did not change anything in 'with open..' as for now I am only testing how the transfer goes, so I do not care about overwriting the data. I am sending 23 KB of data to the LoPy and I am arriving at an error: memory allocation failed, allocating 30001 bytes. Where's my mistake?
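The "memory allocation failed, allocating 30001 bytes" error is exactly what the replies above predict: uart.read(30000) asks for one contiguous 30 KB buffer that a fragmented heap cannot provide. The usual way out is to allocate one small buffer once and reuse it for every chunk. A desktop-Python sketch of that loop, with io.BytesIO standing in for the UART (MicroPython's UART class also provides readinto(), though details may differ by port):

import io

def receive_to_file(uart, out, chunk_size=256):
    """Copy everything from `uart` into the writable stream `out` in small
    chunks, reusing one buffer instead of allocating per read."""
    buf = bytearray(chunk_size)      # allocated once, reused every iteration
    view = memoryview(buf)
    total = 0
    while True:
        n = uart.readinto(buf)       # fills buf, returns the byte count
        if not n:                    # None/0: timeout or end of data
            break
        out.write(view[:n])
        total += n
    return total

fake_uart = io.BytesIO(b"\xaa" * 23000)   # stands in for a "23 KB jpg"
sink = io.BytesIO()                       # stands in for the open file
received = receive_to_file(fake_uart, sink)

On the device, sink would be the file opened once with open(..., 'wb') outside the loop, which also avoids the open/close-per-chunk cost noted earlier in the thread.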
https://forum.pycom.io/topic/2243/receive-data-on-lopy-through-serial
First XSLT Function YouTube function Let’s see what we want to create and then make it. Simple steps at a time; this might just be the first XSLT function you are creating. Our aim here is what you see in the screenshot of a page on my C1 CMS demo site. This part of the template shows two content areas. The one area has pure content… Figure 1: Showing pure content …while the other one shows the YouTube video. Figure 2: Showing a YouTube video inline So we want editors to be able to add these to their page. We need to make it simple for them to insert these content elements anywhere they want to. If you don’t want the editors to have that much freedom in where to insert and play videos on the pages, you can consider using datatypes - a little more on that later! Below is the same page in the manager: the template with three content areas the editor can supply content for. Figure 3: Block 1 contains nothing but HTML And Block 2 has the function inserted to play the YouTube video. This is what you will be able to create in a few minutes after you study this document! Figure 4: Block 2 includes a YouTube video Note: In my design the blocks have a fixed height; that is why the content below the function is not visible. I have just left it so in here to make clear that this function can just go anywhere on a page between content elements etc. If you do the preview, you will see the video again as in the first screenshot. How to embed a YouTube video There are two ways to go with YouTube videos: link or embed. We want to play the video on our site, not just link to it. (An editor can do that just by supplying a hyperlink but that is boring.) So visit the YouTube website and find the code to embed a video on your site. On the YouTube site, when having found a video, now get the embed code. 
Figure 5: Video on YouTube

Here is the code copied straight from the site:

<object width="425" height="344"><param name="movie" value=""></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object>

Listing 1: The embed code from the YouTube website

Have a look at the code: which code is standard and which differs per video? If you’re not sure, get another video, get its embed code and compare the two. Let’s beautify the code a bit for readability:

<object width="425" height="344">
  <param name="movie" value=""></param>
  <param name="allowFullScreen" value="true"></param>
  <param name="allowscriptaccess" value="always"></param>
  <embed src="" type="application/x-shockwave-flash"
         allowscriptaccess="always" allowfullscreen="true"
         width="425" height="344">
  </embed>
</object>

Listing 2: The same code formatted for readability

We can see that it has an <object> element with some parameters and an <embed> element. It looks duplicated, which it is. This is how it takes care of different browsers.

Using the YouTube code from C1 CMS

Let’s first copy this code to a blank page in C1 CMS. Be sure not to paste it in as ordinary text: this is code, not content. If you copy and paste it into the Visual editor (or insert it via Word), it will come out as text. What we need is for it to keep its HTML tags, not become content on the page. Therefore, we need to paste it in as code.

- Select the Content perspective in C1 CMS
- Create a new page or edit an existing page
- Select the Content tab
- Switch to the Source code view (the Source button right on top of Visual editor)
- Paste the HTML in

Figure 6: HTML pasted in the Source view

Now click the Preview tab. Oh no! An error… Too bad it does not work! Don’t be put off immediately.
I agree that they are nasty and can be really cryptic, but always try to find something in that error message!

Figure 7: An error displayed in the Preview

It is complaining about ‘=’ at Line 4 Position 60, where it expects a ‘;’. Let’s see if we can fix it there. At Line 4 we see a lot of equal signs. Position 60 must be somewhere at the end: value=" There we see “CVvC80xoWmI&hl=en&fs=1”. Well, it is the “&hl=” it doesn’t like. It expects something like “&code;” and it reads something like “&code=”, which is not allowed in XHTML; C1 CMS validates your output for that and, therefore, complains. In order to ensure the validity of strings like that, ampersands must themselves be expressed as an entity reference, i.e. “&amp;”. For example: value="

Note: Please refer to Using Ampersands in Attribute Values (and Elsewhere) for more information.

The strange string of characters just before the ampersand, “CVvC80xoWmI”, is actually the unique YouTube video code. If you look at different <embed> elements, you will see that this one changes for each video. If you change anything there, you will see either no video or another one. It is the unique way to specify which video you want to see. That is exactly what we need later on, and it is what we will have the editors specify. So leave that code intact for now. Usually you will just try to take out pieces that cause problems and see if you can still get it to work. So I took out the “&hl=en&fs=1” part and then it started complaining about another line. Yes, indeed, there you find a similar one in the src value of the <embed> element. Clear them both out and the preview works now!
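The fix the preview is asking for, turning each raw & into the entity &amp;, can be checked mechanically. A small illustration in Python using only the standard library (the URL here is a shortened stand-in, not the real YouTube one):

from xml.sax.saxutils import escape
import xml.etree.ElementTree as ET

raw = "http://example.com/v/CVvC80xoWmI&hl=en&fs=1"   # raw & breaks XHTML
safe = escape(raw)                                    # & becomes &amp;

# The escaped form parses as well-formed XML; the raw form does not.
ET.fromstring('<param name="movie" value="%s"/>' % safe)
try:
    ET.fromstring('<param name="movie" value="%s"/>' % raw)
    parses = True
except ET.ParseError:
    parses = False

This is the same rule C1 CMS enforces in its validation: any & in an attribute value that does not start a valid entity reference makes the document ill-formed.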
It plays your video (just give it a little bit of time to load it from YouTube). I am leaving you with the following working code:

<object width="425" height="344">
  <param name="movie" value=""></param>
  <param name="allowFullScreen" value="true"></param>
  <param name="allowscriptaccess" value="always"></param>
  <embed src="" type="application/x-shockwave-flash"
         allowscriptaccess="always" allowfullscreen="true"
         width="425" height="344">
  </embed>
</object>

Listing 3: The modified code that works now

This code works, and to play another video, you have to change that video code. So we have found the main variable part in here: “CVvC80xoWmI”. So if we want to play a YouTube video, we need to put in all this HTML code plus the code of the video we want to see.

To put some HTML code on a page, you can use an XSLT function. This is what we will make now. To put in an input variable for the video code, we can create an input parameter in this function.

You’re still here, not scared away by that complex error? Good! It will not get any more complex than that! We will now create an XSLT function, which will write some HTML: the HTML we have just managed to get correctly rendered by C1 CMS in the page preview.

To create a new function, select the Functions perspective. There you can see the list of functions available on your site. It might be empty or list a whole lot of them, depending on your site implementation and the add-ons installed. I have a bunch of them. Now you will create the function by right-clicking the XSLT Functions part of the Functions navigator and clicking Add Function in the context menu.

Figure 8: About to add an XSLT function

Think well about the name for a function! When editors want to add a YouTube video, they must insert the function from this list of functions. Giving it a good name will make the function easier to find and group with similar functions. You might consider using different namespaces.
If you are creating the function as a partner, you might have a namespace of “Partner name functions”, or, for a specific customer, “Customer name functions”. You can see that C1 CMS uses its own namespace as well: all the add-ons are within the “Composite” namespace, as are items in other perspectives such as “Media”. This is important, but for now we will not go deeper into it.

Creating an XSLT function

To create a new function:

- Select the Functions perspective
- Select the XSLT functions in the navigator’s tree
- Click Add Function
- Specify the name
- Specify the namespace
- Click OK

I named the function MyYoutube and entered “Demo.Media” as the namespace, meaning you will have a namespace called Media below the already existing Demo namespace.

Note: namespaces are case-sensitive, so if you type “demo.media”, you will have two “demo” namespaces, “Demo” and “demo”, which might not be your idea of ordering functions.

Output type XHTML

Leave the output type as XHTML, so that we will have XHTML generated. The other choice is XML. Putting it on XHTML will make the function available in the Insert Function list of Visual Editor. In most cases, you will be creating XHTML functions. When you get more advanced, you might want to create XML that is used in other functions; in that case, you’ll have to change the output type.

Figure 9: Editing an XSLT function

The function is created. The namespace now shows in the function tree. An XSLT function editor view consists of several tabs. It opens on the Template tab, which shows the XSLT. This is where you specify the output of the function. Here you will add the XHTML and any XSLT needed to render the XHTML. We will get to the other tabs later. Let’s start simple, with “Hello World”. You can replace the entire inner <body> section with some of your HTML. Click on the Preview tab to see the result:

Figure 10: Previewing the XSLT function.
Smaller steps: inserting a new function If you want to take smaller steps, you can now save this function and test it. For the limited functionality it has now, it only shows “Hello World!” in my case. To do so: - From the Content perspective, edit a page, - Switch to the Visual view if necessary - Insert the function (Insert > Function) - Then click to expand Demo.Media and you will see your function there. - Select it and click OK. Figure 11: Inserting the function Now you can see the green box with your function’s name appeared on a page. Add some content above and below your function. Click Preview to see the result. Figure 12: The function has been inserted Okay, that works. Now let’s go back to the function again and add something better than just “Hello World!” to it. Now remove “HELLO WORLD” and add that YouTube code we made earlier. Hopefully, you kept it in Notepad somewhere (Personally I use Notepad++, a free, very handy editor with many cool features, I often have multiple documents open in where I paste parts of code). Otherwise go back in this document to the proper section and copy it again. - Copy the YouTube object embed code - Paste the code in the <body> section of the XSLT template. - Save the function. If you took the smaller steps, you can switch to the Content perspective again. If you still have that page with your function open, you can click on the Preview tab again. This is a great way of testing your function on a page. Click back and forth from the function and content section (the same works well when modifying templates) Figure 13: Pasting the YouTube HTML embed code in the function But this will always show the same YouTube video. Now we need to make it dynamic. An editor should supply that video code. To do so, we can use the Input Parameters of a XSLT function. - Click on the Input Parameters tab - Click Add New (input parameter) - Type “VideoCode” in the field name. 
This is the name of our variable; we will need it later with exactly this casing.

- Specify the Label and Help fields to give the parameter a friendlier name and an explanation of what it is used for. For example, I have specified “Copy the movie id code from YouTube in here (like 64Nl7Mhnugo)” in the Help field.

Figure 14: Adding an input parameter

Under Parameter type and values, you specify which type the VideoCode parameter has and what its value will be, or rather where we will get it from. We need something like “CVvC80xoWmI” to fit in there, which is a string. Let’s create a test value with that string:

- Click Test value. Two windows will pop up.
- Click Set New.
- Select the function Composite | Constant | String. The function opens and shows that it has one parameter (Value).
- Specify the value for the String constant function by choosing Constant and pasting our video code in.

Figure 15: Specifying the constant value – video code

- Click OK.

As you can see, you are already using CMS functions and input parameters yourself, just as editors will do with your own function.

- Click the Preview tab.

Figure 16: Previewing the modified function

You can see our input parameter with the test value appearing in the preview. We will now get that parameter into the XHTML we have specified in the Template. This is where we need to do a little XSLT coding.

Learning the basics of XSLT

This document is not intended to teach you XSLT, but it will give you some bits to start with. XSLT is quite complex to master, yet quite easy to modify once you learn the basics, and it lends itself very well to reuse. (Installing add-ons from our package server will provide you with good examples, although some are aimed at a more advanced level.) Once you have gathered some XSLTs, do a lot of googling and practicing in, for example, the free XRay XML Editor. You will learn a lot about XSLT. Focus on learning the basics and getting a feel for how XSLT looks.
You don’t have to be able to write XSLT from scratch, nor understand it fully, to use and modify it! You will often be able to use a complex XSLT by changing only some simple bits and pieces. Learn XPath, learn the template basics, and learn to spot the xsl:value-of bits: these are the basic things you need to change in XSLTs. Paste the input XML from your functions into your editor (XRay or, better, XMLSpy) and work on the XSLT there until you are satisfied, then paste the XSLT back in here. Note that nothing here is specific to C1 CMS: this is just open-standards XML and XSLT that you can reuse in many other projects and products. You can also consult people who know nothing about C1 CMS but do know XSLT. To start learning XSLT, I have created a page on a personal site with some links and tips to resources and other interesting things.

Getting the video code variable with XSLT

As said, we will not try to teach you XSLT in here, so here we go with only short bits. We have

<in:param name="videocode">CVvC80xoWmI</in:param>

which needs to go into our XHTML. Create a variable in XSLT and set its value from our input XML using an XPath query:

<xsl:param name="videocode" select="/in:inputs/in:param[@name='videocode']" />

This creates the $videocode variable and selects the value from our <in:inputs> element. Now, to refer to values in XSLT, you use something like this:

<xsl:value-of select="@attribute" />
<xsl:value-of select="$variable" />

With @ you are referring to XML attributes; with $ you are referring to XSLT variables. If you want a variable within an XHTML element, you use the {$variable} notation:

<strong><xsl:value-of select="$variable" /></strong>
<img src="{$variable}"/>

So to put the $videocode in the right spot, you get:

<param name="movie" value="{$videocode}" />

The same goes for:

<embed src="{$videocode}" />

Test it using the preview, and see that the test value appears in the XHTML. Voila! You managed to pull an input parameter into the output XHTML. However, it is still a constant value.
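For reference, here is what the pieces above can look like when assembled into one template. This is only a hedged sketch: the exact in: namespace URI and the wrapper markup that C1 CMS generates vary with the product version, so treat it as an illustration of the pattern (a parameter pulled from the input XML and injected with {$videocode}) rather than the literal code in your editor:

```xml
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:in="http://www.composite.net/ns/transformation/input/1.0">

  <xsl:template match="/">
    <!-- Pull the input parameter into an XSLT parameter via an XPath query -->
    <xsl:param name="videocode"
               select="/in:inputs/in:param[@name='videocode']" />
    <html xmlns="http://www.w3.org/1999/xhtml">
      <body>
        <object width="425" height="344">
          <!-- {$videocode} is an attribute value template: the parameter's
               value is substituted into the attribute at transform time -->
          <param name="movie" value="{$videocode}" />
          <embed src="{$videocode}" type="application/x-shockwave-flash"
                 width="425" height="344" />
        </object>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

When the function runs, C1 CMS feeds the input parameters to the template as the /in:inputs document, so the XPath query above is what connects the editor-supplied value to the rendered XHTML.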
We want the editors to specify the value for the videocode input parameter themselves; this is what the default value is for. Until now, we have had the test value doing this job. The test value is not required in your function, but it can be handy for testing it.

- Save your function.
- Switch to the Content perspective again.
- Preview.

Another error! Instead of your function, an [ Error in rendering ] box appears! Nasty, but again, don’t be put off. I am deliberately forcing these errors on you so that you experience them now and learn to deal with them.

- Click on the error.
- Read the error in the popup:

System.InvalidOperationException: Failed to get value for function 'Demo.Media.MyYoutube'. ---> System.ArgumentException: Missing parameter 'videocode' (type of System.String)
   at Composite.Functions.FunctionRuntimeTreeNode.GetValue(FunctionContextContainer contextContainer)
   --- End of inner exception stack trace ---
   at Composite.Functions.FunctionRuntimeTreeNode.GetValue(FunctionContextContainer contextContainer)
   at Composite.Renderings.Page.PageRenderer.ExecuteEmbeddedFunctions(XElement element, FunctionContextContainer contextContainer)

Listing 4: Error text

It reads like a lot of gibberish until you look a little closer and see Missing parameter 'videocode' - hey, that is our variable! And it is missing. Right, we have not specified it yet, because we inserted our function before it had any input parameters.

Specifying input parameters for a function

Select the green box of your function on the page and click the function Properties button (or right-click the green box). Now you will have to specify your videocode. Or better yet, remove your function and insert it again: when you insert it, you will see that it automatically prompts for the missing videocode parameter. This is how editors will work with the function from now on.
Figure 17: The function prompting for the required parameter

If there is no default value for a parameter, the function will automatically prompt for it. Note that the editor sees the Label and Help texts; the videocode parameter name itself is only used inside the function. (Sorry, my help text is a bit confusing here, mentioning a movie id where it should say a video code.)

You can also give a description for the whole function:

- Go to the Settings tab of your function and type some text in there.

Figure 18: Providing a description for the function

Figure 19: The function shows its description

Generate documentation on functions

You can also generate documentation for all your functions automatically. C1 CMS will list the descriptions, parameters, and help texts for all of them, so it is very good practice to put some text in there, both for your editors and for documentation. To generate documentation for the functions:

- From the Functions perspective, select All functions.
- Click Generate Documentation on the toolbar.

You can do the same with All widget functions, and you can likewise generate documentation for a single function or for a namespace containing a number of functions.

If you look at the object embed code, you will see that there are more parameters in there. You can turn some of these into input parameters of the function as well, just as you did with the videocode parameter.
<embed src="" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344" />

Listing 5: Parameters of the embed element

This can be changed into something like:

<object width="{$width}" height="{$height}">
  <param name="movie" value="{$videocode}" />
  <param name="allowFullScreen" value="{$allowfullscreen}" />
  <param name="allowscriptaccess" value="always" />
  <embed src="{$videocode}" width="{$width}" height="{$height}" movie="{$videocode}" type="application/x-shockwave-flash" allowFullScreen="{$allowfullscreen}" allowscriptaccess="always" />
</object>

Listing 6: Implementing and using more parameters

You will have to think about what is worth exposing and what is not. More parameters make your function more flexible, but too many of them make it harder to use for editors. When implementing more parameters, it makes a lot of sense to use default values so that editors have to specify as few values as possible. Also think about the types of the parameters: don’t stick to strings; use Booleans and integers where possible. That prevents wrong values from getting in. C1 CMS leans heavily on strong typing, preventing many errors and bugs before they can occur.

To create a default value, do the same as you did with the test value. Give defaults to everything except the bare minimum of parameters that must be specified by the user: by not specifying a default value, you force the user to provide one. In this case, I would specify default values for all the additional parameters you choose to add and leave only the videocode without a default.

More uses of an XSLT function without XML

We have now used every tab except the Function calls tab. This is actually a very important tab that you will use in most XSLT functions; the exception is functions like this one, which don’t need to get data from the C1 CMS repository.
Most functions use the Function calls section to get content from somewhere and then transform that XML with the XSLT. You will learn this in other documents on XSLT functions.

Templates and XHTML includes

Another good use of XSLT functions that, like ours, don’t call the XML data layer, is including XHTML in templates. When you work with templates, you will often find pieces of HTML that are the same in several templates (CSS and JavaScript includes, disclaimers, headers, footers, web statistics scripts, Google Analytics, etc.). In those cases, it is good to put that HTML in one place instead of having it in x places (x templates). In practice, you put the HTML in an XSLT function (the way we started out) and add the function to the templates.

Some might view this as a downside:

- they want to change something in the HTML
- they open the template
- they find out the code is not there
- they need to go to the function and edit it there

But it is not a downside! True, you have to take a few extra steps, but:

- you change it for all x templates at once
- it prevents you from forgetting a single template
- it prevents you from ending up with different pieces where you actually want one!

So it is a good practice that makes your code easier to manage!
So is Python better than Perl, Bash, Ruby, or any other language? It’s really difficult to put that sort of qualitative label on a programming language, since the tool is so closely tied to the thought process of the programmer who is using it. Programming is a subjective, deeply personal activity. For the language to be excellent, it must fit the person using it. So we’re not going to argue that Python is better, but we will explain the reasons that we believe Python can be an excellent choice. We’ll also explain why it is a great fit for performing sysadmin tasks.

The first reason that we think that Python is excellent is that it is easy to learn. If you are a sysadmin, your work can pile up faster than you can unpile it. With Python, you can start writing useful scripts literally in hours rather than in days or weeks.

The next reason that we consider Python to be an excellent programming language is that, while it lets you start simply, it also allows you to perform tasks that are as complex as you can imagine. Do you need to read through a logfile line by line and pull out some pretty basic information? Python can handle that. Additionally, if you are able to perform complex operations, but the maintainability of your code suffers along the way, that isn’t a good thing. Python doesn’t prevent code maintenance problems, but it does allow you to express complex ideas with simple language constructs. Simplicity is a huge factor in writing code that is easy to maintain later. Python has made it pretty simple for us to go back over our own code and work on it after we haven’t touched it in months. It has also been pretty simple for us to work on code that we haven’t seen before. So the language, that is, the language’s syntax and common idioms, is clear, concise, and easy to work with over long periods of time.

The next reason we consider Python to be an excellent language is its readability. Python relies on whitespace to determine where code blocks begin and end.
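Before moving on to readability, here is the logfile task mentioned above as a concrete sketch. The log lines and the ERROR keyword are invented for illustration, and the sketch uses an in-memory string instead of a real file; it is written for modern Python 3, unlike the book's Python 2 listings:

```python
# Minimal sketch of "read a logfile line by line and pull out basic info".
# The log format here is invented; swap io.StringIO for open("app.log").
import io

log_data = io.StringIO(
    "2008-04-07 00:26:01 INFO started\n"
    "2008-04-07 00:26:02 ERROR disk full\n"
    "2008-04-07 00:26:03 INFO stopped\n"
)

errors = []
for line in log_data:          # file objects iterate line by line
    if "ERROR" in line:
        errors.append(line.strip())

print(errors)
```

The same loop works unchanged on a file object returned by open(), which is exactly the kind of five-minute script the paragraph above is talking about.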
The indentation helps your eyes quickly follow the flow of a program. Python also tends to be “word-based.” By that we mean that while Python uses its share of special characters, features are often implemented as keywords or with libraries. The emphasis on words rather than special characters helps the reading and comprehension of code. Now that we’ve outlined a few of Python’s benefits, we’ll show some comparisons of code examples in Python, Perl, and Bash. Along the way, we’ll also look at a few more of Python’s benefits. Here is a simple example, in Bash, of showing all the combinations of 1, 2 and a, b:

#!/bin/bash
for a in 1 2; do
    for b in a b; do
        echo "$a $b"
    done
done

And here is a comparable piece of Perl:

#!/usr/bin/perl
foreach $a ('1', '2') {
    foreach $b ('a', 'b') {
        print "$a $b\n";
    }
}

This is a pretty simple nested loop. Let’s compare these looping mechanisms with a for loop in Python:

#!/usr/bin/env python
for a in [1, 2]:
    for b in ['a', 'b']:
        print a, b

Next, we’ll demonstrate using conditionals in Bash, Perl, and Python. We have a simple if/else condition check here. We’re just checking to see whether a certain file path is a directory:

#!/bin/bash
if [ -d "/tmp" ] ; then
    echo "/tmp is a directory"
else
    echo "/tmp is not a directory"
fi

Here is the Perl equivalent of the same script:

#!/usr/bin/perl
if (-d "/tmp") {
    print "/tmp is a directory\n";
}
else {
    print "/tmp is not a directory\n";
}

And here is the Python equivalent of the script:

#!/usr/bin/env python
import os
if os.path.isdir("/tmp"):
    print "/tmp is a directory"
else:
    print "/tmp is not a directory"

Another point in favor of Python’s excellence is its simple support for object-oriented programming (OOP). And, actually, the converse of that is that you don’t have to do OOP if you don’t want to. But if you do, it’s dead simple in Python.
OOP allows you to easily and cleanly break problems apart and bundle pieces of functionality together into single “things” or “objects.” Bash doesn’t support OOP, but both Perl and Python do. Here is a module in Perl that defines a class:

package Server;
use strict;

sub new {
    my $class = shift;
    my $self = {};
    $self->{IP} = shift;
    $self->{HOSTNAME} = shift;
    bless($self);
    return $self;
}

sub set_ip {
    my $self = shift;
    $self->{IP} = shift;
    return $self->{IP};
}

sub set_hostname {
    my $self = shift;
    $self->{HOSTNAME} = shift;
    return $self->{HOSTNAME};
}

sub ping {
    my $self = shift;
    my $external_ip = shift;
    my $self_ip = $self->{IP};
    my $self_host = $self->{HOSTNAME};
    print "Pinging $external_ip from $self_ip ($self_host)\n";
    return 0;
}

1;

And here is a piece of code that uses it:

#!/usr/bin/perl
use Server;
$server = Server->new('192.168.1.15', 'grumbly');
$server->ping('192.168.1.20');

The code that makes use of the OO module is straightforward and simple. The OO module may take a bit more mental parsing if you’re not familiar with OOP or with the way that Perl tackles OOP. A comparable Python class and use of the class looks something like this:

#!/usr/bin/env python

class Server(object):
    def __init__(self, ip, hostname):
        self.ip = ip
        self.hostname = hostname
    def set_ip(self, ip):
        self.ip = ip
    def set_hostname(self, hostname):
        self.hostname = hostname
    def ping(self, ip_addr):
        print "Pinging %s from %s (%s)" % (ip_addr, self.ip, self.hostname)

if __name__ == '__main__':
    server = Server('192.168.1.20', 'bumbly')
    server.ping('192.168.1.15')

Both the Perl and Python examples demonstrate some of the fundamental pieces of OOP. The two examples together display the different flavors that each respective language provides while reaching toward its respective goals. They both do the same thing, but are different from one another. So, if you want to use OOP, Python supports it. And it’s quite simple and clear to incorporate it into your programming.
Another element of Python’s excellence comes not from the language itself, but from the community. In the Python community, there is much consensus about the way to accomplish certain tasks and the idioms that you should (and should not) use. While the language itself may support certain phrasings for accomplishing something, the consensus of the community may steer you away from that phrasing. For example, from module import * at the top of a module is valid Python. However, the community frowns upon this and recommends that you use either:

import module

or:

from module import resource

Importing all the contents of a module into another module’s namespace can cause serious annoyance when you try to figure out how a module works, what functions it is calling, and where those functions come from. This particular convention will help you write code that is clearer and will allow people who work on your code after you to have a more pleasant maintenance experience. Following common conventions for writing your code will put you on the path of best practices. We consider this a good thing.

The Python Standard Library is another excellent attribute of Python. If you ever hear the phrase “batteries included” in reference to Python, it simply means that the standard library allows you to perform all sorts of tasks without having to go elsewhere for modules to help you get it done. For example, though it isn’t built into the language directly, Python includes regular expression functionality; sockets; threads; date/time functionality; XML parsers; a config file parser; file and directory functionality; data persistence; unit test capabilities; http, ftp, imap, smtp, and nntp client libraries; and much more. So once Python is installed, modules to support all of these functions will be imported by your scripts as they are needed. You have all the functionality we just listed here; it is impressive that all of this comes with Python without requiring anything else.
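A few of the standard-library areas just listed can be shown in one short sketch. This uses Python 3 module names (configparser rather than the Python 2 ConfigParser of the book's era), and the hostname and IP values are invented for illustration:

```python
# Three "batteries included" examples: regular expressions, date/time
# arithmetic, and config file parsing -- all from the standard library.
import re
import datetime
import configparser

# Regular expressions: pull an IP-like token out of a string.
match = re.search(r"\d+\.\d+\.\d+\.\d+", "host 192.168.1.15 is up")
print(match.group(0))

# Date/time: subtract two dates and get a timedelta.
delta = datetime.date(2008, 4, 7) - datetime.date(2008, 4, 1)
print(delta.days)

# Config file parsing, fed from a string here instead of a file on disk.
cfg = configparser.ConfigParser()
cfg.read_string("[server]\nhostname = grumbly\n")
print(cfg["server"]["hostname"])
```

None of this required installing anything beyond Python itself, which is the whole point of the paragraph above.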
All of this functionality will help you out immensely as you write Python programs to do work for you. Easy access to numerous third-party packages is another real advantage of Python. In addition to the many libraries in the Python Standard Library, there are a number of libraries and utilities that are easily accessible on the internet that you can install with a single shell command. The Python Package Index, PyPI, is a place where anyone who has written a Python package can upload it for others to use. At the time we are writing this book, there are over 3,800 packages available for download and use. Packages include IPython, which we cover in the following chapter; Storm (an object-relational mapper, which we cover in Chapter 12, Data Persistence); and Twisted, a network framework, which we cover in Chapter 5, Networking, just to name 3 of the over 3,800 packages. Once you start using PyPI, you’ll find it nearly indispensable for finding and installing useful packages.

Many of the benefits that we see in Python stem from the central philosophy of Python. When you type import this at a Python prompt, you will see The Zen of Python by Tim Peters. Here it is:

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!

This statement isn’t a dogmatic imperative that is strictly enforced at all levels of development of the language, but the spirit of it seems to permeate much of what happens in and with the language. And we have found this spirit to be a beautiful thing. This is perhaps the essence of why we choose to use Python day after day. This philosophy resonates within us as what we want and expect from a language. And if this resonates with you, then Python is probably a good choice for you as well.

If you just picked up this book in a bookstore or are reading an introduction online somewhere, you may be asking yourself how hard it is going to be to learn Python and whether it is even worth it. Although Python is catching on like wildfire, there are many sysadmins who have been exposed to Bash and Perl only.
If you find yourself in this category, you should take comfort in knowing that Python is very easy to learn. In fact, although it is a matter of opinion, Python is considered by many to be the easiest language to learn and teach, period! If you already know Python, or are a programming guru in another language, you will probably be able to jump right into any of the following chapters without reading this intro and immediately start being productive using our examples. We made a concerted effort to create examples that will actually help you get your job done. There are examples of ways to discover and monitor subnets automatically with SNMP, to convert to an interactive Python shell called IPython, to build data processing pipelines, to write custom metadata management tools with object-relational mappers, to perform network programming, to write command-line tools, and much more. If you are coming from a shell programming/scripting background, though, don’t worry at all. You, too, can learn Python quite easily. You need only motivation, curiosity, and determination, the same factors that led you to pick up this book and look at the introduction in the first place. We sense there are still a few skeptics out there. Maybe some of the things you have heard about programming have scared you. One common, and horribly false, misconception is that only some people can learn to program, and they are a mysterious and elite few. The frank truth is that anyone can learn how to program. A second, equally false, misconception is that earning a computer science degree is the only way a person can truly become a software engineer. But some of the most prolific software developers do not have engineering degrees. There are people with philosophy, journalism, nutritional science, and English degrees who are competent Python programmers. Having a degree in computer science is not a requirement to learn Python, although it certainly doesn’t hurt. 
Another funny, and false, misconception is that you must have started to program in your teenage years, or you will never learn to program. While this makes people who were lucky enough to have someone in their life that encouraged them to program at a young age feel good, it is another myth. It is very helpful to have started learning programming at a young age, but age is not a requirement to learn Python. Learning Python is most certainly not a “young person’s game,” as we have heard some people say. There are countless cases of developers who learned to program in their late 20s, 30s, 40s, and onward. If you have gotten this far, we should point out that you, the reader, have an advantage many people do not. If you decided to pick up a book on Python for Unix and Linux system administration, then you most likely know something about how to execute commands from a shell. This is a tremendous advantage to learning to become a Python programmer. Having an understanding of the way to execute commands from a terminal is all that is required for this introduction to Python. If you truly believe you will learn how to program with Python, then read the next section immediately. If you don’t believe it yet, then reread this section again, and convince yourself it really is just a matter of getting your mind to understand you do have the power to learn how to program in Python. It is really that simple; if you make this decision, it will change your life. This introduction to Python is going to be very different from any other one we’ve seen, as it will use an interactive shell called IPython and a regular Bash shell. You will need to open two terminal windows, one with IPython and one with Bash. In every example, we will compare what we do in Python with a Bash example. The first steps are to download the correct version of IPython for your platform and install it. You can get a copy at. 
If for some reason you can’t get IPython to install, you can also just use a regular Python shell. You can also download a copy of the virtual machine that includes all of the software for the book, as we have a copy of IPython preconfigured and ready to run. You just need to type in ipython, and you will get a prompt. Once you have installed IPython and have an IPython shell prompt, it should look something like this:

[ngift@Macintosh-7][H:10679][J:0]# ipython
Python 2.5.1 (r251:54863, Jan 17 2008, 19:35:17)
Type "copyright", "credits" or "license" for more information.

IPython 0.8.2 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]:

In your Python terminal, type in the following:

In [1]: print "I can program in Python"
I can program in Python

If you spend a lot of your day typing commands into a terminal, then you are used to executing statements and, perhaps, redirecting the output to a file or to another Unix command. Let’s look at the way we would execute a command in Bash and then compare that to the way it works in Python. In the Bash terminal, type the following:

[ngift@Macintosh-7][H:10701][J:0]# ls -l /tmp/
total 0
-rw-r--r--  1 ngift  wheel  0 Apr  7 00:26 file.txt

Example 1-1. Python wrapper for ls command

#!/usr/bin/env python
#Python wrapper for the ls command

import subprocess

subprocess.call(["ls","-l"])

Now if you run this script, you will get the exact same output that you would get if you ran ls -l from the command line:

[ngift@Macintosh-7][H:10746][J:0]# ./pyls.py
total 8
-rwxr-xr-x  1 ngift  staff  115 Apr  7 12:57 pyls.py

Example 1-2.
System information script—Python

#!/usr/bin/env python
#A System Information Gathering Script

import subprocess

#Command 1
uname = "uname"
uname_arg = "-a"
print "Gathering system information with %s command:\n" % uname
subprocess.call([uname, uname_arg])

#Command 2
diskspace = "df"
diskspace_arg = "-h"
print "Gathering diskspace information %s command:\n" % diskspace
subprocess.call([diskspace, diskspace_arg])

Example 1-3. System information script—Bash

#!/usr/bin/env bash
#A System Information Gathering Script

#Command 1
UNAME="uname -a"
printf "Gathering system information with the $UNAME command: \n\n"
$UNAME

#Command 2
DISKSPACE="df -h"
printf "Gathering diskspace information with the $DISKSPACE command: \n\n"
$DISKSPACE

If we look at both of the scripts, we see that they look a lot alike. And if we run them, we see that the output of each is identical. One quick note though: splitting the command from the argument is completely optional with subprocess.call. You can also use this syntax:

subprocess.call("df -h", shell=True)

As we mentioned earlier, importing a module like subprocess is just importing a file that contains code you can use. You can create your own module or file and reuse code you have written in the same way you import subprocess. Importing is not magic at all; it is just a file with some code in it. One of the nice things about the IPython shell that you have open is its ability to inspect inside modules and files and see the attributes that are available inside them. In Unix terms, this is a lot like running the ls command inside of /usr/bin. If you happen to be on a system that is new to you, such as Ubuntu or Solaris, and you are used to Red Hat, you might do an ls of /usr/bin to see if tools such as wget, curl, or lynx are available. If you want to use a tool you find inside /usr/bin, you would simply type /usr/bin/wget, for example. Modules such as subprocess are very similar.
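The two subprocess.call styles just mentioned can be compared directly. This sketch substitutes echo for df -h so that it behaves the same on any Unix-like system; the return value of subprocess.call is the command's exit code (0 on success). Written for Python 3:

```python
# Compare the two call styles for subprocess.call.
import subprocess

# Style 1: command and arguments as a list -- no shell is involved,
# so there is no shell quoting or injection to worry about.
rc1 = subprocess.call(["echo", "hello"])

# Style 2: a single string handed to /bin/sh via shell=True.
rc2 = subprocess.call("echo hello", shell=True)

print(rc1, rc2)
```

The list form is generally preferred because the arguments are passed to the program exactly as given; the shell=True form is convenient when you genuinely need shell features such as pipes or globbing.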
With IPython you can use tab complete to look at the tools that are available inside a module. Let’s walk through subprocess using tab complete to look at the attributes available inside of it. Remember, a module is just a file with some code in it. Here is what a tab complete looks like with the subprocess module in IPython:

In [12]: subprocess.
subprocess.CalledProcessError    subprocess.__hash__           subprocess.call
subprocess.MAXFD                 subprocess.__init__           subprocess.check_call
subprocess.PIPE                  subprocess.__name__           subprocess.errno
subprocess.Popen                 subprocess.__new__            subprocess.fcntl
subprocess.STDOUT                subprocess.__reduce__         subprocess.list2cmdline
subprocess.__all__               subprocess.__reduce_ex__      subprocess.mswindows
subprocess.__builtins__          subprocess.__repr__           subprocess.os
subprocess.__class__             subprocess.__setattr__        subprocess.pickle
subprocess.__delattr__           subprocess.__str__            subprocess.select
subprocess.__dict__              subprocess._active            subprocess.sys
subprocess.__doc__               subprocess._cleanup           subprocess.traceback
subprocess.__file__              subprocess._demo_posix        subprocess.types
subprocess.__getattribute__      subprocess._demo_windows

Think of the special question mark syntax as a manpage query. If you want to know how a tool works in Unix, simply type:

man name_of_tool

When we look at this documentation (“Docstring” is the official term), we see an example of the way to use subprocess.call and a description of what it does. You now have enough information to call yourself a Python programmer. You know how to write a simple Python script, how to translate simple scripts from Bash and call them with Python, and, finally, how to find documentation about new modules and attributes. In the next section, you’ll see how to better organize these flat sequences of commands into functions.

In the previous section we went through executing statements one after another, which is pretty useful, because it means we were able to automate something that we would normally have to do manually.
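If you are in a plain Python shell rather than IPython, the same two tricks are available through built-ins: dir() lists a module's attributes (what tab complete shows you), and the __doc__ attribute holds the docstring that the ? syntax displays. A small sketch, written for Python 3:

```python
# Plain-Python equivalents of IPython's tab completion and "?" help.
import subprocess

# dir() is the "ls" of a module: every attribute name it exposes.
public = [name for name in dir(subprocess) if not name.startswith("_")]
print("call" in public)          # subprocess.call is right there

# __doc__ is the docstring -- the "manpage" that object? would show.
print(subprocess.call.__doc__.splitlines()[0])
```

The built-in help(subprocess.call) prints the same docstring in a pager, so nothing here actually depends on IPython; IPython just makes the exploration faster.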
The next step to automating our code execution is to create functions. If you are not already familiar with writing functions in Bash or another language, then one way to think about functions is as miniscripts. A function allows you to create blocks of statements that get called in groups that live inside of the function. This is quite a bit like the Bash script we wrote in which there were two commands enclosed in a script. One of the differences between a Bash script and a function is that a script can include many functions. Ultimately, you can have multiple functions that group statements together in a script, and then that group of statements can be called to run a miniprogram at the proper time in your script.

At this point, we need to talk about the topic of whitespace. In Python, a uniform level of indentation must be maintained in nesting code. In another language, like Bash, when you define a function, you put brackets around the code inside of it. With Python, you must instead indent the code inside of the function. This can trip up newcomers to the language at first, but after a while it will grow on you, and you will realize that this encourages readable code. If you have trouble getting any of these examples to work interactively, make sure you refer to the actual source code to see the proper indentation level. The most common practice is to set a tab to indent exactly four spaces.

Let’s take a look at how this works in Python and Bash. If you still have the IPython shell open, you don’t need to create a Python script file, although you can if you like. Just type the following into the interactive IPython prompt:

In [1]: def pyfunc():
   ...:     print "Hello function"
   ...:
   ...:

In [2]: pyfunc
Out[2]: <function pyfunc at 0x2d5070>

In [3]: pyfunc()
Hello function

In [4]: for i in range(5):
   ...:     pyfunc()
   ...:
   ...:
Hello function
Hello function
Hello function
Hello function
Hello function

We can do the same thing in a live Bash shell as well.
Here is one way:

bash-3.2$ function shfunc()
> {
>     printf "Hello function\n"
> }
bash-3.2$ for (( i=0 ; i < 5 ; i++))
> do
>     shfunc
> done
Hello function
Hello function
Hello function
Hello function
Hello function

In the Bash example, we created a simple function shfunc, and then called it five times, just like we did with the Python function earlier. One thing to notice is that the Bash example requires more “baggage” to do the same thing that Python does. Notice the difference between the Bash for loop and the Python for loop. If this is your first exposure to a function in Bash or Python, you should make some other functions in your IPython window before you continue. Functions are not magic, and writing multiple functions interactively is a great way to take away the mystery if this is your first experience with them. Here are a couple of examples of simple functions:

In [1]: def print_many():
   ...:     print "Hello function"
   ...:     print "Hi again function"
   ...:     print "Sick of me yet"
   ...:
   ...:

In [2]: print_many()
Hello function
Hi again function
Sick of me yet

Now we have a few silly examples under our belt, in addition to the silly examples that you tried out on your own as well, right? So we can go back to the script we wrote that prints system information and convert those statements into functions. See Example 1-4, “Converted Python system info script: pysysinfo_func.py”.

Example 1-4.
Converted Python system info script: pysysinfo_func.py

#!/usr/bin/env python
#A System Information Gathering Script
import subprocess

#Command 1
def uname_func():
    uname = "uname"
    uname_arg = "-a"
    print "Gathering system information with %s command:\n" % uname
    subprocess.call([uname, uname_arg])

#Command 2
def disk_func():
    diskspace = "df"
    diskspace_arg = "-h"
    print "Gathering diskspace information %s command:\n" % diskspace
    subprocess.call([diskspace, diskspace_arg])

#Main function that calls the other functions
def main():
    uname_func()
    disk_func()

main()

Given our experiments with functions, you can see that in this converted example of our previous script we simply placed the statements inside functions and then used the main function to call them all at once. If you are not familiar with this style, you might not have known that it is common to create several functions inside a script and then call them all with one main function. One of many reasons for this is that if you decide to reuse this script for another program, you can either call the functions independently or together with the main method. The key is that you decide after the module is imported.

When there is no control flow, or main function, all of the code gets executed immediately when it is imported. This may be OK for a one-off script, but if you plan to create reusable tools, and you should, then it is a good practice to create functions that encapsulate specific actions, and then have a main function that executes the whole program.

For comparison's sake, let's convert our previous Bash system information script to use functions as well. See Example 1-5, "Converted Bash system info script: bashsysinfo_func.sh".

Example 1-5.
Converted Bash system info script: bashsysinfo_func.sh

#!/usr/bin/env bash
#A System Information Gathering Script

#Command 1
function uname_func ()
{
    UNAME="uname -a"
    printf "Gathering system information with the $UNAME command: \n\n"
    $UNAME
}

#Command 2
function disk_func ()
{
    DISKSPACE="df -h"
    printf "Gathering diskspace information with the $DISKSPACE command: \n\n"
    $DISKSPACE
}

function main ()
{
    uname_func
    disk_func
}

main

Looking at our Bash example, you can see it has quite a bit in common with its Python cousin. We created two functions and then called those two functions by calling the main function. If this is your first experience with functions, then we would highly recommend that you comment out the main function by placing a pound sign in front of it in both the Bash and the Python scripts and run them again. You should get absolutely nothing when you run both scripts, because the program will execute, but won't call the two functions inside.

At this point, you are now a programmer capable of writing simple functions in both Bash and Python. Programmers learn by doing, though, so at this point we highly recommend that you change the system calls in these two Bash and Python programs and make them your own. Give yourself some bonus points if you add several new functions to the script and call them from a main function.

One problem with learning something new is that, if it is abstract, like calculus, for example, it is hard to justify caring about it. When was the last time you used the math you learned in high school at the grocery store? In our previous examples, we showed you how to create functions as an alternative to executing shell commands one after another in a script. We also told you that a module is really just a script, or some lines of code in a file. It isn't anything tricky, but it does need to be arranged in a particular way so that it can be reused in another future program. Here is the point where we show you why you should care.
Let's import the previous system information scripts in both Bash and Python and execute them. Open the IPython and Bash windows if you closed them so that we can demonstrate very quickly why functions are important for code reuse. One of the first scripts we created in Python was a sequence of commands in a file named pysysinfo.py. In Python, because a file is a module and vice versa, we can import this script file into IPython. Keep in mind that you never need to specify the .py portion of the file you are importing. In fact, if you do this, the import will not work. Here is what it looks like when we do that on Noah's Macbook Pro laptop:

In [1]: import pysysinfo

Here is the output from the IPython terminal:

In [3]: import pysysinfo_func

Now, if we go back to our IPython interpreter and import this new script, we should see this:

In [1]: import pysysinfo_func_2

In [2]: pysysinfo_func_2.
pysysinfo_func_2.__builtins__        pysysinfo_func_2.disk_func
pysysinfo_func_2.__class__           pysysinfo_func_2.main
pysysinfo_func_2.__delattr__         pysysinfo_func_2.py
pysysinfo_func_2.__dict__            pysysinfo_func_2.pyc
pysysinfo_func_2.__doc__             pysysinfo_func_2.subprocess
pysysinfo_func_2.__file__            pysysinfo_func_2.uname_func
pysysinfo_func_2.__getattribute__    pysysinfo_func_2.__hash__

In this example, we can ignore anything with double underscores, because these are special methods that are beyond the scope of this introduction. Because IPython is also a regular shell, it picks up the filename and the byte-compiled Python file with the .pyc extension. Once we filter past all of those names, we can see that there is a pysysinfo_func_2.disk_func. Let's go ahead and call that function:

In [2]: pysysinfo_func_2.disk_func()
Gathering diskspace information df command:

Filesystem    Size   Used  Avail Capacity  Mounted on
/dev/disk0s2  93Gi   89Gi  4.1Gi    96%    /
devfs        111Ki  111

Often, the point of writing a reusable module is so that you can take some of the code and use it over and over again in a new script.
So practice that by writing another script that uses one of the functions. See Example 1-6, "Reusing code with import: new_pysysinfo".

Example 1-6. Reusing code with import: new_pysysinfo

#Very short script that reuses pysysinfo_func_2 code
from pysysinfo_func_2 import disk_func
import subprocess

def tmp_space():
    tmp_usage = "du"
    tmp_arg = "-h"
    path = "/tmp"
    print "Space used in /tmp directory"
    subprocess.call([tmp_usage, tmp_arg, path])

def main():
    disk_func()
    tmp_space()

if __name__ == "__main__":
    main()

In this example, not only do we reuse the code we wrote earlier, but we use a special Python syntax that allows us to import the exact function we need. What's fun about reusing code is that it is possible to make a completely different program just by importing the function from our previous program. Notice that in the main method we mix the function from the other module we created, disk_func(), and the new one we just created in this file.

In this section, we learned the power of code reuse and how simple it really is. In a nutshell, you put a function or two in a file and then, if you also want it to run as a script, place that special if __name__ == "__main__": syntax at the bottom. Later you can either import those functions into IPython or simply reuse them in another script. With the information you have just learned, you are truly dangerous. You could write some pretty sophisticated Python modules and reuse them over and over again to create new tools.
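The import-guard idiom the example depends on can be sketched in isolation; the function names below are illustrative stand-ins, not the book's code:

```python
# A module that doubles as a script: the definitions always run on import,
# but the guarded block runs only when the file is executed directly.

def disk_func():
    """Stand-in for a real system call; returns a message for demo purposes."""
    return "Gathering diskspace information"

def main():
    return disk_func()

if __name__ == "__main__":
    # Skipped when another script does "import thismodule".
    print(main())
```

Importing such a file gives you disk_func() and main() with no side effects; running the file directly also executes main().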
https://www.safaribooksonline.com/library/view/python-for-unix/9780596515829/ch01.html
Ok. As the title says, I tried many times to simply write a program that rewrites a file to remove null characters. I confirmed with a hex editor that the file in question has tons of null characters; on average about 1 of every 2 characters is null. So my last attempt:

#include <stdlib.h>
#include <stdio.h>

int main()
{
    FILE *f = fopen("main2.c","r");
    FILE *t = fopen("temp","w");
    int c;
    int count = 0;
    while((c = fgetc(f))!=EOF)
    {
        if(c)
        {
            fputc(c,t);
        }
        else
        {
            printf("null found\n");
        }
    }
    fclose(f);
    fclose(t);
    FILE *n = fopen("main2.c","w");
    FILE *w = fopen("temp","r");
    while((c=fgetc(w))!=EOF)
    {
        fputc(c,n);
    }
    fclose(n);
    fclose(w);
}

I thought if nothing else this should work perfectly. Instead it spits out a file with chinese characters. Sorry for being so stupid but I'm feeling impatient right now. I guess I would like to know what I did wrong, and a correct example, maybe a much more concise and faster answer. Thanks.
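A NUL after every other byte, plus "chinese characters" appearing when the bytes are rejoined, strongly suggests the file is UTF-16 encoded rather than corrupted. As a hedged sketch (not the poster's code), the same byte-stripping is easy to test in Python, where you can work on raw bytes directly:

```python
def strip_nulls(data: bytes) -> bytes:
    """Remove every NUL byte from a byte string."""
    return data.replace(b"\x00", b"")

# UTF-16-LE text puts a NUL after each ASCII character, which matches
# the "1 of every 2 characters is null" observation in the question.
sample = "hello".encode("utf-16-le")
print(sample)               # b'h\x00e\x00l\x00l\x00o\x00'
print(strip_nulls(sample))  # b'hello'
```

If the file really is UTF-16, decoding it (data.decode("utf-16")) and re-encoding as UTF-8 is a cleaner fix than deleting NUL bytes.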
https://www.daniweb.com/programming/software-development/threads/437829/noob-can-t-write-simple-program-to-remove-null-characters
Adobe Photoshop CS6 With License Key Download [Mac/Win]

Adobe Photoshop CS6 Crack + Keygen [Win/Mac]

* **Versions**. Photoshop CS2 and CS3 are the most recent versions for the Macintosh, though there are also versions for Windows and other platforms. Versions are numbered according to the release date. Photoshop CS3 is not backward-compatible with Photoshop CS2, so you have to start with the newest version in order to open older projects. The latest Photoshop is currently CS5.5, although many third-party products use earlier versions of Photoshop for compatibility with versions of Photoshop earlier than CS3.

* **Main features**. With each new version of Photoshop there are major changes to the program, so use it to learn how to use the tool you need most. For example, the biggest change in Photoshop CS3 was the introduction of the Linked Layers, which allows users to work on a group of layered Photoshop documents and manipulate them all at once. Photoshop CS3 is also the first version of Photoshop to include a Content Aware Fill tool, which automatically fills in areas of photos that are masked out of a specific color.

* **Interface**. As it has evolved, Photoshop has had different interfaces. Each version of Photoshop has included a new interface, with Photoshop CS2 being the first version to use a toolbox interface. Photoshop CS3 and later use the Layout view, which lets you manipulate groups of layers as a single object. Photoshop CS4 also uses a Wacom tablet interface for working with layers, similar to drawing with a pencil. The Interface view in Photoshop CS5 makes the program look like a drawing program.

* **Plug-ins**. Photoshop CS3 introduced support for _plug-ins_, which are different programs that users can add to Photoshop to perform special tasks. Plug-ins are much easier than learning to use Photoshop's own special features, as they are already specifically developed for particular purposes.
You can use plug-ins in Photoshop without installing them as extensions, which is an extra step you would have to take with third-party plug-ins. You can use Photoshop to create and edit raster images. It has no type of vector drawing capability for creating artwork for print. Photoshop's most basic tools are raster image editing tools. The drawing tools enable you to make your own art, but only if you are comfortable with a digital pencil or pen. Illustrator is more limited in the types of artwork it can produce. It's a vector tool, and most digital work is done in vector files. You

Adobe Photoshop CS6 [Mac/Win]

If you are a professional who needs more features, you can always buy Photoshop. It's the gold standard in the graphic editing world, but that comes with a hefty price tag. And if you're the type of person who wants to start tinkering around, i.e. Photoshop doesn't provide you the tools to make things the way you want them to be, the first Photoshop alternative you should try is Elements.

Adobe Photoshop Elements is the perfect Photoshop alternative if you're on a budget. It still has plenty of features (all the core ones, in fact) and you get even more with free software upgrades on Adobe's official website. In this post, we'll show you where Photoshop Elements ranks among its competitors. This article is also available as a free e-book in PDF format, which you can read online on your computer or print if you're out and about.

The 10 Photoshop alternatives you need to know about

Top 10 Photoshop alternatives you should know about.

#1: Aviary

Aviary's Photoshop alternative is a very user-friendly tool. The best part of this program is its ability to automatically detect new images in your folder and help you create stunning looking images with just a few clicks of the mouse. If you decide to use the free version of Aviary, you'll get all the features but no watermarking features.
The pro version comes with watermarking features, but we wouldn't recommend it unless you want to turn an amateur into a pro. You can also check out some related articles about the best social media marketing tools. If you need an excellent video editing alternative check out Windows Movie Maker. Windows Movie Maker is one of the best video editors for Windows. It's free, and even has lots of editing features. If you already have Elements, you can save some money and get Windows Movie Maker for free.

Aviary

Aviary is a simple photo editor that lets you create and edit images on your computer. Aviary is a great alternative to the more complicated but feature-packed Photoshop and it has lots of really helpful features to make your job easier and faster. The program makes it very easy to edit more than just your photos. Create web graphics, add text, and even add colors and fireworks effects. Also, you can use Aviary's

Adobe Photoshop CS6 Activator Free Download For PC

Q: How to convert an URL to a QR code?

I have an URL that look like this: I would like to QR code this. Do you know how to convert a url to QR code? I've tried and found this Python script:

import urllib.parse

input = ""
output = urllib.parse.urlencode({"url": input})
url = "" \
    .format(output)
url = urllib.parse.unquote(url)
data = urllib.request.urlopen(url)
print(data.read().decode('UTF-8'))

but it doesn't work for me and also this website offers a solution but I couldn't understand it:

A: This is the Python code I would use.

from bs4 import BeautifulSoup as soup
from qrcode.py import QRCode, QRImage

def linkToQRCode(url):
    response = requests.get(url)
    soup = soup(response.content, "html.parser")
    img = soup.select("img[src*=qrcode]")
    if len(img):
        qrcode = QRCode(version=1, errorCorrectLevel=QRCode.CorrectLevel.H)
        qrcode.addData(Uri.urlencode(img[0]['src']))
        qr = QRImage(qrcode)
        return qr
    else:
        return ""

code = linkToQRCode('

What's New in the?
Rejuvenating me on the literary world. A review of Tenderness: Nick's love story, by Anne-Clare Nielson

I am a book lover to the extreme. Some may think I am crazy but I'm really not. That's probably why I became a librarian. People tend to be a lot easier to get along with when you are willing to share your love of literature. Nick and Esme's love story is very much that – a story about love.

"What she'd wanted was a clean slate, a sudden opening up, a homecoming; rather than a castle built of memories and cobwebbed by the ghosts of the past. She wanted an ending, a happy one—but most of all she wanted Nick's forgiveness."

Esme's love of Nick is dead serious. Even though she is not attracted to him, her love for him is strong. Nick, however, has no idea how to reconcile with the past.

"But in the end Esme had grown tired of waiting, and had decided she'd get over it all by herself. Esme had wanted Nick's forgiveness; she'd wanted to be in a new place where she could start over, build a new life; she'd wanted to get out of the past and into the future, and that only remained for the two of them to know."

Tenderness is a good read. I liked the world building of the castle but there were moments where I could have done without it. I felt that it could have been more suspenseful without it.

"It was a meeting of two worlds, a collision of universes, two lives that had suddenly been brought together to gaze in wonder at something more vast and more incomprehensible than either could have imagined."

"They had come to that place where the castle was nothing and they were everything."

Tenderness is a love story. It has a bit of mystery to it as well and it's a story about forgiveness.

"The castle was a monument to past pain, not a monument to the truth. But then again, she wasn't the one who'd built it, was she?
She was just where it had found her." Tenderness is a great

System Requirements:

OS: Microsoft Windows 7 64-bit or later
Processor: Intel Core i5-8500 processor or equivalent
Memory: 8 GB RAM
Graphics: NVIDIA GeForce 940MX or equivalent
Storage: 20 GB available space
DirectX: Version 11
Network: Broadband Internet connection

Additional Notes: By clicking on the download button, you are confirming that you are 18 years or older and you understand that certain games contain content such as nudity, strong language, sexual content, violence, or drugs. You are
https://madisontaxservices.com/adobe-photoshop-cs6-with-license-key-download-mac-win
Auto-refresh Tableau Reader Dashboards (.twbx)
Lance Dacey Mar 18, 2016 8:36 AM

I spent a few days trying to figure this out so I thought that I would share.

- I have 8 dashboards which I am refreshing, connected to two separate data sources.
- I have fully automated the data download, modification, and consolidation tasks with Python and Windows Task Scheduler.
- The only manual task that I had to do every day was open each dashboard and refresh the Tableau Extract (they are connected to a large .csv file)

My solution was to:

- Use the Python API to write a .tde file (by the way, if anyone could create a to_tde method in the pandas library I would be so grateful)
- I use the shutil library to copy this file (/tde_folder/dashboard.tde) to a dummy folder with the .twb and a /Data/Data.twb Files/ folder. The new .tde file goes inside the Data.twb Files folder which is where the .twbx looks for the data
- Use a function and os.walk and ZipFile to zip this dummy folder (which contains the .twb and the new tde.).
Credit goes to this post:

- Rename the .zip folder to a .twbx folder so Tableau will open it properly

Copy the new .tde with the latest data to the dummy location which mirrors the .twbx format (you can open a .twbx with 7zip and check it out):

shutil.copy2(extract_file, tde_location)

Function to zip the folder:

os.chdir('C:/Users/OneDrive//tableau_dashboards/')

def zipfolder(foldername, target_dir):
    zipobj = zipfile.ZipFile(foldername + '.zip', 'w', zipfile.ZIP_DEFLATED)
    rootlen = len(target_dir) + 1
    for base, dirs, files in os.walk(target_dir):
        for file in files:
            fn = os.path.join(base, file)
            zipobj.write(fn, fn[rootlen:])

zipfolder('zippedfile', 'thefolderyouarezipping')
exit

Change the .zip folder to a .twbx (and delete the old .twbx file if it exists):

if os.path.isfile('dashboard.twbx'):
    os.remove('dashboard.twbx')

for filename in glob.iglob(os.path.join(dashboard_location, 'dashboard.zip')):
    os.rename(filename, filename[:-3] + 'twbx')

Finally, I simply use shutil to copy the files to a shared folder and then email a notification to everyone using the win32com library and Outlook. 100% automated now. I am pretty pleased. Obviously, I would prefer to have Tableau Server but I am operating with just one license of Tableau Desktop Professional right now...

1. Re: Auto-refresh Tableau Reader Dashboards (.twbx)
Rajeev Pandey Mar 18, 2016 8:40 AM (in response to Lance Dacey)

Dear Lance, thanks for sharing this wonderful method. I would request you to please create a video if possible and share the link. I never used Python before but thought of using it in upcoming days. It's a polite request to document every step and upload it to the forum. Thanks again!!

2. Re: Auto-refresh Tableau Reader Dashboards (.twbx)
Morgan DUARTE Apr 26, 2016 10:27 PM (in response to Lance Dacey)

Hi Lance, will you be able to share some info on it and maybe even the script? I'm in the same position as you where I need to refresh multiple workbooks (just to open and refresh..).
Will be great to apply your solution. Thank you
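The copy-zip-rename pipeline from the original post can be condensed into one hedged sketch (the folder layout and names are illustrative; the key detail is that archive paths must be relative to the folder root, which is the layout Tableau expects inside a .twbx):

```python
import os
import zipfile

def zip_to_twbx(source_dir, twbx_path):
    """Zip source_dir (a .twb plus its 'Data' folder) into a packaged workbook.

    Arcnames are made relative to source_dir so the .twb sits at the
    archive root, mirroring the layout of a real .twbx file.
    """
    with zipfile.ZipFile(twbx_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for base, _dirs, files in os.walk(source_dir):
            for name in files:
                full = os.path.join(base, name)
                zf.write(full, os.path.relpath(full, source_dir))
```

Because the archive is written under a .twbx name directly, the separate rename-from-.zip step in the post is no longer needed.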
https://community.tableau.com/thread/203185
12 October 2009 12:40 [Source: ICIS news]

LONDON (ICIS news)--The revival of polyethylene (PE) buying interest from China on Monday, as holidays ended, led to business being secured from Europe, sources said.

"We have just sold big parcels to

Both buyer and seller confirmed a 2,000 tonne parcel of low density PE (LDPE) general purpose at $1,180/tonne (€802/tonne) CFR (cost and freight) China Main Port (CMP). The seller also reported a sale of 4,000 tonnes of coating grade at an average of $1,250/tonne CFR CMP. The price was consistent with other offers in the region.

Last week, selling indications for Asian general film grade LDPE were at $1,190-1,200/tonne CFR China for October shipment.

Linear low density PE (LLDPE) from LDPE net prices in Europe for prompt delivery were still at €950-970/tonne FD (free delivered) NWE (northwest

Buying interest picked up in

"We have factored a lower ethylene price into our business," said the European producer, "but new sales will be at an increase."

($1 = €0.68)

Chow Bee Lin
http://www.icis.com/Articles/2009/10/12/9254590/europe-pe-sold-to-china-as-demand-revives-after-holidays.html
The functions used by map, filter, reduce and a list's sort method can also be a special function called a lambda form. This permits us to use special one-use-only throw-away functions without the overhead of a def statement. A lambda form is like a defined function: it has parameters and computes a value. The body of a lambda, however, can only be a single expression, limiting it to relatively simple operations. If it gets complex, you'll have to define a real function.

Generally, it's clearer to formally define a function rather than try to define a lambda form. We can play with lambda forms by applying them directly as functions to arguments.

>>> from math import pi
>>> print (lambda x: pi*x*x)(5)
78.5398163397

This statement creates a lambda that accepts a single argument, named x, and computes pi*x*x. This lambda is applied to an argument of 5. It computes the area of a circle with a radius of 5.

Here's a lambda form used in the map function.

>>> map( lambda x: pi*x*x, range(8) )
[0.0, 3.14159265359, 12.5663706144, 28.2743338823, 50.2654824574, 78.5398163397, 113.097335529, 153.938040026]

This map function applies our radius-computing lambda form to the values from 0 to 7 as created by the range. The input sequence is mapped to the output sequence by having the lambda function applied to each value.

Parameterizing a Lambda. Sometimes we have a lambda which -- in effect -- has two kinds of parameters: parameters that are elements of a sequence being processed by map, filter or reduce function, and parameters that are more global than the items of a sequence. Consider this more complex example.

spins = [ (23,"red"), (21,"red"), (0,"green"), (24,"black") ]
betOn= "black"
print filter( lambda x, y=betOn: y in x, spins )
betOn= "red"
print filter( lambda x, y=betOn: y in x, spins )

First, we create four sample spins of a roulette wheel, and save this list in a variable called spins. Then we chose a particular bet, saving this in a variable called betOn.
If the given betOn keyword occurs in any of the tuples that describe all of the spins, the tuple is kept.

The call to filter has a lambda form that uses a common Python hack. The filter function only passes a single argument value to the function or lambda form. If there are additional parameters declared, they must have default values; in this case, we set the default to the value of our variable, betOn.

Let's work through this in a little more detail. As the filter function executes, it enumerates each element from the sequence, as if it had a for s in spins: clause. Each individual item is given to the lambda. The lambda then does the evaluation of y in x; x is a tuple from the list (e.g., (23, "red")) and y is a default parameter value, set to the value of betOn (e.g., "black"). Every time the y value actually appears in the tuple x ("black" in (24,"black")), the tuple is selected to create the resulting list from the filter. When the y value is not in the tuple x, the tuple is ignored.

This default parameter hack is required because of the way that Python maintains only two execution contexts: local and global. The lambda's execution takes place in a fresh local context with only its two local parameter variables, x and y; it doesn't have access to global variables. When the lambda is created, the creation happens in the context where the betOn variable is known. So we provide the extra, global parameters as defaults when the lambda is created.

As an alternative to creating lists with the filter function, similar results can be created with a list comprehension. This is covered just after the following material on reduce.
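The default-parameter hack can be tried in a minimal, self-contained form (Python 3 syntax here; in modern Python a closure or functools.partial would capture betOn just as well, but the default argument still freezes the value at lambda-creation time):

```python
spins = [(23, "red"), (21, "red"), (0, "green"), (24, "black")]

bet_on = "black"
# y=bet_on is evaluated once, when the lambda is created.
blacks = list(filter(lambda x, y=bet_on: y in x, spins))
print(blacks)  # [(24, 'black')]

bet_on = "red"  # rebinding has no effect on the lambda created above
reds = list(filter(lambda x, y=bet_on: y in x, spins))
print(reds)    # [(23, 'red'), (21, 'red')]
```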
http://www.linuxtopia.org/online_books/programming_books/python_programming/python_ch20s06.html
Build-Time Hooks

A package specifies custom commands in its pkg.yml file. There are three types of commands:

- pre_build_cmds (run before the build)
- pre_link_cmds (run after compilation, before linking)
- post_link_cmds (run after linking)

Example

Example (apps/blinky/pkg.yml):

pkg.pre_build_cmds:
    scripts/pre_build1.sh: 100
    scripts/pre_build2.sh: 200
pkg.pre_link_cmds:
    scripts/pre_link.sh: 500
pkg.post_link_cmds:
    scripts/post_link.sh: 100

For each command, the string on the left specifies the command to run. The number on the right indicates the command's relative ordering. All paths are relative to the project root.

When newt builds this example, it performs the following sequence:

- scripts/pre_build1.sh
- scripts/pre_build2.sh
- [compile]
- scripts/pre_link.sh
- [link]
- scripts/post_link.sh

If other packages specify custom commands, those commands would also be executed during the above sequence. For example, if another package specifies a pre build command with an ordering of 150, that command would run immediately after pre_build1.sh. In the case of a tie, the commands are run in lexicographic order (by path). All commands are run from the project's base directory. In the above example, the scripts directory is a sibling of targets.

Custom Build Inputs

A custom pre-build or pre-link command can produce files that get fed into the current build. Pre-build commands can generate any of the following:

- .c files for newt to compile.
- .a files for newt to link.
- .h files that any package can include.

Pre-link commands can only generate .a files. .c and .a files should be written to the directory named by the $MYNEWT_USER_SRC_DIR environment variable (defined by newt), or any subdirectory within. .h files should be written to $MYNEWT_USER_INCLUDE_DIR. The directory structure used here is directly reflected by the includer.
E.g., if a script writes to $MYNEWT_USER_INCLUDE_DIR/foo/bar.h, then a source file can include this header with:

#include "foo/bar.h"

Details

Environment Variables

In addition to the usual environment variables defined for debug and download scripts, newt defines the following env vars for custom commands: These environment variables are defined for each process that a custom command runs in. They are not defined in the newt process itself. So, the following snippet will not produce the expected output:

BAD Example (apps/blinky/pkg.yml):

pkg.pre_cmds:
    'echo $MYNEWT_USER_SRC_DIR': 100

You can execute sh here instead if you need access to the environment variables, but it is probably saner to just use a script.

Detect Changes in Custom Build Inputs

To avoid unnecessary rebuilds, newt detects if custom build inputs have changed since the previous build. If none of the inputs have changed, then they do not get rebuilt. If any of them have changed, they all get rebuilt.

The $MYNEWT_USER_[...] directories are actually temp directories. After the pre-build commands have run, newt compares the contents of the temp directory with those of the actual user directory. If any differences are detected, newt replaces the user directory with the temp directory, triggering a rebuild of its contents. The same procedure is used for pre-link commands.

Paths

Custom build inputs get written to the following directories:

- bin/targets/<target>/user/pre_build/src
- bin/targets/<target>/user/pre_build/include
- bin/targets/<target>/user/pre_link/src

Custom commands should not write to these directories. They should use the $MYNEWT_USER_[...] environment variables instead.
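For instance, a pre-build command could be a small Python script that drops a generated header and source file into the directories newt exposes through those environment variables. The file names and contents below are invented for illustration; only the two environment variable names come from the documentation above:

```python
#!/usr/bin/env python3
"""Hypothetical newt pre-build script: emit a generated header and C file."""
import os

def write_generated(src_dir, inc_dir):
    # The header's subpath under the include dir is what #include sees,
    # e.g. #include "gen/version.h".
    os.makedirs(os.path.join(inc_dir, "gen"), exist_ok=True)
    with open(os.path.join(inc_dir, "gen", "version.h"), "w") as f:
        f.write('#define APP_BUILD_TAG "dev"\n')
    # The .c file goes in the src dir for newt to compile and link in.
    with open(os.path.join(src_dir, "gen_version.c"), "w") as f:
        f.write('#include "gen/version.h"\n'
                'const char *app_build_tag = APP_BUILD_TAG;\n')

if __name__ == "__main__":
    src = os.environ.get("MYNEWT_USER_SRC_DIR")
    inc = os.environ.get("MYNEWT_USER_INCLUDE_DIR")
    if src and inc:  # only set when the script is run by newt
        write_generated(src, inc)
```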
https://mynewt.apache.org/latest/os/modules/extcmd/extcmd.html
I have found that FSOUND_SetReserved does not work if the channel handle you use is no longer valid. Is this a bug?

This is the situation. I have started a sound playing on a channel, then I have called SetReserved( channelHandle, true ) to ensure the channel is safe. Later on I discover the sound has stopped playing (by calling FSOUND_IsPlaying(channelHandle)), so I decide to un-reserve the reserved channel. I call SetReserved( channelHandle, false ), which does not work. The channel remains reserved. The only way I can work around this is to bitmask the channel handle to get the channel index (i.e. mask out the ref count bits), then call SetReserved( channelIndex, false ). This works ok.

Shouldn't FSOUND_SetReserved still work on channel handles that aren't valid, by extracting the channel index and operating on that?

- Abandon22 asked 14 years ago

So you are saying that we shouldn't unreserve channels after they are stopped? Do they remain reserved or not if we do not call SetReserved(id, false)? I get the following trace in my code:

FSOUND_Stream_PlayEx(FSOUND_FREE, m_Stream, NULL, true)
FSOUND_SetReserved(4132, true)
FSOUND_Stream_GetLength(m_Stream)
FSOUND_Stream_GetLengthMs(m_Stream)
FSOUND_GetFrequency(4132)
FSOUND_SetPan(4132, 128)
FSOUND_SetVolume(4132, 255)
FSOUND_SetFrequency(4132, 44100)
FSOUND_SetPaused(4132, false)
FSOUND_Stream_GetTime(m_Stream)
FSOUND_IsPlaying(4132)
FSOUND_Stream_GetTime(m_Stream)
FSOUND_IsPlaying(4132)
FSOUND_Stream_GetTime(m_Stream)
FSOUND_Stream_Close(m_Stream)
FSOUND_IsPlaying(4132)
FSOUND_SetReserved(4128, false)

and the last call fails....

Thanks! I am not hardcoding the numbers (that's just my trace). The actual code looks like this:

[code:2roe4yvo]
bool cSoundChannel::PlayStreamAndReserveNewChannel(void)
{
    bool success = false;
    m_Section.Lock();
    {
        m_GlobalPlayStreamSection.Lock();
        {
            m_channelId = FSOUND_FREE;
            if(!m_Stream)
            {
                // Stream should not be NULL.
                ASSERT(false);
            }
            else
            {
                // Open it paused.
                {
                    try
                    {
                        FMOD_TRACE(_T("FSOUND_Stream_PlayEx(FSOUND_FREE, m_Stream, NULL, true)\n"));
                        m_channelId = FSOUND_Stream_PlayEx(FSOUND_FREE, m_Stream, NULL, true);
                    }
                    catch(...)
                    {
                        // Fmod dll crash.
                        ASSERT(false);
                    }
                }
                if(m_channelId == -1)
                {
                    // No new channels.
#if defined(_DEBUG)
                    {
                        ASSERT(false);
                    }
#endif
                }
                else
                {
                    try
                    {
                        // Set it as reserved so no one else can steal it.
                        FMOD_TRACE1(_T("FSOUND_SetReserved(%i, true)\n"), m_channelId);
                        if(!FSOUND_SetReserved(m_channelId, true))
                        {
                            // Couldn't reserve it (someone else got it).
#if defined(_DEBUG)
                            {
                                ASSERT(false);
                            }
#endif
                        }
                        else
                        {
                            // Success.
                            success = true;
                        }
                    }
                    catch(...)
                    {
                        // Fmod dll crash.
                        ASSERT(false);
                    }
                }
            }
        }
        m_GlobalPlayStreamSection.Unlock();
        if(!success)
        {
            const bool fadeout = false;
            StopPlaying(fadeout);
        }
    }
    m_Section.Unlock();
    return success;
}
[/code:2roe4yvo]

I don't want channels to be reserved all the time — I want to create a dynamic range of sound samples, where whenever I create a cSoundChannel, it opens a new channel, and then releases it when it's done (or when the object is destroyed). How do I release the channel automatically once the stream stops playing? I understand I can't do this because the handle is no longer valid at that point?

try putting a sleep in your test code like so:

[code:2roe4yvo]
int c = FSOUND_PlaySound(FSOUND_FREE, samp1);
FSOUND_SetReserved(c, TRUE);

while (FSOUND_IsPlaying(c))
{
    Sleep(100);
}

FSOUND_SetReserved(c, FALSE);
>> Sleep(1000);
printf("\n\nreserved = %d\n\n", FSOUND_GetReserved(c));
[/code:2roe4yvo]

thanks for your help... 8)

To reproduce this, I constantly call my play function to keep playing random sounds one after another, and after about 10 minutes, I get an access violation, and the following debug output (including my trace):

[code:1pxokjie]
FSOUND_Stream_GetTime(m_Stream)
FSOUND_IsPlaying(1421354)
FSOUND_Stream_GetTime(m_Stream)
FSOUND_Stream_GetTime(m_Stream)
FSOUND_SetReserved(1896488, false)
FSOUND_Stream_Close(m_Stream)
First-chance exception in Photokeeper – Debug.exe (FMOD.DLL): 0xC0000005: Access Violation.
[/code:1pxokjie]

I can get the same result during normal operation, usually after like a half hour of testing with sounds on. Might be a thread sync issue with fmod?

Stack at the crash is:

[code:2ekuykl3]
FMOD! 0035714e()
FMOD! 003572be()
KERNEL32! 77e7d33b()
[/code:2ekuykl3]

This doesn't happen on the main thread, but some other thread — the only thread with priority 11.

Hey Brett! Any ideas on this? I'm banging my head on the wall trying to figure out what is going on... your help would be much appreciated. How thread-safe is fmod? What is that fmod thread with priority 11 that seems to appear out of nowhere and then crash? Thanks...

Um, ok, I did some more experimenting... I found that if I don't reserve/unreserve channels, the crash never occurs. So that's it for me, but I still think you have a pesky little bug in there somewhere... sorry about the overpostage.

Geez, well that explains a lot! I was assuming that FMOD was threadsafe because it is really made for games, which are inherently multithreaded. Perhaps you need an intro topic in the help docs that spells this out ;)
http://www.fmod.org/questions/question/forum-7831/
CC-MAIN-2017-39
refinedweb
756
65.62
lck.django 0.8.4

Various common Django-related routines. The source code repository and issue tracker are maintained on GitHub.

This package bundles some royalty-free static images that are useful in almost every Django project:

- Silk icons 1.3 by FamFamFam - requires attributing the author
- Silk Companion 1 by Damien Guard - requires attributing the author
- Country Flags by SenojFlags.com - requires using the following HTML: <a href="">Country flag</a> image from <a href="">Flags of all Countries</a>.

How to run the tests

The easiest way would be to run:

$ DJANGO_SETTINGS_MODULE="lck.dummy.settings" DJANGO_SETTINGS_PROFILE="test" django-admin.py test

This command runs the internal Django tests as well, and that's fine because there are monkey patches and other subtleties that should better be tested for potential breakage. The dummy project is also used as an example of setting up a Django project. However, it seems Django tests are not happy with some changes to the settings, so we're using the test profile (which loads overrides from settings-test.py) to avoid that.

Change Log

0.8.4
- TimeTrackable models can now force marking fields as dirty with mark_dirty() and mark_clean() methods

0.8.3
- concurrent_get_or_create will now raise AssertionErrors if given either too many fields (e.g. not all of which are unique or compose a unique-together constraint) or too few (e.g. fields do not form a whole unique-together constraint). Non-unique fields should be passed in the defaults keyword argument if needed at object creation time.
- profile now implements automatic profile account synchronization by registering a post-save signal on User and creating an AUTH_PROFILE_MODEL instance. A management command for existing applications called sync_profiles has been created.
- Unit tests converted to unittest2 format

0.8.2
- fixed regression from 0.8.1: removed savepoint support since the updated concurrent_get_or_create fails miserably on MySQL due to dodgy savepoint support in MySQL-python

0.8.1
- concurrent_get_or_create based on get_or_create from Django 1.4.2
- namespace_package_support extended to cover django.utils.translation as well (previously namespace-packaged projects only worked with I18N if setup.py develop or pip install -e . was used to install them)
- dj.chain requirement bumped to 0.9.1 (supports more collective methods)

0.8.0
- lazy_chain moved to a separate dj.chain package. The old interface is thus deprecated and will be removed in a future version.
- activitylog updates: removed redundant user fields so it works again with ACTIVITYLOG_PROFILE_MODEL set to auth.User
- EditorTrackable doesn't require overriding get_editor_from_request anymore if EDITOR_TRACKABLE_MODEL is set to a profile model instead of auth.User
- profile admin module includes a predefined ProfileInlineFormSet for inclusion of profile-tied models to the UserAdmin as inlines
- the dummy application now passes all internal Django unit tests in versions 1.4.0 - 1.4.2

0.7.14
- lazy_chain: the fix from 0.7.13 introduced a different kind of bug, reverted and fixed properly now. More tests included.
- flatpages now serve content in the default language if the language requested by the browser is unavailable.
- some internal cleanups

0.7.13
- lazy_chain: when iterating over a slice, the iterator fetched one item too many. It didn't yield it back, so the result was correct, but if using xfilter() that caused unnecessary iteration.
- dj.choices requirement bumped to 0.9.0 (choices are int subclasses, unicode(choice) is now equivalent to choice.desc)

0.7.12
- namespace package support now works with Unicode literals in settings.py
- dummy app settings refinements: timing middleware moved down the stack because it uses the user session, WSGI app definition was wrong

0.7.11
- No code changes
- dj.choices requirement bumped to 0.8.6 (fully compatible with 0.8.5 and significantly improves ChoiceFields)

0.7.10
- BACKLINKS_LOCAL_SITES setting to control if all configured sites should be considered local upon backlink discovery
- More backlink data model fixes to make it more cross-compatible with different backends

0.7.9
- Fixed backlink hash generation in activitylog
- activitylog accepts UTF-8 characters in User-Agent headers
- activitylog South migration #0002 now also works on backends with DDL transactions (e.g. Postgres)

0.7.8
- Fixed South support for custom fields (DefaultTags and MACAddressField).

0.7.7
- South migrations supported across the board. For existing installations you should run:

  $ python manage.py migrate APP_NAME 0001 --fake
  $ python manage.py migrate APP_NAME

  where APP_NAME is activitylog, badges, common, flatpages, profile, score or tags.
- uniqueness constraints in activitylog.models.Backlink and activitylog.models.UserAgent moved to separate hash fields to make MySQL happy. South migrations should handle schema evolution regardless of the backend you're using.
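The 0.8.3 entry above (unique fields identify the row, everything else goes in the defaults keyword argument) amounts to partitioning keyword arguments before calling get_or_create. A plain-Python sketch of that partition, with illustrative field names and no Django dependency:

```python
def split_for_get_or_create(fields, unique_fields):
    """Partition kwargs into a unique lookup and creation-time defaults."""
    lookup = {k: v for k, v in fields.items() if k in unique_fields}
    defaults = {k: v for k, v in fields.items() if k not in unique_fields}
    return lookup, defaults

lookup, defaults = split_for_get_or_create(
    {"slug": "intro", "title": "Intro", "views": 0},
    unique_fields={"slug"},
)
assert lookup == {"slug": "intro"}
assert defaults == {"title": "Intro", "views": 0}
```

With a split like this, only `lookup` participates in the uniqueness check, matching the constraint the 0.8.3 assertions enforce.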
https://pypi.python.org/pypi/lck.django/0.8.4
Internationalization This document is for Django's SVN release, which can be significantly different from previous releases. Get old docs here: 0.96, 0.95. Django has full support for internationalization of text in code and templates. Here’s how it works. Overview USE_I18N is set to False, then Django will make some optimizations so as not to load the internationalization machinery. See the documentation for USE_I18N. You’ll probably also want to remove 'django.core.context_processors.i18n' from your TEMPLATE_CONTEXT_PROCESSORS setting. If you do need internationalization: three steps - Translation strings specify “This text should be translated.” These strings can appear in your Python code and templates. It’s your responsibility to mark translatable strings; the system can only translate strings it knows about. In Python code Standard translation., make-messages.py, won’t be able to find these strings. More on make-messages later.) The strings you pass to _() or uget) whenever you have more than a single parameter. If you used positional interpolation, translations wouldn’t be able to reorder placeholder text. Marking strings as no-op. Lazy translation. If you don’t like the. It’s a good idea to add translations for the field names and table names, too. This means writing explicit verbose_name and verbose_name_plural options in the Meta class, though: from django.utils.translation import ugettext_lazy as _ class MyThing(models.Model): name = models.CharField(_('name'), help_text=_('This is the help text')) class Meta: verbose_name = _('my thing') verbose_name_plural = _('mythings') Pluralization). In template code|length as counter %} There is only one {{ name }} object. {% plural %} There are {{ counter }} {{ name }} objects. {% endblocktrans %} Internally, all block and inline translations use the appropriate ugettext / ungettext call. 
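The warning above about positional interpolation is easy to demonstrate with plain string formatting (no Django required): named placeholders let a translated string reorder its values, which positional %s placeholders cannot do. The strings below are illustrative, not from the docs:

```python
params = {"count": 3, "name": "box"}

english = "%(name)s has %(count)d items"
# a hypothetical translation that reorders the placeholders
reordered = "items: %(count)d (owner: %(name)s)"

assert english % params == "box has 3 items"
assert reordered % params == "items: 3 (owner: box)"
```

Because both strings pull values by name from the same dictionary, the translator is free to put the placeholders in whatever order the target language needs.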
Each RequestContext has access to three translation-specific variables: - LANGUAGES is a list of tuples in which the first element is the language code and the second is the language name (in that language). - LANGUAGE_CODE is the current user’s preferred language, as a string. Example: en-us. (See “How language preference is discovered”, below.) - LANGUAGE_BIDI is the current language’s direction. If True, it’s a right-to-left language, e.g: Hebrew, Arabic. If False it’s a left-to-right language, e.g: English, French, German etc.). Working with lazy translation objects a couple of helper functions. Joining strings:). The allow_lazy() decorator. 2. How to create language files. Message files,: - The root django directory (not a Subversion checkout, but the one that is linked-to via $PYTHONPATH or is located somewhere on that path). - The root directory of your Django project. - The root directory of your Django app.: make-messages.py -a Compiling message files. A note to translators If you’ve created a translation in a language Django doesn’t yet support, please let us know! See Submitting and maintaining translations for the steps to take. 3. How Django discovers language preference in your settings file. Django uses this language as the default translation — the final attempt if no other translator finds a translation. If all you want to do is run Django with your native language, and a language file is available for your language, all you need to do is set LANGUAGE_CODE. If you want to let each individual user specify which language he or she prefers, use LocaleMiddleware. LocaleMiddleware enables language selection based on data from the request. It customizes content for each user. To use LocaleMiddleware, add 'django.middleware.locale.LocaleMiddleware' to your MIDDLEWARE_CLASSES setting. Because middleware order matters, you should follow these guidelines: - Make sure it’s one of the first middlewares installed. 
- It should come after SessionMiddleware, because LocaleMiddleware makes use of session data.
- If you use CacheMiddleware, put LocaleMiddleware after it.

For example, your MIDDLEWARE_CLASSES might look like this: MIDDLEWARE_CLASSES = ( a django_language key in the current user's session.
- Failing that, it looks for a cookie that is named according to your LANGUAGE_COOKIE_NAME setting. (The default name is django_language, and this setting is new in the Django development version. In Django version 0.96 and before, the cookie's name is hard-coded. available, Django uses de. Only languages listed in the LANGUAGES setting can be selected. If you want to restrict the language selection to a subset of provided languages (because your application doesn't provide all those languages), set, make-messages.py will still find and mark these strings for translation, but the translation won't happen at runtime, so you'll have to remember to wrap the languages in the real ugettext() in any code that uses LANGUAGES at runtime. The LocaleMiddleware can only select languages for which there is a Django-provided base translation. If you want to provide translations for your application that aren't already in the set of translations in Django's source tree, you'll want to provide at least. Feel free to read this value in your view code. Here's a simple example:. Using translations in your own projects in the settings documentation,. You can also run compile-messages.py --settings=path.to.settings to make the compiler process all the directories in your LOCALE_PATHS setting.

The set_language redirect view

As a convenience, Django comes with a view, django.views.i18n.set_language, that sets a user's language preference and redirects back to the previous page. Activate this view by adding the following line to your URLconf:

(r'^i18n/', include('django.conf.urls.i18n')),

(Note that this example makes the view available at /i18n/setlang/.)
if you’re using the Django development version.) After setting the language choice, Django redirects the user, following this algorithm: -. You hook it up like this: js_info_dict = { 'packages': ('your.app.package',), } urlpatterns = patterns('', (r'^jsi18n/$', 'django.views.i18n.javascript_catalog', js_info_dict), ). You can make the view dynamic by putting the packages into the URL pattern: urlpatterns = patterns('', (r'^jsi18n/(?P<packages>\S+?)/$', 'django.views.i18n. Using the JavaScript translation catalog). Creating JavaScript translation catalogs. Specialties of Django translation. Questions/Feedback If you notice errors with this documentation, please open a ticket and let us know! Please only use the ticket tracker for criticisms and improvements on the docs. For tech support, ask in the IRC channel or post to the django-users list.
http://www.djangoproject.com/documentation/i18n/
Fresh garlic with good price in China high quality US $740.0-880.0 / Metric Tons 11 Metric Tons (Min. Order)
Low price China made garlic with good quality in European market US $800-2300 / Ton 5 Tons (Min. Order)
normal white fresh garlic with good quality in China US $300-800 / Metric Ton 1 Metric Ton (Min. Order)
Natural Garlic with Good Quality in China US $500-1000 / Metric Ton 10 Metric Tons (Min. Order)
natural fresh garlic with good quality in china US $950-1100 / Ton 26 Tons (Min. Order)
2015 New Garlic Stems Price In China With Good Quality US $1000-2000 / Ton 10 Tons (Min. Order)
good brand garlic price in china high quality garlic with great price US $1000-1700 / Ton 27 Tons (Min. Order)
good quality pure white garlic US $190.0-500.0 / Metric Ton 1 Metric Ton (Min. Order)
2016 china good quality fresh garlic US $400-800 / Ton 12 Tons (Min. Order)
Chinese Cheap White Garlic With Good Quality US $800-1500 / Metric Ton 25 Metric Tons (Min. Order)
Good Quality Fresh White Garlic US $500-800 / Metric Ton 13 Metric Tons (Min. Order)
supply good quality fresh garlic/natural garlic 30 Metric Tons (Min. Order)
good quality Chinese fresh garlic for hot sales 4.5cm 5.0cm 5.5cm 6.0cm US $1500.0-1800.0 / Metric Tons 10 Metric Tons (Min. Order)
Jinxiang fresh garlic 2017 new good quality red garlic US $600-1000 / Metric Ton 12 Metric Tons (Min. Order)
Chinese fresh garlic,ajo,ail,alho bulbs US $1-1500 / Metric Ton 24 Tons (Min. Order)
Fresh organic red garlic with good quality US $1400.0-1500.0 / Tons 15 Tons (Min. Order)
sales high quality good garlic US $900.0-900.0 / Kilograms 10 Kilograms (Min. Order)
Laiwu natural fresh pure white garlic with good quality US $1550.0-1550.0 / Metric Tons 24 Metric Tons (Min. Order)
Fresh good quality white garlic for sale US $550-900 / Metric Ton 14 Metric Tons (Min. Order)
Fresh pure white garlic with good quality US $1000-1700 / Metric Ton
good quality anti-oxidant fermented solo clove black garlic US $6-12 / Kilogram 10 Kilograms (Min. Order)
Good Quality Grade A Good Farmer Garlic US $500-1000 / Ton 10 Tons (Min. Order)
jining 2015 new crop natrule garlic for hot sale good white garlic US $300-1000 / Ton 1 Ton (Min. Order)
The 2017 New Crop In Mesh Bags For Garlic With Good Quality US $500-900 / Metric Ton 8 Metric Tons (Min. Order)
Fresh pure white garlic with good quality US $600.0-600.0 / Ton | Buy Now 25 Tons (Min. Order)
import chinese garlic with good quality 2017' US $800-1400 / Metric Ton 1 Metric Ton (Min. Order)
fatory price China fresh garlic with good quality US $800-1500 / Ton 1 Ton (Min. Order)
Good Quality White Garlic US $800-1000 / Metric Ton 26 Metric Tons (Min. Order)
fresh good quality garlic/organic white garlic 10 Metric Tons (Min. Order)
Superior Quality Good Price China Black Garlic 500g/Bag US $10-100 / Kilogram 10 Kilograms (Min. Order)
Good quality dried garlic for the international market fob price US $700-1200 / Ton 3 Tons (Min. Order)
pure white garlic factory in 2013 US $1-801 / Ton 26 Tons (Min. Order)
Good Quality Fresh Garlic US $1000-1200 / Ton 18 Tons (Min. Order)
Lowest price good quality for white garlic US $0.0007-0.0011 / Gram 300 Grams (Min. Order)
good quality haccp certified products garlic price in china US $850-1280 / Metric Ton 12 Metric Tons (Min. Order)

About product and suppliers: Alibaba.com offers 4,214 garlic with good quality in china products. About 26% of these are fresh garlic, 6% are dried vegetables, and 1% are frozen vegetables. A wide variety of garlic with good quality in china options are available to you, such as common, organic, and gmo. You can also choose from bulk, bottle, and drum. As well as from ball, powder, and sliced. And whether garlic with good quality in china is free samples, or paid samples.
There are 4,214 garlic with good quality in china suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supply 100% of garlic with good quality in china respectively. Garlic with good quality in china products are most popular in Mid East, North America, and Southeast Asia. You can ensure product safety by selecting from certified suppliers, including 1,350 with Other, 556 with ISO9001, and 224 with BRC certification.
http://www.alibaba.com/countrysearch/CN/garlic-with-good-quality-in-china.html
C Program to Generate the graph sheet using the graphics.h library. To use graphics.h, we have to load the graphics drivers into the system by using the initgraph() function. Here we draw the graph of input sizes versus time taken. The x axis represents inputs (0, 10000, 20000, ...), the y axis represents time (0, 0.05, 0.1, 0.15, ...). Read more about C Programming Language. To browse more C Programs visit this link

/***********************************************************
 * You can use all the programs on
 * for personal and learning purposes. For permissions to use the
 * programs for commercial purposes,
 * contact [email protected]
 * To find more C programs, do visit
 * and browse!
 *
 * Happy Coding
 ***********************************************************/

#include "stdio.h"
#include "conio.h"
#include "graphics.h"

void main() {
    int gd = DETECT, gm;
    int y = 0, x = 10, m[20], k[20], n, a[20], i;
    float b[20];
    initgraph(&gd, &gm, "c:\\tc\\bgi");
    printf("\n\n\tGenerating the Graphs\n\n");
    printf("\nEnter the no. of inputs\t");
    scanf("%d", &n);
    printf("\nEnter the input sizes and corresponding time taken\n");
    for (i = 0; i < n; i++) {
        printf("\nEnter input size\t");
        scanf("%d", &a[i]);
        printf("\nEnter time taken\t");
        scanf("%f", &b[i]);
    }
    cleardevice();
    // represents y axis
    line(10, 0, 10, 400);
    // represents x axis
    line(10, 400, 600, 400);
    while (y <= 400) {
        line(0, y, 10, y);
        y = y + 20;
    }
    while (x <= 600) {
        line(x, 400, x, 410);
        x = x + 20;
    }
    outtextxy(20, 440, "1unit=20 pixels , origin is (10,400)");
    outtextxy(20, 450, "x axis represents inputs(0,10000,20000,----), y axis rep time(0,0.05,0.1,0.15---)");
    setcolor(5);
    for (i = 0; i < n; i++) {
        k[i] = (a[i] * 0.002);
        m[i] = (400 - (b[i] * 400));
        putpixel(k[i], m[i], 11);
    }
    for (i = 0; i < n - 1; i++)
        line(k[i], m[i], k[i + 1], m[i + 1]);
    getch();
}
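The scaling inside the plotting loop (x = input_size * 0.002, y = 400 - time * 400, with y measured downward from the x axis at row 400) can be checked in isolation. A quick sketch of the same arithmetic, separate from the program itself:

```python
def to_pixel(input_size, time_taken):
    # x: 0.002 pixels per input unit; y: screen rows grow downward,
    # so time is subtracted from the x-axis row at 400
    x = int(input_size * 0.002)
    y = int(400 - time_taken * 400)
    return x, y

assert to_pixel(10000, 0.5) == (20, 200)  # 10000 inputs taking 0.5 units of time
assert to_pixel(0, 0.0) == (0, 400)       # zero input, zero time lands on the x axis
```

Note the C program draws its y axis at x = 10, so a zero-size input is plotted slightly left of the axis; one grid unit of 20 pixels corresponds to 10,000 inputs on x and 0.05 time units on y.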
https://c-program-example.com/2011/11/c-program-to-generate-graph-using-grphics-h.html
As a relatively new developer, and a complete lightweight when compared to the rest of the Komodo development team, I find myself sharing snippets, errors and diffs for review quite often. Since I like to share (Mom and Dad taught me well), I thought it was important to make it easier to share in Komodo. At one point, a Komodo user was limited to using kopy.io to do this, and only in limited areas of Komodo. In 10.2, we’ve extended Komodo to allow our users to share more easily and in more ways. We wanted to add more endpoints (more kopy.ios), allow users to share text from more aspects of Komodos (logs, diffs, files, etc.) as well as allow users to more easily add their own custom endpoints without having to touch any of Komodo UI. We did this with two major additions: 1) A new “Sharing” SDK (I’ll explain) 2) Komodo + Slack integration (uses the above SDK to post content to Slack!) The Share SDK var share = require("ko/share"); The logical way to extend the existing UI with more “share points” (menu items to select an output) was to extend the existing kopy.io module’s UI integrations and move it into it’s own SDK. Thus ko/share was born. The share module now adds menus to the following source points in Komodo: - Editor context Share menu (Right click your file or selected text > Share…) - Dynamic toolbar share button - Log file window (Help > Troubleshooting > View Log File) - Any diff dialog - The trackchanges diff panel All of these interfaces have been augmented with a Share dropdown menu. Register your Module At this point you might be thinking “Hey neat Carey, you added another dropdown menu list to Komodo…so what?” Good question! If it was just a drop down all over the place that would be boring, so next we needed to add items to this list for all output modules dynamically. share.register(name, namespace, label); Enter share.register(). It takes a name (e.g. “kopy”), a namespace (e.g. “ko/share/kopy” in order to require your module.), and a label (e.g. 
"Share code on kopy.io", which is the label shown in the menu list). Komodo uses the CommonJS method of getting JS code, and that requires a namespace (like the one mentioned above). A basic example you can follow to add your own namespace for your share implementation is as follows: require.setRequirePath("myShareProject",""). The best method to do this would be to include your code in a Komodo Addon, but that's too big a topic to cover here. Ask for help in our forums. Komodo then checks to make sure you implemented a share function in your module that Komodo can call later (this is explained in more detail in the next section). It then cycles through all of the UI sources in Komodo we mentioned above and adds a new menu item for you.

Implement the Share Interface

require("ko/share/kopy").share(data, meta)

Above I mentioned a share function that the share.register method looks for. This is the one and only requirement of your share module: it must have a share function. This is the interface we use to pass content to your module.

The Data

The function should take a string of data and an Object of meta information. The data string is the content that will be posted as text to the output source. For example, in the case of kopy.io, data is the code displayed in the webpage:

#!/usr/bin/env python3
print "why is this broken??"
print("oh...duh...it's Python 3")
Slacking var slack = require("ko/share/slack"); Sorry, no more source links. This stuff’s not open sourced. Now that we have the Share module, the next step was to add some share endpoints. Kopy.io was a breeze since our lead Komodo Developer, Nathan Rijksen, had already written most of the necessary code back in Komodo 9.2. Since our team and MANY MANY others use Slack on a daily basis, we chose to add Slack. This project was a lot of firsts for me. I got to augment a Django Python server with a new API endpoint, do funky window management and event handling (imagine having to program your website by starting at the browser window then trying to access the loading page in a particular tab to trigger your onload event…FUN!), and trying to streamline a 100% async user experience. I even got to take a new service for a spin that Nathan added while I was building the Slack integration, ko/simple-storage. Simple-storage allows you to persist information about your addon, Userscript, etc. and have it persist between Komodo restarts. This is the recommended way to persist information rather than the traditional require("ko/prefs") method. Django API Endpoint In order for Komodo to post content to your various Slack Teams and Channels it needs to authenticate users. Slack has a system in place which requires you to have a server in the middle of the process to actually send the final key request to the Slack Auth servers. I started writing this in Node.js since that seems to be the go-to for API servers these days. In the end though, I decided to build it on Django. We already write so much Python in Komodo and we have a few other services built on a Django server. Besides, I’ve already written a Node API server in class and have never touched Django. Get moderately good at a ton of stuff and never become an expert at anything…that’s my advice kids (with tongue firmly pressed against my cheek)! 
Without going into too much boring detail, the server handles a request from Komodo with two temporary auth keys (one from Slack and one from Komodo), which are sent to another Slack authorization server that handles permissions. When that is returned from Slack, our server sends the new final auth key back to Komodo and it's encrypted and saved to disk.

The Slack Panel

The panel that appears after you've authenticated and you're picking a channel, message, etc. is plain. There doesn't appear to be anything special about it. But that's not true. What is of interest is that it's built completely from the ko/ui SDK, just like the Start Up Wizard (Help > Run first start wizard again). There is ZERO markup involved there. Here's a sample of what it looks like in the backend. You can copy and paste this code into the Komodo console to try it for yourself (View menu > Tabs & Sidebars > Console, or Ctrl (Cmd on OSX) + Shift + B then click the Console tab):

panel = require("ko/ui/panel").create({
    anchor: require("ko/windows").getMostRecent().document.documentElement,
    attributes: {
        backdrag: true,
        noautohide: true,
        width: "450px",
        class: "dialog"
    }
});
panel.open();

var options = { attributes: { placeholder: "Title", col: 120, flex: 1 }};
var title = require("ko/ui/textbox").create(options);
panel.add(title);

All the fields that you fill out in the Slack integration panel are saved for use later using the new ko/simple-storage I mentioned above. Let's have a look at that, then I think we should call it a day on this blog…oh ps. you can get rid of that panel I got you to create on your screen by just writing panel.close() in the JS console 😉

Simple Storage

ko/simple-storage is meant to replace anything ko/prefs does that is not actually "preference" related as the standard storage for application, addon, or userscript information. It's persistent and easier to use, which should make anyone customizing Komodo pretty happy. Here's a small sample of how it works.
var my_SS = require("ko/simple-storage").get("mine");
my_SS.storage.pizza = "It's so good it's scary, Carey!";
console.log("Is pizza good? " + my_SS.storage.pizza);

The storage object is saved to disk and can be reloaded after a restart by using the same code. And when you're done with it, you just remove it.

var my_SS = require("ko/simple-storage").get("mine");
console.log("Is pizza still good? " + my_SS.storage.pizza + " Stop asking stupid questions.");
require("ko/simple-storage").remove("mine");

Just open the Komodo Console (View menu > Tabs & Sidebars > Console, or Ctrl (Cmd on OSX) + Shift + B then click the Console tab) if you'd like to give it a try.

So that was a long blog…but I hope you found it useful and can take advantage of these great ways to share with Komodo. As my mother always said…sharing is caring!

Title photo courtesy of Web Hosting on Unsplash.
https://www.activestate.com/blog/slacking-off-with-komodo/
24 May 2007 05:08 [Source: ICIS news] By Nurul Darni

SINGAPORE (ICIS news)--Saudi Arabian Oil Co (Aramco) is set to secure higher naphtha premiums for July to December 2007 contracts, citing bullish market trend and reduced supplies in the second half of the year, traders said on Thursday.

"Current market fundamentals are advantageous to the supplier. It doesn't give existing contract buyers much room to negotiate for lower premiums," one contract buyer with a Japanese trading company said. Aramco, the biggest Mideast naphtha supplier to

It raised offers of A180 naphtha at a $25/tonne premium to Aramco pricing formula, Rabigh naphtha at $23/tonne premium, Jubail naphtha at $21/tonne premium and A310 naphtha at $20/tonne premium.

Buyers have balked at such lofty contract prices and most are still in talks with Aramco officials, where face-to-face negotiations are taking place in

But the final contract prices could be achieved early next week, when all the buyers have reached an agreement, they added. "I would not be too surprised if one buyer agrees on the prices and others would have to follow suit. Aramco would just say take it or leave it," a second buyer with a Japanese end user said.

Aramco had informed its existing contract customers earlier that it would trim its naphtha supplies slightly to

Aramco is establishing a $9.8bn refinery/petrochemicals complex at Rabigh via a joint venture with Sumitomo Chemical. Capacity from late 2008 will include 1.3m tonnes/year of ethylene and 900,000 tonnes/year of propylene.

Its targeted high premiums came on the heels of recently concluded high prices by Abu Dhabi National Oil Co (Adnoc) with South Korean buyers earlier this month. Adnoc successfully raised its recently negotiated July 2007-June 2008 premiums, over its April 2007-March 2008 prices with no discounts given to any of its customers, buyers have said.

Apart from Aramco, other key suppliers of naphtha from the Middle East
http://www.icis.com/Articles/2007/05/24/9031583/focus-aramco-set-on-higher-naphtha-premiums.html
On Thu, 15 Feb 2001 10:42:32 +0100 Giacomo Pati <giacomo@apache.org> wrote:

> Another issue raised into my head is if we should take the targeted
> language into the namespace url?
>
> 1)
> 2)

If both implementations define the same syntax (the same tags and attributes with the same behaviour), in my opinion they shouldn't differ in namespace. The editor (i.e. the one who writes XSP pages or stylesheets) won't/shouldn't care about the language of the XSP processor of a system. In addition, this would make it possible to exchange the editor's work between systems using different target languages. (Besides, this would put some (little) pressure on the logicsheet designers to keep their design clear and documented, so that different implementations stay in sync. ;-)

Just my humble opinion!

Conny Krappatsch

--
______________________________________________________________________
Conny Krappatsch    mailto:conny@smb-tec.com
SMB GmbH
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200102.mbox/%3C20010216090843.6f1c4d8b.conny@smb-tec.com%3E
Compute The Diameters of Rod in C Language

A program to compute the diameter in centimeters of a steel rod, an aluminum rod, and a copper rod, which can withstand a particular compression load. The allowable compression stress of steel, aluminum, and copper is 25,000 lbs/m², 15,000 lbs/m², and 20,000 lbs/m², respectively.

Area of rod = compression load / allowable compression stress
Area = π r², where diameter d = 2r

Input the compression load. Print the type of material, load, allowable stress, and diameter. Use formatted output with field width specifications that align the output.

Solution Preview

Hi, this is my c program. I ran the code in Turbo C++. Attached is my output. Hope it helps.

#include <stdio.h>
#include <math.h>

void main() {
    double comprLoad, comprStress; /* to store load, stress of rod */
    double d, area;                /* for diameter and area of the rod */
    char type[20];                 /* rod type */
    /* ...

Solution Summary

The solution gives a complete C program on computing the diameters of steel rod, aluminum rod, and copper rod using the given formula. The output of the program is also provided for reference.
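The two formulas above combine into d = 2·sqrt(load / (stress·π)). A short sketch of that arithmetic (the units are whatever the load and stress are expressed in, and the load values below are examples only):

```python
import math

def rod_diameter(load, allowable_stress):
    """Diameter of a rod whose cross-section carries `load` at `allowable_stress`."""
    area = load / allowable_stress      # Area = load / allowable stress
    radius = math.sqrt(area / math.pi)  # Area = pi * r^2  =>  r = sqrt(Area / pi)
    return 2 * radius                   # d = 2r

# when the load equals the allowable stress, the area is 1 and d = 2/sqrt(pi)
assert math.isclose(rod_diameter(25000, 25000), 2 / math.sqrt(math.pi))
# a softer material (lower allowable stress) needs a thicker rod for the same load
assert rod_diameter(25000, 15000) > rod_diameter(25000, 25000)
```

Applied to the three materials in the problem, the same load yields the largest diameter for aluminum (15,000), then copper (20,000), then steel (25,000).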
https://brainmass.com/computer-science/c/compute-the-diameters-of-rod-in-c-language-540419
How can I host multiple Flask Apps for Multiple Domains?

Using this guide I created a basic Flask App in Python. This worked well, and without nginx I could connect to it using my domain. The issue is that this wasn't scalable to multiple apps on one droplet. To fix this, I tried the following: I connected my domain via the DNS Settings in DigitalOcean, and linked my domain to route to the IP. I did this by pointing an A record from my domain to my IP, and NS records to the Digital Ocean servers. Here are the screenshots:

- Domain Host
- Digital Ocean DNS Host
- An example setup from the DNS

I fresh installed NGINX:

sudo aptitude install nginx

I made /etc/nginx/sites-enabled/default this:

upstream app_server_one {
    server 127.0.0.1:7000 fail_timeout=0;
}

upstream app_server_two {
    server 127.0.0.1:8000 fail_timeout=0;
}

server {
    listen 80 default_server;
    server_name: cloud.ajnin.me
    # [snip...]
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass;
    }
}

server {
    listen 80 default_server;
    server_name: cloud2.ajnin.me
    # [snip...]
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
        proxy_pass;
    }
}

I started my nginx server:

sudo service nginx start

I created my two apps (note how both run on separate ports). Both of these run separately with nginx on default:

app.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Cloud'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=7000, debug=True)

app2.py

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Cloud2'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000, debug=True)

I started my two apps:

sudo (nohup) python app.py &
sudo (nohup) python app2.py &

Doing this, I got the Error 'This webpage is not available' from Chrome.
Testing each server individually works by typing the server IP, but running them together and connecting with the domain doesn't seem to work. Some people have suggested that I create a VirtualHost, but I'm not sure how I could implement that for this system. Would anyone be able to point me in the right direction to how I can make this work? Thanks, Aj. (I would just like to appreciate the help @asb has done to help me do this. See his responses on this and this for more info.)

UPDATE: Just wanted to note that going to my ip:port for each server works. The domains aren't routing to these ip:port combinations though.

UPDATE: This is the output for netstat -plunt

UPDATE: FIXED! See for more info

@Ajnin123 could you elaborate more? I couldn't merge the gist and the digital ocean tut. Many thanks. If you had code to share, that would be great.
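For reference, nginx picks a server block by matching the request's Host header against server_name, falling back to the single block marked default_server. A minimal Python sketch of that dispatch (an illustration only, not nginx itself), using the hostnames and ports from the config above, also hints at the bug: two blocks both marked default_server leave the fallback ambiguous.

```python
# Sketch of nginx's server_name dispatch: Host header -> backend port.
# Hostnames and ports mirror the config in the question.
UPSTREAMS = {
    "cloud.ajnin.me": 7000,   # app_server_one
    "cloud2.ajnin.me": 8000,  # app_server_two
}

# Only ONE server block may meaningfully be the default_server;
# the question's config marks both, which is the ambiguity to fix.
DEFAULT_PORT = 7000

def route(host_header):
    """Pick the backend port for an incoming Host header."""
    return UPSTREAMS.get(host_header, DEFAULT_PORT)

assert route("cloud.ajnin.me") == 7000
assert route("cloud2.ajnin.me") == 8000
assert route("unknown.example") == DEFAULT_PORT  # falls back to the default
```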
https://www.digitalocean.com/community/questions/how-can-i-host-multiple-flask-apps-for-multiple-domains?comment=27510
CC-MAIN-2017-43
refinedweb
468
65.93
Consistently, one of the more popular stocks people enter into their stock options watchlist at Stock Options Channel is Home Depot Inc (Symbol: HD). So this week we highlight one interesting put contract, and one interesting call contract, from the January 2015 expiration for HD. The put contract our YieldBoost algorithm identified as particularly interesting, is at the $67.50 strike, which has a bid at the time of this writing of $1.59. Collecting that bid as the premium represents a 2.4% return against the $67.50 commitment, or a 3.3% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2015 expiration, for shareholders of Home Depot Inc (Symbol: HD) looking to boost their income beyond the stock's 2.4% annualized dividend yield. Selling the covered call at the $87.50 strike and collecting the premium based on the $1.55 bid, annualizes to an additional 2.8% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost), for a total of 5.2% annualized rate in the scenario where the stock is not called away. Any upside above $87.50 would be lost if the stock rises there and is called away, but HD shares would have to advance 10.7% from current levels for that to happen, meaning that in the scenario where the stock is called, the shareholder has earned a 12.6% return from this trading level, in addition to any dividends collected before the stock was called.
http://www.nasdaq.com/article/interesting-january-2015-stock-options-for-hd-cm350146
CC-MAIN-2015-40
refinedweb
318
65.93
On Mar 23, 11:54 am, "nass" wrote:
> hello everyone,
> i am really confused with some of the c++ time() fn results i get:
> let me put you into perspective:
>
> im writing a logging utility. the log files created will contain 14-byte
> samples of certain values of interest. the last 4 bytes of each sample
> are its timestamp (timestamp of samples), that is the # of seconds
> since the epoch.
> so that's one 'date' (timestamp of samples) to consider.
>
> the 2nd date is the log file attribute timestamp (attribute
> timestamp) of when it is created - the value anyway that you see when
> you do an ls -l of the file. this is of the least importance for me but
> i am just referring to it in case you understand something i do not.
>
> last is the date string i use to create the filename with (filename
> timestamp). explaining: since the data will be copied on a flash drive
> from the ARM unit on which the log utility will be run and then copied
> onto a computer, i decided i didn't want to have to deal with changing
> file attributes, so i hard coded the file creation date onto the
> filename (since it is important).
>
> now that we have the 3 dates, a little about the system:
> it's an ARM based embedded system and is running debian linux.
> i have set its date (with date -s) and since the unit is located in
> greece the timezone is EET (east european time).
> having set the date as such, i call (in total) 3 functions to get
> the time or the date.
>
> the timestamp of the samples is returned by a driver that is written
> in c (unlike everything else that is c++) and that uses the kernel
> call do_gettimeofday().
>
> FileTime GetTime() //FileTime is a typedef of unsigned long
> //this function returns the time in seconds from the Epoch
> {
>     time_t current_time;
>     current_time=time(NULL);
>     Time=((FileTime)current_time);
>     return (Time);
> };
>
> and
>
> DateTime GetFDate()
> {
>     time_t current_time;
>     current_time=time(NULL);
>     DatTim.day=localtime(&current_time)->tm_mday;
>     DatTim.month=localtime(&current_time)->tm_mon;
>     DatTim.year=localtime(&current_time)->tm_year+1900;
>     DatTim.LTime=GetFTime();
>     return (DatTim);
> };
>
> where
>
> FTime GetFTime()
> {
>     time_t current_time;
>     current_time=time(NULL);
>     FTim.hour=localtime(&current_time)->tm_hour;
>     FTim.minute=localtime(&current_time)->tm_min;
>     FTim.second=localtime(&current_time)->tm_sec;
>     return (FTim);
> };
>
> the DatTim.month returns to me (currently) february, instead of march!
> (it generally returns 1 month before the current month). I use
> GetFDate to write the filename date string. however the bash command
> 'date' returns the right date... so the attribute timestamp is
> correct, but the filename timestamp is wrong... any ideas? could it be
> some bug?
>
> thank you for your help
> nass

may i also ask, does do_gettimeofday(), which is a lower-level function than time(), take into account daylight saving changes? thank you for your info nass
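The likely culprit: in C, struct tm's tm_mon field counts months since January, i.e. 0 through 11, so DatTim.month above needs tm_mon + 1. And do_gettimeofday() simply returns raw seconds (and microseconds) since the epoch in UTC; the kernel applies no timezone or DST adjustment — that is done in userspace by localtime(). The off-by-one is easy to see from Python, whose time.struct_time uses a 1-based tm_mon for the same epoch value (the timestamp below is an illustrative one I picked for the thread's date):

```python
import time

def month_from_epoch(epoch_seconds):
    """Decode an epoch timestamp to a (Python, 1-based) month number."""
    return time.gmtime(epoch_seconds).tm_mon

# 2007-03-23 12:00:00 UTC -- the date of the quoted post, which is in March.
march_2007 = 1174651200

assert month_from_epoch(march_2007) == 3   # Python's tm_mon: 1..12
c_style_tm_mon = month_from_epoch(march_2007) - 1
assert c_style_tm_mon == 2                 # C's tm_mon: 0..11, so "february"
```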
http://fixunix.com/unix/84065-time-null-returning-wrong-month.html
CC-MAIN-2015-18
refinedweb
468
59.13
Dealing with route params in Angular-5

It is quite common to have both query and route parameters in any single page application. This post is a quick tip sharing a little RxJS snippet that I wrote in order to read the query and route parameters at once. Before we talk about that, let us find out how to read any route/query parameters in your Angular 2+ application. There are multiple ways to achieve that.

Reading from the Snapshot

First and the simplest way to do that is reading them from the snapshot of the active route, i.e. inject the instance of ActivatedRoute into your component's constructor or pull it from the Injector and read it from there:

import { ActivatedRoute } from '@angular/router';

@Component({
  selector: 'app-user-detail',
  templateUrl: 'user-detail.component.html'
})
export class UserDetailComponent implements OnInit {

  constructor(private activeRoute: ActivatedRoute) { }

  ngOnInit() {
    const queryParams = this.activeRoute.snapshot.queryParams;
    const routeParams = this.activeRoute.snapshot.params;

    // do something with the parameters
    this.loadUserDetail(routeParams.id);
  }
}

But there is a little gotcha here. As the name snapshot specifies, these parameters come from the snapshot of the route at the first load of the component. They are calculated once, when the component first loads, and they won't change unless you reload the page. Why could that be a problem? Let's say you had a list of users in the sidebar and clicking any user loads this user detail component. It would work fine the first time, i.e. when none of the details were opened and you clicked a user to load the details for the first time. Now let's say you want to open the details of another user; if you try and click the other one, it won't work. Why?
Because the component is already loaded. Angular is smart and won't reload the component; it will just change the route params, which won't have any effect on the component because we read from the initial snapshot, and so we don't have access to the updated route params. So how can we fix this? We can do that by adding a listener to the route params.

Reading via Subscriptions

As discussed above, the snapshot won't get updated if we try to reload the current route with different route parameters. Good news: apart from the snapshot, the active route also provides the query and route parameters in the form of observables. We can subscribe to those observables, and whenever the route params change we will get notified in our subscriber and can load the user details. Here is how it looks in code:

ngOnInit() {
  this.activeRoute.queryParams.subscribe(queryParams => {
    // do something with the query params
  });

  this.activeRoute.params.subscribe(routeParams => {
    this.loadUserDetail(routeParams.id);
  });
}

Perfect! Now the query and route parameters are not bound to the snapshot, and whenever you click any user from the sidebar, the subscriber will get fired and the new details will be loaded.

Reading them at once

But let's say that our situation has changed a little and we need both the query and route params at once. We need to read one parameter from the query params and the other from the route params, and we need to send them both in the loadUserDetail API call. How can we do that?

The Dirty Way

The immediate solution that might come to your mind would be to nest the subscribers like below:

ngOnInit() {
  // Nest them together
  this.activeRoute.queryParams.subscribe(queryParams => {
    this.activeRoute.params.subscribe(routeParams => {
      this.loadUserDetail(routeParams.id, queryParams.type);
    });
  });
}

Or you might be tempted to write a varied version of this, i.e.
move these nested callbacks to a helper function and then pass it yet another callback accepting query and route parameters, i.e.

ngOnInit() {
  this.readUrlParams((routeParams, queryParams) => {
    this.loadUserDetail(routeParams.id, queryParams.type);
  });
}

readUrlParams(callback) {
  // Nest them together
  this.activeRoute.queryParams.subscribe(queryParams => {
    this.activeRoute.params.subscribe(routeParams => {
      callback(routeParams, queryParams);
    });
  });
}

Nonetheless, still a dirty solution. So how can we fix that? The answer is to use the functionality already provided by RxJS.

Use RxJS

RxJS is a really powerful library and you can do this in several different ways, but the one I like and find myself using the most is the combineLatest operator, which merges the route and query parameters into a single observable giving us both in a single object. Here is how our updated example would look:

import { ActivatedRoute } from '@angular/router';

// Add the observable and combineLatest
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/combineLatest';

@Component({
  selector: 'app-user-detail',
  templateUrl: 'user-detail.component.html'
})
export class UserDetailComponent implements OnInit {

  constructor(private activeRoute: ActivatedRoute) { }

  ngOnInit() {
    // Combine them both into a single observable
    const urlParams = Observable.combineLatest(
      this.activeRoute.params,
      this.activeRoute.queryParams,
      (params, queryParams) => ({ ...params, ...queryParams })
    );

    // Subscribe to the single observable, giving us both
    urlParams.subscribe(routeParams => {
      // routeParams contains both the query and route params
      this.loadUserDetail(routeParams.id, routeParams.type);
    });
  }
}

And that wraps up this little post on dealing with URL parameters. Do you have any tricks of your own? Feel free to share them in the comments section below.
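As a footnote, combineLatest itself is a small idea: remember the latest value from each source and emit a merged object whenever either one updates. A minimal language-agnostic sketch in Python (an illustration only, not the RxJS implementation):

```python
class CombineLatest:
    """Toy model of combineLatest for exactly two named sources."""

    def __init__(self):
        self.latest = {}   # last value seen per source
        self.emitted = []  # merged emissions, oldest first

    def update(self, source, value):
        self.latest[source] = value
        # Emit only once every source has produced at least one value.
        if len(self.latest) == 2:
            merged = {}
            for v in self.latest.values():
                merged.update(v)  # like ({ ...params, ...queryParams })
            self.emitted.append(merged)

streams = CombineLatest()
streams.update("params", {"id": 7})              # nothing emitted yet
streams.update("queryParams", {"type": "admin"}) # first merged emission
streams.update("params", {"id": 8})              # re-emits with latest query params
assert streams.emitted == [
    {"id": 7, "type": "admin"},
    {"id": 8, "type": "admin"},
]
```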
Also, if you would like to learn more about RxJS, here is a nice little website that you may want to check. Until next time, stay tuned!
https://kamranahmed.info/blog/2018/02/28/dealing-with-route-params-in-angular-5/
CC-MAIN-2019-04
refinedweb
911
54.22
Parse::PerlConfig - parse a configuration file written in Perl

use Parse::PerlConfig;

my $parsed = Parse::PerlConfig::parse(
    File     => "/etc/perlapp/conf",
    Handlers => [\%config, \&config],
);

This module is useful for parsing a configuration file written in Perl and obtaining the values defined therein. This is achieved through the parse() function, which creates a namespace, reads in Perl code, evals it, and then examines the namespace's symbol table. Symbols are then processed into a hash and returned. The parse() function is exportable upon request.

Parsing is not a simple do("filename"). Instead, the filenames specified are opened, read, eval'd, and closed. The justification for this is twofold: I did not want surprises in what file was found; do("file") searches @INC. I wanted to be able to insert lexicals for the code in the file to see. Being able to define variables without having them parsed back out (remember, the namespace is searched) is a nice feature.

Parsing (in this manner) requires a namespace. By default, the namespace is constructed by appending a unique identifier (currently, an encoded version of the filename, but don't rely on this) to Namespace_Base. You can override this behaviour by specifying an explicit Namespace argument. Prior to eval'ing the contents of a configuration file, the lexical hash %parse_perl_config is initialized with several keys (documented below); if a Lexicals argument was given, each of the lexicals specified is initialized. There are a few caveats: lexicals specified in the Lexicals argument cannot override %parse_perl_config; keys specified in Lexicals cannot be code references, because code references cannot currently be reliably reconstructed; and modifications to %parse_perl_config keys (other than Error, documented below) are discouraged, as the results are not defined.
The %parse_perl_config hash contains the following keys:

Making this key a true value will cause the error handler Error_eval to be called with the value.

The namespace the file is being evaluated in.

The name of the file being parsed.

A hash of the arguments passed to parse().

Once the namespace has been set up, and the code eval'd, it is then parse()'s job to go through the namespace's symbol table and look for "things". What it looks for depends on the Thing_Order and Symbols arguments. After that, handlers are updated, and a hash reference of what was parsed out is returned.

parse() takes a list of key-value pairs as its arguments and adds them to an argument hash. If the first argument to parse() is a hash or array reference, it is dereferenced and used as if it were specified as a list. All elements following this argument are added to the arguments hash, and they override any settings specified by the reference. This means the call:

parse(
    { Files => "/home/me/config.conf", Error_default => 'fwarn' },
    Files => "/home/you/config.conf"
);

causes parse()'s argument hash to consist of the following (ignoring default settings):

Files         => "/home/you/config.conf",
Error_default => "fwarn",

Simply replace the braces, {}, with brackets, [], and you get the same result. This makes it convenient to store commonly-used arguments to parse() in a hash or array, and efficiently pass these arguments to parse(), while still allowing a separate Files argument for each call.

The below itemization of parse()'s arguments describes key-value pairs. Each item consists of a key name and a description of the expected value for that key. The value description requires some explanation. A single pipe, "|", indicates alternative values; only one of the values must be specified. Values bracketed with "<" and ">" indicate that value is not literal, but figurative.
So, in the case of <coderef>, you must specify a code reference (a closure or reference to a named subroutine), not the literal string "<coderef>". Values without such bracketing are literal. Braces, {}, indicate a hash reference is required; brackets, [], indicate an array reference is required. Below each key-value description is a description of the default setting, followed by a description of what the argument means. default: none, this argument is required This is the file or files you wish to parse. If a file cannot be parsed for any reason the entire parse is not abandoned, the file is simply skipped (after calling an appropriate error handling function). Equivalent to Files <filename>. default: none By default, parse() simply returns a hash reference of symbol names and their values. Given a Handlers argument, parse() will add key-value pairs to each hash reference specified, and call each code reference specified with a single argument, the hash reference it returns. Equivalent to Handlers <hashref>|<coderef>. default: none The key-value pairs in the specified hashref are made into lexical variables in the configuration eval. See the section on Parsing for further information. default: '$@%&i*' Specifies the default thing order for symbols parsed from each configuration file. See the section Things for further information. default: false If set to any true value the filehandle opened on the configuration file is untainted before evaling the code contained therein. Because this involves loading IO::Handle, which involves quite a bit of code, the option is turned off by default. You will get taint exceptions if you don't specify this option while running in -T mode. Also, as the namespace is currently constructed, having a tainted filename will cause the namespace name to be tainted, so it is also untainted. In the case of an explicitly specified Namespace value, it will also be untainted. No other values are untainted. 
This includes any key-value pairs specified by the Lexicals argument; you must untaint those yourself, since there is no reasonable way for parse() to determine how best to untaint them. default: empty hashref This is an override for the Thing_Order argument above. The keys in the specified hashref are symbols you want parsed specially, the values the thing order (either a string or array reference). See the section Things for further information regarding thing order. default: generated from Namespace_Base and a unique identifier This option explicitly specifies the namespace the files are parsed in. See the section Parsing for further information. default: Parse::PerlConfig::ConfigFile Unless the Namespace argument is specified, the namespace a file is parsed in is generated by appending a cleaned up version of the filename to this setting. See the section Parsing for further information. See the section Error Handling for further information. "Things" (as taken from the Perl documentation, regarding the *foo{THING} syntax) are the Perl datatypes. These include scalars, arrays, hashes, subroutines, IO handles, and globs. Anywhere a "things" argument is required you can specify one of two things; a string containing the special "thing" characters, or an array reference of each thing's actual name. The thing characters are as follows: $ for scalar, % for a hash, @ for an array, & for a subroutine, i for an IO handle, and * for a glob. The full name for each coincides with the full name for each datatype in their respective glob slots: SCALAR for a scalar, HASH for a hash, ARRAY for an array, CODE for a subroutine, IO for an IO handle, and GLOB for a glob. parse() takes various Error_* and Warn_* arguments that determine how it handles any problems it encounters. Each argument can take one of several values. The error handling specifed by Error_default in the case of an Error_* argument, or Warn_default in the case of a Warn_* argument, is used. 
The error is ignored. Results in a call to CORE::warn() with a trailing newline, but only if $^W is set to a true value. Like warn, but the warning is raised regardless of $^W's value. Results in a call to CORE::die() with a trailing newline. The code reference will be called with a single argument, that of the error message. The error message is guaranteed to contain no trailing newlines (in case the code reference decides to die() or warn()). There are various handler arguments. Unless otherwise specified, the default handler is used (Error_default's or Warn_default's value). default: noop The default warning handler. Called just before a file is parsed to indicate parsing is about to begin. Called with any warnings issued by the eval'd file. default: warn The default error handler. Called if there is a problem with one of the arguments specified. Called if a configuration file specified was discovered to be a directory. Called if the open attempt on a configuration file fails. Called if the variable $parse_perl_config{Error} is set in the configuration file, or if there was an eval error. Called if there is a problem with a thing character or thing name in a thing argument (thing thing thing). Called if an unknown reference is encountered in the Handlers argument. Called if an invalid lexical name or a CODE reference value is encountered in the Lexicals argument. Called if either the constructed namespace (using Namespace_Base) or a specified Namespace value is invalid. This may indicate an error in the construction of a namespace name (the generation of a unique identifier), but it's most likely you specified Namespace_Base or Namespace with invalid characters. Due to the fact that the scalar slot in a glob is always filled it is not possible to distinguish from a scalar that was never defined (e.g. @foo was, but $foo was never mentioned) from one that is simply undef. 
Because of this, for example, if you have a thing order of $@ and code along the lines of $foo = undef; @foo = (); the 'foo' key of the hash will be an array reference, despite there being a scalar and $ coming first in the thing order. t/parse/symbols.t, t/parse/multi-file.t, t/parse/namespace.t Michael Fowler <michael@shoebox.net>
http://search.cpan.org/~mfowler/Parse-PerlConfig-0.05/PerlConfig.pm
CC-MAIN-2017-17
refinedweb
1,617
55.03
power

When you use this function in a query, it raises the values in the first specified column to the power of the values in the second specified column in the specified dataset. For more information on using query functions and operators in a REST API request, see Queries. For an end-to-end description of how to create a query, see Creating a Query.

The code block example below raises the values in the Depth column to the power of the values in the Magnitude column of the earthquake dataset, whose {DATASET_ID} is 90af668484394fa782cc103409cafe39.

{
    "version": 0.3,
    "dataset": "90af668484394fa782cc103409cafe39",
    "namespace": {
        "power": {
            "source": ["Depth", "Magnitude"],
            "apply": [{
                "fn": "power",
                "type": "transform"
            }]
        }
    },
    "metrics": ["power"]
}

When you submit the above request, the response includes an HTTP status code and a JSON response body. For more information on the HTTP status codes, see HTTP Status Codes. For more information on the elements in the JSON structure in the response body, see Query.
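Since the request body is ordinary JSON, it can be built programmatically with any JSON library. A small Python sketch (not from the HERE documentation; the variable names are illustrative) that constructs the example payload and checks that it serializes to valid JSON:

```python
import json

dataset_id = "90af668484394fa782cc103409cafe39"  # from the example above

# Build the same query payload as the example request body.
query = {
    "version": 0.3,
    "dataset": dataset_id,
    "namespace": {
        "power": {
            "source": ["Depth", "Magnitude"],
            "apply": [{"fn": "power", "type": "transform"}],
        }
    },
    "metrics": ["power"],
}

body = json.dumps(query)

# The payload round-trips cleanly, so it is valid JSON for the request body.
assert json.loads(body)["dataset"] == dataset_id
assert json.loads(body)["metrics"] == ["power"]
```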
https://developer.here.com/documentation/geovisualization/topics/query-rule-power.html
CC-MAIN-2019-04
refinedweb
157
57.71
The volume of the audio source (0.0 to 1.0). The AudioSource's volume property controls the level of sound coming from an AudioClip. The highest volume level is 1 and the lowest is 0, where no sound is heard.

using UnityEngine;

public class Example : MonoBehaviour
{
    AudioSource m_MyAudioSource;

    // Value from the slider; it converts to the volume level
    float m_MySliderValue;

    void Start()
    {
        // Initiate the Slider value to half way
        m_MySliderValue = 0.5f;

        // Fetch the AudioSource from the GameObject
        m_MyAudioSource = GetComponent<AudioSource>();

        // Play the AudioClip attached to the AudioSource on startup
        m_MyAudioSource.Play();
    }

    void OnGUI()
    {
        // Create a horizontal Slider that controls volume levels. Its highest value is 1 and lowest is 0
        m_MySliderValue = GUI.HorizontalSlider(new Rect(25, 25, 200, 60), m_MySliderValue, 0.0F, 1.0F);

        // Make the volume of the Audio match the Slider value
        m_MyAudioSource.volume = m_MySliderValue;
    }
}
https://docs.unity3d.com/es/2017.4/ScriptReference/AudioSource-volume.html
CC-MAIN-2020-45
refinedweb
138
50.23
C#

// C# program to print the second largest
// value in a linked list
using System;

class GFG {

    // A linked list node
    public class Node {
        public int data;
        public Node next;
    };

    // Function to add a node at the
    // beginning of the linked list
    static Node push(Node head_ref, int new_data)
    {
        // allocate node
        Node new_node = new Node();

        // put in the data
        new_node.data = new_data;

        // link the old list off the new node
        new_node.next = head_ref;

        // move the head to point to the new node
        head_ref = new_node;
        return head_ref;
    }

    // Function to count size of list
    static int listSize(Node node)
    {
        int count = 0;
        while (node != null) {
            count++;
            node = node.next;
        }
        return count;
    }

    // Function to print the second
    // largest element
    static void print2largest(Node head)
    {
        int first, second;
        int list_size = listSize(head);

        // There should be at least two elements
        if (list_size < 2) {
            Console.Write("Invalid Input");
            return;
        }

        first = second = int.MinValue;
        Node temp = head;

        while (temp != null) {
            if (temp.data > first) {
                second = first;
                first = temp.data;
            }

            // If current node's data is in between
            // first and second then update second
            else if (temp.data > second && temp.data != first)
                second = temp.data;
            temp = temp.next;
        }

        if (second == int.MinValue)
            Console.Write("There is no second" +
                          " largest element\n");
        else
            Console.Write("The second largest " +
                          "element is " + second);
    }

    // Driver Code
    public static void Main(String[] args)
    {
        Node start = null;

        // The constructed linked list is:
        // 12 -> 35 -> 1 -> 10 -> 34 -> 1
        start = push(start, 1);
        start = push(start, 34);
        start = push(start, 10);
        start = push(start, 1);
        start = push(start, 35);
        start = push(start, 12);

        print2largest(start);
    }
}
// This code is contributed by 29AjayKumar

Output:

The second largest element is 34
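The same single-pass scan, rewritten in Python as a quick cross-check of the algorithm (a sketch, not part of the original C# article):

```python
def second_largest(values):
    """Return the second largest distinct value, or None if there isn't one."""
    # Track the two largest distinct values in one pass,
    # mirroring the linked-list traversal above.
    first = second = None
    for v in values:
        if first is None or v > first:
            second = first
            first = v
        elif v != first and (second is None or v > second):
            second = v
    return second

# Same list as the driver code: 12 -> 35 -> 1 -> 10 -> 34 -> 1
assert second_largest([12, 35, 1, 10, 34, 1]) == 34
assert second_largest([5, 5]) is None  # no second largest among equal values
```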
https://www.geeksforgeeks.org/find-the-second-largest-element-in-a-linked-list/
CC-MAIN-2019-35
refinedweb
421
56.49
New release – WrightMap 1.2

A (too long in the making) new version of WrightMap is now available on CRAN (actually, version 1.2.1 is already up). This new version includes a lot of features that have long been requested, including cut-point lines, alternative item displays (classical Wright Map look, histogram view), and differential item functioning plots, among others. We have updated the package tutorials on our website to reflect the changes in this version, and we will be publishing additional posts showing the new features. We hope these changes can be of use to you, and we look forward to hearing your feedback!

Change log for this version:

Major changes:

wrightMap is split into data handling and plotting, each of which is further split into person and item sides. The data handling functions are further split by filetype and the plotting functions are split by plot types. For data handling, this is meant to make it possible to have item and person data of different filetypes. For plotting, this is meant to make the plotting function more flexible and make it easier to add different plot styles.

Details:
- wrightMap.default, wrightMap.CQmodel, and wrightMap.character are now removed. There is only wrightMap.R, which calls the appropriate data handling and plotting helper functions.
- The parameter person.side accepts the person plot functions personHist (default) and personDens.
- The parameter item.side accepts the item plot functions itemModern (default), itemClassic, and itemHist.

API changes:
- The "type" parameter in wrightMap is now called "item.type" to avoid collision with the "type" parameter in the "plot" function. For the same reason, the "type" parameter in fitgraph is now called "fit.type".
- The "use.hist" parameter in wrightMap is deprecated.
Create a histogram with person.side = personHist (default) and a density with person.side = personDens.
- fitgraph and make.thresholds now explicitly include a version for numeric and matrix respectively. This should fix namespace problems for users who prefer to include external functions with :: notation.

New functions:
- plotCI
- difplot
- ppPlot

New features:
- It is now possible to add points and ranges to the person side.
- Cutpoints can easily be added on the item side.
- Added "classic" and "hist" item maps.
- "throld" parameters added to make.thresholds and wrightMap, allowing for the calculation of thresholds other than .5.
- "alpha" and "c.params" added to make.thresholds and wrightMap, supporting the 2PL and 3PL models.
- label.items.cex and dim.lab.cex added to wrightMap to control label sizes.
- Added support for ConQuest4.
- Added "equation" to CQmodel and wrightMap.CQmodel, to handle CQ output without a summary of estimation table.
- It is now possible to plot the person and item sides separately.

Improvements:
- wrightMap now remembers your original graphical parameters and restores them after drawing the map.
- fitgraph no longer calls a new window.
- Better handling of generics.

Bugfix:
- Fixes to CQmodel on reading the #IO variance errors, and included a fix for the var/covar matrix on CQ3.
- Fixed a bug where thr.lab.pos couldn't take a matrix.
- Fixed a bug where fitgraph.CQmodel assumed there was always a parameter called "item" in your table.
- Fixed a bug where CQmodel assumed there was always an error column in the RMP tables.
- wrightMap will no longer crash if the p.est parameter is null.

Other notes:
- Removed some of the runtime.
https://www.r-bloggers.com/2016/03/new-release-wrightmap-1-2/
CC-MAIN-2022-27
refinedweb
569
56.76
NAME | SYNOPSIS | DESCRIPTION | ATTRIBUTES | SEE ALSO

#include <floatingpoint.h>

void string_to_decimal(char **pc, int nmax, int fortran_conventions,
    decimal_record *pd, enum decimal_string_form *pform, char **pechar);

#include <stdio.h>

void file_to_decimal(char **pc, int nmax, int fortran_conventions,
    decimal_record *pd, enum decimal_string_form *pform, char **pechar,
    FILE *pf, int *pnread);

The char_to_decimal functions parse a numeric token from at most nmax characters in a string **pc or file *pf or function (*pget)() into a decimal record *pd, classifying the form of the string in *pform and *pechar. The accepted syntax is intended to be sufficiently flexible to accommodate many languages:

whitespace value or whitespace sign value, where whitespace is any number of characters defined by isspace in <ctype.h>, sign is either of [+-], and value can be number, nan, or inf.

inf can be INF (inf_form) or INFINITY (infinity_form) without regard to case. nan can be NAN (nan_form) or NAN(nstring) (nanstring_form) without regard to case; nstring is any string of characters not containing ')' or NULL; nstring is copied to pd->ds and, currently, not used subsequently.

number consists of significand or significand efield, where significand must contain one or more digits and may contain one point; possible forms are:

    digits          (int_form)
    digits.         (intdot_form)
    .digits         (dotfrac_form)
    digits.digits   (intdotfrac_form)

efield consists of echar digits or echar sign digits, where echar is one of [Ee], and digits contains one or more digits.

When fortran_conventions is nonzero, additional input forms are accepted according to various Fortran conventions:

    0   no Fortran conventions
    1   Fortran list-directed input conventions
    2   Fortran formatted input conventions, ignore blanks (BN)
    3   Fortran formatted input conventions, blanks are zeros (BZ)

When fortran_conventions is nonzero, echar may also be one of [DdQq], and efield may also have the form sign digits.
When fortran_conventions >= 2, blanks may appear in the digits strings for the integer, fraction, and exponent fields, and may appear between echar and the exponent sign and after the infinity and NaN forms. When fortran_conventions == 2, the blanks are ignored. When fortran_conventions == 3, the blanks that appear in digits strings are interpreted as zeros, and other blanks are ignored.

When fortran_conventions is zero, the current locale's decimal point character is used as the decimal point; when fortran_conventions is nonzero, the period is used as the decimal point.

The form of the accepted decimal string is placed in *pform. If an efield is recognized, *pechar is set to point to the echar.

On input, *pc points to the beginning of a character string buffer of length >= nmax. On output, *pc points to a character in that buffer, one past the last accepted character. string_to_decimal() gets its characters from the buffer; file_to_decimal() gets its characters from *pf, records them in the buffer, and places a null after the last character read. func_to_decimal() gets its characters from an int function (*pget)().

The scan continues until no more characters could possibly fit the acceptable syntax or until nmax characters have been scanned. If the nmax limit is not reached, then at least one extra character will usually be scanned that is not part of the accepted syntax. file_to_decimal() and func_to_decimal() set *pnread to the number of characters read from the file; if greater than nmax, some characters were lost. If no characters were lost, file_to_decimal() and func_to_decimal() attempt to push back, with ungetc(3C) or (*punget)(), as many as possible of the excess characters read, adjusting *pnread accordingly. If all unget calls are successful, then **pc will be NULL. No push back will be attempted if (*punget)() is NULL.

Typical declarations for *pget() and *punget() are:

    int xget(void) { ... }
    int (*pget)(void) = xget;
    int xunget(int c) { ... }
    int (*punget)(int) = xunget;

If no valid number was detected, pd->fpclass is set to fp_signaling, *pc is unchanged, and *pform is set to invalid_form.

atof(3C) and strtod(3C) use string_to_decimal(). scanf(3C) uses file_to_decimal().

See attributes(5) for descriptions of the following attributes.

SEE ALSO: ctype(3C), localeconv(3C), scanf(3C), setlocale(3C), strtod(3C), ungetc(3C), attributes(5)
http://docs.oracle.com/cd/E19683-01/816-0213/6m6ne37v0/index.html
CC-MAIN-2015-11
refinedweb
664
52.6
Record Information. Rights Management: All applicable rights reserved by the source institution and holding location. Resource Identifier: aleph 366622; OCLC 15802799. System ID: UF00028315:03380. PAGE 1 What do you think about the 45 mph speed limit on County Road 486? Marsha Romanik Riddle: County Road 486 should be at least 55 mph. Jake Muetzel: 50-55 mph should be the minimum, especially on those long roads. Kelly L. Graham: I used to drive that stretch going to CF in Ocala (from Crystal River) and 45 mph seems too slow. Easy speed trap if they leave it at that speed. There are no homes around, so not sure why they put it so low. POLL FEBRUARY 3, 2014. Florida's Best Community Newspaper Serving Florida's Best Community. VOL. 119 ISSUE 180 50 CITRUS COUNTY. Super blowout: Seahawks dominate Broncos /B1. ENTERTAINMENT: Actor dies. Actor Philip Seymour Hoffman is found dead with a syringe in his arm. He was 46. /Page A6. INDEX: Classifieds B7, Comics B6, Crossword B5, Editorial A10, Entertainment A4, Horoscope A4, Lottery Numbers B3, Lottery Payouts B3, Movies B6, Obituaries A6, TV Listings B5. ONLINE POLL: Your choice? What assets do local libraries provide that are most useful to you? A. Computer access. B. Presentations, meetings and exhibits. C. I know I sound like an old-timer, but books are the main driver for me. D. Not qualified to answer, but guess I'm long overdue to see what all is offered. To vote, visit www.chronicleonline.com. Click on the word Opinion in the menu to see the poll. Results will appear next Monday. Find last week's online poll results. /Page A3. HIGH 81 LOW 58. Partly cloudy. South wind 10 mph. PAGE A4. TODAY & next morning. MONDAY QUESTION OF THE WEEK. John Pepe: I like 55 mph better than 45 if there is no reason to raise it.
It would allow less time between points and better gas mileage. Erin Ewing: 45 mph on CR 486 is ridiculous; it's entirely too easy to speed. It's a wide road with less traffic than Hwy. 44, yet its limit is 15 mph slower. It makes no sense to me. Michele Queck Rose: They need to increase it to 55 mph. There is no reason for it to be slower than all the other similar roads. Skip Brady: All similar highways should have the same speed limit, curb or not. Contribute! Like us at facebook.com/citruscountychronicle and respond to our Question of the Week. INSIDE: Balfour moved for school board seat. MIKE WRIGHT, staff writer. Politicians often say they make sacrifices for their role in public office. Citrus County School Board member Sandy Balfour, appointed to office in June by Gov. Rick Scott, can back it up. Balfour took a huge pay cut and moved from her Pine Ridge neighborhood to a single-wide mobile home in Homosassa to get the appointment. The house she and her husband of 34 years, Tom, shared since 1999 is for sale. And moving from an 1,800-square-foot house to a mobile home has meant some personal downsizing. "This has been a forced weaning," Balfour said. "That's the way I'm looking at it. I have to really qualify the things that have value in my life. It's forced me to think down my personal property." Balfour, an unsuccessful candidate for superintendent of schools in 2012, quickly added her name to the list of applicants in early 2013 after newly elected board member Susan Hale resigned. Shortly after submitting her application, Balfour said she called the governor's office to check on the procedure. She said a staffer told her she wouldn't even be considered for the position if she didn't live in District 4, which comprises the southwest Citrus communities of Homosassa and Sugarmill Woods. "I was not going to be considered unless I was willing to move," she said. "They wouldn't even look at my résumé."
Balfour wanted the appointment, but she and Tom Sandra Sandy BalfourCitrus County School Board member, District 4. See SCHOOL/ Page A8 Workforce Connection becoming CareerSource PATFAHERTY Staff writerWorkforce Connection is rolling out a new name and introducing its new location serving area jobseekers and employers. The agency has moved its one-stop career center in Citrus County from Inverness to Lecanto. It will have an open house at the new location Feb. 10. Also this month, Workforce Connection will change its name as part of a new unified CareerSource Florida brand designed to enhance access to workforce services for job-seekers and businesses throughout the state. Beginning in early 2014, Workforce Connection will become CareerSource Citrus Levy Marion. Floridas 23 other workforce organizations will adopt CareerSource brands with similar regional locaters. Darlene Goddard, Report: US abortion rate at lowest since 1973 DAVIDCRARY AP national writerNEW YORK The U.S. abortion rate declined to its lowest level since 1973, and the number of abortions fell by 13percent between 2008 and 2011, according the latest national survey of abortion providers conducted by a prominent research institute. The Guttmacher Institute, which supports legal access to abortion, said in a report being issued Monday that there were about 1.06million abortions in 2011 down from about 1.2million in 2008. Guttmachers figures are of interest on both sides of the abortion debate because they are more up-todate and in some ways more comprehensive than abortion statistics compiled by the federal Centers for Disease Control and Prevention. 
According to the report, See ABORTION/ Page A8 ON THE NET Guttmacher Institute: www .guttmacher.org/ See CAREER/ Page A8 Roads scholar MondayCONVERSATION MATTHEW BECK/ChronicleBeverly Hills resident Beverly Clemo, chairwoman of the Citizens Advisory Council of the Citrus County Transportation Planning Authority, watches traffic at North Forest Ridge Boulevard near her home, where she sees a need for a bicycle path and other traffic improvements near Forest Ridge Elementary School for the benefit of its 720 students. The CAC represents residents interests in safety and maintenance issues and in planning new roads. Beverly Clemo chairs Transportation Planning Authority advisory council CHRISVANORMER Staff writerFor two years, the Citizens Advisory Council of the Citrus County Transportation Planning Authority has represented resident road-users views to the decision-makers in road planning and road maintenance. The councils chairwoman, Beverly Clemo of Beverly Hills, takes a passionate interest in how and why decisions are made, while urging residents to steer their own course by taking part as Citrus County forms a partnership with Hernando County Metropolitan Planning Organization for future transportation planning. That future includes the Suncoast Parkway 2 and the County Road 491 Medical Corridor.CHRONICLE: Why did you join the Citizens Advisory Council? CLEMO: Gary Maidhof had an article in your newspaper asking for volunteers. I got a call from my sister who was a registered professional engineer and used to be with the Department of Transportation. She knows I share an interest in transportation. She knew about my background, of course, being a former elected official. For over 25 years, I served on Regional Council of Governments in Southeastern Michigan. So, I sent him a rsum and he called me to serve. We are short one member right now from Crystal River and have been for over a year. We need somebody from Crystal River. 
CHRONICLE: How did you become chairwoman of the CAC? CLEMO: By default; nobody else raised their hand. CHRONICLE: When the CAC started, what were you instructed was your focus? Was it setting priorities to get funding? CLEMO: We worked with Gary Maidhof for a short time. At one of our first meetings, he asked us all what would be our priorities. Interestingly enough, the south end of County Road 491, going toward the Suncoast, was breaking up badly, striping was bad on it, and we felt that the paving See MONDAY/ Page A8 WHAT: Citrus County T ransportation Planning Authority Citizens Advisory Council meeting. WHEN: 3 p .m. Wednesday. WHERE: Citrus County T ransit Center, 1300 S. Lecanto Highway, Lecanto. AGENDA: www .tbarta.com. PHONE: TB ARTA at 813-282-8200. PAGE 2 CHRISVANORMER Staff writerUpdates about Nature Coast Parkway, Floridas Future Corridors and Suncoast Parkway 2 will be presented Wednesday to the two advisory groups of the Citrus County Transportation Planning Organization (TPO). Bob Clifford, director of TBARTA (Tampa Bay Area Regional Transportation Authority) and consultant to the TPO, will present information for discussion. Nature Coast Parkway: The proposed Nature Coast Parkway is planned as a 47-mile toll facility extending the existing Turnpike Mainline near its northern terminus of I-75 northwestward to U.S. 19 north of the town of Inglis in Levy County. The facility was previously known as the Northern Extension of Floridas Turnpike. The preferred alignment is through Levy, Marion and Sumter counties. Potential Interchanges are at U.S. 19, U.S. 41, State Road 200, I-75 and State Road 44. Floridas Future Corridors Tampa Bay to Northeast Florida Study Area Concept: The Future Corridors initiative is a statewide effort led by the Florida Department of Trans porta tion (FDOT) to plan for the future of major transportation corridors critical to the states economic competitiveness and quality of life during the next 50 years. 
One identified study area is Tampa Bay to Northeast Florida, representing two of Floridas largest regions with large, diverse economies and growing transportation needs. Between these two regions, Gainesville and Ocala are emerging in importance as regional employment centers, par ticularly in innovation and logistics industries. Surrounding rural areas support a mix of agriculture, forestry, mining, recreation and manufacturing industries, and are collaborating on economic development strategies. This concept report identifies potential transportation strategies to help connect Tampa Bay and Northeast Florida and support the future growth of these two regions, as well as the less urbanized North Central Florida region that lies between them. Suncoast Parkway 2: The Suncoast Parkway currently extends from the Veterans Expressway in Tampa to U.S. 98 near the Hernando-Citrus county line, a distance of about 42 miles. The proposed Suncoast Parkway 2 project would extend northward about 27 miles through Hernando and Citrus counties from U.S. 98 to U.S. 19. The Florida Turnpike Enterprise is nearing completion of the 60percent design phase. As an important part of the Florida Intrastate Highway System (FIHS), the Suncoast Parkway 2 will provide a high-speed, highvolume facility on the west coast of Florida. In addition to its importance as a part of the FIHS, the Suncoast Parkway 2 project would provide needed relief to the local roadway network. As one of the fastest growing counties in the state, according to FDOT, traffic volumes in Citrus County are expected to increase dramatically in the coming decades. The Suncoast Parkway 2 project would provide additional traffic capacity to meet these future local traffic demands. Clifford will make the presentation at both meetings at 1:55p.m. for TAC members and 4p.m. for the CAC group. Both meetings are open to the public and comment from the public is heard at both the beginning and end of both meetings. 
Floridas Future Corridors will be the topic at the next meeting of the Citrus County Council, starting at 9a.m. Wednesday, Feb.12, at Beverly Hills Lions Club, 72 Civic Circle, Beverly Hills. Guest speakers will be FDOT representatives. A2MONDAY, FEBRUARY3, 2014CITRUSCOUNTY(FL) CHRONICLE LOCAL 22 YEARS IN CITRUS COUNTY!Owner Rickey RichardsonLicensed Hearing Aid SpecialistBrian LazioLicensed Hearing Aid Specialist Travel With Confidence . Miracle Ear Will Be There.Over 1,300 Miracle Ear Locations!H9TJ Crystal River Mall (Next to K-Mart) 352-795-1484 OPEN: Mon.-Fri. 10AM-5PMALSO. Lec 2/28HAAB 20/20 Eyecare N OW A CCEPTING Over 1,000 Frames In Stock with Purchase of Lenses AND Get a 2nd Pair of Glasses FREE ( $ 89.00 Value) FREE FREE Frames FREE Frames Frames proposed Nature Coast Parkway, marked by the red dotted line, would run from the Florida Turnpikes northern terminus in Wildwood to Lebanon Station in Levy County. Big road projects to come before advisers WHAT: TPO T ransportation Technical Advisory Committee (TAC) meeting. WHEN: 1 p .m. Wednesday. WHERE: Citrus County T ransit Center, 1300 S. Lecanto Highway (County Road 491), Lecanto. WHAT: TPO Citiz ens Advisory Committee (CAC) meeting. WHEN: 3 p .m. Wednesday. WHERE: Citrus County T ransit Center, 1300 S. Lecanto Highway (C.R. 491), Lecanto. PAGE 3 ERYN WORTHINGTON/ChronicleHarvey Jenkins, left, was proud to show Arnold Goldfoot, right, his 1957 Ford Mild Custom car Saturday at Chilsons Garages annual customer appreciation day and car show. Chilsons owner Charlie Chilson said after paying off a 25-year mortgage, he threw a party to celebrate. He enjoyed the celebration so much that he decided to continue it annually. Chilson said it is his way of thanking the community and his customers for their business for 33 years. Last year, more than 1,400 customers and friends and 130-plus cars helped Chilson celebrate. He was expecting even more this year. Abortion Coverage: The House on Jan. 
28 voted, 227-188, to bar the use of federal funds to subsidize Affordable Care Act plans that cover abortion, even though the law already requires women to pay the share of the premium that applies to reproductive services. A yes vote was to send HR 7 to the Senate. Rich Nugent, Yes. Women's Medical Privacy: The House on Jan. 28 defeated, 192-221, a bid by Democrats to prevent HR 7 (above) from violating the medical privacy of any woman, including rape and incest victims, with respect to her choice or use of a healthinsurance policy. A yes vote was to adopt the motion. Nugent, No. New Farm and Food Law: The House on Jan. 29 approved, 251-166, the conference report on a five-year farm and food bill having a budget of nearly $100 billion annually. A yes vote was to pass a bill (HR 2642) that cuts food stamps by 1 percent, boosts farm exports, expands crop insurance, ends direct payments to growers, funds conservation programs and spurs rural development. Nugent, Yes. National Flood Insurance: The Senate on Jan. 30 voted, 67-32, to delay for four years steep increases in the National Flood Insurance Program's taxpayer-subsidized premiums. This would blunt reforms enacted in 2012 to trim the program's debt, which now stands at $24 billion due largely to covering damages from hurricanes Katrina and Rita in 2005 and Sandy in 2012. A yes vote was to send S 1926 to the House. Bill Nelson, Yes; Marco Rubio, Yes. Deficit Spending: Voting 64-35, the Senate on Jan. 29 reached a supermajority needed to exempt S 1926 (above) from statutory spending limits set by congressional budget resolutions. A yes vote was to advance a bill that would add $900 million over its first five years to the national debt and possibly more in later years. Nelson, Yes; Rubio, No. Flood-Insurance Premiums: The Senate on Jan. 30 defeated, 34-65, an amendment to S 1926 (above) allowing taxpayersubsidized flood-insurance premiums to rise by 25 percent annually until they reach market levels. 
A yes vote backed this plan in place of the bill's four-year delay of steep premium increases, starting this year. Nelson, No; Rubio, No. Key votes ahead: In the week of Feb. 3, the Senate will take up the farm-bill conference report and a bill on veterans' health benefits. The House schedule was to be announced. 2014 Thomas Reports Inc. Call: 202-667-9760.HOW YOUR LAWMAKERS VOTEDKey votes for the week ending Jan. 1 by Voterama in Congress Panel to hear three land-use casesThe Citrus County Planning and Development Commission will meet at 9 a.m. Thursday in Room 166 at the Lecanto Government Building, 3600 W. Sovereign Path, Lecanto, to hear three land-use applications. Robert and Roberta Schaefer will ask the county to vacate a 20-foot alley near West Homosassa Trail in Homosassa. Approval is recommended. Dan Wilson for Garnet Gregory will request a setback variance to allow a construction project at 1128 S. Ozello Trail, Crystal River. Approval is recommended with two conditions. Elizabeth H. Darr will request an after-the-fact, twopart variance to allow a construction project at 5135 S. Running Brook Drive, Homosassa. Denial is recommended.Old Courthouse benefit concert setOn Saturday, Feb. 22, Cote Deonath a 16-yearold Elvis tribute artist from Dunnellon will perform at 7:30p.m. in the Old Courthouse in Inverness. All proceeds to benefit the Old Courthouse Heritage Museum. Doors open at 7p.m. with cash bar and snacks available. Tickets for up-front and personal reserved seating are $35; other seats are $25. At noon Sunday, Feb. 23, there will be a Gospel Music and Brunch event in the upstairs at the Old Courthouse with Deonath singing Elvis renditions of inspirational music. Seating is limited to the first 120 people; no reserved seating. Doors open at 11:30a.m. Tickets are $25. For all tickets, call the Old Courthouse at 352-3416427 or 352-341-6436.N.C. Republicans meeting Feb. 8The Nature Coast Republican Club will meet at 9a.m. 
Saturday, Feb.8, at the Hampton Inn, Crystal River. The guest speaker will be Bob Schweickert Jr., who will present GroundHog Research. Call Connie at 352-746-7249. From staff reports STATE& LOCAL Page A3MONDAY, FEBRUARY 3, 2014 CITRUSCOUNTYCHRONICLE QUESTION: How should the United States respond to the threat of terrorism at the Winter Olympics in Sochi? Dont overthink the threat. The Russian government will safeguard the athletes and spectators. 29 percent (81 votes) Sorry, but our athletes should pass on these Olympics. The risk is too great. 13 percent (37 votes) We shouldnt back out, but the U.S. should be involved in coordinated safety initiatives with the Russians. 41 percent (117 votes) In light of President Putins disregard for the U.S. and his stance on social/humanitarian issues, we should boycott these games. 16 percent (46 votes) Total votes: 281. ONLINE POLL RESULTS Driver kills pedestrians Associated PressBRADENTON Three people are dead and five others have been hospitalized after a woman backed her SUV into a group of people Sunday outside a Florida clubhouse, authorities said. The crash happened outside the Sugar Creek Country Club at 11:20a.m. in Bradenton, about 45 miles south of Tampa. Residents had gathered for church services inside the building, which is. Its a tragic, tragic situation, Bueno said. Every time something like this occurs, it breaks your heart. Sugar Creek Estates resident and retired EMT Muriel Watts described the aftermath of the crash to the Bradenton Herald drivers car went into reverse. Around theSTATE Panama City Former Democratic Party leader diesPANAMA CITY.Florida Guard warns against cutsST. AUGUSTINE Florida National Guard officials are warning budget cuts under consideration by the U.S. Army would eliminate 1,000 people from its ranks. The Tampa Bay Times reported officials caution the cuts would hamper the Florida Guards ability to respond to hurricanes and other disasters. 
Florida Guard leaders claim the state already lags in National Guard funding. In 2012, the Florida Guards budget was $467 million. Ten other states had higher budgets, even though Florida is prone to hurricanes.Police: Couple ran sham collectionOCAL reported an officer approach Concannon and said hedannons phone rang. Police say Concannon admitted the operation was fake. From wire reports County BRIEFS Annual car show at Chilsons Garage Building Dreams to benefit Habitat for Humanity Special to the ChronicleHabitat for Humanity of Citrus County (HFHCC) will have its seventh annual fundraising event, Building Dreams a Wine and Food Pairing Benefit, Thursday, March 13, at Skyview Country Club, Terra Vista of Citrus Hills. The Building Dreams gala is a major source of support for Habitats mission to provide safe, decent, affordable housing for partner families in Citrus County. Every dollar raised helps pay for the lumber and nails, concrete and shingles, plumbers and permits needed to take a new home from site-work to completion. The public is invited to be part of this event. The evening will feature gourmet foods and outstanding wines from around the globe, live musical entertainment and an array of auction items. There are three ways the public can participate: Purchase tickets and enjoy the evening with friends and family. Provide an item for the silent auction. Become an event sponsor with a financial contribution to underwrite the event. Whether you donate cash or a giftin-kind, your contribution is tax deductible to the extent permitted by law. Habitat will provide a charitable deduction acknowledgement letter. Tickets are on sale now and can be acquired by contacting the Habitat Store at 352-563-2744 or calling 352-563-0700. Shooting case goes to trial this week Associated PressJACKSONVILLE A man with a gun. A black teen, shot dead. Was it murder or selfdefense? 
Jury selection is scheduled to begin Monday in Florida in the trial of 47yeardefense, perhaps laying the ground work for a case under Floridas stand your ground law. If the case sounds familiar, thatsvilles State Attorney Angela Corey, who will also be prosecuting the Dunn case. PAGE 4 Birthday You will interact well with others in the coming months. Pitch in and help organizations in which you believe. The more you experience this year, the better. Take advantage of whatever comes your way. Aquarius (Jan. 20-Feb. 19) If you trust friends with your secrets, you can expect them to blow the whistle. It is best not to depend on others. Pisces (Feb. 20-March 20) Your energy should be directed into moneymaking ventures. Dont hesitate to look into career opportunities that allow you to learn on the job. Aries (March 21-April 19) Superiors will appreciate your skills, knowledge and expertise. Network with contacts who will introduce you to people in influential positions. Taurus (April 20-May 20) Volunteer your services to raise your profile. Contribute what you can, and dont be shy regarding input, but be discreet about personal matters. Gemini (May 21-June 20) Dont expect to get a bargain. Avoid buying anything you dont really need. Be cautious while traveling and dont make promises you cannot keep. Cancer (June 21-July 22) You will gain support and assistance if you ask for help. A healthy debate will show your loyalty and make inroads with people you want to get to know better. Leo (July 23-Aug. 22) Travel for business or pleasure in order to make interesting connections. Be precise regarding what you have to offer. Virgo (Aug. 23-Sept. 22) Love and romance are on the rise, and an interesting development will take place with someone you know through work or extracurricular activities. Libra (Sept. 23-Oct. 23) Social events will lead to unusual opportunities. Your openness and sophisticated way of dealing with situations will attract someone who has plenty to offer. 
Scorpio (Oct. 24-Nov. 22): Look for someone unusual who will inspire you to pursue a lifelong dream. Working with others will encourage you to broaden your horizons. Sagittarius (Nov. 23-Dec. 21): Travel will lead to adventures, but don't be surprised if you end up in debt due to unexpected expenses. Capricorn (Dec. 22-Jan. 19): Domestic problems will surface if you can't get along with the people you live or deal with daily. Today's HOROSCOPES. Today is Monday, Feb. 3, the 34th day of 2014. There are 331 days left in the year. Today's Highlight in History: In 1959, an American Airlines plane crashed into New York's East River, killing 65 of the 73 people on board. On this date: In 1783, Spain formally recognized American independence. In 1913, the 16th Amendment to the U.S. Constitution, providing for a federal income tax, was ratified. In 1966, the Soviet probe Luna 9 became the first manmade object to make a soft landing on the moon. In 1972, the XI Olympic Winter Games opened in Sapporo, Japan. In 1994, the space shuttle Discovery lifted off, carrying Sergei Krikalev, the first Russian cosmonaut to fly aboard a U.S. spacecraft. Ten years ago:. One year ago: The Baltimore Ravens survived a partial power outage at the Super Bowl in New Orleans to edge the San Francisco 49ers 34-31. Today's Birthdays: Comedian Shelley Berman is 89. Football Hall-of-Famer Fran Tarkenton is 74. Actress Bridget Hanley is 73. Actress Blythe Danner is 71. Singer Melanie is 67. Actress Morgan Fairchild is 64. Actress Pamela Franklin is 64. Actor Nathan Lane is 58. Actress Michele Greene is 52. Country singer Matraca Berg is 50. Actor Warwick Davis is 44. Actress Elisa Donovan is 43. Actor Matthew Moy is 30. Actress Rebel Wilson is 28. Thought for Today: Your friend will argue with you. Alexander Solzhenitsyn, Russian writer (1918-2008).
Today inHISTORY CITRUSCOUNTY(FL) CHRONICLE Todays active pollen: Juniper, maple, oak Todays count: 8.7/12 Tuesdays count: 11.0 Wednesdays count: 11 Rowling says Potter ending may be wrongL interview was reported in The Sunday Times which also quoted actress EmmaWatson, who played Hermione, expressing doubts about the viability of her characters relationship with Ron. She told the newspaper that many fans doubt Ron can make Hermione happy over time.Online tour of Air Force museumDAYTON, Ohio Fans of Ohios National Museum of the U.S. Air Force dont have to brave the snowy highways to see an updated showcase of aircraft and exhibits. An online virtual tour of the Dayton area museums Cold War Gallery has been updated:. Its a 360-degree, in-depth look providing online viewers the chance to check museum galleries. According to the Dayton Daily News, the museum has exhibits detailing more than 100 years of military aviation history. Interactive materials allow users to click on exhibits.Ride Along No. 1 for third weekLOS. Focus Features chick flick from a male point of view, That Awkward Moment, starring Zac Efron, Michael B. Jordan and Miles Teller, has taken third place in its opening weekend with $9 million. Next weekends). From wire reports Associated PressBritish author J.K. Rowling poses for photographers Sept. 27, 2012, at the Southbank Centre in London. The author of the Harry Potter series of books said she is having second thoughts about the romantic content for her characters. A4MONDAY, FEBRUARY3, 2014 000GWR3 in Todays Citrus County Chronicle LEGAL NOTICES Meeting Notices . . . . . . . . . . . . . . . . . . B10 Foreclosure Sale/Action Notices . . . . B10 Forfeitures . . . . . . . . . . . . . . . . . . . . . . B10 Dissolution of Marriage Notices . . . . . B10 Surplus Property . . . . . . . . . . . . . . . . . B10 PAGE 5 CITRUSCOUNTY(FL) CHRONICLEMONDAY, FEBRUARY3, We file the paperwork . not you! 
ATT E NTION: All Federal Workers & Retirees If your BC/BS card looks like this... YOURE COVERED! 111 or 112 Enrollment Code Find Us Online At SecureTec Complete protection from the inside out. 000H7M5 Alzheimers Disease and Dementia ARE YOU AT RISK? According to a new study by Johns Hopkins University School of Medicine and the National Institute on Aging, men and women with hearing loss are much more likely to develop dementia and Alzheimers disease. People with severe hearing loss, the study reports, were 5 times more likely to develop dementia than those with normal hearing. The more hearing loss you have, the greater the likelihood of developing dementia or Alzheimers disease. Hearing aids could delay or prevent dementia by improving the patients hearing. 2011 Study by John Hopkins University School of Medicine and the National Institute on Aging Have you noticed a change in your ability to remember? PAGE 6 Philip Seymour Hoffman had syringe in arm Associated Press with what law enforcement officials said was a syringe in his arm. He was 46. The two officials told The Associated Press that glassine envelopes containing what was believed to be heroin were also found with the actor. The law enforcement officials, who spoke to The Associated Press on condition of anonymity because they are not authorized to talk about the evidence at the scene, said the cause of death was believed to be a drug overdose. Hoffman no matineeidol figure Wilsons War. He also received three. The law enforcement officials said Hoffmansmans family called the news tragic and sudden. We are devastated by the loss of our beloved Phil and appreciate the outpouring of love and support we have received from everyone, the family said in a statement. In one of his earliest mans pursuit of happiness. He was nominated for the 2013 Academy Award for best supporting actor for his role in The Master as the charismatic leader of a religious movement. 
The film, partly inspired by the life of Scientology founder L. Ron Hubbard, reunited the actor with Anderson. He also received a 2009 best-supporting nomination for Doubt, as a priest who comes under suspicion because of his relationship with a boy, and a best supporting actor nomination for Charlie Wilsons War, as a CIA officer. Born in 1967 in Fairport, N.Y., Hoffman was interested in acting from an early age, mesmerized at 12 by a local production of Arthur Millers All My Sons. He studied theater as a teenager with the New York State Summer School of the Arts and the Circle in the Square Theatre. He then majored in drama at New York University. Hoffman is survived by his partner of 15 years, Mimi ODonnell, and their three children. Walter Oberti, 93CRYSTAL RIVERWalter A. Oberti, 93, of Crystal River, Fla., passed away Jan.25, 2014. Born in Wanaque, N.J., he served in the U.S. Air Force during World War II as master sergeant crew chief and flight engineer (1942-1945). After his service, Walter became a pioneer in the rocket industry, working with Thiokol Chemical Corp., and was chief of test operations with Reaction Motors. He also served 35 years with the Pompton Lakes, N.J., Fire Department and First Aid Squad, as well as nine years on the Pompton Lakes Board of Education. Mr. Oberti was a life member of Rotary International and his hobbies included golfing, hunting, fishing and being a huge Tampa Bay Rays fan. He was preceded in passing by his wife of 67 years, Elsie (Moscone) Oberti. Survivors include his daughter, Patricia (Oberti) Manfredo of Land O Lakes; son, Wayne Oberti, M.D., and wife Linda of Jacksonville; two granddaughters, Christin Manfredo of Land O Lakes and Corinne Hamel and husband Max of St. Augustine; and two greatgrandchildren, Savannah and Evan Hamel of St. Augustine. A private committal was at Fero Memorial Gardens, where military honors were conducted. 
In lieu of flowers, donations may be made to Sertoma Speech and Hearing Foundation of Florida, 4443 Rowan Road, New Port Richey, FL 34653 or Pompton Lakes/Riverdale First Aid Squad, 700 Ramapo Ave., Pompton Lakes, NJ 07442.

Rosalma Rice, 80
Rosalma B. Rice, age 80, died Saturday, Feb. 1, 2014. Chas. E. Davis Funeral Home with Crematory is in charge of private arrangements.

Janet Ackley, 69
FLORAL CITY: Janet B. Ackley, age 69, Floral City, died Saturday, Feb. 1, 2014. Chas. E. Davis Funeral Home with Crematory is assisting the family with private arrangements.

Myrtle Samec, 92
INVERNESS: Myrtle H. Samec, 92, of Inverness, Fla., passed away Thursday, Jan. 30, 2014, at her residence. She was born Oct. 27, 1921, in Many, La., to the late Jessie Edward and Mary Jane (Armstrong) Bass. Myrtle was a homemaker and home health aide, and arrived in this area in 1985, coming from St. Petersburg, Fla. She was a Christian by faith, and enjoyed gardening, the outdoors and spending time with her family.
Survivors include children Edward Samec III and Phyllis Raynor, both of Inverness, Peggy L. Samec of Floral City, Fla., Rita Tilka of Columbus, Ohio, and Pauline Onley of Shreveport, La.; brother Jessie E. Bass Jr.; sisters Amanda Heintschel and Murlene Thompson; nine grandchildren; and numerous great- and great-great-grandchildren.
A Celebration of Life Gathering will be from noon to 2 p.m. Saturday, Feb. 8, 2014, at the Chas. E. Davis Funeral Home with Crematory, Inverness. In lieu of flowers, the family requests donations to Hospice of Citrus County, P.O. Box 641270, Beverly Hills, FL 34464. Sign the guest book online.

Bernice Wetzel, 90
CRYSTAL RIVER: Bernice M. Wetzel, 90, of Crystal River and formerly of Sugarmill Woods, died Friday, Jan. 31, 2014. Fero Funeral Home, Beverly Hills.

CITRUS COUNTY (FL) CHRONICLE, MONDAY, FEBRUARY 3, 2014, Page A6
Actor found dead at home
Associated Press
Actor Philip Seymour Hoffman poses March 5, 2006, with the Oscar he won for best actor for his work in "Capote" at the 78th Academy Awards in Los Angeles. Police said Hoffman was found dead Sunday in his apartment in New York City.
Workforce Connection's board chair said at the time the name was announced that the local workforce development board supports the move to provide "a more consistent message regarding our system's role as a talent source for Florida's business community."
Frank Calascione, one-stop career center manager, said the new location is easily accessible from all points of the county, may be reached via public transit and is convenient to the College of Central Florida and Withlacoochee Technical Institute, two of Workforce Connection's training and education partners. The new office also offers an expanded resource area with twice the space and number of computers, and features a classroom/conference room and designated training lab with computers.
"It's open now, we moved in last week," Rusty Skinner, chief executive officer of Workforce Connection, said. "It's a nice facility, we're excited about it. It offers a lot more for job-seekers, better facilities for workshops and a better flow for staff and clients."
Skinner added they made a lot of improvements last year in systems, software and expanded testing, and are always looking for ways to enhance their services and identify transferrable skills, especially for job-seekers who have been unemployed a while.
As for the new brand, he said this is part of their attempt to become really engaged with economic development. He also sees it as an opportunity to bring more jobs into the region. "We have been told the governor is excited about rebranding," he said, citing the $30 million commitment that Gov. Rick Scott announced for a workforce training initiative. Skinner said that funding, unlike federal job money, will give them a flexibility they have lacked in the past.
CareerSource Citrus Marion Levy will host an open house at the new location from 8 a.m. to 5 p.m. Monday, Feb. 10, with a ribbon cutting set for 4:30 p.m. The new facility is just off West Gulf-to-Lake Highway at 683 S. Adolph Point, Lecanto, a little more than half a mile east of the intersection with County Road 491.
Contact Chronicle reporter Pat Faherty at 352-564-2924 or pfaherty@chronicleonline.com.

The abortion rate dropped to 16.9 abortions per 1,000 women ages 15 to 44. The report said the number of abortion providers declined, to 1,720, between 2008 and 2011, and the number of abortion clinics declined by just 1 percent. Early medication abortions made up a growing share of all non-hospital abortions, an increase from 17 percent in 2008.
Carol Tobias, president of the National Right to Life Committee, described the overall drop in abortion numbers as evidence that the anti-abortion movement's lobbying and legislative efforts were having an impact. "It shows that women are rejecting the idea of abortion as the answer to an unexpected pregnancy," she said.
Americans United for Life, another anti-abortion group engaged in the efforts to pass restrictive state laws, said Guttmacher's numbers should be viewed skeptically because they are based on voluntary self-reporting by abortion providers.
"It is impossible really to know the true abortion rate," said the group's president, Charmaine Yoest.
The report marked the 16th time since 1973, when abortion was legalized nationwide, that Guttmacher has attempted to survey all known abortion providers in the U.S. However, a section of the new report acknowledges that some abortions might not be tallied.
The highest abortion rates were in New York, Maryland, the District of Columbia, Delaware and New Jersey; the lowest were in Wyoming, Mississippi, South Dakota, Kentucky and Missouri. However, Guttmacher said many women in Wyoming and Mississippi, where providers are scarce, go out of state to get abortions.
Follow David Crary on Twitter at twitter.com/CraryAP.

The couple didn't want to be saddled with a Sugarmill Woods mortgage if they moved and she didn't get the appointment. The Balfours paid off an $80,000 mortgage on the Pine Ridge home in 2008, according to county records.
The house is valued on the property appraiser's website at about $145,000; the couple listed it for sale in November at $225,900.
Balfour said her husband, a real estate agent, began looking for property in District 4 to purchase. They settled on a 2.3-acre parcel off Cardinal Street near County Road 491. The Balfours paid $37,000 for it on May 24. The deed was recorded on June 28; ironically, Scott's letter to Balfour offering her the school board appointment was dated the same day.
Balfour said she and Tom began moving their things into the mobile home the day she received official word of the appointment. "It's value for the dollar, two and a half acres, fenced in. The dogs love it," she said. "It was the right decision for me."
Economically, the Balfours couldn't make a quick lateral move. As a veteran teacher with a master's degree, Balfour earned $54,121. To serve on the school board, Balfour had to give up her job teaching at the Academy of Environmental Sciences. A school board member makes $33,282; Balfour's first-year pay is $30,604 because she didn't take office until a few weeks after July 1, when the school district's fiscal year begins.
Balfour said she plans to get her name on the ballot to run for the remaining two years of the seat in this year's election.
Asked why she would make such a financial and lifestyle decision to earn a spot on the school board, Balfour said: "I'm at a turning point in my life. I need to be in position to make a difference. I've got a sense of value to each day and what I can contribute. I've been told by many in the district that school board experience is necessary to make an impact. Today, I'm making an impact."
Contact Chronicle reporter Mike Wright at 352-563-3228 or mwright@chronicleonline.com.

WHAT: CareerSource Citrus Marion Levy open house.
WHEN: 8 a.m. to 5 p.m. Monday, Feb. 10.
WHERE: 683 S. Adolph Point, Lecanto, off State Road 44.
issue, not widening it or anything, just the paving issue should be addressed. That was two years ago. Mile by mile, it's getting done. Not as fast as we would like it, but they're going to do another section this year, hopefully, and then we've got another section on our list.
CHRONICLE: Since (former county Development Services director) Gary Maidhof unfortunately passed away, the TPO has had Bob Clifford, director of TBARTA (Tampa Bay Area Regional Transportation Authority), as its consultant. How is that working out?
CLEMO: TBARTA is wonderful. I don't think our committee or the TPO could survive without TBARTA to the level we are at now and gotten the funds that are coming into the county. Their expertise has really been a savior for us.
CHRONICLE: Has the funding increased because there is a TPO?
CLEMO: I think it's a combination of things. They are really getting their act together. Some of our projects are eligible for federal funds. North Florida Avenue, for example, has been on the wish list for quite a while. That is a project that is eligible for federal funds.
CHRONICLE: What happens when your committee wants something?
CLEMO: It came to our attention that there was a problem on Lecanto (Highway) with people driving around the school buses. When the school buses were stopping, people weren't stopping. We asked TBARTA to bring them in. They brought the sheriff in and the school district in and we talked about what the problem was. One serious area was at the nursery school just south of Gulf-to-Lake on the west side of the street (C.R. 491). Parents were driving in there and school buses and there was traffic on the other side of the road. The short-term solution that the county did was to put the dividers up in the center. But it's not just us requesting it. Letters go out.
CHRONICLE: Do you have any concern about joining with Hernando MPO?
CLEMO: One concern I do have with the MPO merger with Hernando is people might not want to drive there.
CHRONICLE: How successful are you in getting people to come to your meetings?
CLEMO: We meet at 3 p.m. the first Wednesday of the month and we are there until 5 p.m. at least, which is just a terrible time for a lot of people to come. No, we don't get a lot of people coming.
CHRONICLE: How can people contact you?
CLEMO: We are not currently listed on the county website. I think it's something that's fallen through the cracks. Hernando MPO will have staff, so hopefully there will be a number that people can call. Now people end up calling TBARTA, which is good.
Contact Chronicle reporter Chris Van Ormer at 352-564-2916 or cvanormer@chronicleonline.com.

Paul Ryan: Immigration legislation unlikely in 2014
Associated Press
WASHINGTON: Days after House Republicans unveiled a roadmap for an overhaul of the nation's broken immigration system, one of its backers said legislation is unlikely to pass during this election year.
Rep. Paul Ryan, R-Wis., said distrust of President Barack Obama runs so deep in the Republican caucus that he's skeptical the GOP-led House would pass any immigration measure. He said a plan that puts security first could only pass if lawmakers believe the administration would enforce it, an unlikely prospect given Republicans' deep opposition to Obama.
"This isn't a trust-but-verify, this is a verify-then-trust approach," Ryan said.
Last week, House Republicans announced their broad concerns for any immigration overhaul but emphasized they would tackle the challenge bill-by-bill. Immigration legislation is a dicey political question for the GOP. The party's conservative base opposes any measure that would create a pathway to citizenship for immigrants living here illegally, but many in the party worry that failing to act could drive many voters to Democratic candidates. In 2012, Obama won reelection with the backing of 71 percent of Hispanic voters and 73 percent of Asian voters. The issue is important to both blocs.
Republicans have preemptively been trying to blame the White House for immigration legislation's failure, even before a House bill comes together. House Majority Leader Eric Cantor said ... "... don't think government will enforce the law anyway," Rubio said, recounting conversations he's had. "... don't want to have a permanent separation of classes or two permanent different classes of Americans in this country."
Last week, Obama suggested that he's open to a legal status for immigration that falls short of citizenship, hinting he could find common ground with House Republicans. "I'm going to do everything I can in the coming months to see if we can get this over the finish line," Obama said Friday.

OPINION

Become an organ donor
Organ, tissue and blood donations help those suffering from organ failure, spinal and bone injuries, burns, hearing and vision loss. In the U.S. today, more than 117,000 people of all ages are waiting for their gift of life. Every 10 minutes someone is added to the transplant waiting list. It could happen to any one of us: you, your sister or son, co-worker or pastor.
As you enjoy this Valentine's Day with your loved ones, think about the thousands of people still on the transplant waiting list and the families who love them. Each one of us has the power to give the gift of life by becoming an organ/tissue/blood donor. Take the time, right now, to educate yourself about the need for donors: look for information on living and cadaver donation; how to donate or be a recipient; links to related organizations; and information on government policies and regulations involved in organ and tissue donation and transplantation, one of the most regulated areas of health care today. Then sign up to be an organ/tissue donor at www.donatelifeflorida.org. You can also sign up when obtaining or renewing your driver license online or in person at a DMV office.
Share the love; with your help we can end the wait.
Dotti Hydue and Wm. Betz, Morriston

Interest of fairness
This is in response to your Jan. 16 Chronicle editorial, "Let's have adult conversations about the issues," that accused me, a candidate for Citrus County Commissioner, District 2, of being involved in a "silly season squabble" for my stance and complaint to the chairman of the Republican Executive Committee for a fair Republican primary, without an elected official expressing bias and attempting to affect the election prior to the primary election, which is against Republican Party policy.
The unfair bias before the election primary, without even going into unfair bias against myself, is evident even by the Chronicle article of Jan. 6, "Candidate, commissioner exchange terse emails," in which Commissioner (Scott) Adams said he would not be supporting incumbents. Even though I am not an incumbent at this point, this type of public statement alone by any elected official should be avoided for a non-biased election.
Furthermore, my intention has been for fairness for this election so the voters will not be blinded by the political power plays of any elected official, to ensure that, according to the title of your editorial, we can have adult conversations about the issues. Fairness is mature and not silly.
Renee Christopher-McPheeters, Homosassa

Doug Varrieur likes to shoot. Problem is, it's 25 miles to the nearest range, where they charge $45 an hour. What's a gun enthusiast to do?
Lucky for him, Varrieur lives in Florida. Problem solved. Just erect a makeshift range in the back yard and fire away. It's perfectly legal.
Re-read that if you want. It's just as nutty the second time around.
In a story by my colleague Cammy Clark that appeared in Sunday's Miami Herald, we learn that Varrieur, who lives on Big Pine Key, once complained to a gun-shop owner about what a pain it was going to the range to shoot.
The owner put him onto Florida statute 790.15, which lists the conditions under which one may not legally discharge a firearm in the state. Turns out there aren't many. You may not shoot in any public place or on the right-of-way of any paved public road, highway or street, over any road, highway or occupied premises, or recklessly or negligently at your own home. Otherwise, let 'er rip.
There are no mandatory safety requirements. Indeed, the language about recklessness and negligence ... What happens when someone doesn't take the precautions the law says he doesn't have to bother with? For what it's worth, ... they're trying to make it legal to take one to church. So this isn't just Florida. It's America. We live in states of insanity.
As it happens, I have been corresponding with a reader who wrote me with what I regarded as promising ideas for moving the gun-rights argument forward. They included mandatory gun-safety training and mandatory liability insurance. The dialogue faltered on his contention that he needs his gun because crime is spiraling out of control, and the country is not as safe as it was 20 years ago.
This, of course, is false: crime is at historic lows. In 1993, according to the FBI, the violent-crime rate was 747.1 per 100,000 people. In 2012, the most recent year for which figures are available, it was 386.9. Almost 10,000 fewer people were murdered in 2012 than in 1993.
My reader was impressed with none of this. Forget stats, he said, talk to victims. If I did, I'd learn that road rage and knockout incidents are way up and that nightly, there are home invasions, robberies, stabbings and, ahem, shootings.
His insistence on perception over fact is emblematic of the nation we've become, so terrified by local TV news and its over-reportage of street crime that we think every shadow has eyes and we need guns in school, the bar, the movies, church. Until some of us get over this media-driven paranoia, even promising ideas for ending the guns impasse are doomed.
So I will close with some words of advice to anyone thinking of visiting or living in Florida or any other state of American insanity. One word, actually: Duck.
Leonard Pitts is a columnist for The Miami Herald, 1 Herald Plaza, Miami, FL 33132. Readers may contact him via email at lpitts@miamiherald.com.

"The trouble with the profit system has always been that it was highly unprofitable to most people." E.B. White, 1944

Trigger happy in the gunshine state

ACCRUAL BUSINESS
In bank's farewell, a story bigger than Citrus
There's one less bank in Citrus County. CenterState Bank, one of the 20 banks doing business in the county, will close its Crystal River and Inverness locations in April. David Donato, CenterState's community president for Citrus and Hernando counties, characterized the retrenchment by saying "Some markets are more depressed than others."
We're not sure that's entirely true: Citrus may not be booming, but there are plenty of banks thriving within the county. Saturation, and not depression, is the more likely culprit.
The closings coincided with internal restructuring the bank hopes will cut costs and make it a more attractive lender to larger spenders. Indeed, as it gets smaller, CenterState is doing its best to get bigger. The bank's two Citrus branches were among eight shuttered across the state, and yet Thursday the bank announced its acquisition of First Southern Bancorp Inc., which has 17 branches scattered throughout central, northeast and southeast Florida.
Rather than being a sign of Citrus County's continuing economic decline, the broader story in CenterState's departure is the story of our time: a small player trying to find its footing in a rapidly shifting market dominated by multinational corporations amid a sea change in the way business is done.
The banking sector looked much different when CenterState emerged, and its competition has only consolidated, leaving it up against global conglomerates that are too big to fail and have, it must seem, unlimited resources to devote to courting customers.
The loss of the bank's presence in Citrus County is unfortunate, and should serve as a reminder that economic development needs to be at the forefront of the community conversation. Nevertheless, we're confident that CenterState's Citrus customers will be well served by whichever competitor is fortunate enough to earn their business.
THE ISSUE: CenterState Bank pulls out of Citrus County.
OUR OPINION: A casualty, but not a catastrophe.

Build for animals and homeless
I see no reason why Citrus County cannot build a new home for the animals, a new shelter, and build a home for the homeless. There's no reason why we can't do both. We have a lot of retirees with good skills that would help do this.

What's new about that?
I just saw on TV where Florida is one of the first states in the United States that is going to allow driverless cars, that is, the new computer-driven cars where the computer drives the car. So Florida is going to be one of the first for driverless cars. I'm like, what else is new?

Tell me the weather!
Can somebody please tell me what the temperature is in Homosassa Springs, Homosassa Springs, Fla.? We have gotten a new channel for the weather and it stinks. It's every place else except where we were. Give us back our channel. You've taken away a lot of things. We've got Puerto Rican channels now, Mexican channels. What are you doing to our television? We shouldn't even have television anymore.

They should all suffer
Today is Friday, Jan. 17, and I'm reading in the Chronicle about a lawyer who was upset because a killer illegally suffered during his execution. Well, who speaks for the victim? Who speaks for the victim's family, how they have suffered all these years?
I really don't care if he suffered or not. In fact, I hope they all suffer when they get executed. They deserve it.

An eye for an eye
I read a small article that a lawyer in Ohio said that a condemned killer that he represented suffered during his execution. Gee, I wonder if anybody worried about the person that this killer murdered. Did that person suffer? An eye for an eye, a tooth for a tooth. If he suffered, tough. He got what he deserved.

Good news
A condemned killer in Ohio suffered during his execution, according to witnesses, and I was certainly glad to hear that.
Hot Corner: EXECUTION

FOOD PROGRAMS
Citrus County Veterans Coalition: 9 a.m. to 1 p.m. Tuesdays, 1039 N. Paul Drive, Inverness. Open to Citrus County veterans and their family members in need. Call 352-400-8952.
North Oak Baptist Church Food Pantry: 11:30 a.m. to 1 p.m. the last Thursday monthly. Serving Citrus Springs, Dunnellon and Beverly Hills. Call 352-746-1500.
Food is distributed weekly; families are only eligible for food once a month. Call 352-628-9087 or 352-302-9925.
Hernando Seventh-day Adventist Church, 1880 N. Trucks Ave., Hernando: 10 a.m. to noon the second and fourth Tuesdays monthly. Call 352-212-5159.

BLOOD DRIVES
LifeSouth bloodmobile schedule for February. The Gulf-to-Lake Highway donor center is open from 8 a.m. to 4:30 p.m. weekdays (6:30 p.m. Wednesdays), 8 a.m. to 1:30 p.m. Saturdays, and closed Sundays.
1 to 5 p.m. Monday, Feb. 3: Seven Rivers Regional Medical Center, 6201 N. Suncoast Blvd., Crystal River.
10 a.m. to noon Monday, Feb. 3: Walmart Supercenter, 1936 N. Lecanto Highway, Lecanto.
10 a.m. to 5 p.m. Tuesday, Feb. 4: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
11 a.m. to 4 p.m. Wednesday, Feb. 5: Love Chevrolet, 2209 State Road 44 W., Inverness.
8 a.m. to 2 p.m. Friday, Feb. 7: Citrus High School, 600 W. Highlands Blvd., Inverness.
9 a.m. to 3 p.m. Saturday, Feb. 8: Disabled American Veterans Chapter 158, 1801 N.W. U.S. 19, Crystal River.
8:30 a.m. to 1 p.m. Sunday, Feb. 9: St. Scholastica Catholic Church, 4301 W. Homosassa Trail, Lecanto.
2 to 5 p.m. Sunday, Feb. 9: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
11 a.m. to 6 p.m. Monday, Feb. 10: Walmart Supercenter, 1936 N. Lecanto Highway, Lecanto.
8:30 to 11:30 a.m. Tuesday, Feb. 11: Stoneridge Landing Clubhouse, Inverness.
12:30 to 4 p.m. Tuesday, Feb. 11: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
10 a.m. to 5 p.m. Wednesday, Feb. 12: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
9 a.m. to 5 p.m. Thursday, Feb. 13: Citrus Memorial Health System, 502 Highland Blvd., Inverness.
9 a.m. to 5 p.m. Friday, Feb. 14: Citrus Memorial Health System, 502 Highland Blvd., Inverness.
5 to 7 p.m. Saturday, Feb. 15: Walmart Supercenter, 1936 N. Lecanto Highway, Lecanto.
Noon to 4 p.m. Saturday, Feb. 15: American Legion Post No. 155, 6585 W. Gulf-to-Lake Highway, Crystal River.
8:30 a.m. to 12:30 p.m. ...
3 to 6 p.m. Wednesday, Feb. 19: West Citrus Elks Lodge No. 2693, 7890 W. Grover Cleveland Blvd., Homosassa.
11 a.m. to 2 p.m. Wednesday, Feb. 19: Walmart Supercenter, 3826 S. Suncoast Blvd., Homosassa.
8 a.m. to 2 p.m. Thursday, Feb. 20: Crystal River High School, 3195 Crystal River High Drive, Crystal River.
10 a.m. to 5 p.m. Friday, Feb. 21: Village Cadillac-Toyota, 2431 S. Suncoast Blvd., Homosassa.
10 a.m. to 5 p.m. Friday, Feb. 21: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
9:30 a.m. to 3:30 p.m. Saturday, Feb. 22: Howards Flea Market, 6373 S. Suncoast Blvd., Homosassa.
7:45 a.m. to 12:30 p.m. Sunday, Feb. 23: First United Methodist Church of Homosassa, 8831 W. Bradshaw St.
1:30 to 3:30 p.m. Sunday, Feb. 23: Walmart Supercenter, 3826 S. Suncoast Blvd., Homosassa.
1 to 4 p.m. Monday, Feb. 24: Arbor Trail Rehab & Skilled Nursing, 611 Turner Camp Road, Inverness.
10 a.m. to noon Monday, Feb. 24: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
9 a.m. to 1 p.m. Tuesday, Feb. 25: Citrus County Tax Collector's Office, 210 N. Apopka Ave., Inverness.
2 to 5 p.m. Tuesday, Feb. 25: Walmart Supercenter, 2461 W. Gulf-to-Lake Highway, Inverness.
Noon to 4 p.m. Wednesday, Feb. 26: Lowe's, 2301 E. Gulf-to-Lake Highway, Inverness.
5 to 8 p.m. Wednesday, Feb. 26: VFW Post 10087, West Vet Lane, Beverly Hills.
Noon to 5 p.m. Thursday, Feb. 27: Sumter Electric Cooperative, U.S. 301 and Sumter County Road 471, Sumterville.
5 to 6 p.m. Friday, Feb. 28: 5 Points of Life Kids Marathon, 3810 W. Educational Path, Lecanto.

Ukrainian president to end sick leave
Associated Press
KIEV, Ukraine: Ukraine's president will return today from a short sick leave that had sparked a guessing game over whether he was taking himself out of action in preparation to step down or for a crackdown on widespread antigovernment protests.
Viktor Yanukovych's office made the announcement about the president's return the same day as protesters seeking his resignation held one of their largest gatherings in recent weeks. About 20,000 people assembled at the main protest site in Kiev's central square on Sunday. City hall, which protesters have seized, is being used as an operations center and dormitory, key to supporting the extensive protester tent camp on the nearby Independence Square.
Associated Press
Riot police officers block a street Sunday in front of barricades and protesters at the monument to Viacheslav Chornovil, a prominent politician in Ukraine and a former Soviet political prisoner, in central Kiev, Ukraine.

Parade
Associated Press
Lauren Tandy, 5, left, goes nose to nose with her St. Berdoodle Capone, 1, after passing the judging stand Sunday in the Doggie Fashion Parade in Detroit.

Nation BRIEFS
Religious leaders, farmers pray: RENO, ...
Police detain dead girl's mom: NAPA, Calif. A Northern California woman wanted for questioning in the death of her 3-year-old daughter was detained along with her boyfriend Sunday at a San Francisco Bay Area commuter train station and turned over to authorities who are investigating the child's death. The girl was found dead in her bed by officers conducting a child welfare check on Saturday afternoon. The child, whom police have not yet named, showed signs of having been sexually assaulted and blunt force trauma, Napa police Lt. Debbie Peecock said.
Steroid use high among gay boys: CHICAGO Gay and bisexual teen boys use illicit steroids at a rate almost six times higher than do straight kids, a dramatic disparity that points up a need to reach out to this group, researchers say. Reasons for the differences are unclear, the study authors said.
From wire reports

NATION & WORLD
Voting
Associated Press
A man casts his vote at a polling station Sunday during the presidential election in Panchimalco, on the outskirts of San Salvador, El Salvador.
Syrian air raids kill dozens -- BEIRUT Syrian government helicopters and warplanes unleashed a wave of airstrikes on more than a dozen opposition-held neighborhoods in the northern city of Aleppo on Sunday, firing missiles and dropping crude barrel bombs in a ferocious attack that killed at least 36 people, including 17 children, activists said. Aleppo has been a key battleground in Syria's civil war since rebels swept into the city in mid-2012 and wrested most of the eastern and southern neighborhoods from the government.

PM: Boycotts won't hurt Israel -- ... The comments and the aggrieved Israeli response led the main TV news shows Sunday, signaling a growing concern here that the world will use economic pressure to extract concessions.

New Saudi law alarms activists -- DUBAI ... the kingdom's ruling Al Saud family firmly in control amid the demands for democratic reform that have grown louder since the Arab Spring protests that shook the region in 2011 and toppled longtime autocrats.

Moderate quake hits southern Iran -- TEHRAN, Iran Iranian state TV said a magnitude 5.5 earthquake has jolted a sparsely populated area in the country's south. The report said the quake hit Sunday evening in the district of Goharan, about 750 miles southeast of the capital, Tehran. World BRIEFS, from wire reports.

Thai elections peaceful; crisis far from over -- Associated Press. BANGKOK Thailand held nationwide elections without bloodshed Sunday despite widespread fears of violence. But the country's ... polls ... hanging by a thread. Television stations, which normally broadcast electoral results, were reduced to projecting graphics not of party victories and losses, but of which constituencies were open or closed.

Top Republicans say they stand by Christie -- Associated Press. TRENTON, N.J. High-profile Republicans were adamant Sunday that New Jersey Gov. Chris Christie should not resign from his post as chairman of the Republican Governors Association following a former ally's ... as the state hosts the Super Bowl.
Also Sunday, a member of Christie's administration who was subpoenaed by lawmakers investigating the lane closings confirmed she had resigned. Christina Genovese Renna left the governor's office ... Christie didn't respond Saturday when some spectators booed him at an appearance in New York City's Times Square. He planned to watch Sunday's game with his family from a luxury box at MetLife Stadium. Giuliani, appearing on CBS' "Face the Nation," took aim at the credibility of two figures central to the scandal: John Wisniewski, who's leading the investigative probe, and David Wildstein, the former Christie loyalist who, as an executive at the Port Authority of New York and New Jersey last year, ordered the lane closures after receiving Kelly's email, as someone with less than pure motives. He said Wildstein wants somebody else to pay his legal bills and he can't ... the town's Democratic mayor. The U.S. Attorney's office is also investigating. On Friday, Wildstein's lawyer wrote a letter to the Port Authority saying evidence exists that Christie knew about the traffic jams in Fort Lee as they happened. He did not disclose any evidence in the letter.

Associated Press photo: A voter holds her identification and the chains that held the gate of the polling station closed as she demands the right to vote Sunday during general elections in Bangkok, Thailand.

Basketball/B2; Scoreboard/B3; Auto racing/B3; Golf/B4; Puzzles/B5; Comics/B6; Classifieds/B7; NHL, tennis/B7. USF falls short in upset bid at Cincinnati./B2. SPORTS, Section B.

SUPER BOWL XLVIII: Seahawks 43, Broncos 8.

Associated Press photo: Denver's Demaryius Thomas (88) is tackled by Seattle's K.J. Wright (50), Bobby Wagner (54) and Mike Morgan after making a reception Sunday during the first half of Super Bowl XLVIII in East Rutherford, N.J. The NFL's top-ranked defense dominated the No. 1 offense in the league, as the Seahawks won their first Super Bowl title, 43-8.
Seahawks throttle Broncos -- Associated Press. EAST RUTHERFORD, N.J. ... Among the biggest plays in Seattle's dominance were a 69-yard interception return touchdown by linebacker Malcolm Smith to make it 22-0, and Percy Harvin's sensational 87-yard kickoff runback to open the second half. Smith was the game's ... Manning's ability to win the biggest games.

Associated Press photo: Seattle's Percy Harvin (11) runs from Denver's David Bruton (30) while returning a kickoff 87 yards for a touchdown Sunday to open the second half of Super Bowl XLVIII in East Rutherford, N.J.

Associated Press photo: Seattle outside linebacker Malcolm Smith reacts Sunday as he returns an interception for a touchdown against the Denver Broncos during the first half of Super Bowl XLVIII in East Rutherford, N.J. Smith was named MVP of the game.

Seattle LB Smith earns MVP award -- Associated Press. EAST RUTHERFORD, N.J. Malcolm Smith always was ready to step in when the Seattle Seahawks needed him. Now he's ... See SEAHAWKS/Page B4. See MVP/Page B4.

BASKETBALL

No. 13 Cincinnati rallies to top USF 50-45; another loss on the road; Michigan, Pitt suffer upset losses -- Associated Press. CINCINNATI Sean Kilpatrick wasn't feeling good. His shot showed the symptoms. The American Athletic Conference's ... Kilpatrick's lead. "He's going to come and perform every night, sick or not sick," said forward Justin Jackson, who added 15 points. "It's what I expected out of him." Cincinnati trailed by three points with 8 minutes left. Kilpatrick took over the game and extended Cincinnati's best start since the 2001-02 season, when it was in Conference USA. The senior guard made six free throws and two driving lay-ins, scoring 10 of Cincinnati's final 12 points. Cincinnati survived the close call at home, where it has won 17 in a row, including 15 this season. Kilpatrick was sick the past two days, missing practice on Saturday. He still wasn't feeling very well on Sunday. It's ... He now stands at 1,891 points on Cincinnati's career scoring list.
Oscar Robertson scored 2,973, and Steve Logan is second at 1,985. "The guy's going to be, knock on wood, the second guy in the history of this program to score over 2,000 points," coach Mick Cronin said. "He's ..."

Indiana 63, No. 10 Michigan 52 -- BLOOMINGTON, Ind. Yogi Ferrell scored 27 points, hitting seven 3-pointers in eight tries, to lead unranked Indiana to a 63-52 upset of No. 10 Michigan.

Virginia 48, No. 18 Pittsburgh 45 -- PITTSBURGH Malcolm Brogdon made a last-second 3-pointer to give Virginia a 48-45 victory against No. 18 Pittsburgh.

Associated Press photo: South Florida guard Corey Allen Jr. (4) drives Sunday between Cincinnati guard Kevin Johnson, left, and forward Justin Jackson (5) in the second half in Cincinnati.

No. 23 FSU falls to Wake Forest again -- Associated Press. CHAPEL HILL, N.C. Adrienne Motley scored a career-high 27 points as Miami upset No. 6 North Carolina 83-80. Keyona Hayes added 14 points, including a crucial second-chance layup with a minute to go, for the Hurricanes (12-10, 4-5 Atlantic Coast Conference), who snapped a three-game losing streak. Xylina McDaniel and Diamond DeShields had 18 points each for the Tar Heels (17-5, 5-3), who fell behind by 19 in the first half after a 22-4 run by Miami.

No. 2 Notre Dame 88, No. 3 Duke 67 -- DURHAM, N.C. Kayla McBride had 23 points and 11 rebounds, and No. 2 Notre Dame remained unbeaten by routing No. 3 Duke 88-67, the Blue Devils' first ACC loss at home since 2008. Jewell Loyd scored 17 points and Lindsay Allen and Natalie Achonwa had 15 apiece for the Fighting Irish (21-0, 8-0). They claimed sole possession of first place by bringing a decisive end to the Blue Devils' 42-game winning streak in home conference games. Tricia Liston scored 23 points for Duke (21-2, 8-1).

No. 4 Stanford 79, No. 21 California 64 -- BERKELEY, Calif. Chiney Ogwumike had 29 points, eight rebounds and four assists and No.
4 Stanford ran its winning streak to 20 games by beating 21st-ranked California 79-64 in the second meeting between the rivals in a four-day span. Amber Orrange added 13 points for the Cardinal (21-1, 10-0 Pac-12), whose only loss came Nov. 11 at No. 1 Connecticut. Brittany Boyd had 20 points, six rebounds and six assists for California (14-7, 6-4).

No. 5 Louisville 79, South Florida 59 -- LOUISVILLE, Ky. Tia Gibbs had 19 points, hitting five 3-pointers, to help No. 5 Louisville win its 15th straight game with a 79-59 victory over South Florida.

No. 7 South Carolina 78, Missouri 62 -- COLUMBIA, S.C. Tiffany Mitchell scored 20 points and No. 7 South Carolina reached 20 victories for the third straight season with a 78-62 win over Missouri. Bri Kulas had 21 points to lead the Tigers (14-8, 3-6).

No. 8 Maryland 89, Syracuse 64 -- SYRACUSE, N.Y. Freshman guard Lexie Brown scored a season-high 31 points, going 7 of 8 from beyond the arc, and No. 8 Maryland beat Syracuse 89-64 to snap a three-game losing streak. The Orange (16-6, 5-4 Atlantic Coast Conference) dropped to 0-4 all-time against Maryland (17-4, 5-3), including a 77-62 loss two weeks ago.

No. 10 Tennessee 64, Alabama 54 -- TUSCALOOSA, Ala. Cierra Burdick scored 21 points and No. 10 Tennessee rallied to beat Alabama 64-54. Despite trailing 26-18 at halftime, the Lady Vols (18-4, 7-2 Southeastern Conference) were able to outscore the Crimson Tide 46-28 in the second half. Daisha Simmons led Alabama (10-12, 3-6 SEC) with 18 points, and Shafontaye Myers added 11.

No. 12 Penn State 79, Northwestern 75 -- CHICAGO Maggie Lucas scored 19 points and Ariel Edwards added 15 as No. 12 Penn State held off Northwestern 79-75. Dara Taylor, Peyton Whitted and Talia East each scored 11 for the Lady Lions (17-4, 8-1 Big Ten). Maggie Lyon led Northwestern (14-8, 4-5) with 26.

No. 13 Kentucky 63, No. 14 LSU 56 -- LEXINGTON, Ky. Bria Goss scored 11 points, including two clutch free throws with 10.8 seconds remaining, to lift No.
13 Kentucky to a 63-56 win over 14th-ranked LSU at Memorial Coliseum. The Wildcats (17-5, 5-4 Southeastern Conference) rebounded from Thursday's 58-56 setback at Georgia. Raigyne Moncrief scored 14 points in the second half and led LSU (17-5, 6-3) with 19 points.

No. 15 Arizona St. 97, Oregon 94 -- TEMPE, Ariz. Joy Burke had 22 points and 15 rebounds and Promise Amukamara scored 21 as No. 15 Arizona State held off Oregon 97-94. Katie Hempen added 16 points, Elisha Davis scored 11 and Deja Mann had 10 assists for Arizona State (19-3, 8-2). Freshman Chrishae Rowe scored 39 points to lead the Ducks (12-9, 3-7).

No. 16 Vanderbilt 71, No. 17 Texas A&M 69 -- NASHVILLE, Tenn. Morgan Batey made two free throws with 2.2 seconds to go to lift No. 16 Vanderbilt to a 71-69 victory against No. 17 Texas A&M. Batey finished with 17 points. Christina Foggie added 14 points for the Commodores (17-5, 6-3 SEC). Gilbert had 26 points and 12 rebounds to lead the Aggies (17-6, 7-2).

Michigan St. 89, No. 19 Purdue 73 -- EAST LANSING, Mich. Aerial Powers had 19 points, 11 rebounds and five assists to lead Michigan State past No. 19 Purdue 89-73. Jasmine Hines had 16 points, Klarissa Bell added 15 and Tori Jankoska scored 14 for MSU (15-7, 7-2 Big Ten). Whitney Bays led the Boilermakers (15-7, 5-5) with 22 points and six rebounds.

Wake Forest 78, No. 23 Florida State 54 -- WINSTON-SALEM, N.C. Chelsea Douglas made five 3-pointers en route to 32 points as Wake Forest beat No. 23 Florida State for the third straight time, 78-54. Dearica Hamby battled through foul trouble to finish with her 18th double-double of the season, 22 points and 10 rebounds. Freshman Jill Brunori grabbed 15 rebounds for Wake Forest (12-10, 3-6 Atlantic Coast Conference). Natasha Howard had 22 points, six rebounds and five blocks for Florida State (15-6, 3-5).

Associated Press photo: Miami's Adrienne Motley (23) goes to the basket Sunday against No.
6 North Carolina's Jessica Washington (24), Danielle Butts (10) and Allisha Gray, right, in Chapel Hill, N.C. The Hurricanes won 83-80.

Rondo scores 19 for Boston in 96-89 win over Magic -- Associated Press. BOSTON Rajon Rondo had season highs with 19 points and 10 assists, and the Boston Celtics snapped a four-game losing streak in a matchup of two of the league's bottom teams. Boston entered Sunday winless in six games since Rondo's ... Boston's lead to 75-68 with 9:01 left to play. E'Twaun Moore and Davis added layups during a 10-2 run that pulled the Magic within 77-76 on a pair of free throws by Kyle O'Quinn.

Associated Press photo: Boston forward Jeff Green shoots at the basket Sunday as Orlando forward Tobias Harris (12) tries to block in the first quarter in Boston. The Celtics won 96-89.

SCOREBOARD

On the AIRWAVES: TODAY'S SPORTS

MEN'S COLLEGE BASKETBALL: 7 p.m. (ESPN) Notre Dame at Syracuse; 7 p.m. (ESPNU) Hampton at Morgan State; 7 p.m. (FS1) Xavier at Villanova; 9 p.m. (ESPN) Iowa State at Oklahoma State; 9 p.m. (ESPNU) Prairie View A&M at Alabama A&M; 9 p.m. (FS1) Georgetown at DePaul; 12 a.m. (ESPNU) Notre Dame at Syracuse (same-day tape); 3 a.m. (ESPNU) Iowa State at Oklahoma State (same-day tape).

NBA BASKETBALL: 7 p.m. (FSNFL) Orlando Magic at Indiana Pacers; 7:30 p.m. (SUN) Detroit Pistons at Miami Heat; 8 p.m. (NBA) San Antonio Spurs at New Orleans Pelicans.

WOMEN'S COLLEGE BASKETBALL: 7 p.m. (ESPN2) Baylor at Oklahoma.

NHL HOCKEY: 1 p.m. (NHL) Detroit Red Wings at Washington Capitals (taped); 3 p.m. (NHL) Winnipeg Jets at Montreal Canadiens (taped); 7:30 p.m. (NBCSPT) Colorado Avalanche at New Jersey Devils.

Note: Times and channels are subject to change at the discretion of the network. If you are unable to locate a game on the listed channel, please contact your cable provider.

Prep CALENDAR: TODAY'S PREP SPORTS. GIRLS TENNIS: 4 p.m. Citrus at The Villages.

NFL Playoffs. At East Rutherford, N.J.: Seattle 43, Denver 8.

Seahawks 43, Broncos 8. Score by quarters: Seattle 8 14 14 7 -- 43; Denver 0 0 8 0 -- 8.

First Quarter: Sea -- Avril safety, 14:48.
Sea -- FG Hauschka 31, 10:21. Sea -- FG Hauschka 33, 2:16.
Second Quarter: Sea -- Lynch 1 run (Hauschka kick), 12:00. Sea -- Smith 69 interception return (Hauschka kick), 3:21.
Third Quarter: Sea -- Harvin 87 kickoff return (Hauschka kick), 14:48. Sea -- Kearse 23 pass from Wilson (Hauschka kick), 2:58. Den -- D.Thomas 14 pass from Manning (Welker pass from Manning), :00.
Fourth Quarter: Sea -- Baldwin 10 pass from Wilson (Hauschka kick), 11:45. A -- ,529.

Team statistics (Sea/Den): First downs 17/18; Total net yards 341/306; Rushes-yards 29-135/14-27; Passing 206/279; Punt returns 0-0/1-9; Kickoff returns 2-107/5-105; Interceptions ret. 2-71/0-0; Comp-att-int 18-26-0/34-49-2; Sacked-yards lost 0-0/1-1; Punts 1-45.0/2-30.0; Fumbles-lost 0-0/4-2; Penalties-yards 10-104/5-44; Time of possession 31:53/28:07.

INDIVIDUAL STATISTICS. RUSHING -- Seattle, Harvin 2-45, Lynch 15-39, Wilson 3-26, Turbin 9-25. Denver, Moreno 5-17, Anderson 2-9, Ball 6-1, Manning 1-0. PASSING -- Seattle, Wilson 18-25-0-206, Jackson 0-1-0-0. Denver, Manning 34-49-2-280. RECEIVING -- Seattle, Baldwin 5-66, Kearse 4-65, Tate 3-17, Willson 2-17, Lockette 1-19, Miller 1-10, Robinson 1-7, Harvin 1-5. Denver, D.Thomas 13-118, Welker 8-84, J.Thomas 4-27, Moreno 3-20, Tamme 2-9, Ball 2-2, Anderson 1-14, Decker 1-6. MISSED FIELD GOALS -- None.

Super Bowl champs:
2014 Seattle (NFC) 43, Denver (AFC) 8
2013 Baltimore (AFC) 34, San Francisco (NFC) 31
2012 N.Y. Giants (NFC) 21, New England (AFC) 17
2011 Green Bay (NFC) 31, Pittsburgh (AFC) 25
2010 New Orleans (NFC) 31, Indianapolis (AFC) 17
2009 Pittsburgh (AFC) 27, Arizona (NFC) 23
2008 N.Y. Giants (NFC) 17, New England (AFC) 14
2007 Indianapolis (AFC) 29, Chicago (NFC) 17
2006 Pittsburgh (AFC) 21, Seattle (NFC) 10
2005 New England (AFC) 24, Philadelphia (NFC) 21
2004 New England (AFC) 32, Carolina (NFC) 29
2003 Tampa Bay (NFC) 48, Oakland (AFC) 21
2002 New England (AFC) 20, St. Louis (NFC) 17
2001 Baltimore Ravens (AFC) 34, N.Y. Giants (NFC) 7
2000St.
Louis (NFC) 23, Tennessee (AFC) 16 1999Denver (AFC) 34, Atlanta (NFC) 19 1998Denver (AFC) 31, Green Bay (NFC) 24 1997Green Bay (NFC) 35, New England (AFC) 21 1996Dallas (NFC) 27, Pittsburgh (AFC) 17 1995San Francisco (NFC) 49, San Diego (AFC) 26 1994Dallas (NFC) 30, Buffalo (AFC) 13 1993Dallas (NFC) 52, Buffalo (AFC) 17 1992Washington (NFC) 37, Buffalo (AFC) 24 1991N.Y. Giants (NFC) 20, Buffalo (AFC) 19 1990San Francisco (NFC) 55, Denver (AFC) 10 1989San Francisco (NFC) 20, Cincinnati (AFC) 16 1988Washington (NFC) 42, Denver (AFC) 10 1987N.Y. Giants (NFC) 39, Denver (AFC) 20 1986Chicago (NFC) 46, New England (AFC) 10 1985San Francisco (NFC) 38, Miami (AFC) 16 1984L.A. Raiders (AFC) 38, Washington (NFC) 9 1983Washington (NFC) 27, Miami (AFC) 17 1982San Francisco (NFC) 26, Cincinnati (AFC) 21 1981Oakland (AFC) 27, Philadelphia (NFC) 10 1980Pittsburgh (AFC) 31, L.A. Rams (NFC) 19 1979Pittsburgh (AFC) 35, Dallas (NFC) 31 1978Dallas (NFC) 27, Denver (AFC) 10 1977Oakland (AFC) 32, Minnesota (NFC) 14 1976Pittsburgh (AFC) 21, Dallas (NFC) 17 1975Pittsburgh (AFC) 16, Minnesota (NFC) 6 1974Miami (AFC) 24, Minnesota (NFC) 7 1973Miami (AFC) 14, Washington (NFC) 7 1972Dallas (NFC) 24, Miami (AFC) 3 1971Baltimore Colts (AFC) 16, Dallas (NFC) 13 1970Kansas City (AFL) 23, Minnesota (NFL) 7 1969N.Y. Jets (AFL) 16, Baltimore Colts (NFL) 7 1968Green Bay (NFL) 33, Oakland (AFL) 14 1967Green Bay (NFL) 35, Kansas City (AFL) 10Citrus County SpeedwayRace finishes for Feb. 
1 Non-Winged Sprints No.DriverHometown 22Aaron PierceIndianapolis 21Jimmy Alvis Sr.Seffner 11Joey AgvilarTampa 5Mickey KempgensTampa 0Dude TeateLeesburg 19Keith ButlerRiverview 55Tommy NicholsTampa 41Ty DeCaireWesley Chapel 18Shane ButlerBushnell 33John SanburnEustis 25Steve HeislerPlant City 69Rick VoiseyWesley Chapel 92Dave RetzlaffBrooksville 4Jason BradfordAvon Park 4x4Jimmy MiltnerAntioch 81Herb Neumann Jr.Inverness 44Bill PettijohnLand O Lakes 63Terry TaylorDunedin 31Travis BliemeisterVenice 14Richie CorrPlant City 75Russell JonesLand O Lakes 17Todd DonaldsonMulberry 7Steven BradleyInverness 3Garrett GreenValrico 8Kurt TaylorBrandon 84Johnny GilbertsonDover Pro Trucks No.DriverHometown 59xBecca MonopoliLakeland 4Dustin DunnJupiter 7Dylan MartinLakeland 17Nicholas MalvertySpring Hill 33Red (John) VannWesley Chapel 28Michael LaplantTampa Sportsman No.DriverHometown 13Aaron WilliamsonLakeland 11Charlie BrownLakeland 4Jay WitfothBeverly Hills 66Andy NichollsOrlando 59John InmanTampa Mod Mini Stocks No.DriverHometown 7Clint FoleyDunnellon 34Kevin HarrodFloral City 44Michael LawhornClermont 01Johnny SinerHomosassa 24Phil EdwardsCrystal River 47Richard KuhnOcala 99Leroy MooreHernando Beach 94Keith RoggenLakeland 98James EllisBrooksville Mini Stocks No.DriverHometown 20Shannon KennedySummerfield 24Tim ScaliseLutz 11Jerry DanielsWeirsdale 73Jason TerryBelleview 22Mark PattersonWebster 33Bill RyanBushnell 51Buddy MallorySummerfield 199Tasha LambertBeverly Hills Pro Hornet Division No.DriverHometown 99Raymond VannWesley Chapel 19Mike LaceyTampa 6Jeff Lacey 12Rocky ReinholdTampa 98Marvin ArmstrongWildwood 00Willie LaceyWesley ChapelNBA standingsEASTERN CONFERENCE Atlantic Division WLPctGB Toronto2522.532 Brooklyn2025.4444 New York1928.4046 Boston1633.32710 Philadelphia1533.31310 Southeast Division WLPctGB Miami 3313.717 Atlanta 2521.5438 Washington2323.50010 Charlotte2128.42913 Orlando1336.26521 Central Division WLPctGB Indiana 3610.783 Chicago2323.50013 Detroit 
1927.41317 Cleveland1631.34020 Milwaukee839.17028 WESTERN CONFERENCE Southwest Division WLPctGB San Antonio3413.723 Houston3217.6533 Memphis2620.5657 Dallas2721.5637 New Orleans2026.43513 Northwest Division WLPctGB Oklahoma City3811.776 Portland3413.7233 Minnesota2324.48914 Denver 2223.48914 Utah 1631.34021 Pacific Division WLPctGB L.A. Clippers3416.680 Phoenix2918.6173 Golden State2919.6044 L.A. Lakers1631.34016 Sacramento1532.31917 Sundays Game Boston 96, Orlando 89 Todays.NHL standingsEASTERN CONFERENCE Atlantic Division GPWLOTPtsGFGA Boston543516373164119 Tampa Bay553218569162137 Toronto573021666170176 Montreal562921664137139 Detroit5524191260144158 Ottawa5524211058158176 Florida552127749133174 Buffalo541531838105161 Metropolitan Division GPWLOTPtsGFGA Pittsburgh553815278176132 N.Y. Rangers563023363145140 Columbus552823460163154 Philadelphia562723660152163 Carolina542520959137151 Washington562522959164172 New Jersey5623211258132140 N.Y. Islanders572128850159191 WESTERN CONFERENCE Central Division GPWLOTPtsGFGA Chicago5733101480200158 St. Louis543712579185125 Colorado543514575165142 Minnesota572921765140144 Dallas 552521959158160 Nashville572523959142172 Winnipeg572725559161166 Pacific Division GPWLOTPtsGFGA Anaheim574012585189139 San Jose563515676168134 Los Angeles573021666134122 Vancouver562720963142147 Phoenix5526191062159164 Calgary 552127749132173 Edmonton571833642147194 NOTE: Two points for a win, one point for overtime loss. Sundays Games Washington 6, Detroit 5, OT Winnipeg 2, Montreal 1 Todays. Florida LOTTERY Here are the winning numbers selected Sunday in the Florida Lottery: CASH 3 (early) 7 4 2 CASH 3 (late) 1 7 3 PLAY 4 (early) 9 7 5 8 PLAY 4 (late) 0 0 2 1 FANTASY 5 11 16 29 30 33 Players should verify winning numbers by calling 850-487-7777 or at. 
Saturdays winning numbers and payouts: Powerball: 5 12 15 27 38 Powerball: 7 5-of-5 PBNo winner No Florida winner 5-of-57 winners$1 million 2 Florida winners Lotto: 11 12 20 23 33 44 6-of-6No winner 5-of-652$3,021.50 4-of-61,952$59 3-of-637,974$5 Fantasy 5: 2 11 16 23 24 5-of-53 winners$93,000.93 4-of-5437$102.50 3-of-513,015$9.50MONDAY, FEBRUARY3, 2014 B3 PGA Waste Management Phoenix OpenSunday, At TPC Scottsdale, Scottsdale, Ariz., Purse: $6.2 million, Yardage: 7,152, Par: 71, Final: Kevin Stadler (500), $1,116,00065-68-67-68 268-16 Graham DeLaet (245), $545,60067-72-65-65 269-15 Bubba Watson (245), $545,60064-66-68-71 269-15 Hunter Mahan (123), $272,80066-71-65-68 270-14 Hideki Matsuyama (123), $272,80066-67-68-69 270-14 Charles Howell III (92), $207,70070-69-67-65 271-13 Brendan Steele (92), $207,70066-74-62-69 271-13 Ryan Moore (92), $207,70066-71-64-70 271-13 Harris English (80), $179,80065-67-69-71 272-12 Webb Simpson (75), $167,40068-72-67-66 273-11 Pat Perez (70), $155,00065-68-70-71 274-10 Cameron Tringale (61), $130,20071-67-69-68 275-9 John Mallinger (61), $130,20067-72-67-69 275-9 Matt Jones (61), $130,20065-65-72-73 275-9 Scott Piercy (55), $102,30067-67-75-67 276-8 Morgan Hoffmann (55), $102,30069-66-70-71 276-8 Greg Chalmers (55), $102,30065-67-71-73 276-8 Jason Kokrak (55), $102,30066-69-68-73 276-8 John Merrick (48), $63,30275-65-69-68 277-7 Michael Thompson (48), $63,30272-68-70-67 277-7 Kevin Na (48), $63,302 70-70-68-69 277-7 William McGirt (48), $63,30265-69-73-70 277-7 Justin Hicks (48), $63,30271-70-69-67 277-7 Martin Laird (48), $63,30267-68-71-71 277-7 John Rollins (48), $63,30272-67-67-71 277-7 Patrick Reed (48), $63,30267-67-71-72 277-7 Roberto Castro (48), $63,30272-69-70-66 277-7 Chris Stroud (48), $63,30270-67-68-72 277-7 Geoff Ogilvy (40), $40,30071-70-68-69 278-6 Ken Duke (40), $40,300 70-67-72-69 278-6 Bryce Molder (40), $40,30067-71-70-70 278-6 Spencer Levin (40), $40,30067-69-70-72 278-6 Nick Watney (40), $40,30069-68-68-73 278-6 
Bill Haas (36), $33,480 69-68-71-71 279-5 Jason Bohn (36), $33,48070-70-70-69 279-5 Jonas Blixt (36), $33,48068-71-72-68 279-5 Camilo Villegas (32), $27,90070-71-68-71 280-4 Gary Woodland (32), $27,90067-72-72-69 280-4 Brian Davis (32), $27,90072-69-70-69 280-4 Matt Every (32), $27,900 72-66-67-75 280-4 Ricky Barnes (32), $27,90071-67-67-75 280-4 Chris Smith (27), $21,08070-69-71-71 281-3 Phil Mickelson (27), $21,08071-67-72-71 281-3 James Driscoll (27), $21,08067-70-73-71 281-3 David Lingmerth (27), $21,08072-68-68-73 281-3 K.J. Choi (27), $21,080 71-70-69-71 281-3 Ben Crane (27), $21,080 69-69-69-74 281-3 Erik Compton (21), $15,77367-72-71-72 282-2 Ryan Palmer (21), $15,77376-64-70-72 282-2 David Lynn (21), $15,77372-66-70-74 282-2 Aaron Baddeley (21), $15,77368-70-73-71 282-2 Jhonattan Vegas (21), $15,77371-66-75-70 282-2 Brendon de Jonge (16), $14,28566-73-70-74 283-1 Robert Garrigus (16), $14,28570-70-70-73 283-1 Brian Stuard (16), $14,28573-68-69-73 283-1 Martin Kaymer (16), $14,28569-71-71-72 283-1 Kevin Streelman (16), $14,28571-68-74-70 283-1 David Hearn (12), $13,76468-70-73-73 284E Nicolas Colsaerts (12), $13,76469-68-74-73 284E J.B. Holmes (12), $13,76473-68-70-73 284E Charley Hoffman (8), $13,20670-71-69-75 285+1 Jonathan Byrd (8), $13,20668-73-69-75 285+1 Brandt Snedeker (8), $13,20670-64-72-79 285+1 Brian Gay (8), $13,206 69-71-71-74 285+1 Sang-Moon Bae (8), $13,20667-73-71-74 285+1 John Peterson (8), $13,20668-70-74-73 285+1 Kiradech Aphibarnrat (0), $12,71066-71-73-76 286+2 Fred Funk (4), $12,710 69-71-76-70 286+2 Y.E. 
Yang (1), $12,276 64-73-75-75 287+3 Mark Calcavecchia (1), $12,27670-71-71-75 287+3 Scott Langley (1), $12,27671-70-71-75 287+3 Derek Ernst (1), $12,276 72-69-72-74 287+3 Steven Bowditch (1), $12,27671-69-75-72 287+3 Ben Curtis (1), $11,842 68-72-73-75 288+4 Joe Ogilvie (1), $11,842 71-70-77-70 288+4 Chris Kirk (1), $11,656 65-73-75-76 289+5 Vijay Singh (1), $11,532 69-72-75-76 292+8European Tour Omega Dubai Desert ClassicSunday, At Emirates Golf Club (Majlis Course), Dubai, United Arab Emirates, Purse: $2.5 million, Yardage: 7,316, Par: 72, Final: Stephen Gallacher, Scotland66-71-63-72 272 Emiliano Grillo, Argentina71-67-69-66 273 Brooks Koepka, United States69-65-70-70 274 Romain Wattel, France 68-73-67-66 274 Mikko Ilonen, Finland 69-72-70-64 275 Thorbjorn Olesen, Denmark71-68-65-71 275 Robert Rock, England 67-70-68-70 275 Steve Webster, England 71-70-64-70 275 Paul Casey, England 70-72-67-67 276 Rory McIlroy, Northern Ireland63-70-69-74 276 Edoardo Molinari, Italy 65-72-68-71 276 Bernd Wiesberger, Austria70-70-68-68 276 Thomas Bjorn, Denmark 72-70-68-67 277 Darren Fichardt, South Africa69-72-66-70 277 Soren Hansen, Denmark67-71-71-68 277 Francesco Molinari, Italy 69-69-71-68 277 Brett Rumford, Australia 69-70-71-67 277 Paul Waring, England 70-70-68-69 277 Danny Willett, England 71-65-73-68 277 Jamie Donaldson, Wales69-68-70-71 278 Simon Dyson, England 69-69-73-67 278 Damien McGrane, Ireland66-70-71-71 278 Also Joost Luiten, Netherlands70-69-70-70 279 Henrik Stenson, Sweden70-67-75-68 280 Tiger Woods, United States68-73-70-71 282 Colin Montgomerie, Scotland70-70-69-74 283 Fred Couples, United States70-71-73-70 284 Paul Lawrie, Scotland 68-71-72-73 284New Zealand Womens OpenFinal Round, (a denotes amateur): Mi Hyang Lee, South Korea72-72-63 207 Lydia Ko, New Zealand 69-69-70 208 Seon Woo Bae, South Korea68-71-70 209 Beth Allen, United States 71-68-70 209 Anya Alvarez, United States70-66-73 209 Sarah Jane Smith, Australia69-77-65 211 Bree Arthur, Australia 
75-70-66 211; Marion Ricordeau, France 74-69-68 211; Nikki Campbell, Australia 72-72-68 212; Hyun Soo Kim, South Korea 74-66-72 212; Stacey Lee Bregman, South Africa 71-75-67 213; Lorie Kane, Canada 74-72-67 213; Breanna Elliot, Australia 74-70-69 213; Charley Hull, England 69-73-71 213; Marta Silva Zamora, Spain 72-70-71 213; Jing Yan, China (a) 73-69-71 213; Jessica Speechley, Australia 70-70-73 213; Kyu Jung Baek, South Korea 70-69-74 213.

NCAA Basketball (FAVORITE, LINE, UNDERDOG): at Delaware 6 Northeastern; at Villanova 11 Xavier; at Drexel 10 UNC Wilmington; at Syracuse 15 Notre Dame; Georgetown 4 at DePaul; at Oklahoma St. 7 Iowa St.; Iona 6 at Monmouth (NJ); at Manhattan 15 St. Peter's; at Rider 8 Fairfield; at Montana St. 1 Montana; at Georgia St. 13 South Alabama; at Furman Pk Samford; at Morgan St. 4 Hampton; at Alabama A&M 8 Prairie View.

NBA (FAVORITE, LINE, O/U, UNDERDOG): at Indiana 13 (192) Orlando; Portland 3 (209) at Washington; at Miami 11 (206) Detroit; at Brooklyn 10 (205) Philadelphia; New York 7 (197) at Milwaukee; at Oklahoma City 6 (189) Memphis; San Antonio 4 (192) at New Orleans; at Dallas 10 (208) Cleveland; Toronto 5 (191) at Utah; L.A. Clippers 4 (209) at Denver; Chicago 1 (195) at Sacramento.

HOCKEY, National Hockey League: CAROLINA HURRICANES -- Recalled G Cam Ward from Charlotte (AHL). COLUMBUS BLUE JACKETS -- Recalled D Tim Erixon from Springfield (AHL). DETROIT RED WINGS -- Assigned C Cory Emmerton to Grand Rapids (AHL). NASHVILLE PREDATORS -- Reassigned Fs Simon Moser and Colton Sissons to Milwaukee (AHL).

After delay, season begins at Citrus County Speedway -- SEAN ARNOLD, Correspondent. INVERNESS After Friday's rain pushed back the Citrus County Speedway's grand reopening by a day, the wait for a new season came to an end with six feature races on Saturday. While out-of-county drivers claimed each of the main events, it was Aaron Pierce, of Indianapolis, Ind., who traveled the farthest of the bunch and beat out the largest field of the night, in prevailing over 23 non-winged sprint cars in 30 laps.
Aaron Williamson (Sportsman), Clint Foley (Modified Mini Stocks) and Raymond Vann (Pro Hornets) each scored wire-to-wire feature wins after taking their respective heats. Pierce started the race in the middle of the pack and climbed to the fourth position, behind Tampa's Joey Agvilar and Mickey Kempgens and leader Jimmy Alvis Sr., of Seffner, by lap six. Kempgens and Pierce made it past Agvilar on 13, before the pair touched on 16, putting Kempgens in a spin coming out of the fourth turn. Pierce wasn't penalized, but later apologized, saying he was unintentionally at fault. Pierce grabbed the lead from Alvis, who finished second, moments before a caution on 25, and held on from there. Agvilar finished third. "(Kempgens and I) are friends," said Pierce, who expects to return to Citrus, "and he's driven my car before at the Little 500 (in Indiana). I locked both front wheels trying to keep from hitting him. I barely got him, but it was while he was cutting across." In Mini Stocks, Summerfield's Shannon Kennedy (No. 20) chased down Lutz's Tim Scalise midway through the night's opening feature. Scalise, the heat winner, went on to finish second, between feature winner Kennedy and third-place finisher Jerry Daniels of Weirsdale. Webster's Mark Patterson was in third place on lap 19 when an engine rod broke on his No. 22. Lakeland's Becca Monopoli cruised to a win over a five-car field in the 35-lap Pro Trucks feature after she pushed her way to the front with a strong move on the second lap. Monopoli, who started in third and briefly went three-wide in the early moments, reestablished her lead over Jupiter's Dustin Dunn (second-place finish) and Lakeland's Dylan Martin (third) on lap three, following the only yellow flag of the race. Foley didn't appear to have much trouble defeating seven other cars in the Mod Minis feature, but he did lose his exhaust pipe, causing exhaust fumes to leave some burns under the Dunnellon driver's right arm during the race. "It got really hot in there," he said.
Floral City's Kevin Harrod ran second the entire 25-lap race, and Clermont's Michael Lawhorn moved up a row from his starting position to get third place. Like Foley and Monopoli, Williamson's feature win brought little drama, though his last-minute decision to race at Citrus meant he and his No. 13 arrived moments before the check-in deadline. The Lakeland driver extended his pole-position lead over the first 14 laps, and regained his advantage after a couple of cautions in the second half of the five-car, 25-lap event. Fellow Lakeland driver Charlie Brown took second place, and Beverly Hills' Jay Witfoth, in placing third, was the only Citrus County driver to score a top-three finish on the night. Super Late Models headline next Saturday's racing action with a season-opening 50-lap event, while Mod Minis, Street Stocks, Pure Stocks, Mini Stocks, Pure and Street Stocks Figure 8s and a fan participation race round out the schedule.

SPORTS

Gallacher wins Dubai Desert Classic by 1 stroke -- Associated Press. ... "I didn't make any putts, so it was one of those days," McIlroy said. "I thought if I could get to 16 under it would be good enough, and it turned out that it was, as that's what Stephen got to. I just wasn't able to play well enough to get there. It was just one of those days. Anything that could go wrong, did."

Lee wins New Zealand Women's Open -- CHRISTCHURCH, New Zealand Mi Hyang Lee of South Korea shot a course-record 9-under 63 Sunday to win the Women's Open.

Associated Press photo: Stephen Gallacher reacts Sunday on the second hole during the final round of the Dubai Desert Classic in Dubai, United Arab Emirates. Gallacher shot a final-round 72 to become the first player to successfully defend the Dubai Desert Classic title.

Associated Press. SCOTTSDALE, Ariz. ... Watson's par try slid by the left side to end the tournament. Stadler's previous biggest win was in Australia in the European Tour's ... Against the No. 1 offense and defense, the D dominated.
"It's all about making history," All-Pro safety Earl Thomas said. "This was a dominant performance from top to bottom." Denver fell to 2-5 in Super Bowls, and by the end many of Manning's ... didn't ... Chicago's Devin Hester's kickoff return to open the 2007 game against Manning's ... Manning's third-down pass to Julius Thomas sailed way too high and directly to safety Kam Chancellor, giving the Seahawks the ball at Denver's ... Manning's arm as he was throwing, the ball fluttered directly to Smith, who took off down the left sideline for a 69-yard interception TD. Manning trudged to the sideline, a look of disgust on his face. That look didn't improve when, after a drive to the Seattle 19, his fourth-down pass was tipped by Chris Clemons and fell harmlessly to the Meadowlands turf. So did Denver's reputation as an unstoppable force. SEAHAWKS, Continued from Page B1: Sure did. And it was rather appropriate that a member of Seattle's league-leading D would be the MVP of the Super Bowl, considering the way the Seahawks shut down Manning and Denver's ... Seattle's NFC championship game victory over the San Francisco 49ers two weeks ago, grabbing the football after Sherman tipped it away from receiver Michael Crabtree in the end zone. And then, in the biggest game of all, Smith's ... Seattle's success this season. First and foremost, he plays defense, the unit that is the heart and soul of the team. He's ... the league's policy on performance-enhancing ... MVP, Continued from Page B1. Associated Press photo: Kevin Stadler watches Bubba Watson miss a putt on the 18th hole Sunday, making Stadler the winner of the Phoenix Open in Scottsdale, Ariz. It was the first PGA Tour victory for Stadler. Stadler wins title after Watson misses putt. PAGE 17 CITRUS COUNTY (FL) CHRONICLE ENTERTAINMENT MONDAY, FEBRUARY 3, 2014 B5. PHILLIP ALDER, Newspaper Enterprise Assn.: ... dummy's queen, hoping the lead was away from the king. So East should play his king at the first trick, confident it will win, then return the five, his original fourth-highest.
The defenders will run the suit for down one. Duck Quacks Duck Quacks Lost Gold of the Dark Ages: Revealed Brain Games Brain Games Brain Games None of the Duck Quacks Duck Quacks Brain Games None of the (NICK) 28 36 28 35 25Sponge.Sponge.Sam & WitchFull HseFull HseFull HseFull HseFull HseFull HseFriendsFriends (OWN) 103 62 103 NY ERNY ERNY ERNY ERRaising Whitley PGMoms Got GameMoms Got GameRaising Whitley PG (OXY) 44 123 Movie G Ferris Buellers Day Off (1986)Movie G (SHOW) 340 241 340 4Assault on Wall Street (2012) Dominic Purcell. Premiere. (In Stereo) R Richard Pryor: Omit the Logic (2013) NR Quality Balls: The David Steinberg Story (2013) NR Inside Comedy Billy Joel: Trust (SPIKE) 37 43 37 27 36 Kick-Ass (2010) (In Stereo) R The Fast and the Furious (2001) Vin Diesel. An undercover cop infiltrates the world of street racing. 2 Fast 2 Furious (2003) Paul Walker. Two friends and a U.S. customs agent try to nail a criminal. (STARZ) 370 271 370 Big Trouble in Little China The Incredibles (2004) Voices of Craig T. Nelson. (In Stereo) PG Iron Man 3 (2013, Action) Robert Downey Jr. (In Stereo) PG-13 Wall Street (1987) R (SUN) 36 31 36 Courtside Jones The Game 365 Heat Live! (Live) NBA Basketball Detroit Pistons at Miami Heat. From the AmericanAirlines Arena in Miami. (Live) Heat Live! (Live) Inside the Heat (N) Israeli Bask. Driven (SYFY) 31 59 31 26 29 The Adjustment Bureau (2011, Suspense) Matt Damon. PG-13 Bitten Grief (N) (In Stereo) Being Human Panic Womb (N) Lost Girl Turn to Stone (N) Bitten Grief (In Stereo) (TBS) 49 23 49 16 19SeinfeldSeinfeldSeinfeldFam. GuyFam. GuyFam. GuyFam. GuyBig BangBig BangBig BangConan (N) (TCM) 169 53 169 30 35 The Age of Innocence (1993, Drama) Daniel Day-Lewis. PG A Star Is Born (1954, Musical) Judy Garland. An actor turns to alcohol as his wife becomes a megastar. 
PG Gate of Hell (1953) NR (TDC) 53 34 53 24 26The Devils Ride Enemy Within The Devils Ride War Is Now Rods N Wheels (In Stereo) Rods N Wheels (N) (In Stereo) PG The Devils Ride (N) (In Stereo) Rods N Wheels (In Stereo) PG (TLC) 50 46 50 29 30Sister Wives Bigger & BatterCakeCakeCakeCakeHoneyHoneyCakeCake (TMC) 350 261 350 Out of Sight (1998, Crime Drama) George Clooney. (In Stereo) R The World According to Dick Cheney The life of the former vice president. MA, L,V The Reluctant Fundamentalist (2012) Riz Ahmed. (In Stereo) R (TNT) 48 33 48 31 34Castle Seconds PG (DVS) Castle The Limey PG (DVS) Castle Headhunters (In Stereo) PG Castle Undead Again PG Perception Alienation Hawaii Five-0 I Ka Wa Mamua (TOON) 38 58 38 33 Johnny TTeenAdvenRegularStevenAnnoyingKing/HillClevelandFam. GuyRickAmericanFam. Guy (TRAV) 9 106 9 44Bizarre FoodsFoodFoodBizarre FoodsBizarre FoodsHotel Impossible (N)Hotel Impossible (N) (truTV) 25 55 25 98 55LizardLizardLizardLizardLizardLizardLizardLizardFull Throttle SaloonPanic Panic (TVL) 32 49 32 34 24GriffithGriffithGilliganGilliganGilliganGilliganRaymondRaymondRaymondRaymondKingKing (USA) 47 32 47 17 18NCIS: Los Angeles Backstopped NCIS: Los Angeles The Fifth Man PG WWE Monday Night RAW (N) (In Stereo Live) PG, V NCIS: Los Angeles History PG (WE) 117 69 117 Law & Order Homesick PG Law & Order Aftershock PG CSI: Miami High Octane CSI: Miami Going, Going, Gone CSI: Miami Come As You Are CSI: Miami Backstabbers (WGN-A) 18 18 18 18 20Funny Home VideosFunny Home VideosFunny Home VideosFunny Home VideosFunny Home VideosFunny Home Videos isnt everything. If he truly did spend more time caring for your parents than the rest of you, he may deserve more than you think. Dear Annie: My spouse and I choose to abstain from alcohol. We dont well. Annie, if only people realized that the only thing that really ends up mattering in life is people, family and the relationships you build. The world would be a better, stronger place. elses. 
Start by inviting them to a gathering in your home. You dont need to serve alcohol, but you also dont have to make an issue of it. Dear Annie: I read the letter from Uncomfortable, who didnties Mailbox is written by Kathy Mitchell and Marcy Sugar, longtime editors of the Ann Landers column. Email your questions to anniesmail box@comcast.net, or write to: Annies Mailbox, c/o Creators Syndicate, 737 Third St., Hermosa Beach, CA 90254. To find out more about Annies Mailbox, visit the Creators Syndicate Web page at. ANNIES MAILBOX Bridge (Answers tomorrow) FIGHTAVOID LOCKETGOBBLE Saturdays Jumbles: Answer: Kicking the ball between the uprights to win the game was his FIELD GOALUROD BUGRY CHOSOM HERTOB Tribune Content Agency, LLC All Rights Reserved. Jumble puzzle magazines available at pennydellpuzzles.com/jumblemags Print answer here: MONDAY EVENING FEBRUARY 3,Game Night Sports Illustrated Swimsuit: 50 Years NewsJay Leno # (WEDU) PBS 3 3 14 6World News Nightly Business PBS NewsHour (N) (In Stereo) Antiques Roadshow Detroit (N) G Antiques Roadshow Eugene G POV American Promise Sons progress through private school. (N) PG % (WUFT) PBS 5 5 5 41News at 6BusinessPBS NewsHour (N)Antiques RoadshowAntiques RoadshowPOV American Promise (N) PG ( (WFLA) NBC 8 8 8 8 8NewsNightly NewsNewsChannel 8 Entertainment Ton.Hollywood Game Night (N) Sports Illustrated Swimsuit: 50 Years of Beautiful (N) (In Stereo) D NewsJay Leno ) (WFTV) ABC 20 20 20 NewsWorld News Jeopardy! (N) G Wheel of Fortune The Bachelor (N) (In Stereo) Castle Dressed to Kill (N) PG Eyewit. News Jimmy Kimmel (WTSP) CBS 10 10 10 10 1010 News, 6pm (N) Evening News Wheel of Fortune Jeopardy! 
(N) G How I Met2 Broke Girls Mike & Molly Mom (N) Intelligence The Rescue (N) 10 News, 11pm (N) Letterman ` (WTVT) FOX 13 13 13 13FOX13 6:00 News (N) (In Stereo) TMZ (N) PG The Insider (N) Almost Human Unbound (N) The Following Trust Me (N) FOX13 10:00 News (N) (In Stereo) NewsAccess Hollywd 4 (WCJB) ABC 11 11 4 NewsABC EntInside Ed.The Bachelor (N) (In Stereo) Castle (N) The Bachelor (N) (In Stereo) Castle Dressed to Kill (N) PGLaw & Order: SVULaw & Order: SVUCops Rel.Cops Rel.SeinfeldCommun H (WACX) TBN 21 21 HealingThe 700 Club) EngagementEngagementThe Arsenio Hall Show O (WYKE) FAM 16 16 16 15Animal Court Citrus Today County Court Little Miracles Zorro PGYour Plumber Movin OnCold Squad (DVS) Eye for an EyeThe Comedy Shop S (WOGX) FOX 13 7 7SimpsonsSimpsonsBig BangBig BangAlmost Human The Following FOX Criminal Minds Criminal Minds Criminal Minds Criminal Minds Criminal Minds (A&E) 54 48 54 25 27Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Duck Dynasty Bad Ink (N) Bad Ink (N) Andrew Mayne Andrew Mayne Andrew Mayne Andrew Mayne (AMC) 55 64 55 Behind Enemy Lines (2001, Action) Owen Wilson, Gene Hackman. PG-13 Shooter (2007, Suspense) Mark Wahlberg. A wounded sniper plots revenge against those who betrayed him. R Shooter (2007) R (ANI) 52 35 52 19 21Finding Bigfoot (In Stereo) PG To Be AnnouncedFinding Bigfoot (In Stereo) PG Gator Boys Cat Scratch Fever PG Beaver Bros Beaver Bros Finding Bigfoot (In Stereo) PG (BET) 96 19 96 106 & Park: BETs Top 10 Live (N) PG Kings Ransom (2005) Anthony Anderson. A businessman plots his own kidnapping to foil his wife.Dirty Laundry (2006) Rockmond Dunbar. A closeted gay man learns that he has a 10-year-old son. 
PG-13 (BRAVO) 254 51 254 Vanderpump RulesVanderpump RulesReal HousewivesVanderpump RulesVanderpump RulesHappensSouthern (CC) 27 61 27 33South Park MA Tosh.0 Colbert Report Daily ShowFuturama PG Futurama PG South Park MA South Park MA South Park MA South Park MA Daily ShowColbert Report (CMT) 98 45 98 28 37The Dukes of Hazzard G The Dukes of Hazzard G The Dukes of Hazzard G Mrs. Doubtfire (1993, Comedy) Robin Williams. An estranged dad poses as a nanny to be with his children. PG Jessie G Austin & Ally G Dog With a Blog G Gravity Falls Y7Secret of the Wings (2012) Voices of Mae Whitman. G Jessie G Austin & Ally G Liv & Maddie Jessie G A.N.T. Farm G (ESPN) 33 27 33 21 17SportsCenter (N)College Basketball College Basketball SportsCenter (N) (ESPN2) 34 28 34 43 49AroundPardonWomens College Basketball NBA Coast to Coast (N) (Live) Olbermann (N) (EWTN) 95 70 95 48At LastProvideDaily Mass G The Journey HomeEvangeRosaryWorld Over Live PGThe HeartWomen (FAM) 29 52 29 20 28The Middle PG The Middle PG Switched at Birth Fountain Switched at Birth (N) (In Stereo) The Fosters Stef and Lena visit Callie. The Fosters Stef and Lena visit Callie. The 700 Club (In Stereo) G (FLIX) 118 170 The Color of Money (1986) Paul Newman. Premiere. (In Stereo) R Ed Wood (1994, Biography) Johnny Depp. Premiere. (In Stereo) R The Motorcycle Diaries (2004, Biography) Gael Garca Bernal. R Buy ThisMy. DinDinersDiners (FS1) 732 112 732 FOX Football DailyCollege Basketball Xavier at Villanova. (N)College Basketball Georgetown at DePaul.FOX Sports Live (N) (FSNFL) 35 39 35 ShipMagicNBA Basketball Orlando Magic at Indiana Pacers. MagicColl. Footb. World Poker Tour (FX) 30 60 30 51 Knight and Day (2010) Tom Cruise. PG-13 Crazy, Stupid, Love. (2011, Romance-Comedy) Steve Carell, Ryan Gosling. PG-13 Archer (N) MA Chozen (N) MA Archer MAChozen MA (GOLF) 727 67 727 Golf Central (N)The Golf Fix (N) GIn PlayIn PlayFeherty Feherty Golf Central (HALL) 59 68 59 45 54Home Improve. Home Improve. Home Improve. 
Home Improve. The Good Wife (In Stereo) The Good Wife After the Fall Frasier PGFrasier PGFrasier PGFrasier PG (HBO) 302 201 302 2 2In Good Comp. Vehicle 19 (2013, Suspense) Paul Walker. R REAL Sports With Bryant Gumbel PG Jack the Giant Slayer (2013) Nicholas Hoult. (In Stereo) PG-13 Looking MA Girls MA (HBO2) 303 202 303 Date Movie The Presence (2010) Mira Sorvino. (In Stereo) PG-13 Real Time With Bill Maher MA True Detective The Locked Room MA Girls MA Looking MA The Terminator (1984) No Tomorrow PG Swamp People Ten Deadliest Hunts Swamp People Gator Recon (N) Swamp People Once Bitten PG Appalachian Outlaws Tit For Tat PG The Curse of Oak Island PG (LIFE) 24 38 24 31 Life Is Not a Fairytale: The Fantasia Barrino Story (2006) Fantasia Barrino. NRThe Gabby Douglas Story (2014, Drama) Regina King, S. Epatha Merkerson. Beyond the HeadlinesKim of Queens PG (LMN) 50 119 The Coverup (2008, Crime Drama) Eliza Dushku, Brian Krause. (In Stereo) NR Just Ask My Children (2001) Virginia Madsen. (In Stereo) (DVS) Fifteen and Pregnant (1998, Drama) Kirsten Dunst, Park Overall. (In Stereo) (MAX) 320 221 320 3 3 Arlington Road (1999, Suspense) Jeff Bridges. (In Stereo) R Banshee Bloodlines MA Constantine (2005, Fantasy) Keanu Reeves. (In Stereo) R Lingerie MA Girls Guide WANT MORE PUZZLES? Look for Sudoku and Wordy Gurdy puzzles in the Classified pages. PAGE 18 CITRUSCOUNTY(FL) CHRONICLECOMICS B6MONDAY, FEBRUARY3, 2014Pickles Crystal River Mall 9; 564-6864 Years A Slave (R) 1 p.m., 4 p.m., 7 p.m. August: Osage County (R) 1:45 p.m., 4:45 p.m., 7:45 p.m. I, Frankenstein (PG-13) 4:15 p.m. I, Frankenstein (PG-13) In 3D. 2 p.m., 8 p.m. No passes. Jack Ryan: Shadow Recruit (PG-13) 1:10 p.m., 4:10 p.m., 7:10 p.m. No passes. Labor Day (PG-13) 1:20 p.m., 4:20 p.m., 7:20 p.m. Lone Survivor (R) 1:40 p.m., 4:40 p.m., 7:40 p.m. That Awkward Moment (R) 1:30 p.m., 4:30 p.m., 7:15 p.m. The Nut Job (PG) 1:15 p.m., 7:15 p.m. The Nut Job (PG) In 3D. 5 p.m. No passes. 
Ride Along (PG-13) 1:50 p.m., 4:50 p.m., 7:50 p.m. Citrus Cinemas 6 Inverness; 637-3377 Frozen (PG) 1:30 p.m., 4:30 p.m., 7:10 p.m. I, Frankenstein (PG-13) 1:20 p.m., 7:15 p.m. I, Frankenstein (PG-13) In 3D. 3:50 p.m. No passes. Jack Ryan: Shadow Recruit (PG-13) 1:10 p.m., 4:15 p.m., 7:20 p.m. Lone Survivor (R) 1 p.m., 4 p.m., 7 p.m. That Awkward Moment (R) 1:40 p.m., 4:20 p.m., 7:30 p.m. The Nut Job (PG) 1:50 p.m., 7:25 p.m. The Nut Job (PG) In 3D. 4:40 p.m. No passes. Visit for area movie listings and entertainment information. Peanuts Garfield For Better or For Worse Sally Forth Beetle Bailey Dilbert The Grizzwells The Born Loser Blondie Doonesbury XJ X YXKKZD AM MXLK BJ XG ZOWDZJJBAG KNXK WDZLZCZJ YXGF XG ZOWDZJJBAG KNXK BJGK. VXHDZGLZ R. WZKZDPrevious Solution: Movie acting suits me because I only need to be good ninety seconds at a time. Bill Murray (c) 2014 by NEA, Inc., dist. by Universal Uclick 2-3 PAGE 19 SPORTSCITRUSCOUNTY(FL) CHRONICLEMONDAY, FEBRUARY3, 2014GWRM HELP WANTED000H9EVSales/ManagementIf you are looking for a career not just a job, call immediately352-628-2555To Schedule Your InterviewFull Company Benefits:Medical, Dental, 401K, Bonuses & MorePrevious Sales Experience Helpful 000GWS0 Pest Control Inspectors/SalesWanted for Citrus/ Sumter Co. Salary, Plus Commissions. Company vehicle. APPL Y IN PERSON 3447 E. Gulf to Lake Hwy. PAINTERS, Exp.352-400-05 COOK/PREP/ PIZZA MAKERCall (352) 628-7827 EXP. LINE COOKApply in Person at Crackers Bar & Grill e-mailed to: kokeefe@ b-scada.com Lic. Massage Therapistin Neuromuscular, and Sports Massage therapy. Kinesiology background helpful but not mandatory. Perfect room in downtown Inverness -studio. Rental rate negotiable. call for interview 352-476-4352 TooJays Gourmet Deli is currently hiring year round positions in both of our restaurants located in The Villages. We are interested in supporting you achieve your New Year plans by encouraging you to bring your talents to us for a new career. 
We are currently hiring high-powered back-of-the-house people who desire to produce our high-quality food in a casual environment surrounded by dedicated team members and a supportive and hands on management team. These are year round, not seasonal positions. Starting wages range from $10.00 to $13.00. We are also looking for BOH leads or shift supervisors starting at $15.00. We offer great benefits including meal benefits. If this sounds like to perfect way to start your new career, send your resume today or apply in person at TooJays in Lake Sumter, 1129 Canal Street or TooJays in Spanish Springs, 990 Del Mar Drive. Email to LKS@toojays.com or VIL@toojays.com. Hair StylistClientele preferred, not necessary. Salon Bev Hills 352-527-9933 Tell that special person Happy Birthday with a classified ad under Happy Notes. Only $28.50 includes a photo Call our Classified Dept for details352-563-5966 EXPMEDICALCODING/BILLINGF/T Wanted for office based medical practice in Inverness. Fax Resume to: (352) 726-5818 F/T, HYGIENTISTFor Busy Dunnellon Dental Practice Email Resume to: jandj95@aol.com Large Female tortoise shell cat in vicinity of 488 Dunnellon Rd 1/24 (352) 563-2987 Lost Black Cat Name Mamba Last seen Paradise Pt. Road. by Ale House REWARD (727) 481-3010 RECEIPE BOOK Left in a shopping cart at the Crystal River Publixs. Call to identify (352) 563-0756 Padala Medical Center Located in Lecanto near new Walmart Accepting new Patients all ages. Open M-F until 8 pm Call for appt 352-436-4428 Walk-ins Welcome/ Urgent Care I I I I I I I I Tell that special person Happy Birthday with a classified ad under Happy Notes. Only $28.50 includes a photo Call our Classified Dept for details352-563-5966 I I I I I I I I 4 Choice Cemetery Lots at Fero Memorial Selling Separately or together 352-746-5019 I I I I I I I I Tell that special person Happy Birthday with a classified ad under Happy Notes. 
Only $28.50 includes a photo. Call our Classified Dept. for details: 352-563-5966. U-Pull-It, with thousands of vehicles, offering lowest price for parts: 352-637-2100. Collection of Lecanto High Prom Glasses, 1988 to 2006: (352) 560-6108. Free oak firewood: 352-344-2321. FRESH CITRUS @ BELLAMY GROVE, located 1.5 mi. E. on Eden Dr. from Hwy. 41. STRAWBERRIES, GIFT SHIPPING. 8:30a-5p, closed Sun. 352-726-6378. citruschronicle: Follow the Chronicle. Diabetic Test Strips: a diabetic needs unopened, unexpired boxes; we pick up. Call Mike: 386-266-7748. $$ WE PAY CASH $$ You've Got It! Somebody Wants It! (352) 563-5966. Your world first. Every Day. Automotive Classifieds. NHL BRIEFS. Capitals 6, Red Wings 5, OT: WASHINGTON. Alex Ovechkin scored his NHL-leading 39th goal on a power play 2:37 into overtime, and the Washington Capitals beat the Detroit Red Wings 6-5 Sunday to earn a split of a home-and-home set and tighten up things a bit more in the bottom half of the Eastern Conference. Ovechkin's ... Michal Neuvirth made 25 saves. Jets 2, Canadiens 1: MONTREAL. Michael Frolik scored in the third period to give the Winnipeg Jets a 2-1 victory over the Montreal Canadiens. Tobias Enstrom also scored for the Jets and Al Montoya stopped 30 shots. Brian Gionta scored for Montreal. Carey Price, playing his second game in as many days, made 33 saves. The Jets have won eight of 10 since Paul Maurice took over as head coach on Jan. 12. Murray, Brits top US in Davis Cup; Americans fall in opening round of play. Associated Press. SAN DIEGO. Wimbledon champion Andy Murray beat Sam Querrey 7-6 (5), 6-7 (3), 6-1, 6-3 to clinch Britain's opening-round Davis Cup victory against the United States on Sunday at Petco Park. ... temporary court built in left field at the downtown home of baseball's San Diego Padres. He joined his teammates in a celebration huddle on the red clay court. Britain clinched the match at 3-1. The fourth singles match was canceled. Murray reached match point on Querrey's ... Querrey won the second-set ...
Associated PressBritains Andy Murray celebrates Sunday after winning his match against the United States Sam Querrey at the Davis Cup in San Diego, Calif. PAGE 20 B8MONDAY,FEBRUARY3,2014 CLASSIFIEDS CITRUSCOUNTY( FL ) CHRONICLE All Tractor & Tree Work Land Cleared, Hauling 1 time Cleanup, Driveways (352) 302-6955 Bruce Onoday & Son Free Estimates Trim & Removal 352-637-6641 Lic/Ins CLAYPOOLS Tree Serv. Now Proudly Serving Citrus Co. Lic/Ins. Free Est.. ATREE SURGEON Lic. & Ins. Lowest Rates Free est. (352)860-1452 . Stylists wanted! MVP Clips is hiring lic. stylists for a sports theme barbershop. Manager and Asst Manager positions avail. 302-9779 or mvp_clips@yahoo.com GOTLEAVES DR POWER VAC Call John 607-760-3919 A-1 Design & Install Plant*Sod*Mulch Weed*Trim*Clean lic/ins 352-465-3086 Lawncare-N-More Friendly Family Services for over 21 yrs. 352-726-9570 THE KLEEN TEAM Residential/Comm. Lic., Bonded, Insured (352) 419-6557 Af for dable Handyman FAST 100% Guar. AFFORDABLE RELIABLE Free Est 352-257-9508 Lawncare-N-More Friendly Family Services for over 21 yrs. 352-726-9570A+TECHNOLOGIES All Home Repairs. All TVs Installed lic#5863 352-746-3777 **ABOVE BIANCHI CONCRETE INC.COM Lic/Ins #2579352-257-0078 CURBAPPEAL Yardscape, Curbing, Flocrete. River Rock Reseals & Repairs. Lic. (352) 364-2120 AFFORDABLE Top Soil, Mulch, Stone Hauling & Tractor Work (352) 341-2019A SMITTYSAPPLIANCE JEFFS CLEANUP/HAULING Clean outs/ Dump Runs Brush Removal. Lic. 352-584-5374 Your world first.Every Dayvautomotive Classifieds 000GWRQ 3 Dapple Dachshund Puppies, all female w/papers, pls call Sylvia (727) 235-2265. Shih Poo Puppies, 2 males, 1 females Schnauzer Pups 8 wks Shih-TZu Pups Born Jan. 21, 352-795-5896 628-6188 Evenings SHIH-TZU PUPS, AvailableRegistered. I I I I I I I I Tell that special person Happy Birthday with a classified ad under Happy Notes. 
Only $28.50 includes a photo Call our Classified Dept for details352-563-5966 I I I I I I I I Model Rail RoadN Scale (352) 564-8605 WANT TO BUY HOUSE or MOBILE Any Area, Condition or Situation Fred, 352-726-9369TWIN SHEETSETS ROBIN EGG BLUE SPRING MAID QUALITYUSED $25 634-2004 GAS LOG FIREPLACE Set, Complete with everything needed, to be used with propane gas, Cash + Carry. $200 352-586-7820 MARYJANES HOME COLLECTION CHENILLETWIN BEDSPREAD ECRU $40 634-2004 TOASTER OVEN, COFFEE MAKER & ELECTRIC MIXER $20 352-613-0529 TWIN BED SKIRT EYELETTRIM 100% COTTON ECRU, USED $15 634-2004 UMBRELLASTAND BLACKAND GOLD METALORNATE $40 634-2004 VACUUM CLEANER LG upright compressor compact, pet care, like new,bagless $150 (352) 465-9395 CORNING WARE ELECTRIC COFFEE POT-6 cups, cornflower pattern, Ex., $20. 352-628-0033 DENON STEREO RECEIVERAM/FM PRECISIONAUDIO RECEIVER. FIRST 100.00. 464-0316 Exercise Bike Life Style D1000 Arm & Leg with Monitor $60 King Size memory foam 2mattress pad w/ cover. Exc cond. Pd $135 asking $70 (352) 794-3907AM/FM RECEIVER FIRST 100.00 464 0316 Mini BikeWith 196 CC, 6.5HP New Motor, New Chain $335. (352) 726-0839 SCHWINN CRUISER SS WOMENS BIKE26 x 2-1/8 tires & alloy wheels, single speed, $65. 628-0033 Fold Away Bed Plus Mattress $75. (352) 527-7919 Queen Sz. Bed w/Headboard has mirror & shelves, 3 drawers on each side at bottom. $75 obo (352) 621-5265 RECLINER Recliner,Swivel Rocker, Dark Blue Looks Good,$40. 352-746-6813 SOFAbrown neutral color, excellent condition $90. Ask for Mimi (352) 795-7285 TABLES Coffee table & 2 matching end tables. Heavy glass w/ beautiful stucco like bases. $75 (352) 249-7168Troy Bilt, Auto, 42, 20 HP, $825 Gas Weed eater Troy Bilt $65 (352) 794-6761 INVERNESS Y ar d Sale Extravaganza Feb. 7 & 8 7am-3pm 1190 Stately Oaks DriveHUGE SALEToo Many items to list!! 
MENS CLOTHING 3 CASUALPANTS 36X30 & 2 CASUAL SHIRTS LARGE $20 352-613-0529 MENS JACKETLondon Fog Size 40/42 Excellent Condition $25 Call 726-0040 BROTHER FAX COPIER SCANNER WITH MANUALONLY35.00 464 0316 2 DAHON FOLDING BICYCLES Like new condition. Great for RV or car trunk. $50 each 352-564-0788 4 WHEELWALKERseat, hand brakes & wheel locks, folds for storage, $45. 628-0033 225/75R-16 Goodyear light truck tire GREATSHAPE ONLY$50 352-464-0316 7-5 GALLON METAL OLD FUELCANS WITH SPOUTSALLFOR $80.00 464-0316 Antique Cast Iron Wood Stove w/screen good working stove good cond. $375. (352) 246-3500 APPLIANCES like new washers/dryers, stoves, fridges 30 day warranty trade-ins, 352-302-3030 BUTTERFLYLAMP multicolor glass, Tiffany-like, 2 light levels, BEUTIFALL, ($30) 352-613-7493 CAMCORDER Panasonic Camcorder with case -Excellent Condition $95.00 352-746-5421 CD COLLECTION 25 CDs for $25 Call 726-0040 KAROKE MACHINE WITH CD PLAYER & 5.5 SCREEN WITH GRAPHICS $100 352-341-6920 SHARPSPEAKERS 2 10 150 WATTS $25 352-613-0529AGONTABLE & Magic Chef Chest Freezer 7.2 cubic ft. $150. obo (352) 464-0100 REFRIGERATOR 1.7 cu. ft. dorm size excellent condition $45.00 352 746-9250 Refrigerator Freezer GE gd Cond $100 Oak Table $65 (352) 226-3883 REFRIGERATOR LG, 28 CF, S.S., side by side, ice/water in door, $600 (352) 527-8663 SMITTYSAPPLIANCE REPAIR.Also W anted Dead or Alive W ashers & Dryers. FREE PICK UP! 352-564-8179 STOVE, 20 electric, white clean, works good. $125. Homosassa (678) 617-5560 or 352-513-5580 DRIVERSFor Floral Holiday deliveries must have Van or SUV (352) 726-9666 NEED MONEY?Like to Talk on Phone?Appt. Setters NeededDaily/Weekly Bonuses 352-628-0254 Security for a ShelterParttime EveningsFax or email resume 352-489-8505 sipperd@ bellsouth.net KETTLE CORN BUSINESS FOR SALE $5,900. Money Maker See ad & pics. 
/ Ocala Craigs List (352) 344-0025 Need a JOB?#1 Employment source is Classifieds PAGE 21 MONDAY,FEBRUARY3,2014B 9 CITRUS COUNTY (FL) CHRONICLE CLASSIFIEDS 000GWRE Tweet Tweet Tweet Follow the Chronicle on citruschroniclenews as it happens right at your finger tips 4/3 Triplewideon 2-1/2 acres in green acres in Homosassa beautiful wooded lot $139,995. SELLER FINANCING Call 352-726-4009 Have horses or want them? 4/3 Triplewide with family room and fireplace den off master bed room would make for great office on 9 plus acres mol with horse corals west side of US 19 Homosassa, Fl. $229,995. SELLER FINANCING Call 352-726-4009 HOMOSASSA4/2, BLOCK HOME, MOTHER IN LAWAPT. decking, 1/4 ac, fenced, lots of privacy $65,000 (305) 619-0282, Cell 3/2 with family roomfireplace, glamour bath quiet neighbor hood in Homosassa. 89,995. SELLER FINANCING Call 352-726-4009 TAMI SCOTTExit Realty Leaders 352-257-2276 exittami@gmail.com When it comes to Realestate ... Im there for you The fishing is great Call me for your new Waterfront Home LOOKINGTO SELL? CALLMETODAY! 2 4/2 Doublewideon 1 Plus Acres, MOL Fireplace Glamour Bath, large walk-in closets all bedrooms, off US 200 in Hernando Fl. $89,995 SELLER FINANCING Call 352-726-4009. Need a JOB?#1 Employment source is Classifieds BEVERLYHILLS3/2, EZ Terms, $575 mo. 697-1457 CITRUS SPRINGS2/2/1,Cornor lot,nice back porch $675/mo 1st& last 352-220-2958 RENT T O OWN Inv 3 bd/ No credit ck! 352-464-6020 JADEMISSION.COM 3 STATE VIEWS! Natl Forest Access. 1.84AC-$24,900 Prime, wooded, mountaintop acreage with majestic three state views. EZ access US National Forest. Incredible 4 season recreation. Paved roads, underground power, fiber optic cable & municipal water. Perfect for primary/vacation/retirement home. Excellent financing. Only one available, wont last. Call owner now 866-952-5303, x120. FOR RENT 3200 Sq. Ft. COMMERCIAL BLDG Large Paved Parking Lot, Cent. 
Heat/Air Open Floor Plan 1305 Hwy 486 ** 352-584-9496/464-2514 CITRUS HILLS2/2, Carport, Furnished & Unfurn. Extra Clean. (352) 613-4459 INVERNESSclean, attractive 2/2/1 3619The. $25,000 8323 W Charmaine Dr. Homasassa, Fl must see to appreciate 615-692-4045 CRYSTALRIVER** NICE** Secret Harbour Apts. Newly remodeled 2/1 $575 Unfurn. Incl Water,lawn, garbage, W/D hook-up. 352-257-2276 BRINGYOUR, $560 mo. Near Walmart & 2/1 $515. mo. 352-464-3159 1999 Mobile Home 28x60, bank owned, Repo, Great Shape FinancingAvailable. FLORAL CITY 2BR/1BA 12x56 MH on 80x152 ft lot.$21,000. Furnished. Needs a little work. (352) 726-8873. Asking $15,000 Drive by then call 115 N. West Ave. Inverness 352-621-0559 MUST SEE! Homosassa/ReadyTo Move In! 2006, 32x80, 4/2, Owner Financing. $86,900 obo 352-795-2377 Owner Financing Available for Mobile Homes! Call for Details 352-795-2377 PAGE 22 B10MONDAY,FEBRUARY3,2014 CLASSIFIEDS CITRUSCOUNTY( FL ) CHRONICLE 424-0203 MCRN BEVERLYHILLS MSBU PUBLIC NOTICE NOTICE IS HEREBYGIVEN that the Beverly Hills Advisory Council will meet on Monday, January 13, 2014, January 6, 2013.. 418-0203 MCRN Schwieterman, James C. 09-2013-CA-001196 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT OF FLORIDA IN AND FOR CITRUS COUNTY GENERAL JURISDICTION DIVISION CASE NO. 09-2013-CA-001196 FIFTH THIRD MORTGAGE COMPANY, Plaintiff, vs. JAMES C SCHWIETERMAN, et al., Defendants. NOTICE OF ACTION To LISSETTE C. QUINTERO A/K/A LISSETTE COROMOTO QUINTERO, 4402 W. GALLAGHER STREET, CITRUS SPRINGS, FL 34433 LAST KNOWN ADDRESS STATED, CURRENT RESIDENCE UNKNOWN 419-0203 MCRN Watson, Susan 2013-CA-001296 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT OF FLORIDA IN AND FOR CITRUS COUNTY CASE NO.: 2013-CA-001296 REVERSE MORTGAGE SOLUTIONS, INC., Plaintiff, vs. SUSAN A. WATSON A/K/A SUSAN COLLINS WATSON, Defendants. 
NOTICE OF ACTION To the following Defendant(s): ALL UNKNOWN HEIRS, CREDITORS, DEVISEES, BENEFICIARIES, GRANTEES, ASSIGNEES, LIENORS, TRUSTEES AND ALL OTHER PARTIES CLAIMING AN INTEREST BY, THROUGH UNDER OR AGAINST THE ESTATE OF DOROTHY J. COLLINS A/K/A DOROTHY JEAN COLLINS, YOU ARE NOTIFIED that an action for Foreclosure of Mortgage on the following described property: LOT 6, BLOCK 4, OF ANGLERS LANDING PHASE ONE, ACCORDING TO PLAT THEREOF RECORDED IN PLAT BOOK 13, PAGES 76 AND 77, OF THE 5th day of December, 2013. [SEAL] ANGELA VICK, As Clerk of the Court /s/BY: Vivian Cancel, Deputy Clerk Published in the CITRUS COUNTY CHRONICLE Jan. 27 & Feb.3, 2014. 12-02136-1 420-0203 MCRN Brann, Dorothy 2013-CA-001217 NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT OF FLORIDA IN AND FOR CITRUS COUNTY CASE NO.: 2013-CA-001217 REVERSE MORTGAGE SOLUTIONS, INC., Plaintiff, vs. DOROTHY BRANN, Defendants. NOTICE OF ACTION To the following Defendant(s): ALL UNKNOWN HEIRS, CREDITORS, DEVISEES, BENEFICIARIES, GRANTEES, ASSIGNEES, LIENORS, TRUSTEES AND ALL OTHER PARTIES CLAIMING AN INTREREST BY, THROUGH UNDER OR AGAINST THE ESTATE OF JOHN MAZEIKA, YOU ARE NOTIFIED that an action for Foreclosure of Mortgage on the following described property: LOTS 26 AND 27, BLOCK B, EAST COVE UNIT NO. 1, ACCORDING TO THE PLAT THEREOF AS RECORDED IN PLAT BOOK 4, PAGE 82, OF THE PUBLIC RECORDS OF CITRUS COUNTY, FLORIDA. TOGETHER WITH 1981 MOBILE HOME SERIAL #GDLCFLI 5814892A & B-02173-1 421-0203 MCRN Kramer, Irving 2013-CA-001114 A NOA PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CASE NO. : 2013 CA 001114 A SUNTRUST MORTGAGE, INC. Plaintiff, v. UNKNOWN SPOUSE OF IRVING KRAMER, ET A L. Defend ants. NOTICE OF ACTION TO: IRVING KRAMER,: 1082 ALADDIN RD, SPRING HILL, FL 34609; 3400 S. 
DIAMOND AVE., INVERNESS, FL 34452 -ANDTO: THE INVERNESS HIGHLANDS SOUTH AND WEST CIVIC ASSOCIATION, INCORPORATED, whose last known principal place of business was: 5189 S. ROBERT BLAKE AVE. INVERNESS, FL 34452 YOU ARE NOTIFIED that an action to foreclose a mortgage on the following property in Citrus County, Florida, to-wit: LOT 10, IN BLOCK 317, OF INVERNESS HIGHLANDS WEST, ACCORDING TO THE PLAT THEREOF, AS RECORDED IN PLAT BOOK 5, PAGES 19 TO 33, INCLUSIVE, OF THE PUBLIC RECORDS OF CITRUS COUNTY, FLORIDA. before February 27, 2014 or within thirty (30) days after the first publication of this Notice of Action, and file the original with the Clerk of this Court at 110 North Apopka Avenue, Inverness, FL 34450, either before service on Plaintiffs attorney or immediately thereafter; otherwise, a default will be entered against you for the relief demanded in the complaint petition. WITNESS my hand and seal of the Court on this 27th day of December, 2013. ANGELA VICK, Clerk of the Circuit Court . January 27 & February 3, 2014. 617111495 846-0110 FCRN Miller, Sean M. 2013-CA-000764: 9295 North Citrus Springs Blvd., Citrus Springs, Florida 34434 YOUARE NOTIFIED that an action to foreclose a mortgage on the following property in CITRUS County, Florida, to-wit: LOT 12, BLOCK 192, OF CITRUS SPRINGS UNIT 4, ACCORDING TO THE PLAT THEREOFAS RECORDED IN PLAT BOOK 5, PAGE 133 THROUGH 152, OF THE PUBLIC RECORDS OF CITRUS33771, on or before February 3, 2014, or within thirty (30) days after the first publication of this Notice of Action, and file the original with the Clerk of this Court at 110 N ApopkaAvenue, Inverness, FL34450, either before service on Plaintiffs attorney or immediately thereafter; otherwise, a default will be entered against you for the relief demanded in the complaint petition. WITNESS my hand and seal of the Court on this 19th day of November, 2013. 
Angela Vick, Clerk of the Court (SEAL) By: /s/ Vivian Cancel, Deputy Clerk Published in the Citrus County Chronicle, Jan. 3, 10, 27 & February 3, 2014. YOU ARE NOTIFIED that an action for Foreclosure of Mortgage on the following described property: LOT 4, BLOCK 847, OF CITRUS SPRINGS, UNIT TEN, ACCORDING TO THE MAP OR PLAT THEREOF AS RECORDED IN PLAT BOOK 6, PAGES 67 THROUGH 78,-05355-1 422-0130 MCRN Beville, Leila Kay 2014-CA-94 NOF PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CASE NO. 2014-CA-94 IN RE: THE FORFEITURE OF ONE THOUSAND FOUR HUNDRED NINETY-EIGHT AND NO/100 DOLLARS ($1,498.00) IN U.S. CURRENCY BY JEFFREY J. DAWSY, AS SHERIFF OF CITRUS COUNTY, FLORIDA. Petitioner, vs. LEILA KAY BEVILLE, Claimant. NOTICE OF FORFEITURE THE CITRUS COUNTY SHERIFF'S OFFICE has seized and intends to have forfeited to it ONE THOUSAND FOUR HUNDRED NINETY-EIGHT AND NO/100 DOLLARS ($1,498.00) (the Currency) pursuant to the Florida Contraband Forfeiture Act, Chapter 932, Florida Statutes. The aforementioned Currency was seized by JEFFREY J. DAWSY, AS SHERIFF OF CITRUS COUNTY, FLORIDA (hereinafter, CCSO), on December 17, 2013, in the vicinity of N. Rock Crusher Road and W. Venable Street, Crystal River. Currency. Contraband Forfeiture Act jf@bradshawmountjoy.com Published in the CITRUS COUNTY CHRONICLE: February 3 & 10, 2014 423-0224 FCRN England vs. Anderson 2013-DR-1040 Notice of Action PUBLIC NOTICE IN THE CIRCUIT COURT OF THE FIFTH JUDICIAL CIRCUIT IN AND FOR CITRUS COUNTY, FLORIDA CASE NO.: 2013-DR-1040 IN RE: THE MATTER OF: JENNIFER ENGLAND, Petitioner/Mother, and STEPHEN ANDERSON, Respondent/Father. NOTICE OF ACTION TO: STEPHEN ANDERSON 5411 W. Corinas Court Homosassa, Florida 34446 YOU ARE NOTIFIED that an action for a Supplemental Petition to Modify has been filed against you in the above referenced matter. You are required to serve a copy of your written defenses, if any, to it on MEGAN T.
FITZPATRICK, Florida Bar # 84987, Fitzpatrick Law P.A., 213 North Apopka Avenue, Inverness, Florida 34450, the attorney for the Petitioner/Mother, on or before March 5, 2014 and file the original with the clerk of this Court before service on the Petitioner/Mother or immediately thereafter. If you fail to do so, a default may be entered against you for the relief demanded in the petition. Copies of all court documents in this case, including orders, are available at the Clerk of the Circuit Court's office. You may review these documents upon request. You must keep the Clerk of the Circuit Court's office notified of your current address. Future papers in this lawsuit will be mailed to the address on record at the clerk's office. WARNING: Rule 12.285, Florida Family Law Rules of Procedure, requires a certain automatic disclosure of documents and information. Failure to comply can result in sanctions, including dismissal or striking of pleadings. DATED: January 14, 2014 CLERK OF THE CIRCUIT COURT /S/ VIVIAN CANCEL, As Deputy Clerk Published in the Citrus County Chronicle February 3, 10, 17 & 24, 2014. FORD 05 Escape XLT, Nonsmoker, low mi. Mint Cond. All pwr, serv recs avail $6800. 352-563-5217 FORD 1999, Expedition, Eddie Bauer Edition, leather $3,999 352-341-0018 HONDA 2007, Element, Hard to find, cold A/C, runs great, Must See, Call (352) 628-4600 Your World of garage sales Classifieds www.chronicleonline.com CAMPER 2003 Starcraft Aruba pull behind. 28 ft., 1 slide $7000 obo (352) 628-1126 AIR HOSE 2007 Aux Inflator Kit for air-suspension truck models, new. $23 860-2701. **BEST PRICE** For Junk & Unwanted Cars - CALL NOW **352-426-4267** Autos, Trucks, SUVs & Vans - Cash Pd Larrys Auto 1999 Concorde LX, V6 2.7 LTR, Automatic, It has all the Extras, 123,000 miles, Runs great, Very Good Condition, $2,500 352-586-7820 CHRYSLER 2000, Sebring Convertible, low miles $5,488. 352-341-0018,000. (352) 503-9290 Patrick Liquidation Sale Help Us Stay in Biz. RENT-BUY-SELL CAR-TRUCK-BOAT www.
BestNatureCoastProperties.com To view my properties A Guaranteed Offer in 48 Hours! We Buy Homes! www.dbuyshomes.com 800-741-6876 WE BUY HOMES Any Condition Quick Closings Nature Coast Homes (352) 513-4271 OUTBOARD MOTOR EVINRUDE 4.5 HP 15 in. shaft, fresh water motor $475 call (352) 613-8453 POLAR 2005, 19 Ft., center console, 115 HP, Yamaha, excel. cond. Everything for fishing. $12,900 352-270-2015 WE HAVE BOATS GULF TO LAKE MARINE We Pay CASH For Used Clean Boats Pontoon, Deck & Fishing Boats **(352) 527-0555** boatsupercenter.com No Junk or pull behind 726-2494, 201-7014 CASITA 2003 17 Freedom Deluxe Aerodynamic, fiberglass travel trailer. Loaded. Easy to tow with small vehicle. Microwave, 3-way fridge-freezer, under-shelf TV, CD, DVD, radio. Roof AC, Gas heater etc. etc. $11,000 OBO Telephone 352 527-1022 e-mail mmlesser@tampabay.rr.com LaWanda Watt THE SNOWBIRDS ARE COMING! ** NOW Call: Jack Lemieux Cell (305) 607-7886 Realty USA INC 407-599-5002 Whispering Pines Villa INVERNESS 2/2/1, NEW Carpet, Tile, Paint, All appliances including washer/dryer. $69,900. 352-726-8712
https://ufdc.ufl.edu/UF00028315/03380
CC-MAIN-2019-26
refinedweb
22,794
64.71
Multimethods Methods are the workhorses of Magpie. Most of the code you write will reside inside a method, and most of that code will in turn be calls to other methods. A method is an executable chunk of code (an expression, to be precise) that is bound to a name and has a pattern that describes the argument it expects. Defining Methods Methods are defined using the def keyword. def greet() print("Hi!") end Here we've defined a method named greet whose body is a block containing a single print expression. We can call it like this: greet() We can define a method that takes an argument by using a pattern in its definition. def greet(who) print("Hi, " + who) end greet("Fred") // Hi, Fred In this case, the pattern who is a simple variable pattern, but more complex patterns can be used: def greet(who is String, whoElse is String) print("Hi, " + who + " and " + whoElse) end greet("Fred", "George") Here we have a record pattern with two fields that must both be strings. We call it by passing it a record of two strings: "Fred", "George". This may seem a bit strange, but it's important to note that we are not passing two arguments. In Magpie, methods (and functions) always take a single argument. It's just that the argument may be a record which the method destructures. The end result is that it works pretty much like other languages, but there's some conceptual unification going on under the hood. The destructuring initialization that you can do when declaring variables is the exact same process used when splitting out arguments to a method, or selecting a catch clause when an error is thrown. Left and Right Arguments Method calls are infix expressions, which means that an argument may appear to the left of the name, to the right, or both. (More pedantically, the record that forms the single argument may have fields which appear to the left and right of the name.) The greet methods we've defined so far only have a right argument. Methods which only have a left argument are informally called getters.
def (this is String) isEmpty this count == 0 end This defines a getter isEmpty whose left argument must be a string. It takes no right argument. It can be called like this: "not empty" isEmpty // false And, finally, methods can have arguments on both sides: def (this is String) greet(other is String) print("Hi, " + other + ", I'm " + this) end "Fred" greet("George") When you define a method, the argument patterns always need to be in parentheses. When you call a method, only the right one must. If you chain several method calls, they associate to the left (like most languages). For example, this: people find("Smith") firstName greet("George") is the same as: ((people find("Smith")) firstName) greet("George") Setters In addition to getters, you can also define setter methods. Like getters, setters are just regular methods, but they have special syntax to make them look like assignment. You define a setter like so: def (this is Person) name = (name is String) print("Set name to " + name + "...") end The = followed by a pattern tells Magpie that you're defining a setter. We can call the above setter: person name = "Fred" A setter may also have a right argument: def (this is Contact) phoneNumber(type is String) = (number is String) print("Set " + type + " number to " + number) end jenny phoneNumber("Home") = "867-5309" Magpie tries to roll as much behavior under method calls as possible, and getters and setters are a good example of that. It's worth noting that everything that looks like a getter or setter is just a method call. When you're accessing fields in a class, you're just calling getter and setter methods that have automatically created implementations. Indexers In addition to setters, Magpie has one more little bit of extra syntax for method calls. To make working with collection-like objects easier, it provides indexer methods. These are essentially methods whose name is []. The left argument appears before the brackets, and the right argument is inside them.
list[2] Here, we're accessing an element from some list variable by calling an indexer method. The left argument is list and the right argument is 2. Aside from their syntax, there is nothing special about indexers. They're just methods like any other, and you're free to define your own indexers, like so: def (this is String)[index is Int] this substring(index, 1) end Here, we've defined an indexer on strings that takes a number and returns the character at that index (as a string). As you'd expect, the right argument can also be a record: defclass Grid val width is Int val height is Int val cells is List end def (this is Grid)[x is Int, y is Int] cells[y * width + x] end Here we've defined a Grid class that represents a 2D array of cells. It includes an indexer for getting the cell at a given coordinate. You can call it like this: val cell = grid[2, 3] You can also define indexer setters, which combine the syntax of the two: def (this is Grid)[x is Int, y is Int] = (value) cells[y * width + x] = value end grid[2, 3] = "some value" Method Scope Magpie's method call syntax looks similar to other OOP languages where a "receiver" argument precedes the method. We've seen some examples where we define methods whose left argument is a built-in type like String. In other languages, this is called monkey-patching, and doing it is fraught with peril. The reason is that when you invoke a method in those languages, it looks up the method on the class of the receiver. If two unrelated parts of the program define a method with the same name on the same class, those two methods will collide. When we call it later, we may find the wrong one. In Magpie (as in CLOS), methods are not owned by classes. Instead, methods reside in lexical scope, just like variables. When you call a method, the method is found by looking for it in the scope where the call appears, and not on the class of any of the arguments. When a method goes out of scope, it disappears just like a variable.
do def (this is String) method() print(this + " first") end "a" method() // a first end "a" method() // ERROR! do def (this is String) method() print(this + " second") end "a" method() // a second end It is impossible to have a method collision in Magpie. If you try to define two methods with the same name and pattern in the same scope, it will throw an error. This way, you can define methods that have a nice readable calling syntax without having to worry about breaking code in some other part of the codebase. Checking for pattern collisions hasn't been implemented yet. Multimethods In Magpie, all methods are multimethods. This is one of the places where the language really steps it up compared to other dynamic languages. In the previous section, we noted that it's an error to define two methods with the same name and pattern in the same scope. That qualifier is important. It's perfectly fine to define two methods with the same name but different patterns. def double(n is Int) n * 2 end def double(s is String) s + s end Here we've defined two double methods, one on strings and one on numbers. Even though they are defined in the same scope, these don't collide with each other. Instead, they are combined to form a single double multimethod containing two specializations. When you call a multimethod, it looks through the methods it contains and their patterns. It then selects the most appropriate pattern and calls the method associated with it. double(3) // 6 double("ma") // mama In simple terms, this means Magpie lets you overload methods, which is pretty unusual in dynamic languages. It's also more powerful than most static languages, because it selects the most appropriate method at runtime, where overloading in a language like Java is done at compile time. Since all methods actually take a single argument, we're free to specialize a multimethod on the left argument, right argument, or both. You can also specialize on different record patterns.
def (this is String) double this + this end def (this is Int) double this * 2 end def double(n is Int, s is String) n * 2, s + s end "left" double // leftleft 3 double // 6 double(3, "do") // 6, dodo As long as you don't provide two specializations with the exact same pattern, you are free to define as many as you want. If you call a multimethod with an argument that doesn't match any of the specializations, it will throw an error. do double(true) catch is NoMethodError print("We didn't specialize double on bools") end We didn't define a double method that accepts a boolean, so when we call it, it throws a NoMethodError, which gets caught here to print a warning. Linearization TODO(bob): Methods are partial order now, not a strict linearization. Need to update this. The previous section says that the "most appropriate" method is selected based on the argument. In the examples we've seen so far, only one method is a possible match, so "most appropriate" is pretty easy. If multiple methods match the argument, we need to determine the best one. Magpie (and other languages) call this linearization. def odd?(0) false end def odd?(n is Int) not(odd?(n - 1)) end Here we have an odd? multimethod with two specializations. If we call it and pass in 0, then both specializations match. Which is best? To answer this, Magpie has a few relatively simple rules that it uses to order the patterns. Before we get to those rules, it's important to understand one thing that does not affect ordering: the order in which methods are defined in a program has no effect on linearization. Pattern Kind First, different kinds of patterns are ordered. From best to worst: - Value patterns - Record patterns - Type patterns - Wildcard patterns For variable patterns, we look at the inner pattern. (If it doesn't have one, it is implicitly _, the wildcard pattern.) The above list addresses our odd?
example: the first method will win, since a value pattern ( 0) takes precedence over a type pattern ( is Int). To linearize two patterns of the same kind, we need more precise rules. Class Ordering To order two type patterns, we look at the classes being compared and see how they are related to each other. Subclasses take precedence over superclasses. defclass Parent end defclass Child is Parent end def sayClass(is Parent) print("Parent") end def sayClass(is Child) print("Child") end sayClass(Child new()) Here, both methods match because an instance of Child is also an instance of Parent. In this case, the second method, specialized to is Child, wins because Child is a subclass of Parent. Record Ordering It's possible for two record patterns to match the same argument. def printPoint(x: x, y: y) print(x + ", " + y) end def printPoint(x: x, y: y, z: z) print(x + ", " + y + ", " + z) end printPoint(x: 1, y: 2, z: 3) A record pattern matches as long as the argument has the fields the record requires. Extra fields are allowed and ignored, so here both methods match. It's also possible for records with the same fields but different field patterns to match: def sameGeneration?(a: is Parent, b: is Parent) true end def sameGeneration?(a: is Child, b: is Child) true end def sameGeneration?(a: _, b: _) false end Ordering these can be complex, so the linearization rules for records are a bit subtle. The first requirement is that the records must specify the same fields, or one must be a subset of the other. If not, they cannot be ordered and an AmbiguousMethodError is thrown. def say(x: x) print("x " + x) end def say(y: y) print("y " + y) end say(x: 1, y: 2) It's unclear what the programmer was even trying to accomplish here, and Magpie can't read your mind. So in cases like this, it just raises an error to signal its confusion. Our first example doesn't have this problem, though. The first definition of printPoint is a subset of the second, so there's no ambiguity.
In that case, it proceeds to the next step. There are several signals we can rely on to tell us which record should come first. We'll call those signals leans. To order records in a predictable way, those signals need to agree with each other. The first signal is whether one record has fields the other doesn't. In our first printPoint example, the second method has an extra z: field. That means we lean towards preferring that method, since it "uses more" of the argument that gets passed in. Next, we go through the fields that the two records have in common and linearize their patterns. Whichever pattern wins is a lean towards that record. (If the patterns compare the same, like the x and y fields in the two printPoint methods, then there's no lean one way or the other.) If all of the leans are towards one record, it wins. If the leans are inconsistent, then it's ambiguous. This is all a bit fishy, so some examples should clarify: x: x, y: y x: x, y: y, z: z // Second wins: more fields x: x, y: y y: y, z: z // Ambiguous: neither is a subset of the other a: is Parent, b: is Parent a: is Child, b: is Parent // Second wins: Child is more specific a: is Parent, b: is Child a: is Child, b: is Parent // Ambiguous: fields disagree The general theme here is that it tries to pick the record that is "obviously" the more specific one, where "more specific" means more fields or more precise fields. If it isn't crystal clear which one the programmer intended to win, Magpie just throws its hands up and pleads confusion.
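Magpie's dispatch rules don't exist in Python, but the core idea (collect specializations under one name, then pick the most specific matching pattern at call time) can be sketched with a tiny dispatcher. Everything below, the Multimethod class, the value/type decorators, and the rank numbers, is my own illustrative construction, not Magpie's implementation; it only models the "value pattern beats type pattern" part of the ordering described above.

```python
# Minimal multimethod sketch: lower rank = more specific pattern.
VALUE, TYPE = 0, 1

class Multimethod:
    def __init__(self, name):
        self.name = name
        self.methods = []  # list of (rank, predicate, body)

    def value(self, v):
        """Register a specialization for an exact value (a 'value pattern')."""
        def deco(fn):
            self.methods.append((VALUE, lambda a: a == v, fn))
            return fn
        return deco

    def type(self, t):
        """Register a specialization for a type (a 'type pattern')."""
        def deco(fn):
            self.methods.append((TYPE, lambda a: isinstance(a, t), fn))
            return fn
        return deco

    def __call__(self, arg):
        matches = [(rank, fn) for rank, pred, fn in self.methods if pred(arg)]
        if not matches:
            # Analogous to Magpie's NoMethodError
            raise LookupError(f"no method {self.name} for {arg!r}")
        # "Linearize": the most specific matching pattern wins.
        return min(matches, key=lambda m: m[0])[1](arg)

double = Multimethod("double")

@double.type(int)
def _(n): return n * 2

@double.type(str)
def _(s): return s + s

odd = Multimethod("odd")

@odd.value(0)
def _(n): return False

@odd.type(int)
def _(n): return not odd(n - 1)

print(double(3))     # 6
print(double("ma"))  # mama
print(odd(0))        # False: both patterns match, the value pattern 0 wins
print(odd(3))        # True
```

Note how odd(0) matches both specializations and the value pattern wins, mirroring the odd? example above.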
http://magpie.stuffwithstuff.com/multimethods.html
CC-MAIN-2014-15
refinedweb
2,387
69.41
Before being able to work on this problem, you should know what we are talking about. In a few words, a BST is a tree in which, for each node, it holds that the values in its left subtree are strictly smaller than the node's value and the values in its right subtree are strictly bigger. In Python, if a node is represented in this way class Node: def __init__(self, data): self.data = data self.left = None self.right = None We can check a tree by passing its root to a recursive function like this one: def is_bst(node, left, right): # 1 if node is None: # 2 return True if not left < node.data < right: # 3 return False return is_bst(node.left, left, node.data) and is_bst(node.right, node.data, right) # 4 1. The first parameter, node, is an instance of the class Node. The other two parameters are the limits between which the value is supposed to lie. 2. An empty tree is a BST. This comes in handy to manage the leaves of the tree. 3. The node value should lie in the specified interval, otherwise this is not a BST. 4. If the current node passes the check, we explore the rest of the tree. On the left we restrict the interval by cutting it on the right; on the right we cut it on the left. A couple of questions: what should we pass as left and right for the root? And what if we don't want to accept an empty tree as a valid BST? I think that in Python a good solution could be having a starting call like is_bst(root) that does not check for an interval and defines on its own what to do in case of None, and then calls the above is_bst() for all the other nodes.
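To make the post's sketch concrete, here is a self-contained version with one possible answer to the first question it raises: use minus and plus infinity as the root's bounds, supplied through default arguments (my choice for this sketch, not the post's), so a bare is_bst(root) call works. The example trees are mine.

```python
import math

class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def is_bst(node, left=-math.inf, right=math.inf):
    if node is None:                      # an empty (sub)tree is a BST
        return True
    if not left < node.data < right:      # value escaped its allowed interval
        return False
    return (is_bst(node.left, left, node.data) and
            is_bst(node.right, node.data, right))

# A valid BST: 4 with children 2 and 6.
good = Node(4)
good.left, good.right = Node(2), Node(6)

# Invalid: 3 sits under the right subtree of 4. Its inherited
# interval is (4, 6), and 3 fails the lower bound.
bad = Node(4)
bad.left, bad.right = Node(2), Node(6)
bad.right.left = Node(3)

print(is_bst(good))  # True
print(is_bst(bad))   # False
```

Note that with these defaults an empty tree (is_bst(None)) still counts as valid; rejecting it would need the wrapper the post describes.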
http://thisthread.blogspot.com/2017/02/hackerrank-trees-is-this-binary-search.html
CC-MAIN-2018-43
refinedweb
295
83.86
Summary: Learn about using Windows PowerShell and specifying different calendar types to use with dates. Microsoft Scripting Guy, Ed Wilson, is here. Today is day four of my PowerShell Essentials for the Busy Admin series of webcasts. The series has been a lot of fun, and the feedback so far has been great. It is encouraging to see the level of interest in this series. In addition, there have been a decent number of questions via the scripter@microsoft.com email alias about the series of live meetings, in addition to the 2012 Scripting Games. Today is the Ides of March. For the Scripting Wife, it means March Madness is underway. For me, I think back to the play by William Shakespeare, Julius Caesar. Beware the Ides of March… If you want to work with calendars in Windows PowerShell, perhaps you should also beware of the Ides of March—or at least be aware of how to work with different calendar types. Anyway, the .NET Framework classes are fun to play with. If I want to create a specific date, I can use the constructor for the System.DateTime .NET Framework class and create a date. To do this, I use the New-Object cmdlet to specify the name of the .NET Framework class I want to use, and I pass the arguments for the constructor to the ArgumentList parameter. With the System.DateTime .NET Framework class, there are several different constructors. As a matter of fact, MSDN lists 11 different constructors on the System.DateTime details page. To create a new DateTime object and specify the year, month, and day requires me to use a particular constructor. Using Windows PowerShell to create a new instance of the System.DateTime class by passing the year, month, and day is shown here. PS C:\> New-Object system.datetime -ArgumentList 2012,3,15 Thursday, March 15, 2012 12:00:00 AM Of course, I can do the same thing by using the Get-Date cmdlet. I am not certain it is any easier, but it is a bit less typing. It is definitely easier to read the Get-Date command.
This command and the output associated with the command are shown here. PS C:\> Get-Date -Month 3 -Day 15 -Year 2012 Thursday, March 15, 2012 5:29:42 PM The easiest way to create a specific date is to cast a string to the DateTime object. This technique is shown here. PS C:\> [datetime]"3/15/12" Thursday, March 15, 2012 12:00:00 AM All three of the previous examples, however, use the same calendar—the Gregorian calendar. A problem arises if I need to specify a particular calendar. To do this, I need to first create an instance of the specific calendar I want to use. Next, I use the constructor for the System.DateTime class that permits supplying the specific calendar. Because today is the Ides of March, it makes sense to use the Julian calendar as an example. Julius Caesar ordered a calendar reform that resulted in the creation of the Julian calendar. The Julian calendar is the predecessor of the Gregorian calendar. Luckily, the .NET Framework makes these types of conversions from calendar to calendar really easy. I only need to create a new instance of the JulianCalendar class to be able to use it in the constructor to create a Julian calendar type of date. Once again, I use the New-Object cmdlet to create an instance of the class. But I do not need to pass any arguments to the JulianCalendar class. Therefore, no ArgumentList parameter is needed. In the code that follows, I use the New-Object cmdlet to create a new instance of the JulianCalendar class, and I store the resulting calendar in the $j variable. I then use this calendar in the New-Object command that creates a specific date. The result is a specific date, reflected in a specific calendar. These two lines of code are shown here. $j = new-object system.globalization.julianCalendar New-Object system.datetime -ArgumentList 2012,3,15,$j There are lots of different types of calendars defined in the .NET Framework. All of the calendars reside in the System.Globalization .NET framework namespace. 
To get a feel for the different calendars, spend a bit of time on MSDN and review the various calendar classes. In the image that follows, I first create a DateTime object that represents March 15, 2012, and I display that DateTime object to the Windows PowerShell console. I then create an instance of the JulianCalendar class, and store that calendar in the $j variable. I next use the New-Object cmdlet and create the date March 15, 2012 by using the Julian calendar. These commands and the associated output are shown in the image that follows. If you want to know what a specific calendar does, you can create the calendar object, store it in a variable, and then display the contents of that variable. Information returned by doing this includes the minimum supported date and the maximum supported date. In the image that follows, I list information for both the Julian calendar and the Gregorian calendar. Anyway, beware the Ides of March—especially if you are not sure what calendar is being used. A quick comment on international dates. Regarding [datetime]"3/15/12": the syntax shows month/day/year and results in Thursday, March 15, 2012 12:00:00 AM. Living in the UK I'm more used to day/month/year, but if I try PS> [datetime]"51/3/12" Cannot convert value "51/3/12" to type "System.DateTime". Error: "String was not recognized as a valid DateTime." I have to do [datetime]"3/15/12" Over the years I have found that it's safer for me to do [datetime]"15 March 2012" as it always works. This is caused by the culture settings in PowerShell PS> Get-Culture LCID Name DisplayName —- —- ———– 2057 en-GB English (United Kingdom) PS> Get-UICulture LCID Name DisplayName —- —- ———– 1033 en-US English (United States) @Richard.Siddaway That saves me the need to note the locale dependency in the article's example. There is a solution: use ISO format. This is unambiguous with a four-digit year: PS> [datetime]"2012-08-10" will always give you midnight 10 August 2012.
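The locale trap in the comments above is not PowerShell-specific. As a comparison point, here is a small Python sketch (my own, not from the article) showing the same ambiguity and the ISO 8601 escape hatch:

```python
from datetime import datetime

# "3/15/12" is only meaningful if you already know it is month-first:
us = datetime.strptime("3/15/12", "%m/%d/%y")

# The same date written day-first needs a different explicit format:
uk = datetime.strptime("15/3/12", "%d/%m/%y")
print(us == uk)  # True: both are 15 March 2012, because each format was spelled out

# ISO 8601 with a four-digit year is unambiguous in any locale:
iso = datetime.fromisoformat("2012-03-15")
print(iso.date() == us.date())  # True
```

Spelling out the format (or using ISO) plays the same role as the "[datetime]"15 March 2012"" workaround in the comment thread: it removes any dependence on culture settings.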
Hi Ed, a good short article that explains some details of dates. It is quite interesting what we can do with dates, and we should pay attention to date formats! @Richard: Right! Living in Germany, I encounter the same problems you do! But there is one thing I can assure you of … you typed in: [datetime]"51/3/12" This will never be a valid date … not in the UK, not in the US and not here :-))) Klaus (Schulte)
https://blogs.technet.microsoft.com/heyscriptingguy/2012/03/15/beware-the-ides-of-marchbut-by-whos-powershell-calendar/
CC-MAIN-2018-13
refinedweb
1,135
64.71
Agenda See also: IRC log <trackbot> Date: 08 March 2012 dschulze@adobe.com <scribe> scribe: krit <scribe> scribenick: krit ed: first topic, time changes ... is Australia with the latest change? ... in 3 weeks' time all will have transitioned. Then we review changes to times cyril: I think we should switch to morning in Europe Tav: doesn't work for the damn Americans <ChrisL> lol cyril: doesn't work for the east coast cabanier: what about the west coast cyril: haha Tav: 1h later in Europe, one hour later in Australia last year <ed> shows 20.00 UTC May 1, 2012 krit: cabanier: 1 hour later is fine for us <ed> so, sydney 6am... maybe not great cabanier: 1 hour earlier is not that good for Australia cyril: we should check with cameron krit: stay for now? ed: stay until 3 weeks from now, when all countries have switched cyril: 2 more weeks ed: keep the time for 2 more weeks … more discussion about the time shift ed: no decision now? all: let's stay for the next 2 weeks resolution: keep the time for 2 weeks <scribe> ACTION: erik will send a mail for time shift and time change [recorded in] <trackbot> Created ACTION-3245 - Will send a mail for time shift and time change [on Erik Dahlström - due 2012-03-15]. next topic: template for tests ChrisL: mails on mailing list <ed> Tav: cameron said that if you put an html tag inside svg and try to add html inside svg you'll break ChrisL: cameron suggested a lot more changes ... he seems to want to have a whole new layer Tav: link and meta are not part of svg ChrisL: we had a resolution to add them ... anyway we should ask peter ... we could add it to our svg namespace ed: html5's parsing algorithm would break out into html mode if we didn't parent the link and meta elements inside a foreignObject ChrisL: it seems to be a clear resolution that all changes might break html5 parsing.
for those tests we need something different Tav: cameron's suggestion is a bit simpler ChrisL: i didn't like it <ChrisL> I think he has oversimplified and it will lose functionality Tav: he is not removing anything ... he just implements it in another way ... he is just not putting html5 into a head ?? Tav: the problem with the meta tag. ... he still has a data tag Tav: if an html5 parser finds a link in a svg tag ChrisL: what happens? Tav: I don't know ChrisL: html parsers don't care about the tags at all ... a framework in place to start writing tests ... otherwise they don't get written ed: I prefer the simplified structure ... no namespaces Tav: I agree ChrisL: we had the test criteria … where did it go? Tav: the test harness has to be modified to do that ChrisL: is that in the head and a meta ... simplicity is good but Tav: link to the different parts of the spec ... and which parts it needs to pass <Tav> Tav: test assertions: ChrisL: it is all in the content attribute ... no markup at all Tav: above in the spec links, there is a link that tells you what part of the spec gets tested ChrisL: that's fine <cyril> seems broken <ed> the tags that break out of "foreign" mode (aka SVG in html): (link is not among those) ChrisL: ok ed: meta does break out Tav: link is ok ed: yes Tav: you still need sth for the ... description is ok ed: could we use the svg metadata element? ed: just sth to put test metadata on ... should be fine ChrisL: would be fine then ... peter should know what we want to do ed: metadata should replace the meta element ... link lists all the tags that break out ... the link element would get unknown svg element ... needs verification ChrisL: so no other namespace, no head ... just metadata? <ed> ed: I don't see a circle in opera and safari <ChrisL> <!DOCTYPE html> <ChrisL> <svg> <ChrisL> <link/> <ChrisL> <circle r=200> ChrisL: I see the circle in FF with this example Tav: I don't <ChrisL> ff11beta6 ed: I don't see a circle either Tav: now I see ...
I put a space before the slash <ed> cyril: we should move on ChrisL: we should have a decision ed: link is fine resolution: we will use the link element as proposed. Use metadata instead of head and meta <ed> example: instead of <meta name="flags" content="TOKENS" /> we will use <metadata name="flags" content="TOKENS" /> Tav: what cameron suggested with metadata Tav: basically <Tav> ed: something else on his proposal Tav: yes, the very last thing ... that we disagree with too ... other issues with copyright… a list that ChrisL sent ed: Tav'll send a final proposal to the list <ChrisL> ed: peter should review ChrisL: ed can you edit the wiki? ed: ok <ChrisL> s/ ed can you edit the wiki?/I am editing the wiki ChrisL: metadata for flags? <ed> <metadata name="flags" content="TOKENS" /> Tav: the desc tag for the text description ed: should it be long or short description? Tav: as long as you need ChrisL: if we agree, then we ask peter Tav: ok ... what about copyright ed: use BSD for the suite ChrisL: put the license into one place and link to it Tav: I agree <scribe> ACTION: ChrisL will edit the wiki page and mention how to add license and copyright [recorded in] <trackbot> Created ACTION-3246 - Will edit the wiki page and mention how to add license and copyright [on Chris Lilley - due 2012-03-15]. Tav: what about revisions? ... we have a version control system, so no extra revisions ChrisL: we don't put the number in the file ... not productive to do that ... the same for the test frame ed: agree Tav: I tried to add the testing to the repo, but it failed ed: ChrisL, do you edit the wiki and mention what we do about revisions? ChrisL: I will … discussion about problems for Tav to submit stuff ed: back to the requirements <ed> ed: resolve accept the structure of tests <ed> agreed to the section: Structure after 8 March 2012 telcon ed: anyone against this template?
silence resolution: Accept the testing template <cyril> ChrisL: we should drop xlink on <a> ed: xlink:role, xlink:arcrole and xlink:title might make sense ChrisL: the title element is better than xlink:title ... we should use the title element ed: no strong objections ... fine for me resolution: drop xlink attributes role, arcrole, title <ChrisL> resolution: SVG2 will drop xlink attributes role, arcrole, title resolution: SVG 2 will drop the xlink attributes role, arcrole, title <ed> ChrisL: improved text is fine <cyril> ed: we should keep it, but remove xlink:title from the spec resolution: SVG 2 will include improved text for indicating links <ChrisL> resolution: svg2 will port the text from svgt1.2 ed: next one <scribe> new scripting features ed: svg tiny is simpler ... on media fragments ... they are not incompatible <ChrisL> resolution: svg2 will merge the svg1.1se text and the svgt12 text on fragment identifiers link traversal <ChrisL> oh and add media fragments ChrisL: what about media fragments? ed: we should look at it ... is part of the same feature ... more the same thing ... rephrase resolution? <ChrisL> ACTION: chrisl to merge the svg1.1se and svgt1.2 fragment identifier text and consider adding in media fragments for partial images [recorded in] <trackbot> Created ACTION-3247 - Merge the svg1.1se and svgt1.2 fragment identifier text and consider adding in media fragments for partial images [on Chris Lilley - due 2012-03-15]. ed: processing inline scripts <ed> cyril: I expected that there were differences between svg 1.1 and 1.2 tiny ChrisL: html5 has similar things ... we should be compatible with html5 ed: reasonable ChrisL: we should look at it in detail first ... needs an action cyril: we already have a resolution for async, so ed: the section on tiny is very short ... you have to look at the type attribute and so on ... 
there is nothing similar in SVG 1.1 resolution: SVG 2 will define how inline scriptable content will be processed, in a way compatible with HTML5 <ed> ed: new scripting features ... we should port over 'script element text' resolution: SVG 2 will merge SVG1.1SE and SVG 1.2 Tiny on script element text ed: next one change erik to SVG WG and copy paste resolution on wiki page <ed> <cyril> RESOLUTION: SVG 2 will use the relevant parts from 1.2T and align with the html script element. ed: next is animation <ed> <ChrisL> erik: svgt1.2 defines what happens when there are errors in a begin-value-list ed: tiny is more specific about what to do when an attribute has wrong content resolution: SVG 2 will apply changes of SVG 1.2 tiny on animation module ... SVG 2 will apply changes from SVG 1.2 tiny to the SVG animation section ed: fonts have no modifications in tiny <ed> ed: extensibility: we have the xlink:href attribute on foreignObject krit: do we have the same rules as for iframe if we support xlink:href? ed: customers want to use it for some magic things, but basically as a plugin frame ... it would be fine to have it. resolution? krit: so, I should get an action to check security problems cabanier: most rules of iframe have to apply here as well resolution: SVG 2 will support xlink:href on the foreignObject element after security verification <scribe> ACTION: ed will verify that xlink:href won't introduce security issues on foreignObject [recorded in] <trackbot> Created ACTION-3248 - Will verify that xlink:href won't introduce security issues on foreignObject [on Erik Dahlström - due 2012-03-15]. 
10 items left + 11 without decision <ed> trackbot, end telcon Scribe: krit ScribeNick: krit Present: +1.415.832.aaaa +61.2.980.5.aabb krit cyril +33.9.53.77.aacc Tav ed ChrisL Date: 08 Mar 2012 People with action items: chrisl ed erik
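Pulling the decisions in these minutes together, the agreed test-file structure might be sketched roughly as below. The element roles (link for spec references, metadata for flags, desc for the description, no namespaced head/meta) come from the discussion above; every attribute value, and the rel/href details on link, are illustrative assumptions rather than agreed wording.

```xml
<!-- Sketch of the agreed SVG 2 test structure. Element roles are from
     the minutes; all attribute values here are placeholders. -->
<svg xmlns="http://www.w3.org/2000/svg" width="480" height="360">
  <title>Example test</title>
  <!-- flags carried on metadata instead of an html meta element -->
  <metadata name="flags" content="TOKENS"/>
  <!-- link points at the part of the spec the test exercises -->
  <link rel="help" href="https://www.w3.org/TR/SVG2/"/>
  <desc>A text description of what the test checks, as long as needed.</desc>
  <circle cx="240" cy="180" r="100" fill="green"/>
</svg>
```

Because link and metadata are not among the elements that break out of HTML5's foreign-content mode, a fragment like this should also survive inline in an HTML document, which was the point of the parsing discussion above.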
http://www.w3.org/2012/03/08-svg-minutes.html
Betsy Aoki's WebLog - Community Program Manager (2009-05-12T17:50:00Z) Infographic of Microsoft Technologies<p>Loved this - found on <a href="">Steve Clayton's Blog.</a></p> <p> </p> <p><img src="" alt="" width="540" /></p> <p> </p> <p>Microsoft Technologies - An infographic by the team from <a href="">Next at Microsoft</a></p> <p>Hi-res PDF is <a href="">here</a> (to read the tiny type :) ).</p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy on the treadmill desk: first day back after long absence<p>Hmm, well. I can say the Treadmill desk prompted me to lose 5 lbs in about as many weeks. That's the good news.</p> <p>The bad news is that I was also wearing the wrong orthotics and the wrong shoes (orthotics meant to go in dress shoes only), and because the treadmill has no cushion I went for more cushiony Brooks (Glycerin instead of Trance for you running wonks and wonkettes).</p> <p>The 24+ hours in one week of Skyrim/treadmill walking didn't help either. So I had to endure (as did my coworkers) 3 weeks of Das Boot, and a cortisone heel shot, to combat the plantar fasciitis that did not succumb to mondo ibuprofen and too much SXSW walking. (see a pattern here?)</p> <p> </p> <p><a href=""><img border="0" alt="" src="" /></a></p> <p>The reason for Das Boot post-shot is, since I apparently can't be trusted not to walk around (and I do have a community outreach job to do), I need to be wearing something that keeps the pressure off that foot as much as possible. Aya has already mocked me for making a velcro "swish swish" sound when I walk - hardly menacing when I was going more for Dark Lord of the Sith style.</p> <p>So, after 3 weeks of babying and mocking and no SXSW-style conferences, Das Boot has come off and I am flat-footed again (wearing Brooks Trance with new orthotics).</p> <p>I did invest in this little number - a calf stretcher. 
For those of you who like the stairmaster or naturally have tight calves, this wedge thing is a godsend in that the incline forces the stretch, and varying your position while standing on it (or adjusting the incline) does even more for you as your muscles loosen.</p> <p> <a href=""><img border="0" alt="" src="" /></a></p> <p> </p> <p>I don't believe my results are that typical - I have fussy feet, and overdid it, and am trusting that a gradual buildup of walking will do it. As I now have the right shoes, right orthotics, and soon, the right foot to put forward on it.</p> <p>Besides, the rule was - no Skyrim shall be played except on that treadmill, and I intend to finish that game in 2012!</p> <p>Cheers!</p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy from the past - the computers of 1985<p>A recent trip to visit my parents unearthed gems of a forgotten era. (Yes, I realize bowties are coming back and possibly never left, but ponder the Boston Computer Society publication aka Computer Update).</p> <p>First we ask ourselves - will online shopping replace a trip to the store? Note the Nov/Dec 1985 date up top here. :)</p> <p><a href=""><img border="0" alt="" src="" /></a></p> <p>Dude's socks are atrocious, but note the floppy disk-related device (word processor?) he's got in his hot little hands.</p> <p> </p> <p>Next, the goods from my own days of yore - my first computer. The DEC VT 180 (<a href="">made by Digital Equipment Corporation</a>).</p> <p>We couldn't find the dot matrix printer that went with it (remember the paper with the holes streaming along either side? That was it!).</p> <p>The left "A" drive was for the boot disk. The right was to store your word docs and BASIC programs on. 
It whirred and made an awful racket.</p> <p>What you can't see very well is the nonsense on the screen - in theory it would be processing the boot disk but ours was so old...</p> <p> </p> <p><a href=""><img border="0" alt="" src="" /></a></p> <p> <a href=""><img border="0" alt="" src="" /></a></p> <p>Printing is blurry on the post-it but ... things were much simpler then. I think we determined that it had 64K of RAM.</p> <p>Other people had Ataris and Commodores - I had a computer for use from the Boston area.</p> <p>That's why it's so fun to work on the <a href="">Bing Booster</a> startup program in Boston - look how far we've come! :)</p> <p> .</p> <p> </p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy Observations of an Engineer on a PR team<p><a href=""><img style="margin: 5px 10px; border: 0px currentColor; float: left;" alt="Betsy with Business Chicken Photo" src="" width="155" height="191" /></a>I)</p> <a href="">Stefan</a>'s level of hair volume or <a href="">Aya</a>'s level of coolhunting event intuition </p> <p.</p> <p>1) <strong>As a program manager, dates are important</strong> – for internal political reasons, for release to manufacturing or business reasons, or for point of pride – a well-run project with no whammies hits its dates. <strong>As a PR person, dates become everything, good or bad.</strong>.</p> <p>2) <strong>Upgrade your gadgets and your wardrobe.</strong>.</p> <p>3) <strong>Only sometimes is the story about the sausage filling</strong> .</p> <p>4) <strong>Like software methodology, there are formulas, metrics, and fads in PR – and they still manage to fail to capture what makes greatness.</strong>.</p> <p>5) <strong>PR is more of a <a href="">female-dominated industry</a>, with <a href="">attendant wage issues</a>.</strong>.</p> <p>6) <strong>Just as nerds have taken over the mainstream, so has PR – in its own way.</strong>)</p> <p>7) <strong>Attention to detail transfers over. 
</strong>.</p> <p>8) <strong>Two-drink minimum observed by Dilbert, continues to be observed.</strong>. :)</p> <p>9) <strong>There’s more swag in PR and marketing.</strong>.</p> <p>10) <strong>Making it all up, or finding a new spin, is not a bad thing.</strong>.</p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy hackathons are not nonsense - or how a short term art salon/NaNoWriMo is pragmatically useful<p><a href=""><img border="0" alt="" src="" /></a></p> <p><span style="font-size: x-small;"><em>View from coffee line, rockstar TEDActive barista making my drink, 2011. You learn a lot by watching the process!<br /></em></span></p> <p>Thanks to a link from Dare Obasanjo, I was reading Scripting News' "<a href="">Hackathons are Nonsense</a>".</p> <p?</p> <p.</p> <p.</p> <p.</p> <p.</p> <p <a href="">NaNoWriMo</a> -? :)</p> <p>In a few hours, I'll be flying to TEDActive where we (Bing) have an interesting hackathonish type project going on, and THEN I'm going to Angelhack in Boston (see <a href=""></a>).</p> <p>PS if you want to see how the Wall Street Journal views hackathons going mainstream, they've got a decent overview article on that here: <a href=""></a>.</p>
Nothing here constitutes medical advice - talk to your doctor before making these changes.</p> <p><strong>Standing desk</strong></p> <p <a href="">"Is Sitting a Lethal Activity" article in the New York Times</a>.</p> <p>But I knew from working CES booths that all-day standing can be painful, so I read <a href="">Gina Trapani's switch to a standing desk</a> to get a feel for what it would be like to take this on. I consulted <a href="">Jeremiah Andrick</a>, who works from home when not traveling and swears by his setup (my inquiries and others' led to this <a href="">blog post of his</a> about it).</p> <p :) ).</p> <p><a href=""><img border="0" alt="" src="" width="296" height="191" /></a> </p> <p> </p> <p><strong>Observations after a few weeks:</strong>.</p> <p.</p> <p>One thing I did do after a day in nice designer boots (usually I wear sneakers to work) was personally invest in an <a href="">anti-fatigue mat</a>. (Microsoft might provide one but I didn't bother to check.) </p> <p.</p> <p.</p> <p>But for home use, I had to go a different route.</p> <p> <a href=""><img border="0" alt="" src="" width="296" height="526" /></a></p> <p><strong>The Skinny on My Treadmill desk</strong></p> <p.</p> <p>If you have money to blow, and/or need a wider desk, there's the original <a href="">Steelcase Walkstation</a> variation of the treadmill desk.</p> <p><a href=" build a desk setup">Instructables: gym treadmill in place, now you need to build a desk setup</a></p> <p><a href="">Folding option</a> - if you have a folding treadmill and need to fold the desk away too</p> <p><a href="">If you have an old Ikea Jerker desk</a></p> <p>If you can't find a Jerker, a friend suggested experimenting with Ikea's <a href="">Frederik</a> (his coworkers are standing desk users; what we aren't sure is how it may fit your treadmill). 
I had an original Ikea Jerker desk, but not the kind that would go high enough for treadmilling at my height.</p> <p><strong>If you aren't going to buy the treadmill and desk as a specific packaged set, some tips:</strong></p> <p>Are you going to need sitting capability too or is this for treadmilling only? Fixed height is cheaper cause you aren't paying for a motorized desk.</p> <p.</p> <p <a href="">Trekdesk</a> . Make sure the measurements really work for you, with some buffer built in.</p> <p><a href=""><img border="0" alt="" src="" width="397" height="358" /></a></p> <p>What I ended up going with was buying a treadmill "Tread" from <a href=""></a> and a <a href="">Safco standing desk</a>.</p> .</p> <p><strong>Observations on the home treadmill desk</strong></p> <p <a href="">Signature desk</a>. </p> <p. :)</p> <p.</p> <p> </p> <p> </p> <p> </p> <p> </p> <p> </p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy infidelity - sign of the times<p><a href=""><img border="0" alt="" src="" width="367" height="251" /></a></p> <p>I've spent a year being massively unfaithful to this blog. :(</p> <p>Last year (and to be fair the year before), it was more about twitter growing in influence and eating away my will to write long pieces.</p> <p>This year, it's been more about posting other places - on the Bing blog about our <a title="" href=" hackathon">social hackathon</a> or for <a title="" href="">Blogworld</a> about cause marketing. I've written most recently a slew of posts around Bing's work with startups at Bing Booster - the <a title="" href=" Casting call">Seattle Casting call</a>, or <a href="">Geeks on a Plane: Asia</a> . Oh yeah, and I'm doing a chickenblogging experiment unrelated to work (though tangentially related to Chris Pirillo) at <a href=""></a> . But that's because I get so many questions in social media channels about the chickens!)</p> <p>And part of it, the bar for this blog is getting higher. 
In the early days of blogging, typos were a sort of badge of authenticity and real-time meaning. With twitter and SMS and fat-fingered phone statuses, not only do typos in blog posts seem quaint, they seem inadequate. :P I can't compete with the autocorrect sites either because the nature of this blog demands it be safe for work! :)</p> <p>I don't believe blogs as a medium are dead. I personally am not dead, and I'm (possibly due to my lack of self-actualization and enlightenment) not yet willing to call this blog dead either. But more and more I feel like it's the place where you have to get Betsy brainmatter unrelated to what's going on in the Bing evangelism world, the twitter world, the FB world, and the "guest blogger around startups" world. Because all those worlds have their outlets and you see enough of me there. The key thing about this blog is that you can email me through the contact list. Straight into my work inbox (though you might want to tweet at me to remind me, if you think it might have been too long and/or caught by Microsoft corporate junk filters).</p> <p>That means the cadence of the writing here will of necessity slow down and the posts will all end up being longer. And topicwise, more about cyberculture and social media and women in tech. So for 2012, that's what you will see here. I've got a decent lifehackerish post in the works up next that will be my first of 2012. It will probably take me these two weeks (on vacation) to finish it (my how things change, right - blogs used to be our twitter).</p> <p>So more in a bit, and thanks for your patience! 
2012 will be an amazing year!</p> <p> </p> <p> </p> <p> </p> <p> </p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy bet - why Bing reaches out to startups, and why a conference like this is fun in general<p>Last Friday I had the privilege of both speaking and mentoring at the inaugural BlogHer | bet conference, both as a Bing PR/marketing person and as a social media/community application/games program manager. It's an unusual conference and so I'll lay out a bit of the format and then give some flavor of the event, which was held at the Microsoft campus in Silicon Valley.</p> <p> <img height="255" width="295" src="" border="0" /></p> <p>The number of attendees was gated on the number of mentors - each person got a one hour slot to pitch their business and ask for technical/marketing/funding advice, and the rest of the time was filled with panels to help you get a leg up technically or investor-wise. During the time I mentored my two people (one per hour), there was a startup "speed-dating" sort of circle where folks got practice pitching and introducing themselves - something I've seen before at BlogHer conferences but especially important here, where practicing that pitch for your business was the point. Because I have been to a ton of technical tracks at conferences, I decided to focus on the entrepreneurial/funding tracks for my own development and of course, I was in the metrics panel close to the end of day. </p> <p>What I love about the BlogHer founders' mindset - Lisa Stone, Jory Des Jardins, and Elisa Camahort Paige - is that each project they do has as its subtext giving women a voice and empowering them to sit at the table. Bet was designed to answer the questions "where are the women with big tech ideas," "where are the women who want to found tech companies" - and create a conference that gives those women a leg up. 
</p> <p>Here's what I learned...</p> <p><em>About mentoring...</em> the mentees were nervous to meet us, but being a useful mentor was actually nerve-wracking in its own way. I was fortunate in that I had different experiences because my mentees asked questions not only about social media marketing but also forum hosting, SEO/SEM, Web site and book cover design, holding events and managing email membership lists, and in general the refinement of their pitch. I texted my boss Stefan at a break that while I mock PR/marketing as an engineer (and Stefan's hair to boot), the stuff I've learned directly and by osmosis by sitting for two years on a PR team served me in good stead. You need to be able to explain your business and why you care about it in a paragraph or less. I told my mentees "the guys I meet at tech conferences can all do this. You gotta do this too."</p> <p><em>About pitching to VCs...</em></p> <p><strong>You need a strong team.</strong> VCs, unless they are already friends, don't really go for one-woman shows the way they go for an exec team with a business plan and a rock star pedigree. There is no such thing as a wallflower CEO - the CEO has to be the one making the pitch (though it would help to have the CTO or other folks who are subject matter experts in the financials or the product there for demo). </p> <p><strong>Know your market/competitors - even if you are so new you think you don't have any.</strong> There are different ways to make your case, but there's no such thing as a <a href="">pitch deck</a> without a look at competitors. Think of it this way - the world is getting along fine right now sans your business (or they think they are) - you need to include your thinking around competition or potential competitors early, even if it's in the appendix. 
</p> <p><strong>People, Terms, Valuation:</strong> Lisa Stone, CEO of Blogher, relayed this great advice she got from <a href="">Caterina Fake</a> (Flickr, Hunch, Chairman of the Board for etsy.com) when BlogHer itself was getting venture funding - prioritize yourself by People, Terms, and Valuation. If you assemble the right all-star team who is 100% behind your business idea, and get your business idea to a provable state faster, you will attract the right terms from VCs and a better valuation follows from it. If you are only interested in your valuation and the money you take from the company before the people, you are more likely to run into trouble and an outcome you don't want.</p> <p><strong>If you are lucky enough to get past the courting stage and are talking terms, you can never invest too much money in financial and legal advice.</strong> Have these professionals run models for you so that you know, with each round of venture funding, how much you and your employees/prior investors' shares are diluted. VC investments - the rule of thumb is they want a return of 10x their investment (if not more). So if you are asking for $1 million in funding, they want you to have a company valuation of at least $10 million - i.e., they now own a stake in a $10 million business. I won't go into the lengthy legal terms covered in the term sheet panel but for folks close to this stage, it's likely worth the virtual access fees to see it. <a href=""></a> .</p> <p><strong>Where the pitch meets the money and ownership percentages is risk.</strong> If you already have a profitable business, with customers, and your goal now is scale, that's a way different risk than the company with a new idea that has no customers yet and wants funding to acquire some. Every time you pass a major milestone for the company, the valuation goes up. You may get told to come back to ask for funding once you have done X (gotten customers, built a prototype, etc). 
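That post-money dilution arithmetic can be sketched as a tiny model. This is a hypothetical illustration using the $1 million / $10 million numbers from the example, not the panel's actual model; real term sheets add option pools, liquidation preferences, and later rounds that change these results.

```python
# Hypothetical sketch of simple post-money dilution math.
# Real deals add option pools, preferences, and follow-on rounds.

def investor_stake(pre_money: float, investment: float) -> float:
    """Fraction of the company a new investor owns after the round."""
    post_money = pre_money + investment
    return investment / post_money

# $1 million raised on a $9 million pre-money valuation yields a
# $10 million post-money company, as in the example above:
stake = investor_stake(pre_money=9_000_000, investment=1_000_000)
print(f"investor owns {stake:.0%}, founders keep {1 - stake:.0%}")
# prints: investor owns 10%, founders keep 90%
```

Running the same function round after round, with the founders' fraction multiplied down each time, is the kind of dilution model the advisers above would build out in full.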
Ironically, bigger/well-known firms tend to be "softer" about terms than smaller ones - they know that if the founder makes a great business that's the only way they win - having controlling stakes in a tanking company serves no one. </p> <p><strong>The best VCs help your business with more than money -</strong> the best VCs help you shape your company into a better one and it's not supposed to be adversarial. While founders/owners may be skittish about giving up control at all, the better relationships are more like partnerships.</p> <p><strong>Metrics and measurement panel:</strong></p> <p>This was fun. I sat next to Amy Chang of Google Analytics and Laura Fitton (@pistachio), CEO of <a href=""></a> - Laura took notes and tracked all the tools we mentioned and put them <a href="">here</a> . I mentioned <a href="">The Goodness Engine ebook</a>, which was a cross-industry project to help Donorschoose.org, but which had lessons that apply to startups and new businesses (because like non-profits, they start with no money and still have to drive traffic, engage customers, get donations, decide if they need an API or other feeds coming from their site, etc). The main reason I mentioned it in this context though is that the ebook shows how much metrics homework Donorschoose.org did, before baring their problems to our team of 30+ industry experts. Some of the metrics Donorschoose.org offered could be found in Google Analytics or on the twitter tools Laura mentioned - but some of them were innate to their unique business premise and value proposition. You really need to focus around your customers and where your cash flow for your business is coming from - if it's online advertising, then tools that measure user behavior on Web sites and actions they take, will be key. 
But only you and your database will know how many actual sales you got or consulting appointments you made - it's not just going to be handed to you by an off-the-shelf tool.</p> <p>Why do I love this kind of thing? Well, aside from the fact that helping women in technology rocks, folks from Bing benefit in talking with startups and new business folks as much as they benefit from rapping with us. Yes, I reminded folks that Bing webmaster tools (<a href=""></a> ) is where you can learn how to ensure your site is noticed by Bing, and that helps get the word out beyond the SEO conferences, but it wasn't about shilling Bing so much as absorbing the energy and feeling part of that upstart uppityness that has to accompany any new venture. Bing still has a way to go, but like any startup hopeful, we've got to justify spending and deliver on promise. It always feels good to tap into the energy of the upstart tribe :) </p> <p>Live it vivid!</p> <p> </p> <p> </p> <p> </p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy and Facebook ate my blog (or possibly my brain)<p>I'm writing this from a skyscraper with a big window where I see exactly no snow falling, despite dire predictions. As we all know, the Seattle area can't handle snow - so much so that the *idea* of snow keeps drivers at home.</p> <p>This is a somewhat decent metaphor for the idea that the micro-blogging or status-update applications will take over blogging functions in society, much like TV was supposed to destroy radio.</p> <p>But I have to admit that my purposeful blogging has gotten lighter, even as my thinking gets heavier. True, I did write these pieces for Blogworld - one about <a href="">how effective community outreach for a cause can be in turning up engaged users</a> and the other that represents <a href="">my top 10 learnings while building social or community-based applications, and doing social media marketing. 
</a> <a href="">Bing and the future of search on the Bing blog</a> too, and it makes more sense in most cases to post it there.</p> <p. </p> <p>We all know social media channels can be a distraction and all of them have gotten more popular and noiser. <a href="">Internet usage is up worldwide</a> and according to Pew, <a href="">3/4th of US online teens and young adults use social networks.</a>. </p> <p <a href="">trends of failure. </a>That's natural in a new medium - fail, I say, as fast as you can. And also, some things only work once while the medium is new - and then first mover advantage is used up and you are now just an online marketer like everyone else. </p> <p>The other thing that has informed my silent thinking/non-blogging brain lately has been the issues raised by Linda Stone and others about <a href="">continuous partial attention</a> and <a href="">what it means to have software that shuts off inputs so we can get things done.</a>.</p> <p).</p> <p>No, the blog as a medium is not dead - and this blog is not dead. But to do it right I will have to change my life (or at least the part inside my head) to suit. I'll let you know more how that goes. </p> <p>Live it vivid!</p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy kid deserves a great education<p>Now that some of the launch madness of <a href=""></a> (aka <a href=""></a>) is over, I'd like to blog about what REDU campaign means to me.</p> <p>I'm a completely public-educated person. With the exception of nursery school (ie pre-kindergarten) I went to public schools in Massachusetts and California, and got university degrees in California and Washington from public universities. </p> <p>When I was somewhere between 4 years old and 8 years old (this era for me blurs a bit), the issue of <a href="">Japanese Americans being put in internment camps during World War II</a> was explained to me. Specifically, Japanese-Americans I know. 
My dad was about 5 years old when he got out, his parents were immigrants, and like many folks of my age, I had relatives who went there and some lost all they had. Talk about recession and losing money trying to sell your home - when people know that the government is forcing you out of your house and you are desperate, they know you will take any price. People could only take what they could carry. In those days there were no Xboxes or Nintendos, no big screen TVs, but try if you will to think about what you'd put in your laptop bag and suitcase and wheel down the street to your new home. </p> <p>So as a kid with a vivid imagination, I imagined all this. The feeling of being singled out, a freak, unwanted and forced to leave my school. Being scared to go somewhere strange and yucky with my family as a kid. Would this kind of thing ever happen to people I knew again? And in these discussions, my mom noted: "Your dad's family has always valued education because when you are packed off to internment camp, it's the one thing that can't be taken from you."</p> <p>Education is this amazing, invisible tool. It helps solve problems, create new possibilities, and fits into a laptop sleeve - it fits into any pocket and up any sleeve. When my relatives got out of internment camp, it was education (and freaking hard work) that got them back all the physical, economic and social ground they had lost. The video on the REDU site "<a href="">The Education Crisis in Two Minutes</a>" has a great moving graphic that shows what happens when folks get a good education and can empower themselves in society. That's why education matters so much. It's a lever that rights things, helps the downtrodden, brings out solutions to problems.</p> <p>So REDU is personal to me and I'm personally glad we were able to launch it. Because I got so much of my life and superpowers from public education and it deserves some karmic payback, but also because I know that secret about what education really is. 
World War II is over but we still have social inequities and societal problems in this country. Education remains as a tool that can't be taken from the people who have it and they can still use it to change their world.</p> <p>Live it vivid!</p><div style="clear:both;"></div><img src="" width="1" height="1">Betsy the summer that struggles to arrive, I struggle to blog<p>A lot of people I ran into this past 12 months talked about it - the great blogging struggle. Some of them, who have a more entrepreneurial or consultant bent, have managed to keep blogging steadily even while picking up responsibilities to their audience in Twitter and Facebook. I, as you can see by my last blog post timing, fell far short of that. </p> <p>Some of it I have to admit is the "twitter ate my blog" syndrome. I find myself with less time to think involved thoughts, and the temptation of the twitter candy is that you have your 140 characters of fun, and then the urge to publish has been banished - along with whatever complicated thought I might have come up with after that. Some of it is twitter honing my insights to what they really should be - only worth a fortune cookie's worth of expression.</p> <p>Some of it has been noisy-ness and the act of keeping up with what folks are saying. I just finished re-reading Cognitive Surplus (Clay Shirky) and am about to start Delivering Happiness (Tony Hsieh) - and these are authors I've met and respect their brains. If I read every single social media book that claimed to give the marketer godlike powers, I'd have to quit my job and hire backup brains to help. I also respect enough of the discourse that I try not to clutter it with my own stuff too much if it's a bit "me-too."</p> <p>But some of what I feel emerging is some observations about being a tech person and being a marketing/PR person, and I'm not quite baked on the notion. 
I caused trouble with developers on past projects by bringing in customer data and I cause trouble in marketing circles by pointing out that the traditional media landscape is morphing. Aside from being a universal pain in the behind, there has to be some advantage to this dual view, and what I hope by end of summer is to be able to write here more on my findings while walking the periphery of both disciplines.</p> <p>Cheers, and hope where you are, there's more sun than Seattle. </p><div style="clear:both;"></div>Betsy Lovelace Day 2010: In honor of my mom of science<P>When I signed up to do this Ada Lovelace thing, I knew I'd have a few options on my hands. </P> <UL> <LI>I could research obscure women who needed to be brought out to public view.</LI> <LI>I could name women in tech or science you probably have heard of already (but possibly not as much as their male counterparts).</LI> <LI>Or I could go straight for home, to the first woman of science I ever met (since I shot out of her womb) - my mom.</LI></UL> <P>I'm mentioning my mom in part because I want to give encouragement out there to the working moms of tech and science, who think "yikes this is tough doing both" and need to know that the struggle matters. Kids watching you struggle matters. </P> <P.</P> <P. </P> .</P> <P. 
</P> <P.</P> <P.</P> <P>Happy Ada Lovelace Day Mom!</P> <H2 style="MARGIN: 12pt 0in 3pt"><EM>Autobiography</EM></H2> <P> </P> <P>Your mother. Your mother is screaming</P> <P>at someone else, not you. Your mother</P> <P>is screaming at a stranger, some guy</P> <P>with a red tie and a limp mustache,</P> <P>some guy who hasn't eaten enough</P> <P>protein, looks like his hair is falling</P> <P>out, too greasy, no sleep, she's screaming</P> <P>out the window of the big blue Valiant,</P> <P>she's screaming words you remember</P> <P>for the rest of your life, you remember</P> <P>she's screaming at the stringbean</P> <P>parking lot attendant to Leonard Morse</P> <P>Hospital, she's pulling her face up closer</P> <P>to his face, he's edging back into the yellow,</P> <P>the asphalt reserved for hospital staff,</P> <P>for the medics with staff privileges</P> <P>and she's driving forward in a roar</P> <P>and she's screaming for the four</P> <P>women in the Yale class of '65</P> <P>she's screaming for all the men who</P> <P>said she shouldn't be here on this spot,</P> <P>grinding the steering wheel she's screaming</P> <P>so her daughters will remember:</P> <P><I>you thought because I was a woman</I></P> <P><I>I couldn't be a doctor.</I></P> <P mce_keep="true"> </P><div style="clear:both;"></div>Betsy from my PRMKTNG Camp group on social media and storytelling was privileged enough to act as one of the "camp counselors" yesterday at the PR+MKTG Camp hosted by Dan Greenfield <P><A href=""></A> . 
I shared my leadership spot for "Blue team" with Patricia Vaccarino of <A href="" mce_href="">Xanthus Communications</A>, a veteran of the PR industry. After a wide-ranging discussion with the entire group, we agreed to focus on storytelling and outreach.</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal>Patricia and I came at it from completely different angles, which made us a good pairing. But since this is my blog, I'm going to discuss solely what I got out of the experience, from the social media perspective. :) </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoN. </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal. </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal.</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal). </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal. 
</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormalMy?</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal>.</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormalInterestingly.</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormalOne?</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormalThis is why I love this area - the universe is expanding.</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormalLive it vivid!</P> <P style="MARGIN: 0in 0in 0pt" class=MsoNormal </P><div style="clear:both;"></div>Betsy Love: View from the Chicken Coop<P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Ok, both <A href="" mce_href="">Tara Hunt</A> and <A href="" mce_href="">Mona Nomura</A> greatness is narrowly defined as “career greatness” but even with that definition I believe you can have both things – it’s just tricky, a lot of work, and there’s definitely some luck involved at any given point in time.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>And this is just how I see it: remember, your mileage may vary.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>The advent of the microwave (yes, there was actually a time BEFORE microwaves) and the rice cooker (no Asian family should be without) made her return to work after the birth of my youngest sister manageable. . Mom – because she was re-entering the workforce – might 
be slightly later but she came home to a meal (however lame culinary-wise, it was healthy) ready to eat.<SPAN style="mso-spacerun: yes"> </SPAN.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>So what do you see there? Training the kids, having a reliable sitter, and technological kitchen marvel = ad hoc way to make both parents get what they want out of their careers.<SPAN style="mso-spacerun: yes"> </SPAN>As Mom put it to me once, “we just made it up as we went along.”</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri.<SPAN style="mso-spacerun: yes"> </SPAN>Mom and Dad would shoo the kids away from the kitchen and “talk medicine” (their version of shop) after dinner. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>My choices – because of the generation I’m in - were different from my parents. <SPAN style="mso-spacerun: yes"> </SPAN>For one thing, I’ve had more than one career (not just technology) and all my jobs were the ones where it’s easy to be workaholic.<SPAN style="mso-spacerun: yes"> </SPAN>I married late (at 40) and when I did, I married someone 9 years younger than me (who works in my field so we can talk shop, who cleans like a fiend – Freud would no doubt have something to say here).<SPAN style="mso-spacerun: yes"> </SPAN>I’m unlikely to have kids at this point; I have cats and chickens.<SPAN style="mso-spacerun: yes"> </SPAN>I have succeeded at taking mostly “linchpin” jobs (see Seth Godin’s </FONT><A href=""><FONT size=3 face=Calibri>book</FONT></A><FONT size=3 face=Calibri> of the same name) and well, linchpin jobs demand a lot of passion and a lot of time. I have an artistic side that doesn’t get enough time – it wants to be a linchpin all on its own and it doesn’t pay enough.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri). 
<SPAN style="mso-spacerun: yes"> </SPAN.<SPAN style="mso-spacerun: yes"> </SPAN>I could always adopt if I wanted kids later. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>And boy, you just can’t time when the right partner will show up.<SPAN style="mso-spacerun: yes"> </SPAN>I won’t lie to you – being single that long while people marry and marry around you, can suck really hard. <SPAN style="mso-spacerun: yes"> </SPAN>There were definitely people I saw get married early out of general fear of being alone (they got divorced later). The world at times seems built for couples and married friends can have “couples amnesia” or “couples denial” about what it was like to be single. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri. <SPAN style="mso-spacerun: yes"> </SPAN>I remember one tech job where I left on lunch break, sobbed my eyes out, and then came back composed and kicked ass.<SPAN style="mso-spacerun: yes"> </SPAN>I did this ritual for several months. You do what you have to do.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>But there were also “career moments” I wouldn’t trade for anything. One joke I had with myself after a breakup was to ask myself, “if I won $100 million in the Lotto right now, would that make me feel <SPAN style="mso-spacerun: yes"> </SPAN>better even though X dude is gone? “– and frankly, sometimes it would. Why? Because with that set of resources, my life would so clearly leave the plane it was on currently and rise up to enable a lot of other dreams I had. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Complex, interesting, GREAT people don’t have just one dream.<SPAN style="mso-spacerun: yes"> </SPAN>Love is important, but no life is one-dimensional, and love itself is more dimensional than just romantic love. Except maybe in that stupid Twilight series. 
And if you want a bloodsucker or a lupine spouse, you have other issues going on.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>(And no, I’m not going to trade my husband for a Lotto ticket. That’s how you know!)</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>To Mona and Tara (and anyone else struggling with the personal vs. the professional, male or female) all I can offer is my empathy and faith: the struggle to have what you want out of a career and out of your relationships <SPAN style="mso-spacerun: yes"> </SPAN>maybe looks like a mess right now but it doesn’t have to be forever. It's just this snapshot in time.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Totally agree, pressures are harder on women because they have kids and historically men haven’t been used to women being the powerhouses of the relationship. Right now, there’s an inequity in how that all shakes out for men and women’s time. <SPAN style="mso-spacerun: yes"> </SPAN><SPAN style="mso-spacerun: yes"> </SPAN><SPAN style="mso-spacerun: yes"> </SPAN>For men and women, it can be daunting to find that right partner and sometimes you just don’t have time for it cause too much is going on, or the right folks are not out there – yet.<SPAN style="mso-spacerun: yes"> </SPAN>When it’s not working out, it seems like an OR choice. Love OR Greatness. But once it works out – and I believe firmly it will – then it will be Love AND Greatness. Everyone deserves their shot at both. 
</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Keep the faith.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri></FONT> </P><div style="clear:both;"></div>Betsy was a different kind of conference<IFRAME style="PADDING-BOTTOM: 0px; BACKGROUND-COLOR: #fcfcfc; PADDING-LEFT: 0px; WIDTH: 320px; PADDING-RIGHT: 0px; HEIGHT: 179px; PADDING-TOP: 0px" title=Preview marginHeight=0></IFRAME> <P mce_keep="true"><FONT size=3 face=Calibri>TEDActive was a different kind of conference for me (though similar to other technical conferences in its lack of sleep, regular pace, and of course, being thrown in with large quantities of people I didn’t know before I came). That lack of sleep hampers the writing of this blog post, but I wanted to capture impressions before they vanish back into the rhythm of my everyday life. </FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>For those not familiar, TEDActive is an event affiliated with the main TED conference and which shares the slogan “Ideas Worth Spreading.” <A href="" mce_href="">Ted.com</A> Web site but in the timeframe immediately around the conference, only a few trickle out at a time.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3><FONT face=Calibri>For this reason, and the fact that TEDActive is also curated in terms of who they let in, the conference has an intimate feel that I suspect (and was told by some folks) doesn’t exist at the larger TED in Long Beach. 
TEDActive takes place <SPAN style="mso-spacerun: yes"> </SPAN>at the same hotel, and if you happen to stay there, you will see the same folks over and over at meals as well as in the conference sessions.<SPAN style="mso-spacerun: yes"> </SPAN><SPAN style="mso-spacerun: yes"> </SPAN></FONT></FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>The Palm Springs conference viewing area had beanbag chairs, two elaborate circular beds with ceiling-mounted viewing screens(fits up to 4 adults) , couches, and individual stuffed chairs. <SPAN style="mso-spacerun: yes"> </SPAN>The space shared with TEDActive attendees fosters more casual conversation, the TED talks themselves are provocative,<SPAN style="mso-spacerun: yes"> </SPAN>and the picnic lunches (have to form groups of 6 to get the basket) also encourage more intense conversation.</FONT></P><IFRAME style="PADDING-BOTTOM: 0px; BACKGROUND-COLOR: #fcfcfc; PADDING-LEFT: 0px; WIDTH: 320px; PADDING-RIGHT: 0px; HEIGHT: 179px; PADDING-TOP: 0px" title=Preview marginHeight=0 </SPAN.”<SPAN style="mso-spacerun: yes"> </SPAN>Lo and behold, she was working in the education field, and though she was likely thinking I was a nut for being mystical, she had some observations about my critical thinking project and the challenges of online high school learning that were insights I needed to have. 
</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calib.<SPAN style="mso-spacerun: yes"> </SPAN>The insistence on being mentally present and exploring what it means to be a human, trying to do good, created a rigor that was exhausting even with coffee but brilliantly invigorating like a good gym workout.<SPAN style="mso-spacerun: yes"> </SPAN>TED talks themselves referred to pattern-seeking, the difference between experiencing something and experiencing the memory of something, and hit home with themes like obesity and food.<SPAN style="mso-spacerun: yes"> </SPAN>And the tone was positive.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>While I was tweeting that TED’s message is that everything is possible and humans can do some good,<SPAN style="mso-spacerun: yes"> </SPAN>(and I did a bad job on twitter, my phone was having button problems) UW Professor Kathy Gill reminded me that<SPAN style="mso-spacerun: yes"> </SPAN>I was speaking from a position of privilege.<SPAN style="mso-spacerun: yes"> </SPAN>And let’s face it, I am. I have a graduate degree, a hitherto exciting life, an exciting job, and I live in a nation with a better record of treating women well than some.<SPAN style="mso-spacerun: yes"> </SPAN>I am not a slave and I can decide to divorce or marry on my own.<SPAN style="mso-spacerun: yes"> </SPAN>I can vote. 
</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>But sitting pretty in the aura of privilege that is my life – or for many TEDsters, their lives – really isn’t enough after you’ve been through several days of the conference.<SPAN style="mso-spacerun: yes"> </SPAN>By being shown the world’s problems and how we are all implicated and affected by them, it is just embarrassing to think about how little potential we are likely using.<SPAN style="mso-spacerun: yes"> </SPAN>And one’s notion of privilege completely shifts when faced with some stark realities. </FONT></P><IFRAME style="PADDING-BOTTOM: 0px; BACKGROUND-COLOR: #fcfcfc; PADDING-LEFT: 0px; WIDTH: 320px; PADDING-RIGHT: 0px; HEIGHT: 179px; PADDING-TOP: 0px" title=Preview marginHeight=0Bing innovation lounge</A>, wore my Techcrunch50 startup shirts for our beloved-by-Bing TC50 startups, and <A href="" mce_href="">cheered Microsoft employees Blaise Aguera y Arcas</A> and Gary Flake on as they demoed fresh innovations from search and<SPAN style="mso-spacerun: yes"> </SPAN>a new way to look at Internet data, Pivot. <SPAN style="mso-spacerun: yes"> </SPAN>But don’t think that was the only thing I was doing at TED – far from it. <SPAN style="mso-spacerun: yes"> </SPAN>I am still digesting the impact the talks and the conference had upon my brain.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>But back to stark realities. 
Glenna, a woman at TEDActive with a terminal brain cancer diagnosis, who likely won’t be around for Christmas 2011, told us how she reveled in her choices – to go to school, work, play, do only what she wants to do with her life for as long as she has left. Any of us who expect to live to see Christmas 2011 are privileged – rich or poor, able to attend TED or not, and she asks us: what will we be doing?</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Most of you reading this blog have choices, have the smarts, have the capability to make this world a little better from where you stand. Let’s make this next year really matter.</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal><FONT size=3 face=Calibri>Live it vivid!</FONT></P> <P style="MARGIN: 0in 0in 10pt" class=MsoNormal> </P><div style="clear:both;"></div>Betsy to TEDActive<P><A href="" mce_href="">TEDActive</A>.</P> <P.</P> <P>Don't think I can leave all the electronics at home but I'm certainly ok with keeping it under wraps most of the time. 
</P> <P>If you are reading this and are going/have gone to TEDActive before, I salute you; if you are en route, I'll meet you there; if you've met me, well, know that I write this with a greater sleep backlog than probably when you met me.</P> <P>Live it vivid!</P> <P mce_keep="true"> </P> <P mce_keep="true"> </P><div style="clear:both;"></div>Betsy customer service = Bliss (Soaps)<P>So, folks who have followed this blog know that I know how to make bath bombs (or fizzes), have given a <A class="" href="" mce_href="">presentation at Foo Camp East</A> on how to make them, give them away to fellow social media conference panelists, and mentioned <A class="" href="" mce_href="">Bliss Soaps</A> (<A href=""></A>).</P> <P.</P> <P>A bit later, an email was sent out from Bliss saying - wow, we are so grateful, but never expected this response - we are officially backlogged from all of your Internet orders, hope to get them done in the next few weeks. </P> <P>As folks may have noticed from tweeted blog <A class="" href="" mce_href="">photos,</A> I was then out of the country for the last 3 weeks and so didn't notice the stuff I had ordered hadn't made it yet (and in that sense also, no harm, no foul). </P> <P.</P> <P>I'm lucky; I'm employed - I could eat the $100 if they had actually closed shop for good. But I had a pang: I liked these guys. I hoped to hell they weren't a recession casualty. </P> <P>Today, because of Christmas stuff, I didn't make it over to their store. Instead, tonight, the co-founder of the store, Chuck, HAND-DELIVERED my bath bombs to me at my house. </P> <P.</P> <P." </P> <P>I told him Bliss is good stuff, and the community knows it, and I'm glad things are looking up. He shook my hand, put on his shoes, and left.</P> <P mce_keep="true">If you like bath bombs and go to Bliss, just keep knocking so they hear you in the back. 
</P> <P mce_keep="true".</P> <P mce_keep="true">That's something for all of us to live up to. Live it vivid!</P> <P mce_keep="true"> </P><div style="clear:both;"></div><img src="" width="1" height="1">Betsy a 5 minute talk at the Kodak Theater (140 Characters Conference)<P mce_keep="true">Doing the Ignite presentation and chatting a little bit with Scott Berkun (who has a <A href="" mce_href="">book coming out on public speaking</A>) I figured I'd do a followup on another 5 minute talk and some things I learned from doing a non-Ignite, 5 minute talk. For <A href="" mce_href="">140 Characters Conference in LA</A>,. </P> <P>You can see how the talk came out <A href="" mce_href="">here</A>. For those who wanted to know how it was meant to close down, summary of the slides you missed..</P> <UL> <LI>Stefan's Expression upon seeing me in his office the next day at 6 am</LI> <LI>Twitter Stats from the overall launch period </LI> <LI>Hugh McLeod's Gaping Void cartoon about purists being the ones with no skin in the game (be relentless for your customers, it doesn't have to be perfect at all times, and after being up all night, how can it be?) </LI></UL> <P (<A href=""></A>) and it was nice to be able to give Hugh his due as well as credit to the twitter-friendly nature of our PR team.</P> <P mce_keep="true">All photos here taken by <A href=""></A>, otherwise known as Stef Michaels. :)</P> <P><STRONG><IMG style="WIDTH: 308px; HEIGHT: 272px" title="darker podium photo" border=1 hspace=10Timing and pace</STRONG></P> <P>First, don't let Ignite presentations, hard as they are, make you overconfident. I had less than 20 slides for my 5 minute talk but what I really should have had, was 5. Maybe 10. The fact I went over 10, set me up for danger.</P> <P> I also did not auto-advance the slides, as I did with the Ignite talk, which would have forced me to complete on time (I was a couple slides short)</P> <P> Instead, I drilled the talk at 4 minutes and 30 seconds. 
This was good for keeping me brief and moving off the slides without a timer, and I believe was the reason I finished with my story intact (though my kicker slides were not exposed). </P> <P>Expect that what happened to me will happen to you. They will start the 5-minute clock but your slide deck won't be up. Keep talking even as you fuss with it. </P> <P mce_keep="true"> </P> <P><STRONG>Content</STRONG></P> <UL> <LI> For a 5-minute talk, do a brief overview and front-load. That's what saved me (once the music started) - my most complex and entertaining story was the first one I told (and possibly could have been the only one I told, but I wanted to balance the presentation with more data). </LI> <LI>Ignite has it right - memorize your words and use vivid pictures. In my case I had to animate some twitter tweets because I was talking about them - but if you have a more visually oriented talk, go the Ignite route and do minimal words, big photo. </LI> <LI>For 140 Characters, I saw some presenters lose their audience (10 minutes was the longest, so not even that long a period in which to lose people) by being dry and not conforming to the "story" format that Jeff Pulver really espouses. The best stories were human and vivid (Wm. Marc Salsberry's photos didn't work and he had us all weeping from the narrative of his foray into tech photography and the support for his brother dying of cancer).</LI> <LI>AV fallback = interpretive dance. I joked with my manager Stefan and coworker Aya that if I couldn't use the slide deck of the tweets I'd do an interpretive dance of what happened on launch night. I actually DID think of some poses I would have to do if I got no visuals. That cracked me up and helped me mentally before going on stage. </LI></UL> <P>Venue</P> <UL> <LI>The Kodak Theater is a 3,000+ seat venue. This is where people receive their Oscars and by custom Jack Nicholson has his own marked chair. 
If you have a chance to speak here, do, but be prepared to think "Holy crap, this place is REALLY BIG!"</LI> <LI>The nice thing about Kodak is that it is a beautiful piece of architecture. There are lush balconies, an awesome sense of history, and while enormous, it doesn't quite scare the way a stadium or arena venue could, because it has STYLE.</LI> <LI>The AV guy, JT, was awesome but I wasn't used to the cameraman actually flipping from me to the screen of my laptop. Other panelists I talked to, who had to sit in front of monitors showing their faces, also had eerie feelings. There may be no way to rehearse the sensation without being in the venue, but practicing in front of a mirror could help people get over their own faces.</LI></UL> <P><IMG style="WIDTH: 283px; HEIGHT: 280px" src="" width=352 height=381></P> <P>This photo is more of what I looked like to myself when the camera was on me and not my laptop. Lights are super bright when you are onstage - people warned me and it is true that you really can't see any faces in the audience.</P> <P>Garb</P> <UL> <LI>I wore a skirt thinking I'd be miked and walking around, but I actually ended up behind a podium (to work the laptop). This meant that a key element of my presentation, a 1920s-style cloche, stole some of the show (which was fine by me, it's an homage to the decor of the theater). I know most of you won't want to wear 20s hats, but it helped disguise my bad hair day and turned me into a "character." </LI> <LI>If I had walked around, I would have tried to imitate Berkun (see his ignite video mentioned in prior post) and his wider gestures. From my high school theater daze, I remembered that people see you as mostly tiny on stage and you can get away with more exaggerated arms and motions.</LI> <LI>I saw killer boots at this conference (after all, it was LA). Next time I'm going for OSSM footgear and walking around the stage. 
</LI></UL> <P mce_keep="true">Hope this helps others in the same boat - live it vivid!</P> <P mce_keep="true">Betsy</P> <P mce_keep="true"> </P><div style="clear:both;"></div><img src="" width="1" height="1">Betsy an Ignite Presentation - What I learned at Gnomedex 2009<P><IMG src=""></P> <P>Note: All photos this page Randy Stewart, <A class="" href="" mce_href="">blog.stewtopia.com</A> </P> <P>I am grateful to <A class="" href="" mce_href="">Brady Forrest</A> of O'Reilly Media for nudging me into giving my first Ignite talk at <A class="" href="" mce_href="">Gnomedex</A>. </P> <P>Because other resources online helped me get ready, and because it's not really like any other kind of presentation, I offer some observations for people wanting to do an Ignite talk themselves. I hope you find them useful and look forward to seeing yours at the next Ignite!</P><IMG src=""> <P>Preparation and Delivery</P> <UL> <LI><STRONG>It helped that I'd been to several Ignite events,</STRONG> including the first one ever (held in <A class="" href="" mce_href="">Seattle</A>). I had seen various topics presented really well and I kenned the general vibe. Though the Gnomedex venue last Friday had no alcohol, many of the Ignite venues do, which can help.</LI> <LI><STRONG>I picked a topic around which I had strong feelings</STRONG> (social media 'guruhood') and knew a lot about, and tried to make it funny. The important part of the formula there is the strong feelings - it's a hard enough format without adding apathy in to weigh you down. Passion buoys. Use it!</LI> <LI><STRONG>Read other people's pointers.</STRONG> Required <A class="" href="" mce_href="">reading</A> and <A class="" href="" mce_href="">viewing </A>were helpful items prepared by Scott Berkun (who is writing a book about giving presentations). His stuff especially helps you craft and prep the deck. 
</LI> <LI><STRONG>Scott Berkun will tell you about the double slide trick.</STRONG> I will tell you about the <STRONG>blurred slide trick.</STRONG>. </LI> <LI><STRONG>Get</STRONG> <STRONG>better photos</STRONG> <STRONG>than I had.</STRONG> I was paranoid about copyright and except for photos I took myself, and one Scott Beale photo with permission, the rest were (aiieeee ) PowerPoint stock art. If I had it to do over I'd have gotten better images. </LI> <LI><STRONG>Pare, pare, pare.</STRONG> My original idea for the ignite deck had 5 ideas per slide. Too many. I tried for 3 points per slide. I ran out of time on some of them, even in the final performance. <A class="" href="" mce_href="How to Give a Successful Ignite Presentation">Jason Grigsby's advice</A> was my watchword here. Improv comedy is improv editing.</LI> <LI><STRONG>Prepare, prepare, prepare.</STRONG> If you haven't gone through your deck 12x (i.e., one hour of solid practice) you are slacking. Sit with a <A class="" href="" mce_href="">long-suffering friend</A> for an hour and just do it. I could recite slide order driving to work. Those were days I did not carpool. :)</LI> <LI><STRONG>There are two modes of preparation for Ignite.</STRONG> The <EM>drill mode</EM>, where you have the deck done and now you are practicing delivery, ad-libbing at times but the slides are all set to move at 15 seconds. The <EM>prep mode</EM>, where the deck is still changing based on how you are talking about the slides. Know which mode you are in and no cheating!</LI> <LI><STRONG>It's important to drill without starting over, as Jason Grigsby notes.</STRONG> Even if you mess up. In drill mode, you soldier on 'til you have one completed rev.</LI> <LI><STRONG>Drilling daily</STRONG>. </LI> <LI><STRONG>In my early practices I was saying too much</STRONG> and running out of breath. Good thing you don't die after holding your breath for 5 minutes. Have someone sit with you and coach you on when to breathe. 
Usually it's after you make a point, but it's hard to remember.</LI> <LI>I tried to <STRONG>figure out the right pace</STRONG>.</LI> <LI><STRONG>Use shame if you have to, and channel muses.</STRONG></LI></UL> <P><EM>"We can take them!"</EM></P> <P>As you might expect, the actual experience of diving off a bridge with a harness and straps around your feet changes your perspective, and likewise, so does an ignite talk.</P> <UL> <LI> <DIV><STRONG>I bought new clothes for this (hey, I'm a girl).</STRONG> The pants had to flow and I had to be able to raise my arms over my head without the shirt whacking out.</DIV></LI> <LI> <DIV><STRONG>An hour before, find a nook to get your head together.</STRONG></DIV></LI> <LI> <DIV><STRONG>Mental breaks help.</STRONG></DIV></LI> <LI> <DIV><STRONG>Remember the challenge and cut yourself some slack.</STRONG> Watching <A href="">Elan Lee</A>.</DIV></LI> <LI> <DIV><STRONG>Take what comfort you can.</STRONG> Brady introduced me and I used that time without the mike to <STRONG><EM>ham it up</EM></STRONG>.</DIV></LI> <LI> <DIV><STRONG>Being on stage, you can be more expansive with your movements.</STRONG></DIV></LI> <LI><STRONG>You are supposed to pick out faces in the audience and talk to them</STRONG> - I blew that part. I just looked around randomly. I have no real memory of anyone I looked at. :)</LI> <LI><STRONG>I made mistakes and kept going.</STRONG> The audience <STRONG>does not know what you leave out.</STRONG> Use that fact!</LI> <LI> <DIV><STRONG>Figure out what your best last sentence is ahead of time.</STRONG> It's hard to tell when the slide will blank out, so I ended slightly early with my last sentence and handed the mike to Brady. Other presenters seemed to end right on time.
Your mileage may vary.</DIV></LI></UL> <P>Live it vivid!</P><div style="clear:both;"></div>Betsy - the evolution of a conference<P><A href="">Jory DesJardin's post</A> resonated with me on a lot of levels (in my blogging I don't care about ads, product reviews, or being known for much except enabling others to do community stuff).</P> <P>(And, yes, the Oscar Mayer Wienermobile team was in the audience too, and they told me they were as moved as I was.)</P> <P>Blogher10 is an awesome prospect - the 10th year, New York, and incredible momentum around the voices of women. Whether I'm a Bing booth babe again or not, I'm going. See you there.</P> <P>Live it vivid!</P>Betsy Gone Wild - Bing & other brands at Blogher 2009 (Part 2)<P>This meant that during the truly wacky moments of branding at Blogher, I was showing the totem or waving the colors. And there were wacky moments.</P> <P>My first moment of brand wildness was realizing at breakfast that I was sitting next to the guy who drives the Oscar Mayer Wienermobile. You know what's coming here don't you?</P> <P><EM>Doggone Dave, at the helm of the Oscar Mayer Wiener</EM></P> <P>That breakfast, I was also sitting with Betsy Weber of Techsmith (I'm the Other Betsy) and we vowed upon hearing they were giving rides at lunch that WE HAD TO GO. I mean who can resist the Big Bun?</P> <P>Here it is in full glory...</P> <P>The Oscar Mayer crew took video of the "Betsy and Betsy" show as part of their ongoing tips for bloggers; stay tuned at the <A href=""></A> for more info on that.</P> <P>
:)</P> <P>Next, I was getting ready for the cocktail party when I had more close encounters of the Brand Kind; an unusual visitor to the Microspa led by some Office folks.</P> <P>And on Saturday (once again with Betsy Weber - do you see a trend here? :) - posing with the <A href="">Michelin man</A>. Note how the curve of the Michelin brand is echoed by the phrase on my shirt. And yes, dude was self-inflating the whole time we posed for that photo, and I don't mean ego. :) I mean that suit is its own air conditioner....</P> <P>Bring it and bing it!</P><div style="clear:both;"></div>Betsy at Blogher 2009 (aka #blogher09): Part One<P>I'm writing this in the Chicago airport in the familiar happy post-conference exhaustion. Unlike my previous Blogher attendance, I was doing more booth babe/evangelism of a product than being a social media expert, and it was cool to see how folks have reacted to Bing. And I got into Chicago at the right time - luckily for me - to catch the Chicago appearance of Tara Hunt and the Whuffaoke crew at a meetup at The Coop, a Chicago co-working space.</P> <P><EM>The Coop Party</EM> - met some really cool local folks interested in creativity and design</P> <P>The Whuffaoke crew parked outside The Coop.</P> <P><EM>What drivers see - the view behind Winnie the Winnebago above.</EM></P> <P><EM>Tara Hunt, right, getting ready to rawk the mike at Whuffaoke</EM></P> <P>Blogher this year was a sold-out conference - and it sold out months ago. To help combat disappointment, they added "Lobbycon" and some additional registration spaces, but I think we taxed the Sheraton to its limits.
I tried to find a photo setup that was emblematic of Blogher and honestly, other people took better ones. See <A href="">here</A> for a stunning array of bloghers' photos.</P> <P>As mentioned, I wasn't really free to roam too much, between presenting at the MicroSpa with some amazing women bloggers and working the booth. As part of our presentations about Search Overload and Bing, I met Devra and Aviva, cofounders of <A href=""></A> and authors of <A href="">Mommy Guilt</A>. I also got to meet Beth Blecherman of <A href="">Techmamas</A>. All of these women are presenters of great poise and grace and I learned a lot from watching them in our presentations.</P> <P>Here's me at the booth demoing video search:</P> <P>I did see a panel moderated by <A href="">Ponzi Pirillo</A>, and featuring <A href="">Kelly Russell-Donner,</A> Kate from <A href="">Sweet|Salty</A> and <A href="">Daniela Capistrano</A> about the <A href="">Transformational Power of Blogging</A>. It reminded me that blogging was about bravery and storytelling. Corporations could learn from the storytelling skills found in these "issue blogs." If your brand packed as powerful a punch as discussions of babies dying and sexual assault, you wouldn't need marketing programs.</P> <P>I snuck away from the booth, leaving <A href="">Nathan Buggia</A> behind, to see one more session. Here's Nate:</P> <P>I'm always interested in learning more about search issues that cross all engines, and <A href="">Vanessa Fox's Advanced SEO session</A> was fabulous - she is really great at adjusting the level of discussion for technically wonky and technically newbie audience members, and really approachable.
She was mobbed at the end. :)</P> <P>Next year, Blogher turns 10 years old, and it will be a blow-out celebration in New York (and no doubt even bigger than this conference, which burst the Sheraton at the seams). See you there!</P><div style="clear:both;"></div>Betsy is good, but not a replacement for the "old objectivity"<P>I was a newspaper reporter before I went into Web technology as a profession, and well before becoming any kind of blogger, so when I comment on journalism I have both perspectives ensconced in my brain. I was also a reporter before the journalism field took a lot of flak for lack of objectivity, so my expectations may surprise some of you jaded by disappointment in journalism.</P> <P>The title of this blog post riffs on a well-written piece by David Weinberger, <A href="">Transparency is the New Objectivity</A>. In his piece, he notes that the hyperlinked nature of online articles and blogs, which enables presentation of reference documents (ie, the materials used by a reporter to create a news story), creates more authoritative sourcing and thus more respect for that author's work. Also, he believes respect is fostered by an author's transparency with regard to personal and political views.</P> <P>In other words, if I know your political bent, and I see photos/scans of all the documents you used for Watergate, I'd believe you more 'cause I know where you are coming from. I could reconstruct for myself how you came to the conclusions of your news story, and agree with you based on that knowledge.</P> <P>While I am actually a fan of transparency, and it's nice especially for journalism students to be able to recreate the thought processes behind great investigative journalism, I'm not sure presentation of author views and reference material alone is enough to create the kind of journalism that was the goal of the "old objectivity."
We can't afford to mistake the gloss of transparency for the heavy lifting objectivity in journalism was supposed to do.</P> <P><EM>Transparency of reference materials</EM></P> <P>Transparency will not always result in the most accurate reporting coming out. As I tweeted to Dare Obasanjo, what happens if your sources are people in Iran who are afraid for their lives to go on the record? Do you not publish a blog post from Iran because you can't be 100% transparent about the sources?</P> <P>Another problem is that the kind of investigative journalism that creates social change (see the Seattle Times Health <A href="">special project on MRSA</A>) may require creating a new record of assembled data (a db for example). Do you as a reader now distrust the data because it was assembled by The Seattle Times and not an easily referenced document? How will you vet their db analysts? Will you look at the code to ensure they coded it right and their select statements are properly formed?</P> <P>Even if the reference materials are simple flat files and/or readable in a browser, will you as a reader really spend 5 hours reconstructing each detail of the Seattle Times data processes, or do you just want to look up the hospital near you to see if the doctors wash their hands? Why pay for the paper for access to this information if you have to re-create the reporting entirely yourself?</P> <P>Placing a higher value on linkable reference materials also raises the question of coverage skewing toward what is easiest to link to. Will reporting just become groups of links to documents that are already public? Links to videos and photos that were already reported (but maybe presented the wrong experts or sources in the captions)?</P> <P>And while we are at this presentation of a collection of links, well, doesn't this notion sound like a search engine results page to anyone?
I have yet to see anyone claim that a search engine can replace heavy-hitting, muckraker journalism. Why? Well, often a reporter has studied an area for far longer than the casual Web surfer, so the content value added in synthesizing the information will be higher (it's why you ask only certain people to help you fix your computer, and others you don't bother - the experienced people know what to look for and how to troubleshoot). </P> <P>But also, search engine results are presented in response to queries, and are dependent on the user knowing what terms to put into the box (this is why the bing user experience is so interesting, it tries to visually nudge you to get more of an idea what you really want back). If the searcher doesn't know what cognitive linkages to make between documents beforehand, it's not likely the materials would come back together. </P> <P>To get 3 document links back that resemble the 3 reference materials for a news story, you'd have to know what was in the docs, and their relationships to one another with regard to the investigative conclusion. In other words, you'd have to know the horrifying statistics of MRSA before you put down the search terms. This is why the term "human aggregator" doesn't begin to talk about what a good reporter does with information - they really have to analyze it enough to teach other people how to make these connections.</P> <P>Reliance on only linkable/readily transparent items creates other dynamics. Doesn't even have to be laziness but shortage of time...What if you are blogging nights and weekends and have a day job, without the time to look at older documents on microfiche at the courthouse or pull public paper records to affirm for yourself things got done? The temptation will be high to link to what's easy and write about what's easy and the bar will lower to ease of linkage. Really, only one rich news org has the resources to do any real reporting - then the rest of us just link to that, right? 
</P> <P>People hotly debated whether Techcrunch should have released the Twitter business documents they blogged about, but few really talked about whether actual journalism was being committed around the documents. (Michael Arrington would argue yes of course, but to Silicon Valley outsiders the docs themselves became news. In a j-school context, you could argue that the docs were really just one "fact check" against an emerging Twitter story that Techcrunch has not yet written and now may not ever write.)</P> <P><EM>Transparency of author viewpoint</EM></P> <P>Fear of a reporter skewing news coverage based on their background, political views or opinion is at the heart of the objectivity debate. However, this fear of non-objectivity applies to other professions as well and people tend to forget that. </P> <P>Every day, you hope that your doctor doesn't mind treating people of different political views than her, and will prescribe you the right medicines for your cold regardless of your choice of vacation home. You hope that your bus driver drives the same (safe) way regardless of who is on his bus. You hope the guy giving you the fries and hamburger didn't spit into your food because he hated the rock band on your t-shirt. Heck, people work with people all the time that they can't stand, and for professionalism's sake, they put personal feelings aside. </P> <P>It's so interesting that people more readily trust there isn't spit in their Coke, but are sure that a reporter is hiding something from them.</P> <P><EM>What were my own standards of objectivity?</EM></P> <P>Here's what I understood to be my standard of objectivity, when I was in the journalism profession. I was told by one news editor early in my career that essentially if everyone disagreed with the "bias" of my story - all the special interest groups opposing each other hated it equally- there was a likelihood I had hit the sweet spot of objectivity. 
I was supposed to get along with my sources, but they were not supposed to be my best friends, and they should not be able to guess which way I would vote in an election. They should not see me as biased against their religion or point of view. To do my job well, both sides had to trust I'd represent their point of view fairly (or at least piss off the other side equally).</P> <P>So to do a good job, I had to be free to aggravate everyone, because the truth is often complex, hard to get at, and debatable, and if I was always worrying about pleasing people in power or high in celebrity quotient, I wouldn't be representing the truth correctly. The newspaper would stand behind me, protect my notebook (paper not digital), and bail me out of jail for the stories I wrote when powerful public figures or companies went after me. In return, I had to be meticulous in my accuracy, get as much on the record as possible, and be as scrupulous about representing as many points of view as would fit in a 6-20 column inch space. It helped that both states I worked in had decent public records laws and my note-taking verbal memory was really good. And I had good editors, who took out things from my stories they felt weren't decently backed up or were redundant and therefore presenting too much of one source's point of view.</P> <P>Transparency meant something other than documentation or stating my political views of the moment - it went straight to the heart of where I got my money from. If I had ever worked for an organization, owned stock in that organization, or had relatives in an organization, I either did not cover the story or (as sometimes you see in MSN Money) I would have had to disclose my interest: "this columnist owns 5 shares of stock in Microsoft." The appearance of being unable to cover a topic area fairly was good enough to keep me from it.
Mostly, journalism kept me out of public activism because as a cub reporter I really didn't want to close down story areas I could write in. Other journalists who were more established (ie, could focus on one beat) could afford to have private causes not related to their beat.</P> <P>BTW, I found I always had more notes and material than I could fit into a news story. It wasn't sinister, it's that people hate reading anything long.</P> <P>Even as a freelance book reviewer, which is pure opinion, The Seattle Times constantly ensured I was reading books by people I didn't know, or have any ties to. Being "out of the scene" was helpful because it meant I didn't have a social or political agenda to like or dislike the books. </P> <P>This is the kind of self-policing relative to a standard of objectivity. A standard of transparency for online journalism might help someone get caught violating the standards of objectivity and fairness but I think the root issues would still remain: did you try to get an objective truth? is the article or post accessible to people of all persuasions and initial points of view? Do you have documents, facts, quotes, witnesses to back up the conclusions of your piece? Are you being a lazy journalist/blogger, or are you digging deeper even as you seem sure the conclusion can be reached for this piece? Have you annoyed everyone equally, even the people you are supposed to be in the pockets of?</P> <P mce_keep="true">So, while I think transparency is good, I don't think it replaces the old goals of good journalism and transcending the reporter's personal point of view to get the complete story out. Empathy is key to good journalism and blog reporting- the ability to put yourself in the flood victim or the astronaut's shoes as they tell their stories. 
A piece of reporting that goes beyond him/herself to reach other people is usually the most powerful kind of reporting, and I'm not sure a page full of links can take the place of that human processing information for a public purpose well beyond the personal.</P><div style="clear:both;"></div>Betsy and a decent shot of courage<P>I <A href=""></A> and like any new community site it is in a constant state of improvement.</P> <P>There's a lot of blogging and tweeting going on about <A href="">bing</A>.</P> <P>Those employees' feedback made bing what it is today. That's an important difference from other launches.</P> <P>Then of course there's our <A href="">twitter</A> and <A href="">facebook</A> - there is more <STRONG>you</STRONG> in bing than there was in Live Search. Because we have these new tools and the folks ready to listen.</P> <P>People who have gotten email from my Microsoft email address know I always carry the Anais Nin phrase "<EM>Life shrinks or expands in proportion to one's courage</EM>".</P> <P><EM>in it.</EM> Every time I faced a skeptical customer at a demo (you know who you are, MS Hater Guy) I remembered that sea of faces, <EM>just in it.</EM> Showing up and saying hello to the doubters and the haters.</P> <P>We hope you try bing of course. And send us feedback via the feedback link, the twitter acct, the blog, the <A href=""></A> site. But remember us when you are faced with something tough, where you commit without knowing the outcome, and do it because you choose to & your heart says so.
That's what we made it for, that's what we made it with.</P> <P>Live it vivid!</P><div style="clear:both;"></div>Betsy tip of the day: Give the props to people who have helped you<P>Recently, in the process of promoting <A href="">Will Code for Green</A>, our killer Live Search API contest that lets you use whatever technology stack you desire to write your Web app, I dug deep and went back to one of my old teachers, Tim, who is a Perl guru around these parts of the Northwest. His educational company essentially gets people ready for system administration gigs using some variety of Unix.</P> <P>Even after more than a decade having passed since I took his class, Tim responded pretty quickly. He kindly said he'd pass the contest along and that he remembered me and was glad I found a tech job I liked.</P> <P>Which, considering I'm part of the "Evil Empire" to some Open Source folks, I felt was very gracious, and is also why I'm blogging about Tim here.</P> <P>What Tim does in his classroom is a great model for certain kinds of software evolution (Agile anyone?), or in my current line of work, honing your presence in social media. Keep listening, keep honing, and keep it central to what your passion is.</P> <P>But over a decade later, who Tim is as a teacher, his "personal brand" if you will (I hate that term), is what stands out over time. So it brings out the question - how will *you* be remembered?</P> <P>Live it vivid!</P><div style="clear:both;"></div>Betsy
http://blogs.msdn.com/b/betsya/atom.aspx
The following C function:

int sprintf ( char * str, const char * format, ... );

writes formatted output into str with no bounds checking, so the caller has to guarantee the buffer is big enough for the result plus the terminating null character:

#include <stdio.h>

int main ()
{
  char buffer [14];   /* "5 plus 3 is 8" is 13 characters plus the '\0' */
  int n, a=5, b=3;
  n=sprintf (buffer, "%d plus %d is %d", a, b, a+b);
  printf ("[%s] is a %d char long string\n",buffer,n);
  return 0;
}

(Note that a buffer declared as char buffer [13] would be one byte too small here: the 13-character result leaves no room for the terminating null, so sprintf would overflow it.)

When you don't know the output length in advance, what you want is one of these two functions:

* snprintf (). It takes the length of the output buffer as its second argument and never writes past it. If the buffer is too small for the result, it returns the number of characters that would have been needed, allowing you to reallocate a larger buffer and try again.
* asprintf () (a GNU/BSD extension, not part of standard C). It takes a char ** argument and allocates enough memory to hold the output, as long as that much contiguous virtual memory is available. You have to call free to remove it from memory if you're done with it before the program exits and may need the memory for something else.
https://codedump.io/share/s1zswFSKfwiS/1/writing-formatted-data-of-unknown-length-to-a-string-c-programming
Build a Java REST API With Quarkus Build a Java REST API With Quarkus Learn more about building a REST API service with Quarkus. Join the DZone community and get the full member experience.Join For Free Quarkus is designed as a container-first framework optimized for high speed, low memory usage, and great scalability. The container-first strategy bypasses the configuration issues and countless updates that may come along with monolithic server systems by bundling together the runtime environment and application code. Initially, Quarkus was created to support native code for Graal/SubstrateVM, but it also works with JVM and OpenJDK HotSpot. Quarkus supports many industry-standard libraries such as Hibernate, Kubernetes, RESTEasy, and Eclipse MicroProfile. You may also like: Quick Guide to Microservices With Quarkus on OpenShift Quarkus was created to be utilized in microservice and serverless environments as well as reactive programming models. It uses JAX-RS for the REST endpoints, JPA to preserve data models, and CDI for dependency injections. Through this post, you’ll learn how to use Java and Quarkus to create a REST API with JAX-RS, and secure it with OAuth 2.0 and Okta. This tutorial is a modified and updated version of the “Quarkus — Using JWT RBAC” tutorial on the Quarkus website. The main difference is that this tutorial will use Okta as the OAuth provider and the OIDC Debugger to generate tokens for ad hoc testing (instead of rolling the whole thing yourself). Let’s get started! Install Quarkus Tutorial Prerequisites You’ll need to install a few things before you get started. Java 11: This project uses Java 11. OpenJDK 11 will work just as well. Instructions are found on the OpenJDK website. OpenJDK can also be installed using Homebrew. Alternatively, SDKMAN is another great option for installing and managing Java versions. Just a hint, if you run mvn -v, you’ll see your Maven version AND the Java version Maven is running on. 
On my computer (a Mac), I was able to use the following command to set the shell in which Maven was running to Java 11: export JAVA_HOME=$(/usr/libexec/java_home -v 11). HTTPie: This is a simple command-line utility for making HTTP requests. You'll use this to test the REST application. Check out the installation instructions on their website. Okta Developer Account: You'll be using Okta as an OAuth/OIDC provider to add JWT authentication and authorization to the application. Go to our developer site and sign up for a free developer account.

Create a Java Quarkus Project

Open a terminal and cd to an appropriate parent directory for your project. The command below uses the quarkus-maven-plugin to create a starter application and places it in the oauthdemo subdirectory.

mvn io.quarkus:quarkus-maven-plugin:0.23.1:create \
  -DprojectGroupId=com.okta.quarkus \
  -DprojectArtifactId=oauthdemo \
  -DclassName="com.okta.quarkus.jwt.TokenSecuredResource" \
  -Dpath="/secured" \
  -Dextensions="resteasy-jsonb, jwt"

If you run the project at this point, you'll get an error because you need to define some application properties first.

Configure Quarkus Application Properties

Open the src/main/resources/application.properties file and copy and paste the following into it.

mp.jwt.verify.publickey.location=https://{yourOktaDomain}/oauth2/default/v1/keys
mp.jwt.verify.issuer=https://{yourOktaDomain}/oauth2/default
quarkus.smallrye-jwt.auth-mechanism=MP-JWT
quarkus.smallrye-jwt.enabled=true

You'll need to fill in your Okta developer URI in two places. To find your developer URI, open your Okta developer dashboard and navigate to API > Authorization Servers. Look at the row for the default auth server where you'll see the Issuer URI. That domain is your Okta URI that you'll need to populate in place of {yourOktaDomain}.

Test the Default Quarkus Endpoint

Navigate into the project directory: cd oauthdemo.
Run the project: ./mvnw compile quarkus:dev

You should see output like this:

[INFO] Scanning for projects...
[INFO]
[INFO] ---------------------< com.okta.quarkus:oauthdemo >---------------------
[INFO] Building oauthdemo 1.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] ...
Listening for transport dt_socket at address: 5005
2019-09-30 10:39:02,186 INFO [io.qua.dep.QuarkusAugmentor] (main) Beginning quarkus augmentation
2019-09-30 10:39:02,889 INFO [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 703ms
2019-09-30 10:39:03,266 INFO [io.quarkus] (main) Quarkus 0.23.1 started in 1.195s. Listening on:
2019-09-30 10:39:03,268 INFO [io.quarkus] (main) Profile dev activated. Live Coding activated.
2019-09-30 10:39:03,268 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jsonb, security, smallrye-jwt]

If you get an error, first check which version of Java Maven is using: ./mvnw -v

Now, in another terminal window, use HTTPie to test the generated endpoint:

$ http :8080/secured
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 5
Content-Type: text/plain;charset=UTF-8

hello

Pretty sweet! But we can do a little better. Replace the src/main/java/com/okta/quarkus/jwt/TokenSecuredResource.java file with the following:

package com.okta.quarkus.jwt;

import java.security.Principal;

import javax.annotation.security.PermitAll;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.SecurityContext;

import org.eclipse.microprofile.jwt.JsonWebToken;

/**
 * Version 1 of the TokenSecuredResource
 */
@Path("/secured")
@RequestScoped
public class TokenSecuredResource {

    @Inject
    JsonWebToken jwt;

    @GET()
    @Path("/permit-all")
    @PermitAll
    @Produces(MediaType.TEXT_PLAIN)
    public String hello(@Context SecurityContext ctx) {
        Principal caller = ctx.getUserPrincipal();
        String name = caller == null ?
"anonymous" : caller.getName(); boolean hasJWT = jwt != null; String helloReply = String.format("hello + %s, isSecure: %s, authScheme: %s, hasJWT: %s", name, ctx.isSecure(), ctx.getAuthenticationScheme(), hasJWT); return helloReply; } } Now, save the file and test the new, updated endpoint: $ http :8080/secured/permit-all HTTP/1.1 200 OK ... hello + anonymous, isSecure: false, authScheme: null, hasJWT: true Did you notice how you didn’t need to re-start or re-compile your application for the new endpoint to work? That’s one of the slickest features of Quarkus! Next, we’ll add OAuth 2.0 support to the application. Create an OIDC Application in Okta to Test Your Quarkus Service Head over to your Okta developer dashboard — if this is your first time logging in, you may need to click the Admin button. From the top menu, click on the Application button and then click Add Application. Select application type Web and click Next. Give the app a name. I named mine “Quarkus Demo”. Under Login redirect URIs, add a new URI:. Under Grant types allowed, check Implicit (Hybrid). The rest of the default values will work. Click Done. Leave the page open or take note of the Client ID. You’ll need it in a bit when you generate a token. Update TokenSecuredResource Now update the TokenSecuredResource class to do two things: 1) Use CDI dependency injection to inject the groups claim from the JWT, if it’s available. 2) Add a default endpoint for the /secured path that is protected by OAuth 2.0 and requires the Everyone group to access. 
Change TokenSecuredResource.java to the following:

package com.okta.quarkus.jwt;

import java.security.Principal;
import java.util.Set;

import javax.annotation.security.PermitAll;
import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.SecurityContext;

import org.eclipse.microprofile.jwt.Claim;
import org.eclipse.microprofile.jwt.JsonWebToken;

/**
 * Version 2 of the TokenSecuredResource
 */
@Path("/secured")
@RequestScoped
public class TokenSecuredResource {

    @Inject
    JsonWebToken jwt;

    @Inject
    @Claim("groups")
    private Set<String> groups;

    @GET()
    @Path("permit-all")
    @PermitAll
    @Produces(MediaType.TEXT_PLAIN)
    public String hello(@Context SecurityContext ctx) {
        Principal caller = ctx.getUserPrincipal();
        String name = caller == null ? "anonymous" : caller.getName();
        boolean hasJWT = jwt != null;
        return String.format("hello + %s, isSecure: %s, authScheme: %s, hasJWT: %s",
                name, ctx.isSecure(), ctx.getAuthenticationScheme(), hasJWT);
    }

    @GET()
    @Path("/")
    @RolesAllowed({"Everyone"})
    @Produces(MediaType.TEXT_PLAIN)
    public String helloRolesAllowed(@Context SecurityContext ctx) {
        Principal caller = ctx.getUserPrincipal();
        String name = caller == null ? "anonymous" : caller.getName();
        boolean hasJWT = jwt != null;
        return String.format("hello + %s, isSecure: %s, authScheme: %s, hasJWT: %s, groups: %s",
                name, ctx.isSecure(), ctx.getAuthenticationScheme(), hasJWT, groups);
    }
}

Try out the new default endpoint:

$ http :8080/secured
HTTP/1.1 401 Unauthorized
...
Not authorized

Generate an OAuth 2.0 Access Token to Test Authentication in Quarkus

Open the OpenID Connect Debugger. You're going to use this page to generate a JWT access token that you can use to authenticate. Follow the steps below to continue:

- Set the Authorize URI to: https://{yourOktaDomain}/oauth2/default/v1/authorize
- Copy your Client ID from the Okta OIDC application you created above and fill it in under Client ID
- Change the scope to be: openid email profile
- Add something for State. It doesn't matter what. It just can't be blank
- Scroll down. Click Send
- Copy the resulting JWT Access Token to the clipboard, and in the terminal where you are running your HTTPie commands, save the token value to a shell variable, like so:

TOKEN=eyJraWQiOiJxMm5rZmtwUDRhMlJLV2REU2JfQ...

Test the JWT With the Protected Quarkus Endpoint

Now that you have a valid JWT from your OAuth provider (Okta), you should be able to use this JWT to authenticate against the protected endpoint.
Try it out:

http :8080/secured "Authorization: Bearer $TOKEN"

You should see something like:

HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 123
Content-Type: text/plain;charset=UTF-8

hello + andrew@gmail.com, isSecure: false, authScheme: MP-JWT, hasJWT: true, groups: [Everyone, Admin]

Add Functionality to Quarkus REST Endpoint

The first step to building a more realistic REST resource is to create a data model class. To do this, you can use a helper project called Lombok (more on their website). To work with Lombok, add the following dependency to your pom.xml.

<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.18.8</version>
    <scope>provided</scope>
</dependency>

NOTE: If you're using an IDE to build your project, and not the command line, install the Lombok plugin for it. For example, see the Lombok IntelliJ plugin.

You're only going to use a small part of Lombok, the @Data annotation, to save time writing boilerplate code in the data model class (check out the annotation docs, if you like). It generates getters, setters, equals(), and hashCode() methods. Go ahead and create a Java class: src/main/java/com/okta/quarkus/jwt/Kayak.java.

package com.okta.quarkus.jwt;

import lombok.Data;

@Data
public class Kayak {

    private String make;
    private String model;
    private Integer length;

    public Kayak() {
    }

    public Kayak(String make, String model, Integer length) {
        this.make = make;
        this.model = model;
        this.length = length;
    }
}

You may have guessed by now that your new REST endpoint is going to manage a list of kayaks. This code is pure JAX-RS and is not specific to working with Quarkus or Kubernetes. JAX-RS is the Java API for RESTful Web Services, an annotation-based specification for configuring REST services. Because it's just a spec, the actual implementation is provided by the Quarkus stack.

Now, you need to create the REST endpoint resource: src/main/java/com/okta/quarkus/jwt/KayakResource.java.
package com.okta.quarkus.jwt;

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Set;

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/kayaks")
@Produces(MediaType.APPLICATION_JSON)
@Consumes(MediaType.APPLICATION_JSON)
public class KayakResource {

    private Set<Kayak> kayaks = Collections.newSetFromMap(Collections.synchronizedMap(new LinkedHashMap<>()));

    public KayakResource() {
        kayaks.add(new Kayak("NDK", "Romany", 17));
        kayaks.add(new Kayak("NDK", "Surf", 16));
        kayaks.add(new Kayak("P&H", "Scorpio HV", 15));
    }

    @GET
    public Set<Kayak> list() {
        return kayaks;
    }

    @RolesAllowed({"Everyone"})
    @POST
    public Set<Kayak> add(Kayak kayak) {
        kayaks.add(kayak);
        return kayaks;
    }

    @RolesAllowed({"Everyone"})
    @DELETE
    public Set<Kayak> delete(Kayak kayak) {
        kayaks.remove(kayak);
        return kayaks;
    }
}

I won't go into a ton of detail here, but I do want to point out a few things. First, notice the @Produces and @Consumes annotations. The Quarkus docs state that it is very important to include these as they're used to optimize the final build. This endpoint uses JSON, specified by the constant MediaType.APPLICATION_JSON.

Also, notice that the GET endpoint is public, but the POST and DELETE endpoints require membership in the Everyone group. Don't confuse Everyone with anonymous. Everyone is a catch-all, default group assigned to anyone that authenticates on the Okta OIDC application. In this instance, it means that a user has authenticated but isn't necessarily part of any other group, such as Admin.

Test Your Quarkus Endpoint and Add a New Kayak

Since you added a new dependency to your pom.xml, you'll need to restart the Maven process running your server. Then, test the POST endpoint without a token to verify that it's protected.

$ http POST :8080/kayaks make="P&H" model="Cetus HV" length=18
HTTP/1.1 401 Unauthorized
...
Not authorized

Now, try again with the token. You'll see that a new kayak has been added to the list!
$ http POST :8080/kayaks make="P&H" model="Cetus HV" length=18 "Authorization: Bearer $TOKEN"
HTTP/1.1 200 OK
Connection: keep-alive
...
[
    { "length": 17, "make": "NDK", "model": "Romany" },
    { "length": 16, "make": "NDK", "model": "Surf" },
    { "length": 15, "make": "P&H", "model": "Scorpio HV" },
    { "length": 18, "make": "P&H", "model": "Cetus HV" }
]

You can delete the newly added kayak with the following command:

http DELETE :8080/kayaks make="P&H" model="Cetus HV" length=18 "Authorization: Bearer $TOKEN"

You might notice there's no PUT (update) in this service. In a more complete service, each record would have a unique ID of some type associated with it. This would allow a client app to specify a specific record for update and deletion (instead of using the record properties themselves and the equals() method). Also, clearly, this resource is pretty naive, storing the data in a class property. In a real application, JPA annotations could be used to map the data model to a database for easy serialization and deserialization.

Learn More About Java, Quarkus, and Token Authentication

All done! In this tutorial, you used Quarkus and Java to create a simple REST service, secured with JWT OAuth using Okta as an OAuth/OIDC provider. You also saw how to use CDI dependency injection to inspect JWT claims and retrieve information about the authenticated (or not) client. Finally, you tried some of the basics of RBAC (role-based access control).

As a reminder, this tutorial was inspired by the Quarkus post: Using JWT RBAC. Quarkus has a ton of other great guides on their website. You can find the source code for this tutorial at oktadeveloper/okta-quarkus-example.
Here are some related blog posts to learn more about Java and authentication:

- Simple Token Authentication for Java Apps
- Build a Web App with Spring Boot and Spring Security in 15 Minutes
- Create a Secure Spring REST API
- Build a Simple CRUD App with Spring Boot and Vue.js

If you have any questions about this post, please add a comment below. For more awesome content, follow @oktadev on Twitter, like us on Facebook, or subscribe to our YouTube channel.

How to Develop a Quarkus App with Java and OIDC Authentication was originally published on the Okta Developer Blog on September 30, 2019.

Published at DZone with permission of Andrew Hughes, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/build-a-java-rest-api-with-quarkus
GNOME Power Manager already has quite a comprehensive DBUS API to allow it to interface with stuff like the brightness applet and gnome-logout screens.

Brightness applet working hand-in-hand with g-p-m

I think this is something that should be cross-desktop compatible, so that XFCE, KDE and GNOME applications can all play nicely. Something like a session service name of org.freedesktop.PowerManagement would be lovely. Of course, this requires standardization of a set API, which traditionally has been hard to agree on.

I've put up a list of the new DBUS API that gnome-power-manager presents, but before anyone stresses out about API compatibility, the old API is present as well, except not documented there. Please have a look through and tell me about any method or signal naming funnies, or areas that don't look right – or even descriptions that don't make sense to mere mortals. When I've got some initial feedback, I'll make the important changes to the new API, and suggest just the "required" DBUS methods to xdg-list for comments about using an org.freedesktop namespace.

p.s. Don't comment too much about the Register() and UnRegister() methods yet, they are just stub functions at the moment, and not tested by gnome-power-self-test at all.

Thanks for any help, Richard.

One response to "Desktop Power Management"

A method to retrieve the system idle time would definitely be useful; right now you have to poke X to provide this. I'd also include more options for device power management, for example CPU, hard disks, wireless card and others, as you already included with display. And while at it, shouldn't the screensaver be able to be told to lock the display?
http://blogs.gnome.org/hughsie/2007/01/08/desktop-power-management/
#include <CGAL/linear_least_squares_fitting_2.h> computes the best fitting 2D line of a 2D object set in the range [ first, beyond). The value returned is a fitting quality between \( 0\) and \( 1\), where \( 0\) means that the variance is the same along any line (a horizontal line going through the centroid is output by default), and \( 1\) means that the variance is null orthogonally to the best fitting line (hence the fit is perfect). It computes the 2D best fitting line (in the least squares sense) of a set of 2D objects such as points, segments, triangles, iso rectangles, circles or disks. The best fitting line minimizes the sum of squared distances from all points comprising these objects to their orthogonal projections onto the line. It can be shown that this line goes through the centroid of the set. This problem is equivalent to search for the linear sub-space which maximizes the variance of projected points (sum of squared distances to the centroid). Internally we solve this problem by eigen decomposition of the covariance matrix of the whole set. Note that the \( 2 \times 2\) covariance matrix is computed internally in closed form and not by point sampling the objects. Eigenvectors corresponding to large eigenvalues are the directions in which the data has strong component, or equivalently large variance. If one eigenvalue is null the fit is perfect as the sum of squared distance from all points to their projection onto the best line is null. If the two eigenvalues are the same there is no preferable sub-space and all lines going through the centroid share the same fitting property. The tag tag identifies the dimension to be considered from the objects. For point sets it should be 0. For segments it can be 1 or 0 according to whether one wants to fit the whole segment or just their end points. For triangles it can range from 0 to 2 according to whether one wants to fit either the triangle points, the segments or the whole triangles. 
For iso rectangles it can range from 0 to 2 according to whether one wants to fit either the corner points, the segments, or the whole rectangles. For circles it can be 1 or 2 according to whether one wants to fit either the circles or the whole discs.

The class K is the kernel in which the value type of the InputIterator is defined. It can be omitted and deduced automatically from the value type.

The class DiagonalizeTraits_ is a model of DiagonalizeTraits. It can be omitted if Eigen 3 (or greater) is available and CGAL_EIGEN3_ENABLED is defined: in that case, an overload using Eigen_diagonalize_traits is provided.

Requirements

InputIterator must have a value type equivalent to K::Point_2, K::Segment_2, K::Triangle_2, K::Iso_rectangle_2 or K::Circle_2.

line is the best fitting line computed.

centroid is the centroid computed. This parameter is optional and can be omitted.

tag is the tag identifying the dimension to be considered from the objects. It should be one of Dimension_tag<0>, Dimension_tag<1> or Dimension_tag<2>. Also, it should not be of dimension greater than the geometry of the object. For example, a Segment can not have a Dimension_tag<2> tag.
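The recipe described above (centroid, 2x2 covariance matrix, closed-form eigendecomposition) can be reproduced in a few lines outside CGAL. This pure-Python sketch handles the point case (Dimension_tag<0>); the quality value 1 - λmin/λmax used here is one reasonable reading of the fitting quality described above, not CGAL's exact implementation:

```python
import math

def fit_line(points):
    """Least-squares line fit via the 2x2 covariance matrix.
    Returns (centroid, direction, quality)."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n

    # Covariance matrix [[sxx, sxy], [sxy, syy]] about the centroid.
    sxx = sum((x - cx) ** 2 for x, _ in points) / n
    syy = sum((y - cy) ** 2 for _, y in points) / n
    sxy = sum((x - cx) * (y - cy) for x, y in points) / n

    # Eigenvalues of a symmetric 2x2 matrix, in closed form.
    half_tr = (sxx + syy) / 2
    det = sxx * syy - sxy * sxy
    disc = math.sqrt(max(half_tr ** 2 - det, 0.0))
    l_max, l_min = half_tr + disc, half_tr - disc

    # Eigenvector for l_max = direction of maximum variance.
    if abs(sxy) > 1e-12:
        direction = (l_max - syy, sxy)
    else:
        # Diagonal covariance: pick the axis with the larger variance
        # (a horizontal direction when both are equal, as described above).
        direction = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)

    quality = 1.0 - l_min / l_max if l_max > 0 else 0.0
    return (cx, cy), direction, quality

# Collinear points: one null eigenvalue, so the fit is perfect.
print(fit_line([(0, 0), (1, 1), (2, 2), (3, 3)])[2])   # 1.0
# Corners of a square: both eigenvalues equal, no preferable sub-space.
print(fit_line([(0, 0), (0, 1), (1, 0), (1, 1)])[2])   # 0.0
```

The two test cases mirror the two extremes of the quality measure described in the text: null variance orthogonal to the line, and identical variance along any line through the centroid.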
https://doc.cgal.org/latest/Principal_component_analysis/group__PkgPrincipalComponentAnalysisDLLSF2.html
14 December 2012 14:32 [Source: ICIS news]

TORONTO (ICIS)--Canadian chemical sales fell 1.1% to Canadian dollar (C$) 3.84bn ($3.92bn) in October from September this year, a statistics agency said on Friday. Compared with October 2011, chemical sales were down 5.0% year on year, Statistics Canada said.

The declines were partly offset by higher petroleum and coal products sales, as well as increased sales for wood product industries, the agency said. Compared to October 2011, overall manufacturing sales were up 0.1%.

October manufacturing inventories were C$66.2bn, up 1.3% from September and up 2.1% year on year from October 2011. The inventory-to-sales ratio was 1.36 in October, compared with 1.32 in September and 1.33 in October 2011. The ratio is a measure of time, in months, that would be required to exhaust inventories if sales were to remain at their current level.

($1 = C$0
http://www.icis.com/Articles/2012/12/14/9624667/canadas-october-chemicals-sales-fall-1.1-from-september.html
The up arrow key.

Use this as a parameter to a function like Input.GetKey to detect when the user presses the up arrow key. See Also: Input.GetKey, Input.GetKeyDown, Input.GetKeyUp.

//Attach this to a GameObject
//This script tells when the up arrow key is pressed down and when it is released
using UnityEngine;

public class Example : MonoBehaviour
{
    void Update()
    {
        //Detect when the up arrow key is pressed down
        if (Input.GetKeyDown(KeyCode.UpArrow))
            Debug.Log("Up Arrow key was pressed.");

        //Detect when the up arrow key has been released
        if (Input.GetKeyUp(KeyCode.UpArrow))
            Debug.Log("Up Arrow key was released.");
    }
}
https://docs.unity3d.com/ja/2017.4/ScriptReference/KeyCode.UpArrow.html
I'm having trouble with getting multiple dynamic inheritance to work. These examples make the most sense to me (here and here), but there's not enough code in one example for me to really understand what's going on, and the other example doesn't seem to be working when I change it around for my needs (code below).

I'm creating a universal tool that works with multiple software packages. In one software, I need to inherit from 2 classes: 1 software specific API mixin, and 1 PySide class. In another software I only need to inherit from the 1 PySide class. The least elegant solution that I can think of is to just create 2 separate classes (with all of the same methods) and call either one based on the software that's running. I have a feeling there's a better solution. Here's what I'm working with:

## MainWindow.py
import os
from maya.app.general.mayaMixin import MayaQWidgetDockableMixin

# Build class
def build_main_window(*arg):
    class Build(arg):
        def __init__(self):
            super( Build, self ).__init__()
        # ----- a bunch of methods

# Get software
software = os.getenv('SOFTWARE')

# Run tool
if software == 'maya':
    build_main_window(maya_mixin_class, QtGui.QWidget)
if software == 'houdini':
    build_main_window(QtGui.QWidget)

# class Build(arg):
# TypeError: Error when calling the metaclass bases
# tuple() takes at most 1 argument (3 given)

## MainWindow.py
import os

# Build class
class BuildMixin():
    def __init__(self):
        super( BuildMixin, self ).__init__()
    # ----- a bunch of methods

def build_main_window(*args):
    return type('Build', (BuildMixin, QtGui.QWidget) + args, {})

# Get software
software = os.getenv('SOFTWARE')

# Run tool
if software == 'maya':
    from maya.app.general.mayaMixin import MayaQWidgetDockableMixin
    Build = build_main_window(MayaQWidgetDockableMixin)
if software == 'houdini':
    Build = build_main_window()
I would suggest simplifying your code to this: # Get software software = os.getenv('SOFTWARE') BaseClasses = [QtGui.QWidget] if software == 'maya': from maya.app.general.mayaMixin import MayaQWidgetDockableMixin BaseClasses.insert(0, MayaQWidgetDockableMixin) class Build(*BaseClasses): def __init__(self, parent=None): super(Build, self).__init__(parent)
https://codedump.io/share/xuDtzlzAEpgg/1/python--multiple-dynamic-inheritance
RipIt replied to RipIt's topic in For Beginners

Thank you very much, caldier, that was very helpful and yes Daaark i can read the site. I haven't yet but i am going to look at it now!

RipIt replied to RipIt's topic in For Beginners

I would but due to the nature of where i am (afghanistan) i cannot open pages such as games and certain other things due to being blocked by filter through the ACL. I will give it a try and look into all this when i return (jan-feb) but for now i am just trying to grasp basic knowledge of the things i can and cannot due in certain languages and which languages are easier / more apt for certain things such as text based RPG through browser and what not. i appreciate all the help you have provided me and i will continue to use these forums for a long time to come, asking for help and providing where i can haha.

RipIt replied to RipIt's topic in For Beginners

Ok, thank you, i am trying to get a grasp as i am learning the basics to C++ and will soon use some type of graphics library with it to practice drawing maps / tile based screens through an array? i think thats how its done, from what i have read so far. still not sure about all that lol.

RipIt posted a topic in For Beginners

I am curious as to what i could use to create these games in without having the bottleneck be the language / library / API--not sure if those are the right terms, still new a bit to it all. What would that mean for graphics and game play / user accesibility? (the porting to HTML5 part)
I feel like i may have missed something so im just trying to clarify: What would be the "best" language to create the following, if its this simple, im still a bit lost lol sorry. Text-based RPG

So if the support for Java in browsers is going to "die" what will then happen to such things like runescape or games like?

RipIt replied to RipIt's topic in For Beginners

Quoting Servant of the Lord:

"I'm assuming you posted this post before seeing my previous post. Few notes about internet etiquette:

- Bold, italic, or colored text is for emphasizing a few words in a sentence - not for highlighting your entire post. It's considered rude and pushy.
- This is an internet forum, not a chat room, it takes time for people to respond - it's also considered impolite to bump your post after only a single hour. We're helping you make an informed decision, giving our time for free. Have patience! We don't get paid to sit here, we're doing other things at the same time.
- We provide you with information so you can make informed decisions. If we told you, "Use python", it removes the burden of a decision from you, and while easier for you, doesn't help you know why python is a good choice. Remember, nothing is "better" or "best", just different and better suited to certain situations.

(Note: I am not a moderator, and I'm not giving you orders or commands - these are just general "How to be a proper gentleman when online" rules that apply to the entire internet)

That said, use python. (You just can't embed it in a website, as far as I know)"

I wasn't meaning to bold it, i was actually suprised by that myself, i didnt mean to or know i did.
I know, i am not trying to be pushy or even bump my thread i was just posting that comment in here since i had a thread going already instead of making a new thread, i was just curious that is all. Sorry if i came / come off as pushy or a jerk. not my intentions :/

RipIt replied to RipIt's topic in For Beginners

I am thinking more like Runescape as i played this game since it first started to just recently. Except mine would obviously be much worse movement and graphics, even worse than when Runescape started, haha. Thank you so much for this information as it has helped me a lot. So if i were to want to do the Runescape type game, i could use say Java for an online RPG text based as well as long term Runescape type game? So i could use C++ for the server side / game engine ? and Java as the actual game code ? i may be confused to this part still.

RipIt replied to RipIt's topic in For Beginners

I posted the above as you were typing that up, sorry lol.

RipIt replied to RipIt's topic in For Beginners

This help's a lot. Yes i want to start with a "simple" and "basic" game. i guess it just seemed a bit too simple in my head and i like to push ahead for results of what i want (impatient) lol. So with your post and the prior post about c++ and SFML, would c++ and sfml be the ideal language to use / continue learning for a BASIC GAME and then move to an online RPG (eventually)? I haven't made tetris or pong yet, i am deployed right now and i cant download the SFML due to blocked sites / restrictions on downloads :/ I am just reading C++ for dummies and using the contained CD with code::blocks and various internet sites for help like this site and cplusplus.com It's really just a learning phase and planning phase for things i would like to happen when i get back with making games like tetris and pong in between then and hopefully if time permits a tiny 2D basic game.
RipIt replied to RipIt's topic in For Beginners

Ok, so in order to make a multiplayer 2D online topdown RPG Javascript and python would be better? I thought C++ was used to make tons of games now a days?

RipIt replied to RipIt's topic in For Beginners

How would i go about setting up a server for that situation i guess is my problem / question. What are free resources i could use to host it on a server i have at my house?

RipIt posted a topic in For Beginners

Sooo.. Not sure quite where im going with this but my goal by the end of next year is to make a BASIC 2D top-down RPG that allows users to go to a website and login and play the game online with other players. I am learning C++ and SFML, using code::blocks. Thank you for all the help and i hope this is a good explanation of what i want / am asking you. Any help/suggestions/tips/guides will be wonderful thank you!
https://www.gamedev.net/profile/203565-ripit/?tab=reputation
Episode 11 · July 7, 2014

Using Rails to upload files manually and how you can do it even cleaner using Carrierwave

So file uploading is the next feature we want to add to our application, and we're going to talk about how to do this in pure rails, as well as using the carrierwave gem to simplify things considerably. So, what do we want to upload files for? Well, our books like "Mastery" could use a bit of sprucing up, they could use an image for the book cover, and it would be nice if we could click on "Edit" and see a file field and be able to upload an image there. So that's what we're going to do. This is pretty straightforward; however, when we do this, we're going to need to do a fair amount of work because we have to understand how rails works internally a little bit, so that we can place the files in the proper directory so that they're available for the browser.

So if we jump into our terminal, and we take a look at the public folder, this is pretty special. The public folder holds static files that are served up by your web server, and they are hit before rails is. So if you type "404.html", it will bring up the 404 page just like you see here, but this is not being served up through rails. It sees 404.html inside this folder, and it immediately serves it up, so there's no processing going on, there's no database lookups or hitting your rails routes or any of those things. It sees the file name and it serves it up. If this didn't exist, it would go through rails and look for a route and go through the regular process, but it doesn't because this url matches exactly that. So we're going to take advantage of this knowledge, and we're going to put it here, a folder for the book in the book covers. So we're going to have to design how that works so that we can properly implement this and handle it. In our terminal, we can go into the same public directory and list out the files. And if we think about it, this is the perfect place to put our uploaded images.
We can store them here, and they'll be served up without ever processing through rails, which will be very quick compared to going through rails, since they're static files. So in here, we want to separate everything out very cleanly because things can change in the future, and we want to be ready for that. So for now, we're just uploading images for books, but what if we want to add avatars for users later on? Well, that means we should separate those out into folders at the highest level. So if we make a directory called books, here we can store all of the images and file uploads for books. If we make images and avatars for users, then we can have a users one, and separate those two out cleanly. Let's talk about what else could change: what if our books have multiple images, what if we have an image for the book cover? For the back of the book? What if we have an image for the author? All of these mean that we could have more folders inside of here for every single book. So at the next level, we want to add a folder for a dynamic folder name that is the book database id, or the database slug. We're going to use the database id because the slugs can change, and we're just going to stick with the ids to keep it simple. So book number two is "Mastery", and we're going to hard-code an image in here and test it all out.
If we keep the original filename, we can save this mastery_cover.jpg into the database and then our book knows that it does have a cover. So if our field is empty in our database record, then it knows that there is no cover, and we can use regular rails ActiveRecord code to skip the image, and if it does have one, it can look and point the image tag to this folder. So this is how we want to structure the folders in the image uploading at the very lowest level on your file system. So if we dive into our code in the show view, we can do something like this, which is still somewhat hardcoded. So we can say there's an image tag for /books/#{@book.id}/cover/Mastery_Cover.jpg, and if we open this up in our browser, the image loads. So this knows to automatically look for the image where we stored it, and if your paste the image url into your browser, you can see that its books/2/cover/Mastery_Cover.jpg, which is exactly the same folder and file name as we just created in our terminal. Now that we've figured out where we want to save our images on disc, we can go to the edit form and the new form and begin adding the file upload field as we had planned in the very beginning. So we need to add the field, we need to have it save the file, and we also need to save the file name into the database record. So let's start with the form app/views/books/_form.html.erb //... <div class="form-group"> <%= f.label :cover %> <%= f.file_field :cover %> </div> Now if we refresh the page, we have a cover attribute that we can upload. So if we upload the mastery cover again, and update book, nothing changed, nothing crashed, but it didn't actually do anything and the reason why is because if we go to our rails logs, we can come back to the PATCH method that we just saw, and we can see unpermitted parameters cover. 
So the cover file is being uploaded, and as you can see here, it is an action dispatch uploaded file instance, it has a temp file assigned to it, and we can see the file name and the content type and everything about it. and that means that our file is being uploaded correctly, however, we're just ignoring it when we recieve it. So we need to go into our books controller to actually allow it to happen. So if you jump down at the bottom, you can add cover into the book params method as a permitted attribute. So this will now allow the cover image to be submitted, and it will try to assign it to the book. However on the book, we don't actually have anything but FriendlyID on there, we've never added the method for this cover, and we don't even have an attribute for the cover file name that we need to save. So let's start there. Let's go into our terminal and generate a migration called rails g migration AddCoverFilenameToBooks cover_filename rake db:migrate We finally have everything we need to actually upload and process this image, and that's what we're going to do now. so if we add an attr_accessor :cover this allows the controller to assign the image we uploaded to the cover attribute on this book, and then we can go doing the actual saving into the file system where we want. So if we add an after_save callback and we'll call it save_cover_image, this allows us to take that temporary file that we uploaded and really save it into the public directory like we talked about. So we only want to do this if there's a cover image. So if this is nil, then we won't try to save the image again. 
So then we can add app/models/book.rb class Book < ActiveRecord::Base extend FriendlyId friendly_id :name, use: :slugged attr_accessor :cover after_save :save_cover_image, if: :cover def save_cover_image filename = cover.original_filename folder = "public/books/#{id}/cover" FileUtils::mkdir_p folder f = File.open File.join(foler, filename), "wb" f.write cover.read() f.close self.cover = nil update cover_filename: filename end end So what this does very simply is it creates the folder, it writes the file to it, and then it updates the book record with the filename. So we can add a new file, a different file name and it will always know which one to point to, and self.cover = nil is very important, because if you don't have it, it will go into an infinite loop and continue trying to save this cover image over and over and over again. Now that we have the file uploading working, we can go into our show action, and we can remove the hardcoded file name that we had before, and replace it with Book.cover_filename if Book.cover_filename?, so this will only display the image if the book has had an image uploaded for it. So now we can go into our public folder again, and let's remove the images that we have, and the folder that we created before, and then we can go to our application, we can go into "Mastery", there is no image being displayed, and if we edit and we upload Mastery's cover image, now it displays, and now if we go back and we go to "How to win friends and influence people" and do the same thing, we can see that it also displays properly. If we go into one of the books that I haven't played with, you can see that there's no image here. Well this code isn't terribly complex, it starts to get pretty nasty if we start adding in image editing like cropping and resizing, and even doing work like uploading these images to Amazon s3 or Rackspace cloud files makes this a whole heck of a lot more complex. 
Not to mention, if you wanted to add an author image here, you pretty much have to duplicate everything we just wrote. So what we're going to talk about next is how you can use carrierwave to replace this. But now that you have a good understanding of why this is important and how carrierwave works at the most basic level, you'll be able to actually use carrierwave pretty extensively, and it won't feel like random magic that someone put together for you. Now that we've design our own mechanism for uploading and storing files on our server, let's take a look at how carrierwave does it. Now the one thing that I want to point out here is that carrierwave has a concept of an uploader. And an uploader is a ruby class that is defined, and it inherits from carrierwave's internal helpers. So it has a class that you basically you mount on your ActiveRecord model that says: Ok, any interaction with this cover attribute will be through carrierwave, so it's going to help you store the file weather that's on your local server or on Amazon s3 or Rackspace cloud, and it's also going to handle all the image processing that happens, and you can also do things such as configuring the storage directory where your files are saved. So this is what the uploader is designed for, it's to encapsulate all of the logic that happens inside of your application when an image is uploaded, and the reason why they do this is because like we talked about earlier, if you were to have an image for the book cover, as well as an image for the author, you might want to process those separately. So one of them might need to be a certain size, and the other one might need to be a different size. Now you can create two uploaders and separate all of that code out very cleanly in between the two. 
So to transfer our custom file uploading system over to CarrierWave, we're going to first install the carrierwave gem; that goes in our Gemfile, at the bottom. Then we can run `bundle install`, restart our Rails server, and run `rails generate uploader` to generate a CarrierWave uploader. I'm going to call this one `cover` so that it generates the cover uploader, and this will be what we use to handle cropping or whatever else we want to do with the cover images. Now that this is generated, let's take a look at what it does. In here, we can see at the top there are a couple of comments for plugins to CarrierWave that you can install that let you use ImageMagick for image cropping and scaling. If you'd like, you can follow the CarrierWave README and learn how to install ImageMagick and enable these features. The next thing is `storage :file`, which basically tells it: save to the storage directory chosen here. This is very similar to how we laid out the folder structure for our book covers, except they have the class name of the book first, then `cover` (which is what the uploader is mounted as), and then the model id. So that is how the file storage saves to a certain location. Then at the bottom you can override filenames, and you can also enable a whitelist of extensions so that only JPGs, GIFs, or PNGs are allowed. The option underneath `storage :file` is `storage :fog`, and fog is the rubygem that allows you to interact with remote file systems. So if you're going to use Amazon S3 or Rackspace Cloud Files, or you want to do something different, you can use fog to interact with those remote systems, and then when CarrierWave receives an uploaded file, it will go and save it remotely. So this is how CarrierWave defines all of its customizations for an upload, and you simply configure it in here.
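The two directory layouts being compared can be written down directly. This little Python sketch mirrors the descriptions above (the helper names are mine):

```python
def carrierwave_store_dir(model_class_name, mounted_as, model_id):
    # CarrierWave's default layout as described above: class name,
    # then the mounted attribute, then the record id.
    return "uploads/{}/{}/{}".format(model_class_name.lower(), mounted_as, model_id)

def custom_store_dir(model_id, mounted_as):
    # The hand-rolled layout from earlier in the episode: id first,
    # so all of one record's files share a folder.
    return "public/books/{}/{}".format(model_id, mounted_as)
```

Same three pieces of information, just in a different order.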
There are only two more things we need to do to finish installing CarrierWave: rename the column on our books table from `cover_filename` to `cover`, and then tell the Book model that the `cover` database column is where we want to store the uploaded files.

```shell
rails g migration RenameCoverFilename
```

```ruby
# db/migrate/xxxx_rename_cover_filename.rb
def change
  rename_column :books, :cover_filename, :cover
end
```

```shell
rake db:migrate
```

```ruby
# app/models/book.rb
class Book < ActiveRecord::Base
  extend FriendlyId
  friendly_id :name, use: :slugged

  mount_uploader :cover, CoverUploader
end
```

Any time that a file is assigned to the cover attribute on a book, CarrierWave will step in, handle it, and do everything that you've defined in the cover uploader. With that, we can take a look at our Rails application. We get an uninitialized constant error because we need to restart the application; if we restart it now, everything is set up properly after installing CarrierWave. Now we can come into our book, and we'll see that the image tag we were using before no longer works, because we don't have the `cover_filename` attribute anymore. With CarrierWave, the way to access the images is very simple and much cleaner than what we wrote before. So here's what we wrote before, and here's what we can do with CarrierWave:

```erb
<%= image_tag @book.cover.url %>
```

That's as simple as it is. CarrierWave takes the cover attribute and adds some methods onto it to retrieve the URL, whereas before we had to build the URL ourselves. Now, we could have spent a whole bunch of time reworking our code to make it compatible with CarrierWave, but there isn't really much point in doing that when you can just use CarrierWave. The new image tag is working, but because we uploaded the images to a different folder name, the image isn't available. So we're going to add the same guard here, and say `if @book.cover?`, so that the image does not display if CarrierWave doesn't see a cover.
This is a little bit better, just because it's very clear whether there's a book cover, so our code is really readable this way with CarrierWave, and that makes a big difference as your application gets bigger. We should now be able to go into "Edit" and re-upload the "Mastery" cover, and there we go. If we want to compare this to before, we can open the image in a new tab and take a look at the URL, which is just about the same. It's now in `uploads/book/cover` and then the database id, whereas ours had the database id first. It doesn't really matter either way; the way that I designed it, with the database id first, means that all of the images for a single record are in the same folder, which can be pretty convenient if you're going to do some manual work messing with images on the server. So that is CarrierWave. I hope you learned quite a bit about file uploading; there's a whole ton to it, and I highly recommend checking out fog and playing with Amazon S3, because it's free for a year and worth checking out. Transcript written by Miguel
https://gorails.com/episodes/file-uploading-with-carrierwave?autoplay=1
The navigation that the application does to serve you a different view is called routing. Let's get a solid understanding of routing in Angular. Today we will be looking at one of the many interesting features of any frontend framework, routing, and how it is done in Angular. When building a single-page application (SPA) using Angular, you need to take care of navigation so that you can serve different views to your users as needed. A great illustration is when you open any business website today: you will see the homepage, the Contact page, the About page and so on. If you proceed to click on the About page, you still see all the elements in the navigation, but the rest of the view changes to the About page. The same thing happens when you click the Contact page or any other page. The navigation that the application does to serve you a different view is called routing. Angular has a library package called the Angular Router (@angular/router) that takes care of routing in your Angular projects. If you set up the router and define routes, you can input a URL and Angular will navigate you to the corresponding view. You can click on a link or button and get navigated, or you can use the browser back and forward buttons to trigger navigation. Today we are building a simple navigation component to illustrate the concept of routing in Angular. We will be configuring the routes manually instead of letting the Angular CLI do it, so you can understand the work that the CLI does when you use it. Create a new folder in a location of your choice on your machine and open it with VS Code. Open the terminal and run the command below:

```shell
ng new router
```

When the Angular CLI prompt asks if you want to add routing, choose No and complete the setup for your project. Now let us install Bootstrap so we do not have to style the navbar component ourselves.
```shell
npm install bootstrap
```

After this, open your angular.json file and make sure the styles array is defined like this:

```json
"styles": [
  "node_modules/bootstrap/dist/css/bootstrap.min.css",
  "src/styles.css"
]
```

Now we want to generate the About and the Contact components:

```shell
ng generate component about
ng generate component contact
```

You can see that new files have been created and that the app module file has been updated. Inside the app.component.html file, replace the content with an HTML list with three list items: Home, About and Contact. This is what we want to connect to the Angular Router so that we can serve a different view for every new page we navigate to. To display content from a child component, you have to tell Angular where exactly in the template you want the display to be. In the app.component.html file, add these new lines:

```html
  <app-about></app-about>
  <app-contact></app-contact>
</div>
```

Now when you save your work, run the dev server and open the browser to localhost:4200, you should see this:

The routes are always defined in the app module. Open the app module file and replace the content with the code block below:

```typescript
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

import { AppComponent } from './app.component';
import { AboutComponent } from './about/about.component';
import { ContactComponent } from './contact/contact.component';

const routes: Routes = [
  { path: 'about', component: AboutComponent },
  { path: 'contact', component: ContactComponent }
];

@NgModule({
  declarations: [
    AppComponent,
    AboutComponent,
    ContactComponent
  ],
  imports: [
    BrowserModule,
    RouterModule.forRoot(routes)
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }
```

Here we made three changes: first we imported the router module from Angular, then we created the routes in an array, and finally we registered the router by adding RouterModule.forRoot(routes) to the imports array.
This is how to set up routes in Angular. You can define the navigation and all the views exactly how you want them here. If you save your project at this stage, you will see that nothing has really changed in the views you serve. This is because we still have app-about and app-contact displaying content from their parent component. To change that, Angular provides the router outlet for use in the template. It basically tells Angular to check the routes defined and serve views according to those definitions:

```html
  <router-outlet></router-outlet>
</div>
```

If you save the project, you will see that everything works as you would expect it to. This has been a quick introduction to routing in Angular. We have learned how important navigation is and how Angular handles it with routing. We saw how to set up routing from one component to another easily using the router module.
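The lookup that the router outlet performs can be sketched in a few lines. Here it is in Python purely for illustration; the function and its fallback are my own simplification, not Angular's actual matching algorithm:

```python
# routes mirroring the Routes array defined in the app module
routes = [
    ("about", "AboutComponent"),
    ("contact", "ContactComponent"),
]

def resolve(url_path, route_table, default="AppComponent"):
    """Pick the component whose configured path matches the URL segment;
    fall back to the shell component when nothing matches."""
    for path, component in route_table:
        if url_path.strip("/") == path:
            return component
    return default
```

Navigating to /about serves the About component, /contact serves the Contact component, and anything else falls back to the shell.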
https://www.telerik.com/blogs/angular-basics-beginner-guide-angular-router
Hi, First of all I apologize if I duplicate my post, but I didn't see my mail from yesterday pop up on the list. So here it is again :) I am looking into implementing a system that would take a high volume of Apache logs as input in order to produce reports used on a stats website (tables, graphs...). The main requirement is low latency, which is the reason we want to get rid of MySQL, currently not doing the job properly. This led me to study various NoSQL databases, but CouchDB was the one that caught my attention most. It's easy to set up and play with, and the incremental views seemed to really fit my needs. It allowed me to easily import my existing data and start experimenting with views, which is what leads me here today. I am struggling to write a particular type of view and hope I can find some guidance here... To summarize, here is what I am trying to achieve: my database is filled with requests containing (among other things) datetime and client_id. The view I'm trying to write should display the number of occurrences for a date range, grouped by client_id. Unfortunately I can't manage to create a view that contains the expected result in the right format. The closest I could get produces a result as follows:

  [2009, 11, 19, 61] => 14
  [2009, 11, 20, 61] => 30
  [2009, 11, 20, 64] => 30
  ...

This represents 14 occurrences for client 61 on 2009-11-19, 30 on 2009-11-20, and so on. What I would like to retrieve is a result grouped on client id (to avoid having multiple rows for the same client id). The view is called with startkey=[2011,11,1]&endkey=[2011,11,30,{}]&group=true&stale=ok. This is important for me because not being able to group means I am getting a "row" per day per client, which makes the response too big and the latency too high on the front end. Thanks, Thomas
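P.S. To make the shape I'm after concrete, here is the aggregation expressed in Python over the rows shown above. This is the client-side reduction I am currently forced to do per request, and which I would like the view to do for me:

```python
# the view rows quoted above: key = [year, month, day, client_id], value = count
rows = [
    ((2009, 11, 19, 61), 14),
    ((2009, 11, 20, 61), 30),
    ((2009, 11, 20, 64), 30),
]

def totals_by_client(view_rows):
    """Collapse the per-day rows into one total per client_id."""
    totals = {}
    for (year, month, day, client_id), count in view_rows:
        totals[client_id] = totals.get(client_id, 0) + count
    return totals
```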
http://mail-archives.apache.org/mod_mbox/couchdb-user/201201.mbox/%3C1327047073.2079.1.camel@Thomas%3E
in reply to Re: Re: Automatic module installation in thread Automatic module installation Sigh. Please do learn the difference between one level of indentation and two. The "not available on CPAN" remark was a reply to Corion's posting, not yours. Corion after all claimed the The::Net namespace was unclaimed, which (s)he could only think if (s)he was assuming the The::Net module wasn't available on CPAN.
http://www.perlmonks.org/index.pl?node_id=169548
I'm pretty new to Python; however, I've been trying to write a script that first connects to a server via SSH, then issues a command to read the contents of a file. <-- this works. My issue now is that I really don't want all of the contents of the file. I'd like to take the output from the SSH command and then print out only the lines matching particular search terms. I'm having a lot of trouble getting this to work, and I'm sure it can't be that complicated. All the examples I've found online pretty much involve a filename.txt to start with. However, in my case I'm trying to read a file through SSH and then only display what I want. Hoping someone can provide me with some sort of example of how I can achieve this. I've included my code below with only the reading of the file part, since nothing I was doing past that would work.

```python
#!/usr/bin/python
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('HOSTIP', username='myusername', password='mypassword')
stdin, stdout, stderr = ssh.exec_command("cat /path/to/my/text/mytext.txt")
stdin.flush()
data = stdout.read()
print data
```

Hope someone can help! Cheers
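P.S. To be clear about the kind of filtering I mean, here's a sketch of what I'm trying to get once the contents are in `data`. The search terms and sample text are made up; I'd like to know if this is a reasonable approach:

```python
import re

def matching_lines(text, terms):
    """Keep only the lines that contain any of the given search terms."""
    pattern = re.compile("|".join(re.escape(t) for t in terms))
    return [line for line in text.splitlines() if pattern.search(line)]

# e.g. after the SSH call: matching_lines(data, ["ERROR", "WARN"])
sample = "ok line\nERROR disk full\ninfo\nWARN low memory"
```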
http://www.python-forum.org/viewtopic.php?f=6&t=11903
Recently, I needed to run a query over a date range, where the start and end dates came in as strings from a user. If I could pass these strings directly to SQL Server as part of a string query, this would be simple. I could just have:

```csharp
sql = "SELECT * FROM Orders WHERE OrderDate BETWEEN '" + startdate + "' AND '" + enddate + "'";
```

However, since the input was coming from a user, I did not want to do this because of SQL injection attacks, etc. As a result, I needed to create parameters for my query instead. In this particular case, I could not use a stored procedure, so I used a parameterized query. Below is the code I used to accomplish this. It also includes an example of how to use version 3.0 of the Data Access Application Block, which allows for complete database independence (unlike version 2.0). If you have not checked that project out, it is very useful. You can reach it at: I struggled for a while when I tried to figure out how to do a parameterized query using the Data Access Application Block, so hopefully this code will serve as an example of that also. To do this, you first have to tack on 12:00AM to the start date and 11:59PM to the end date. This makes sure you get the entire day. If you are using stored procedures, you can do this inside the procedure.
So here is the code:

```csharp
using System;
using GotDotNet.ApplicationBlocks.Data;
using System.Data;

public class Converter
{
    public static void Main( string[] args )
    {
        if( args.Length != 2 )
        {
            Console.WriteLine( "Please supply a start and end date" );
            return;
        }

        IDataReader rdr = GetTheDates( args[0], args[1] );
        while( rdr.Read() )
        {
            Console.WriteLine( "OrderId: {0} - CustomerId: {1}",
                rdr[ "OrderId" ], rdr[ "CustomerId" ] );
        }
    }

    public static IDataReader GetTheDates( string startdate, string enddate )
    {
        string dbcon = "Data Source=localhost; Integrated Security=SSPI;Initial Catalog=northwind";
        SqlServer s = new SqlServer();

        // Expand the bare dates to cover the whole day.
        startdate += " 12:00AM";
        enddate += " 11:59PM";

        IDataParameter[] p = new IDataParameter[2];
        p[0] = s.GetParameter( "@StartDate", DateTime.Parse( startdate ));
        p[1] = s.GetParameter( "@EndDate", DateTime.Parse( enddate ));

        string sql = "SELECT * FROM Orders WHERE OrderDate BETWEEN @StartDate AND @EndDate";

        return s.ExecuteReader( dbcon, CommandType.Text, sql, p );
    }
}

// COMPILE USING: csc string_datetime_convert.cs /r:GotDotNet.ApplicationBlocks.Data.dll
```

I hope this article was helpful. It is written for beginners in C# who are just learning how to connect to databases. Again, I would highly recommend using the Data Access Application Block instead of the ADO.NET functions directly. The DAAB is very simple to use and reduces the amount of code drastically. This is my first article here, so any suggestions about the content or formatting would be appreciated.
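The same two ideas (expanding the bare dates to full-day bounds, and binding them as parameters rather than concatenating) carry over to any database API. Here is a small Python/sqlite3 sketch of the pattern; the table, columns, and data are invented for the demo:

```python
import sqlite3
from datetime import datetime

def orders_between(conn, startdate, enddate):
    # Expand 'YYYY-MM-DD' strings to full-day bounds, then bind them as
    # placeholders instead of concatenating user input into the SQL string.
    start = datetime.strptime(startdate + " 12:00AM", "%Y-%m-%d %I:%M%p")
    end = datetime.strptime(enddate + " 11:59PM", "%Y-%m-%d %I:%M%p")
    sql = ("SELECT OrderId FROM Orders "
           "WHERE OrderDate BETWEEN ? AND ? ORDER BY OrderId")
    rows = conn.execute(sql, (start.isoformat(sep=" "), end.isoformat(sep=" ")))
    return [r[0] for r in rows]

# demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Orders (OrderId INTEGER, OrderDate TEXT)")
conn.executemany("INSERT INTO Orders VALUES (?, ?)",
                 [(1, "2020-01-01 10:30:00"),
                  (2, "2020-01-02 09:00:00"),
                  (3, "2020-01-03 08:00:00")])
```

As in the C# version, the user-supplied strings never touch the SQL text; the driver binds them as values.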
https://www.codeproject.com/Articles/8321/Converting-Strings-to-DateTimes-for-Use-in-ADO-NET?fid=107636&df=90&mpp=10&sort=Position&spc=Relaxed&tid=932482
Signed-off-by: Marc Branchaud <marcn...@xiplink.com>
---

This started out as an attempt to make the backward compatibility notes
more parsable, but then I just kept going...

		M.

 Documentation/RelNotes/1.8.3.txt | 145 +++++++++++++++++++--------------------
 1 file changed, 72 insertions(+), 73 deletions(-)

diff --git a/Documentation/RelNotes/1.8.3.txt b/Documentation/RelNotes/1.8.3.txt
index 6d25165..06bc831 100644
--- a/Documentation/RelNotes/1.8.3.txt
+++ b/Documentation/RelNotes/1.8.3.txt
@@ -8,23 +8,22 @@ pushes the current branch to the branch with the same
-name, only when the current branch is set to integrate with that
-remote branch. There is a user preference configuration variable
+semantics that pushes only the current branch to the branch with the same
+name, and only when the current branch is set to integrate with that
+remote branch. Use the user preference configuration variable
 "push.default" to change this. If you are an old-timer who is used
-to the "matching" semantics, you can set it to "matching" to keep the
+to the "matching" semantics, you can set the varaible.
 A warning is issued when these commands are .
@@ -33,8 +32,8 @@
 is encouraged to use "git add --ignore-removal <path>" and
-get used to it.
+behaviour are encouraged to start using "git add --ignore-removal <path>"
+now before 2.0 is released.

 Updates since v1.8.2
+ shown on the left side, which is the opposite from what other backends + expect. * "show/log" now honors gpg.program configuration just like other parts of the code that use GnuPG. @@ -173,7 +172,7 @@ Performance, Internal Implementation, etc. * Updates for building under msvc. - * A handful of issues in the code to traverse working tree to find + * A handful of issues in the code that traverses the working tree to find untracked and/or ignored files have been fixed, and the general codepath involved in "status -u" and "clean" have been cleaned up and optimized. @@ -182,15 +181,15 @@ Performance, Internal Implementation, etc. pack has been shrunk. * The logic to coalesce the same lines removed from the parents in - the output from "diff -c/--cc" has been updated, but with an O(n^2) + the output from "diff -c/--cc" has been updated, but with O(n^2) complexity, so this might turn out to be undesirable. * The code to enforce permission bits on files in $GIT_DIR/ for - shared repositories have been simplified. + shared repositories has been simplified. - * A few codepaths knew how much data they need to put in the - hashtables they use upfront, but still started from a small table - repeatedly growing and rehashing. + * A few codepaths know how much data they need to put in the + hashtables they use when they start, but still began with small tables + and repeatedly grew and rehashed them. * The API to walk reflog entries from the latest to older, which was necessary for operations such as "git checkout -", was cumbersome @@ -202,9 +201,9 @@ Performance, Internal Implementation, etc. * The pkt-line API, implementation and its callers have been cleaned up to make them more robust. 
- * Cygwin port has a faster-but-lying lstat(2) emulation whose + * The Cygwin port has a faster-but-lying lstat(2) emulation whose incorrectness does not matter in practice except for a few - codepaths, and setting permission bits to directories is a codepath + codepaths, and setting permission bits on directories is a codepath that needs to use a more correct one. * "git checkout" had repeated pathspec matches on the same paths, @@ -225,42 +224,42 @@ Unless otherwise noted, all the fixes since v1.8.2 in the maintenance track are contained in this release (see release notes to them for details). - * When receive-pack detects error in the pack header it received in + * When receive-pack detects an error in the pack header it received in order to decide which of unpack-objects or index-pack to run, it - returned without closing the error stream, which led to a hang + returned without closing the error stream, which led to a hung sideband thread. - * Zsh completion forgot that '%' character used to signal untracked + * Zsh completion forgot that the '%' character used to signal untracked files needs to be escaped with another '%'. * A commit object whose author or committer ident are malformed - crashed some code that trusted that a name, an email and an + crashed some code that trusted that a name, an email and a timestamp can always be found in it. * When "upload-pack" fails while generating a pack in response to - "git fetch" (or "git clone"), the receiving side mistakenly said - there was a programming error to trigger the die handler + "git fetch" (or "git clone"), the receiving side had + a programming error that triggered the die handler recursively. - * "rev-list --stdin" and friends kept bogus pointers into input + * "rev-list --stdin" and friends kept bogus pointers into the input buffer around as human readble object names. This was not a huge problem but was exposed by a new change that uses these names in error output. 
(merge 70d26c6 tr/copy-revisions-from-stdin later to maint). * Smart-capable HTTP servers were not restricted via the - GIT_NAMESPACE mechanism when talking with commit-walker clients, - like they do when talking with smart HTTP clients. + GIT_NAMESPACE mechanism when talking with commit-walking clients, + like they are when talking with smart HTTP clients. (merge 6130f86 jk/http-dumb-namespaces later to maint). * "git merge-tree" did not omit a merge result that is identical to - "our" side in certain cases. + the "our" side in certain cases. (merge aacecc3 jk/merge-tree-added-identically later to maint). - * Perl scripts like "git-svn" closed (not redirecting to /dev/null) + * Perl scripts like "git-svn" closed (instead of redirecting to /dev/null) the standard error stream, which is not a very smart thing to do. - Later open may return file descriptor #2 for unrelated purpose, and - error reporting code may write into them. + A later open may return file descriptor #2 for an unrelated purpose, and + error reporting code may write into it. * "git show-branch" was not prepared to show a very long run of ancestor operators e.g. foobar^2~2^2^2^2...^2~4 correctly. @@ -268,17 +267,17 @@ details). * "git diff --diff-algorithm algo" is also understood as "git diff --diff-algorithm=algo". - * The new core.commentchar configuration was not applied to a few + * The new core.commentchar configuration was not applied in a few places. * "git bundle" did not like a bundle created using a commit without - any message as its one of the prerequistes. + any message, as it is one of the prerequistes. * "git log -S/-G" started paying attention to textconv filter, but - there was no way to disable this. Make it honor --no-textconv + there was no way to disable this. Make it honor the --no-textconv option. 
- * When used with "-d temporary-directory" option, "git filter-branch" + * When used with the "-d temporary-directory" option, "git filter-branch" failed to come back to the original working tree to perform the final clean-up procedure. @@ -287,9 +286,9 @@ details).. + in refs/tags/) to decide when to special-case tag merging. - * Fix 1.8.1.x regression that stopped matching "dir" (without + * Fix a 1.8.1.x regression that stopped matching "dir" (without a trailing slash) to a directory "dir". (merge efa5f82 jc/directory-attrs-regression-fix later to maint-1.8.1). @@ -300,46 +299,46 @@ details). * The prompt string generator (in contrib/completion/) did not notice when we are in a middle of a "git revert" session. - * "submodule summary --summary-limit" option did not support + * "submodule summary --summary-limit" option did not support the "--option=value" form. * "index-pack --fix-thin" used an uninitialized value to compute - delta depths of objects it appends to the resulting pack. + the delta depths of objects it appends to the resulting pack. - * "index-pack --verify-stat" used a few counters outside protection - of mutex, possibly showing incorrect numbers. + * "index-pack --verify-stat" used a few counters outside the protection + of a mutex, possibly showing incorrect numbers. * The code to keep track of what directory names are known to Git on - platforms with case insensitive filesystems can get confused upon a - hash collision between these pathnames and looped forever. + platforms with case insensitive filesystems could get confused upon a + hash collision between these pathnames and would loop forever. - * Annotated tags outside refs/tags/ hierarchy were not advertised - correctly to the ls-remote and fetch with recent version of Git. + * Annotated tags outside the refs/tags/ hierarchy were not advertised + correctly to ls-remote and fetch with recent versions of Git. - * Recent optimization broke shallow clones. 
+ * Recent optimizations broke shallow clones. * "git cmd -- ':(top'" was not diagnosed as an invalid syntax, and instead the parser kept reading beyond the end of the string. * "git tag -f <tag>" always said "Updated tag '<tag>'" even when - creating a new tag (i.e. not overwriting nor updating). + creating a new tag (i.e. neither overwriting nor updating). * "git p4" did not behave well when the path to the root of the P4 client was not its real path. (merge bbd8486 pw/p4-symlinked-root later to maint). - * "git archive" reports a failure when asked to create an archive out - of an empty tree. It would be more intuitive to give an empty + * "git archive" reported a failure when asked to create an archive out + of an empty tree. It is more intuitive to give an empty archive back in such a case. - * When "format-patch" quoted a non-ascii strings on the header files, + * When "format-patch" quoted a non-ascii string in header files, it incorrectly applied rfc2047 and chopped a single character in - the middle of it. + the middle of the string. * An aliased command spawned from a bare repository that does not say - it is bare with "core.bare = yes" is treated as non-bare by mistake. + it is bare with "core.bare = yes" was treated as non-bare by mistake. - * In "git reflog expire", REACHABLE bit was not cleared from the + * In "git reflog expire", the REACHABLE bit was not cleared from the correct objects. * The logic used by "git diff -M --stat" to shorten the names of @@ -347,9 +346,9 @@ details). common prefix and suffix between the two filenames overlapped. * The "--match=<pattern>" option of "git describe", when used with - "--all" to allow refs that are not annotated tags to be used as a + "--all" to allow refs that are not annotated tags to be a base of description, did not restrict the output from the command - to those that match the given pattern. + to those refs that match the given pattern. 
* Clarify in the documentation "what" gets pushed to "where" when the command line to "git push" does not say these explicitly. @@ -357,7 +356,7 @@ details). * The "--color=<when>" argument to the commands in the diff family was described poorly. - * The arguments given to pre-rebase hook were not documented. + * The arguments given to the pre-rebase hook were not documented. * The v4 index format was not documented. @@ -375,7 +374,7 @@ details). * In the v1.8.0 era, we changed symbols that do not have to be global to file scope static, but a few functions in graph.c were used by - CGit from sideways bypassing the entry points of the API the + CGit sideways, bypassing the entry points of the API the in-tree users use. * "git update-index -h" did not do the usual "-h(elp)" thing. @@ -388,30 +387,30 @@ details). $msg already ended with one. * The SSL peer verification done by "git imap-send" did not ask for - Server Name Indication (RFC 4366), failing to connect SSL/TLS + Server Name Indication (RFC 4366), failing to connect to SSL/TLS sites that serve multiple hostnames on a single IP. * perl/Git.pm::cat_blob slurped everything in core only to write it out to a file descriptor, which was not a very smart thing to do. * "git branch" did not bother to check nonsense command line - parameters and issue errors in many cases. + parameters. It now issues errors in many cases. - * Verification of signed tags were not done correctly when not in C + * Verification of signed tags was not done correctly when not in C or en/US locale. * Some platforms and users spell UTF-8 differently; retry with the most official "UTF-8" when the system does not understand the - user-supplied encoding name that are the common alternative - spellings of UTF-8. + user-supplied encoding name that is a common alternative + spelling of UTF-8. - * When export-subst is used, "zip" output recorded incorrect + * When export-subst is used, "zip" output recorded an incorrect size of the file. 
* "git am $maildir/" applied messages in an unexpected order; sort filenames read from the maildir/ in a way that is more likely to - sort messages in the order the writing MUA meant to, by sorting - numeric segment in numeric order and non-numeric segment in + sort the messages in the order the writing MUA meant to, by sorting + numeric segments in numeric order and non-numeric segments in alphabetical order. * "git submodule update", when recursed into sub-submodules, did not -- 1.8.2 -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to majord...@vger.kernel.org More majordomo info at
https://www.mail-archive.com/git@vger.kernel.org/msg24676.html
C++ Basic Syntax

When we consider a C++ program, it can be defined as a collection of objects that communicate by invoking each other's methods. Let us now briefly look into what a class, an object, methods, and instance variables mean.

Object − Objects have states and behaviors. Example: A dog has states - color, name, breed - as well as behaviors - wagging, barking, eating. An object is an instance of a class.

Class − A class can be defined as a template/blueprint that describes the behaviors/states that objects of its type support.

C++ Program Structure

Let us look at a simple code that would print the words Hello World.

```cpp
#include <iostream>
using namespace std;

// main() is where program execution begins.
int main() {
   cout << "Hello World"; // prints Hello World
   return 0;
}
```

Let us look at the various parts of the above program. The line cout << "Hello World"; causes the message "Hello World" to be displayed on the screen. The next line return 0; terminates the main() function and causes it to return the value 0 to the calling process.

Compile and Execute C++ Program

Let's look at how to save the file, compile and run the program. Please follow the steps given below. For building with make, see our 'Makefile Tutorial'.

Semicolons and Blocks in C++

In C++, the semicolon is a statement terminator. That is, each individual statement must be ended with a semicolon. It indicates the end of one logical entity. For example, the following are three different statements:

```cpp
x = y;
y = y + 1;
add(x, y);
```

A block is a set of logically connected statements that are surrounded by opening and closing braces. For example:

```cpp
{
   cout << "Hello World"; // prints Hello World
   return 0;
}
```

C++ does not recognize the end of the line as a terminator. For this reason, it does not matter where you put a statement in a line. For example:

```cpp
x = y;
y = y + 1;
add(x, y);
```

is the same as

```cpp
x = y; y = y + 1; add(x, y);
```

Trigraphs − Not all compilers support trigraphs, and they are not advised to be used because of their confusing nature.
Whitespace in C++

A line containing only whitespace, possibly with a comment, is known as a blank line, and the C++ compiler totally ignores it.

Statement 1

    int age;

In the above statement there must be at least one whitespace character (usually a space) between int and age for the compiler to be able to distinguish them.

Statement 2

    fruit = apples + oranges; // Get the total fruit

In statement 2, no whitespace characters are necessary between fruit and =, or between = and apples, although you are free to include some if you wish, for readability purposes.
https://www.tutorialspoint.com/cplusplus/cpp_basic_syntax.htm
#include <Thyra_NonlinearSolverBase.hpp>

Inheritance diagram for Thyra::NonlinearSolverBase< Scalar >:

Warning! This interface is highly experimental and general developers should not even consider using it in any way if there is any expectation of code stability!

ToDo: Finish documentation.

Definition at line 59 of file Thyra_NonlinearSolverBase.hpp.

setModel()

Set the model that defines the nonlinear equations. After the model is set, only the residual f can change between solves and not the structure of the Jacobian W. If a more significant change to *model occurs, then this function must be called again to reset the model and reinitialize.

getModel()

Get the model that defines the nonlinear equations.

solve()

Solve a set of nonlinear equations from a given starting point. The returned SolveStatus object gives the status of the returned solution *x.

supportsCloning()

Return whether this solver object supports cloning or not. The default implementation returns false.

Definition at line 201 of file Thyra_NonlinearSolverBase.hpp.

clone()

Clone the solver algorithm if supported.

Postconditions:
  [supportsCloning()==true] returnVal != Teuchos::null
  [supportsCloning()==false] returnVal == Teuchos::null

Note that cloning a nonlinear solver in this case does not imply that the Jacobian state will be copied as well, shallow or deep. Instead, here cloning means to just clone the solver algorithm; it will do a shallow copy of the model as well if a model is set. Since the model is stateless, this is okay. Therefore, do not assume that the state of *returnValue is exactly the same as the state of *this. You have been warned!

The default implementation returns Teuchos::null, which is consistent with the default implementation of supportsCloning(). If this function is overridden in a subclass to support cloning, then supportsCloning() must be overridden to return true.

Definition at line 208 of file Thyra_NonlinearSolverBase.hpp.

get_current_x()

Return the current value of the solution x as computed in the last solve() operation, if supported.
The default implementation returns a null RCP (return.get()==NULL).

Definition at line 215 of file Thyra_NonlinearSolverBase.hpp.

is_W_current()

Returns true if *get_W() is current with respect to *get_current_x(). The default implementation returns false.

Definition at line 221 of file Thyra_NonlinearSolverBase.hpp.

get_nonconst_W()

Get a nonconst RCP to the Jacobian, if available.

Postconditions:
  [forceUpToDate==true] this->is_W_current() == true

Through the RCP returned from this function, a client can change the W object held internally. If the object gets changed, the client should call set_W_is_current(false). The default implementation returns a null RCP (return.get()==NULL).

Definition at line 228 of file Thyra_NonlinearSolverBase.hpp.

get_W()

Get a const RCP to the Jacobian, if available. Through this interface the client should not change the object W. The default implementation returns a null RCP (return.get()==NULL).

Definition at line 235 of file Thyra_NonlinearSolverBase.hpp.

set_W_is_current()

Set whether *get_W() is current with respect to *get_current_x().

Preconditions:
  this->get_W().get()!=NULL

The default implementation throws an exception.

Definition at line 241 of file Thyra_NonlinearSolverBase.hpp.
http://trilinos.sandia.gov/packages/docs/r8.0/packages/thyra/src/interfaces/nonlinear/solvers/ana/fundamental/doc/html/classThyra_1_1NonlinearSolverBase.html