In Python, we can create anonymous functions, i.e., functions that don't have a name. A lambda function is a type of anonymous function defined using the lambda keyword (a regular function is defined using the def keyword), and hence its name.

But why would we ever use a lambda function? Suppose you have created a function with a single statement in its body, and that function is called at only one place in your program. In this situation, it doesn't make much sense to define such a short function for one-time use. This is where we can use a lambda function in place of a regular function: we can define the lambda function directly at the place where it is required, instead of defining a function separately and then calling it, which makes the code more readable.

Now, why would I ever need a function at all if I need it in only one place and it contains only a single statement? There are cases where one function needs another function as its input. In those cases, a lambda function is helpful. Let's see what lambda functions are, and then you will understand what we are talking about. We will discuss more use cases and limitations of lambda functions later.

Using lambda Functions in Python

Before learning to create lambda functions, look at the following function.

def increment(x):
    return x + 1

The function increment() has just one statement: it returns the parameter that it receives after incrementing it by 1. (The original called this function identity(), but since it adds 1 rather than returning its argument unchanged, increment is the clearer name.) This function can be written in the form of a lambda function as shown below.

lambda x: x + 1

This function, created using the keyword lambda, doesn't have a name and is called a lambda function. It has one argument x that is passed to the function, and one expression x + 1 which is evaluated and returned. Thus, this lambda function receives an argument x, adds 1 to it and returns the result.

Now let's look at the syntax of a lambda function.
Syntax of lambda function

lambda arguments: expression

lambda is the keyword used to define lambda functions. arguments are the same arguments that we pass to regular functions; there can be any number of arguments in a lambda function. expression is an expression which is evaluated and returned; there can be only one expression in a lambda function.

Now let's look at some examples of lambda functions.

Examples of lambda function

The following example has a function square() which takes a number and returns the square of that number.

def square(x):
    return x**2

print(square(3))

Now suppose this square() function is called at only one place in the program. Then instead of defining and calling it, we can directly define a lambda function at the place where it is called. Let's see how.

print((lambda x: x**2)(3))

In this example, we defined a lambda function, passed a value to it and printed the value returned by the function.

lambda x: x**2 - In this function, x is the argument and x**2 is the expression. The function receives the argument x, evaluates the expression x**2 and returns the result of the evaluation.

(3) - The value 3 is passed to the function; in other words, 3 is the argument passed to the function. The values passed to a lambda function are enclosed within parentheses ( ) written after the lambda function definition.

Did you notice that the value of the expression is returned even without using the return keyword? In lambda functions, the value of the expression always gets returned, so make sure to write the expression accordingly.

We can also assign the lambda function to a variable so that we can use it anywhere by directly using the variable name.

square = lambda x: x**2
print(square(3))

In this example, the same lambda function is created and assigned to a variable square. The value 3 is passed to the lambda function by writing square(3).
(square(3) is the same as writing (lambda x: x**2)(3).)

Look at another example, in which a lambda function takes two arguments and returns their sum.

print((lambda x, y: x + y)(3, 2))

The lambda function takes two arguments x and y and returns their sum. Two values 3 and 2 are passed to the lambda function by writing (3, 2) after the function definition; the first value 3 is assigned to x and the second value 2 to y.

In the next example, the lambda function created is assigned to a variable sum (note that this shadows Python's built-in sum(), so in real code a different name is the better choice).

sum = lambda x, y: x + y
print(sum(3, 2))

lambda Function with no argument

Yes, we can also define lambda functions with no arguments.

dummy = lambda: 10
print(dummy())  # prints 10

The lambda function dummy takes no argument and returns 10.

lambda Function with default argument

We can pass positional, keyword and default arguments to lambda functions.

sum = lambda x, y=3: x + y
print(sum(3))
print(sum(3, 2))

The lambda function sum takes one positional argument x and one default argument y. We learned about positional and default arguments in this chapter.

When to Use lambda Functions

Lambda functions are quicker to define inline than regular functions. Therefore, lambda functions can be used instead of regular functions when the function has only one expression and little complexity. This helps prevent the situation where separate named functions are created for one-expression-long code, and makes the code more readable.

Lambda functions can also be used when a function (having only one expression) is passed to another function as its parameter. For example, consider a case where a function named foo() takes another function func() as its parameter, where func() returns True if the value passed to it is greater than 10, and False otherwise.
# defining function func
def func(x):
    '''Returns True if x > 10, otherwise returns False'''
    return x > 10

def foo(y):
    '''Do something here'''

foo(func(12))

Here, instead of declaring a separate function func() and calling it to pass its result to the foo() function, we can directly use a lambda function, as shown below.

def foo(y):
    '''Do something here'''

foo((lambda x: x > 10)(12))

As you can see, this made the code shorter and cleaner. Therefore, lambda functions should be used when the function logic is not complex and can be reduced to a single expression, and when the function is not called frequently. In all other scenarios, it is better to use regular functions.

Lambda functions can also be used with built-in functions in Python like filter() and map().

Lambda Function with filter()

The filter() function is used to filter an iterable like a list, tuple, dictionary, set, range, etc., based on some condition. For example, it can be used if you want to filter out (remove) all the odd elements of a list.

The filter() function takes two parameters - a function and an iterable. The function gets called once with each element of the iterable as its argument, and filter() returns only those elements of the iterable for which the function returns True.

To understand this, take the same example in which we have a list and want to keep all the even elements and remove all the odd ones. For that, we pass a function which returns True if the value passed to it is even and False if it is odd as the first argument, and the list to be filtered as the second argument, to the filter() function.

Examples

Let's take an example in which the filter() function filters out the odd elements of a list mylist.
mylist = [5, 7, 8, 10, 14, 15, 20]

def even(num):
    if num % 2 == 0:
        return True
    else:
        return False

new_list = list(filter(even, mylist))
print("Filtered list:", new_list)

We passed a function even() and a list mylist to the filter() function. The even() function returns True if the value passed to it is even, and False otherwise. Internally, each element of the list mylist is passed to the even() function, and only those elements for which the function returns True are returned by the filter() function. Finally, the list() function creates a list with all the even elements returned by the filter() function.

We can pass a lambda function instead of the regular function to make the code shorter and more readable. The code for the same is given below.

mylist = [5, 7, 8, 10, 14, 15, 20]
new_list = list(filter(lambda num: num % 2 == 0, mylist))
print("Filtered list:", new_list)

In the lambda function, the expression num % 2 == 0 returns True if num is divisible by 2, and False otherwise.

Let's see one more example, in which we filter out all the odd numbers from 1 to 10 (inclusive).

myrange = range(1, 11)
new_list = list(filter(lambda num: num % 2 == 0, myrange))
print("Filtered list:", new_list)

This example is similar to the previous one, with the difference that instead of a list, we passed a range of numbers from 1 to 10 to be filtered.

In the above two examples, instead of defining a new function and calling it inside the filter() function, we defined a lambda function right there. This saves us from defining a new named function every time filter() is used.

Lambda Function with map()

The map() function is used to modify each element of an iterable like a list, tuple, dictionary, set, range, etc. For example, it can be used if you want to increase the value of all the elements of a list by 1.

The map() function takes two parameters - a function and an iterable.
The function gets called once with each element of the iterable as an argument and returns the modified element. The map() function returns an iterable of the modified elements.

For example, consider a case where we want to increase the value of each element of a list by 1. For that, we pass a function which adds 1 to the value passed to it and returns the incremented value as the first argument, and the list to be modified as the second argument, to the map() function.

Examples

In the following example, each element of the list mylist is multiplied by 2.

mylist = [5, 7, 8, 10, 14, 15, 20]

def multiply(num):
    return num * 2

new_list = list(map(multiply, mylist))
print("Modified list:", new_list)

We passed a function multiply() and the list mylist to the map() function. Internally, the function multiply() takes each element of the list mylist as an argument and returns the element after multiplying it by 2. The map() function returns an iterable of the modified elements, and the list() function converts that iterable to a list.

In the next example, the function multiply() is replaced by a lambda function.

mylist = [5, 7, 8, 10, 14, 15, 20]
new_list = list(map(lambda num: num * 2, mylist))
print("Modified list:", new_list)

Using map() with Multiple Iterables

Suppose the function passed to map() takes two arguments. In that case, we need to pass two iterables, separated by commas, to map() as well.

list1 = [1, 2, 3, 4, 5]
list2 = [6, 7, 8, 9, 10]

def add(num1, num2):
    return num1 + num2

new_list = list(map(add, list1, list2))
print("Modified list:", new_list)

In this example, the function add() takes one element of list1 and one element of list2, adds them and returns the result. First, the function takes the first elements of both lists, adds them and returns the result; then it takes the second elements of both lists, and so on.
This continues till all the elements of the lists are added. This example is rewritten using a lambda function below.

list1 = [1, 2, 3, 4, 5]
list2 = [6, 7, 8, 9, 10]
new_list = list(map(lambda num1, num2: num1 + num2, list1, list2))
print("Modified list:", new_list)

The lambda function defined here takes two arguments num1 and num2 and returns the result of num1 + num2.
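filter() and map() are not the only built-ins that accept a function argument: sorted() takes an optional key function, and functools.reduce() folds a two-argument function over an iterable. A short sketch (the sample data here is made up for illustration):

```python
from functools import reduce

pairs = [(2, "forces"), (1, "code"), (3, "dope")]

# sorted() accepts a key function; a lambda avoids defining a
# named one-line function just to express the sort order.
by_number = sorted(pairs, key=lambda p: p[0])
print(by_number)  # [(1, 'code'), (2, 'forces'), (3, 'dope')]

# reduce() repeatedly applies a two-argument function, here
# accumulating the sum of the first elements of the pairs.
total = reduce(lambda acc, p: acc + p[0], pairs, 0)
print(total)  # 6
```

In both cases, the lambda exists only at its single point of use, which is exactly the situation the tutorial describes.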
https://www.codesdope.com/course/python-lambda-functions/
Technical

The SQL backup format consists of multiple chunks of data which follow a basic structure of:

struct backupChunk {
    unsigned long nametag;
    unsigned long size;
}

The nametag describes the type of tag (e.g. SCIN, SFGI, MQCI, etc.). The size corresponds to the size of the complete chunk, which includes the 8-byte chunk header.

+++ subversion/libsvn_delta/svndiff.c (working copy)
@@ -60,10 +60,23 @@ struct encoder_baton {

int real_get_rdt_chunk(rtsp_t *rtsp_session, unsigned char **buffer) {
    int n=1;
    uint8_t header[8];
    rmff_pheader_t ph;
    int size;
    int flags1;
    int unknown1;
    uint32_t ts;
    n=rtsp_read_data(rtsp_session, header, 8);

4. If either file timestamp is earlier than indicated in the table below, the installation is vulnerable.

File Name: webengine.exe  Timestamp: 10/30/2009 12:11:16 PM  Size: 2936832 bytes

Technical Description
=====================
These vulnerabilities were discovered by playing with AVI:
1) indx chunk size
2) wLongsPerEntry
3) nEntriesInuse
Only 5 testcases were built.

Title: Incorrect input validation in PyString_FromStringAndSize() leads to multiple buffer overflows
Date Discovered: ??-April-2008
Date Reported: 08-April-2008
Date Patched: 09-April-2008
Date Disclosed: 11-April-2008
Criticality: High
Affected Products
-----------------

Change the length of the first parameter to change rip.

--- 2. grapheme_extract() NULL Pointer Dereference ---

As we can see in grapheme_extract(str, size):

-grapheme_extract()--
..
if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "sl|llz", (char **)&str, &str_len, &size, &extract_type, &lstart, &next) == FAILURE) {  <=== str='a' and size='-1'
..
}
-zend_builtin_functions.c---

-PoC code---
[cx@82 /www]$ ulimit -a
socket buffer size (bytes, -b) unlimited
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) 524288
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) 4

Good.
Let's look at the php_sprintf_appendstring() function in the formatted_print.c file:

---formatted_print.c-start---
inline static void
php_sprintf_appendstring(char **buffer, int *pos, int *size, char *add,
                         int min_width, int max_width, char padding,
                         int alignment, int len, int neg, int expprec,
                         int always_sign)
---formatted_print.c-end---

The main variable we will look at is "npad".

#include <libnet.h>
#include <stdio.h>
#include <stdlib.h>

#define VTP_DOMAIN_SIZE 32
#define VTP_TIMESTAMP_SIZE 12

struct vtp_summary {
    u_int8_t version;
    u_int8_t code;

Let's look in the code: "./src/modules/proxy/proxy_util.c"

long int ap_proxy_send_fb(BUFF *f, request_rec *r, cache_req *c, off_t len,
                          int nowrite, int chunked, size_t recv_buffer_size)
{
    ...
    size_t buf_size;
    long remaining = 0;

4. If the file date is earlier than indicated in the table below, the installation is vulnerable.

CA ARCserve Backup for Laptops and Desktops
File Name: rxRPC.dll  File Size (bytes): 131,072  File Date: June 11, 2008

CA ARCserve Backup for Laptops and Desktops 11.1, 11.1 SP1, 11.1 SP2

Service Console package Python update to version 2.4.3-24.el5. Multiple buffer and integer overflow flaws were found in the Python

Let's look in the code: "./goo/gmem.cc"

void *gmalloc(int size) GMEM_EXCEP {
#ifdef DEBUG_MEM
...
#else
    void *p;

Mitigation recommendations from Trend:

0012E428 005AD1C6 ntdll.RtlAllocateHeap xnview.005AD1C0
0012E424
0012E42C 00C60000 hHeap = 00C60000
0012E430 40000060 Flags = HEAP_TAIL_CHECKING_ENABLED|HEAP_FREE_CHECKING_ENABLED|40000000
0012E434 00000010 HeapSize = 10 (16.)
0012E464 005AD0BD xnview.005AD0D9 xnview.005AD0B8
0012E460
0012E46C 005AD0AA xnview.005AD0AD xnview.005AD0A5
0012E478 0049E8D4 xnview.005AD09B xnview.0049E8CF
0012E748 004A00F5 ? xnview.0049E6C0 xnview.004A00F0

...a GIF file it loads a dynamic library called 'libsgl.so' which contains the decoders for multiple image file formats.
Decoding of the GIF image is performed correctly by the library giflib 4.0 (compiled inside 'libsgl.so'). However, the wrapper object 'GIFImageDecoder' miscalculates the total size of the image.

proper key, and there is no need to keep track of any additional information aside from what is already in the encrypted file itself. Think of eCryptfs as a sort of ``gnupgfs''.

(BGP), Intermediate System to Intermediate System (ISIS), etc.) and MPLS TDP/LDP to properly establish connections over an affected interface. In order to identify a blocked input interface, issue the show interfaces command and search for the Input Queue line. The size of the input queue can continue to increase. If the current size, which is 76 in the example below, is larger than the maximum size (75), the input queue is blocked. It is possible that a device receives a high rate of traffic destined

A] D_NetPlayerEvent global buffer-overflow using PKT_CHAT
---------------------------------------------------------
When a chat message is received, the server takes the incoming packet and reads who sent it, its destination and, naturally, the entire message, which is copied into a heap buffer using the remaining size of the packet to calculate the amount of data to allocate. Then a strcpy() is performed to copy the message from the packet to the newly allocated buffer called msg. If the message is directed to the server, it is displayed in the console using the D_NetPlayerEvent function.
http://www.linuxrocket.net/term/size.htm
ok, I am continuing on with my evolving program. I have 2 structures, and what the program is doing is first finding the area of a rectangle "r"; then we are to find the center of that rectangle. I worked a bit and think I got the point that is the center. Now I have code here, but it isn't compiling right. I am to return a value from the point structure. But how? I figured I'd return the value of that point's x, y coordinates (x and y axis on a graph, of course), but my return value is wrong. Please help.

Code:

#include <stdio.h>

struct point {
    int x, y;
};

struct rectangle {
    struct point upper_left, lower_right;
} r;

int area_of_r(struct rectangle rect);
int center_of_r(struct rectangle x, struct rectangle y);

int main(void)
{
    printf("input upper left corner of rectangle r "
           "as 0 0, with space in between.\n\n");
    scanf("%d %d", &r.upper_left.x, &r.upper_left.y);

    printf("input lower right corner of rectangle r "
           "as 0 0, with space in between.\n\n");
    scanf("%d %d", &r.lower_right.x, &r.lower_right.y);

    /* here we print the area of the rectangle */
    printf("\nThe area of rectangle r is "
           "%d\n", area_of_r(r));

    /* here we print the center of rectangle */
    printf("\nThe center point of rectangle r is "
           "%d %d", center_of_r(r));

    return 0;
}

int area_of_r(struct rectangle rect)
{
    int area;
    area = (r.lower_right.x - r.upper_left.x) *
           (r.upper_left.y - r.lower_right.y);
    return area;
}

int center_of_r(struct rectangle x, struct rectangle y)
{
    int center;
    int new_x, new_y;

    /* having problems with this part, don't know how to run line of
       assignments to next line...is that possible */
    x = (r.lower_right.x - ((r.lower_right.x - r.upper_left.x)/2);
    y = (r.upper_left.y - ((r.upper_left.y - r.lower_right.y)/2);

    /* here's the return call, but what is wrong here?? How do I return
       the values of x and y which are the values of the coordinates of
       a point on a graph ??? */
    return (x, y);
}
http://cboard.cprogramming.com/c-programming/3458-more-my-1st-structure-program.html
Problem has been solved; instead, I started using a random method to choose a random number of objects and turn them into water.

Hello, I am in need of help. I am trying to create a generator that uses Perlin noise. What I am trying to create are islands that are built from hexagons or cubes, not one big mesh. I want to use Perlin noise to create more natural-looking islands. I have tried to follow tutorials created by Sebastian Lague, but I have problems understanding half of his code and his coding method. What I want. What I have managed to create. My question is: how do I "repair" the code so I get the result as in picture 1? Here is my code.

public class Noise : MonoBehaviour
{
    public int width;
    public int height;
    public float scale;

    [Range(0, 1)]
    public float persistance;
    public float lacunarity;

    int offsetX;
    int offsetY;
    public float noiseHeight;
    Renderer render;
    public int octaves;

    private void Start()
    {
        offsetX = Random.Range(0, 99999);
        offsetY = Random.Range(0, 99999);
    }

    void Update()
    {
        render = GetComponent<Renderer>();
        render.material.mainTexture = GenerateTexture();
    }

    Texture2D GenerateTexture()
    {
        Texture2D texture = new Texture2D(width, height);
        for (int x = 0; x < width; x++)
        {
            for (int y = 0; y < height; y++)
            {
                float amplitude = 1;
                float frequency = 1;
                noiseHeight = 0;
                for (int i = 0; i < octaves; i++)
                {
                    float xCoord = x / scale * frequency + offsetX;
                    float yCoord = y / scale * frequency + offsetY;
                    float sample = Mathf.PerlinNoise(xCoord, yCoord) * 2 - 1;
                    noiseHeight += sample * amplitude;
                    amplitude *= persistance;
                    frequency *= lacunarity;
                }
                Color color = CalculateColor(noiseHeight);
                texture.SetPixel(x, y, color);
            }
        }
        texture.Apply();
        return texture;
    }

    Color CalculateColor(float noiseHeight)
    {
        return new Color(noiseHeight, noiseHeight, noiseHeight);
    }
}

Answer by LeFlop2001 · Feb 03, 2020 at 05:35 PM

I would suggest not using Perlin noise but rather a survival model to randomly generate islands.
This works by first randomly placing tiles on a tilemap of your chosen size (choosing from a water tile and a land tile) and then using a for loop to go through each of the tiles to check their neighbors. Based on the number of similar or different tiles, the script either changes or leaves the tile. This can be repeated for a smoother result. If you want, I can explain it further and give some code, but I'm too lazy to do so without knowing if you'd use it or not. The results are pretty similar, though.

That actually sounds very interesting. Isn't it called the Fisher-Yates shuffle method? If you want and have the time, please feel free to explain. If I decide to ditch the generation through Perlin noise, I might use what you are proposing.
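For readers curious about the approach the answer describes (random fill, then repeated neighbor-based smoothing, much like a cellular automaton), here is a language-agnostic sketch in Python. The function name, fill probability, and survival thresholds are made up for illustration; in Unity the same loops would run over a C# 2-D array of tile indices:

```python
import random

def generate_island(width, height, fill_prob=0.45, passes=3, seed=0):
    """Random tilemap (1 = land, 0 = water) smoothed by a survival rule."""
    rng = random.Random(seed)
    # Step 1: place land/water tiles uniformly at random.
    grid = [[1 if rng.random() < fill_prob else 0 for _ in range(width)]
            for _ in range(height)]
    # Step 2: repeatedly rewrite each tile based on its 8 neighbors.
    for _ in range(passes):
        new = [[0] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count land among the 8 neighbors; off-grid counts as water,
                # which naturally erodes the map edges into coastline.
                land = sum(grid[ny][nx]
                           for ny in range(y - 1, y + 2)
                           for nx in range(x - 1, x + 2)
                           if (ny, nx) != (y, x)
                           and 0 <= ny < height and 0 <= nx < width)
                # Survival rule: mostly-land neighborhoods become land,
                # ties keep the current tile, everything else becomes water.
                new[y][x] = 1 if land >= 5 else (grid[y][x] if land == 4 else 0)
        grid = new
    return grid
```

Each smoothing pass removes isolated single tiles, so after a few passes the random noise clumps into island-like blobs. (This is not the Fisher-Yates shuffle, which is an algorithm for randomly reordering a list.)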
https://answers.unity.com/questions/1696538/random-island-generation-using-perlin-noise.html
In this tutorial I'm going to share some useful compilation and debugging tips. These tips can save you a ton of time debugging and make compilation and debugging convenient.

Useful compilation flags

G++ has many useful warning flags, and I highly recommend using them:

-Wall enables all common warnings.
-Wextra enables extra warnings.
-Wno-sign-conversion silences sign conversion warnings for code like x < vec.size().
-Wshadow enables shadowing warnings, so that if you define another variable with the same name in a local scope, you will get a warning.

Using these flags can save you time debugging silly mistakes like forgetting to return a value in a function, or doing a comparison like x < vec.size()-1 where vec.size()-1 can underflow to SIZE_MAX-1.

Precompiled headers

You can use precompiled headers to substantially speed up the compilation of your code. Note that a precompiled header is only used if it was compiled with the same flags that your code is compiled with. You can store the precompiled headers in the folder bits/stdc++.h.gch and G++ will automatically find the correct precompiled header (with the correct flags). You can store multiple precompiled headers for bits/stdc++.h for different sets of flags.

sudo g++ -Wall -Wextra -Wshadow -D_GLIBCXX_ASSERTIONS -DDEBUG -ggdb3 -fmax-errors=2 -std=c++17 -o /usr/include/x86_64-linux-gnu/c++/9/bits/stdc++.h{.gch/ASSERT_DEBUG_17.gch,}

Useful debugging flags

Debugging is an important part of CP. After all, even tourist gets RTE on his submissions occasionally.

-g enables basic debug information.
-ggdb3 enables extended debug information such as macros. I suggest using this generally. It slows down compilation a little bit, but has no effect on run time, since it just adds an extra piece of data to the executable that helps the debugger map positions in the executable to line numbers and other related information.
-D_GLIBCXX_DEBUG enables out-of-bounds checking, checks that your comparison operator is irreflexive (comp(a, a) should return false), and many other useful things. However, this can slow down your code's run time (and compile time) substantially.
-D_GLIBCXX_ASSERTIONS enables lightweight debug checks such as out-of-bounds checking. I suggest always using this when compiling locally, as it has negligible run-time and compile-time performance impact.
-fmax-errors=<n> limits the number of errors displayed to n. This stops the compiler from printing an excessive number of error messages.
-DDEBUG is used in my debug macro (see below) to enable debugging. Add this to your compilation flags locally to enable debugging. Since this macro is not defined in CodeForces' compilation options, debugging will be disabled when your code is judged.

The _GLIBCXX_* macros are documented in the libstdc++ documentation. Note that the names of these macros start with an underscore _. The flag -D<macro> defines the macro macro, so you can also use these macros by defining them on the first line, before you include any headers. Example:

#ifdef DEBUG
#define _GLIBCXX_DEBUG
#endif
#include <bits/stdc++.h>

How to use these flags

Add the compilation flag -D_GLIBCXX_DEBUG to define the macro, or likewise -D_GLIBCXX_ASSERTIONS.

Recommended compiler flags

-Wall -Wextra -Wshadow -D_GLIBCXX_ASSERTIONS -DDEBUG -ggdb3 -fmax-errors=2

AddressSanitizer and UndefinedBehaviorSanitizer

The _GLIBCXX_* macros only do debug checks for STL containers and algorithms, so you should use std::array instead of C arrays. To check for C array out-of-bounds accesses, use AddressSanitizer with -fsanitize=address. To check for undefined behavior like arithmetic overflow, use -fsanitize=undefined. You can use both with -fsanitize=address,undefined. When upsolving a problem, if your solution does not AC, CodeForces runs sanitizers on it, so you can check the output to see if the sanitizers caught any error.
Debug macro

A useful tool is a debug macro that can print useful information like the line number, variable name, etc. Remember to add -DDEBUG to your local compilation flags to enable this debug macro.

#include <bits/stdc++.h>
using namespace std;

// === Debug macro starts here ===
int recur_depth = 0;
#ifdef DEBUG
#define dbg(x) {++recur_depth; auto x_=x; --recur_depth; cerr<<string(recur_depth, '\t')<<"\e[91m"<<__func__<<":"<<__LINE__<<"\t"<<#x<<" = "<<x_<<"\e[39m"<<endl;}
#else
#define dbg(x)
#endif
template<typename Ostream, typename Cont>
typename enable_if<is_same<Ostream,ostream>::value, Ostream&>::type
operator<<(Ostream& os, const Cont& v){
    os<<"[";
    for(auto& x:v){os<<x<<", ";}
    return os<<"]";
}
template<typename Ostream, typename ...Ts>
Ostream& operator<<(Ostream& os, const pair<Ts...>& p){
    return os<<"{"<<p.first<<", "<<p.second<<"}";
}
// === Debug macro ends here ===

int func(vector<pair<int, string>>& vec){
    dbg(vec);
    return 42;
}

int main(){
    vector<pair<int, string>> vec{{1, "code"}, {2, "forces"}};
    dbg(func(vec));
}

Debug output

Note that the debug output is indented based on the recursion level. Also, the output is in red to help differentiate debug output from regular output. Nifty, isn't it?

How does the debug macro work? (you may skip these technical details)

The Ostream template parameter is used so that this operator<< overload has low priority. The operator<< template is also constrained by enable_if so that it does not get called when std::bitset::operator<< is intended to be called.

Debugging using gdb

To debug using gdb, this bash command is useful:

(echo "run < a.in" && cat) | gdb -q a

This automatically runs a with input from a.in.

Quick gdb tutorial

start starts your code, pausing execution at main()
run runs your code until a breakpoint is hit
break <function> pauses execution when a function is called
- Use up and down to move up and down the stack.
jump <line_no> can be used to jump to a line number.
continue can be used to continue execution
next goes to the next line in the current function
step steps into the next line executed (so if you call a function, it enters the function's code)
- You can type abbreviations of these commands, such as r for run.

Adding these flags and commands to ~/.bashrc

If you use the terminal to compile, you can add these functions to the .bashrc file in your home folder (e.g. C:\Users\my_username on Windows). The shell script in this file is executed by bash when you open the terminal. Note that you need to restart your terminal or enter . .bashrc for the changes to take effect.

function compile {
    g++ -Wall -Wextra -Wshadow -D_GLIBCXX_ASSERTIONS -DDEBUG -ggdb3 -fmax-errors=2 -o $1{,.cpp}
}
function debug {
    (echo "run < $1.in" && cat) | gdb -q $1
}

Using these bash functions:

compile a  # Compiles a.cpp to a
debug a    # Runs a in the debugger with input from a.in

I hope you found this tutorial useful. Feel free to ask any questions or give feedback.

EDIT: Added section on precompiled headers.
http://codeforces.com/blog/entry/79024
xlrd

Purpose: Provide a library for developers to use to extract data from Microsoft Excel (tm) spreadsheet files. It is not an end-user tool.

Author: John Machin, Lingfo Pty Ltd ([email protected])

Licence: BSD-style (see licences.py)

Versions of Python supported: 2.6, 2.7, 3.3+.

External modules required: The package itself is pure Python, with no dependencies on modules or packages outside the standard Python distribution.

Outside the current scope: xl

Unlikely to be done:
- Handling password-protected (encrypted) files.

Particular emphasis (refer docs for details):
- Operability across OS, regions, platforms
- Handling Excel's date problems, including the Windows / Macintosh four-year differential.
- Providing access to named constants and named groups of cells (from version 0.6.0)
- Providing access to "visual" information: font, "number format", background, border, alignment and protection for cells, height/width etc. for rows/columns (from version 0.6.1)

Quick start:

import xlrd
book = xlrd.open_workbook("myfile.xls")

The docs are in the doc subdirectory, and there's a sample script:

python PYDIR/Scripts/runxlrd.py 3rows *blah*.xls

- If os.sep != "/": make the appropriate adjustments.

Acknowledgements:
- This package started life as a translation from C into Python of parts of a utility called "xlreader" developed by David Giffin. "This product includes software developed by David Giffin [email protected]."
https://devhub.io/repos/python-excel-xlrd
std::cin.ignore(INFINITE, '\n'); should work. From cin I wouldn't expect any EOF, since it is a stream, not a file.

Using cin is not as simple as it looks.

std::cin.flush(); // flush the complete stream

Not sure though!

--------8<--------
#include <iostream>
#include <limits>
#include <string>

int main()
{
    std::string str1, str2;
    using std::cin;
    const int INFINITE = std::numeric_limits<int>::max();
    cin >> str1;
    cin.ignore(INFINITE, '\n');
    getline(cin, str2);
    std::cout << "str1 [" << str1 << "]\nstr2 [" << str2 << "]\n";
}
--------8<--------

Can you post a simple example of your problem?
https://www.experts-exchange.com/questions/20803630/Very-newb-question-about-std-cin.html
CC-MAIN-2018-30
refinedweb
126
67.76
Griffon is to the desktop what Grails is to the web. (And its 0.0 release was today.) That's more or less all that needs to be said about it, if you're familiar with Grails. If you're not, Griffon is an MVC framework for Swing applications, using "convention over configuration" for its source structure and Groovy as its language. It's new and fun, in the same way that Grails is. Here's some recent (colorfully titled) reading, i.e., all of it published in the last 24 hours or so:

- Danno Ferrin: Announcing Griffon
- Andres Almiray: Griffon takes flight
- James Williams: Awakening the Griffon
- Guillaume Laforge: Griffon shows its claws

So much being similar to Grails, creating tools for Griffon is as simple as tweaking the tools for Grails. Here's the NetBeans project template that sets up the source structure by calling "griffon create-app". Complete the wizard and here's your application, looking similar to Grails, of course. Let's look at those three files.

First, an empty model:

import groovy.beans.Bindable
class HelloGriffonModel {
}

Then, a controller:

class HelloGriffonController {
    // these will be injected by Griffon
    def model
    def view
}

Finally, the view:

application(title:'HelloGriffon', pack:true, locationByPlatform:true) {
    // add content here
    label("Content Goes Here") // deleteme
}

Run it and you see a window with the placeholder label. Now, let's do something with the model:

import groovy.beans.Bindable
class HelloGriffonModel {
    @Bindable def greeting = "Hello world"
}

Then, we'll change the view to show the above simple message in our label:

application(title:'HelloGriffon', pack:true, locationByPlatform:true) {
    // add content here
    label(text:bind {model.greeting})
}

That's all. Just run it and you'll have your static message replaced by the text set in the model. Next, see the Griffon Quick Start for details on creating your own initial application based on the above.
The NetBeans Groovy Editor comes in handy. Three cool samples are part of the Griffon distro, in the "samples" folder. I simply opened them in the IDE (i.e., thanks to "convention over configuration", NetBeans IDE knows exactly what a Griffon application consists of, so there's no import process, no NetBeans project metadata is added to the source structure, one simply opens it via the tweaked Grails modules for NetBeans IDE).

For example, the "GrailsSnoop" sample lets you browse the Grails documentation in a Swing application. However, in addition to a Swing application, thanks to "griffon run-app" (i.e., in my case, I simply choose "Run" inside NetBeans IDE) you also have a JNLP application, as well as an applet.

That's not bad! Here's wishing this new framework all the best! I'm also hoping that there will be many more Grails-like frameworks coming out. The Grails approach really gives one the very best of all worlds in hiding the complexities beneath DSLs, Groovy as your language (with all the advantages that that brings, such as not needing to throw out all your Java books), MVC as the structure and the possibility to use different widget sets (in fact, one of the samples uses JIDE, specifically, com.jidesoft.swing.TristateCheckBox). And what about the tools I showed above? I.e., the integration with NetBeans IDE? Once the NetBeans Grails support has been officially released as part of 6.5, i.e., all those tools will then be stable, I will fork them and provide a new set of plugins for Griffon.

John Denver replied on Thu, 2008/09/11 - 8:48pm: This is awesome!, Imagine using JRE6_10 and Griffon, for create RIA's or desktop apps with this will rock, Im just waiting the GORM integration. Regards. Nic Pill.
http://groovy.dzone.com/news/hello-griffon
CC-MAIN-2014-42
refinedweb
633
59.74
Hi All, Can one of you guys give me some advice on the following: I have 2 float numbers and I want to change them into ints, then find the remainder via modulus. This is what I have:

Code:
#include <iostream.h>
#include <stdlib.h>

void main()
{
    float float_num1;
    float float_num2;
    int int_num3;
    int int_num4;

    cout << "1st number:";
    cin >> float_num1;
    cout << "2nd number:";
    cin >> float_num2;

    int_num3 = (int)float_num1;
    int_num4 = (int)float_num2;

    cout << int_num3 % int_num4;
}

If I enter 98.33 for float1 & 15.22 for float2, the remainder should be 5 (if my maths is right). But I get the result 8 in my program? Can anyone see a problem? Cheers Boontune
http://cboard.cprogramming.com/cplusplus-programming/32773-help-modulus-printable-thread.html
CC-MAIN-2015-06
refinedweb
120
82.65
Ubuntu Feisty 7.04 manual page repository

Ubuntu is a free computer operating system based on the Linux kernel. Many IT companies, like DeployIS, are using it to provide an up-to-date, stable operating system.

Provided by: libaa1-dev_1.4p5-30_i386

NAME
       aa_getevent - keyboard functions

SYNOPSIS
       #include <aalib.h>

       int aa_getevent ( aa_context *c, int wait );

PARAMETERS
       aa_context *c
              Specifies the AA-lib context to operate on.

       int wait
              1 if you wish to wait for the event when the queue is empty.

DESCRIPTION
       Return the next event from the queue, optionally waiting for an event when the queue is empty.

RETURNS
       The next event from the queue (values lower than 256 are used to report ASCII values of pressed keys and higher values have special meanings). See the AA-lib texinfo documentation for more details. 0 is returned when the queue is empty and wait is set to 0.

SEE ALSO
       aa_fonts(3), aa_mousedrivers(3), aa_displayrecommended(3), aa_scrwidth(3), aa_imgwidth(3), aa_attrs(3), aa_autoinitmouse(3), aa_initkbd(3), aa_uninitmouse(3), aa_gotoxy(3), aa_hidemouse(3), aa_setfont(3), aa_parseoptions(3), aa_putpixel(3), aa_recommendhimouse(3), aa_recommendlowdisplay(3)
http://feisty.unixmanpages.org/man3/aa_getevent.html
CC-MAIN-2020-16
refinedweb
183
50.63
Saving / exporting kerning groups & table

Hello! I don't have Metrics Machine, so I've laboriously created kerning groups and my own kerning table. Is there a way to save separately, or export, the group and table information so I could use it for other fonts as well? Thanks

you could make an "import groups from UFO" script like:

font = CurrentFont()
sourceFont = OpenFont(showUI=False)
for group, value in sourceFont.groups.items():
    if group in font.groups:
        print "already in font!"
    else:
        font.groups[group] = value

That works for importing groups, but not for the kerning table. It also doesn't update groups which have the same name in the source font and target font. I've been reading a bit more about ufo's - if there are files called kerning.plist and groups.plist containing the ufo's kerning and group info, can I use those to pass this information around between different ufos? If yes, how can I extract those plist files from my ufo?

It appears that you can change your file extension from fontname.ufo to fontname.zip and then gain access to the fontinfo.plist files. It looks like an xml format like .mobi, .epub, .docx, etc...

even better: control-click your ufo and select "show package contents"
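A sketch of the fuller merge the thread is after — copying both groups and kerning, and updating same-named groups. In RoboFont these dictionaries would be font.groups and font.kerning; here the logic is shown with plain dicts, and the function name and sample data are invented for illustration:

```python
def merge_groups_and_kerning(target_groups, target_kerning,
                             source_groups, source_kerning):
    """Copy kerning groups and kerning pairs from one font's data into
    another's. Same-named groups are overwritten with the source version;
    kerning pairs already present in the target keep their own values."""
    for name, members in source_groups.items():
        target_groups[name] = list(members)      # add or update the group
    for pair, value in source_kerning.items():
        target_kerning.setdefault(pair, value)   # keep existing pair values
    return target_groups, target_kerning


groups = {"public.kern1.O": ["O", "Q"]}
kerning = {("public.kern1.O", "A"): -40}
src_groups = {"public.kern1.O": ["O", "Q", "C"], "public.kern2.A": ["A"]}
src_kerning = {("public.kern1.O", "A"): -20, ("T", "o"): -60}

merge_groups_and_kerning(groups, kerning, src_groups, src_kerning)
print(groups["public.kern1.O"])          # ['O', 'Q', 'C']  (group updated)
print(kerning[("public.kern1.O", "A")])  # -40  (target's own value kept)
```

With real fonts you would pass font.groups / font.kerning for the target and sourceFont.groups / sourceFont.kerning for the source.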
https://forum.robofont.com/topic/258/saving-exporting-kerning-groups-table
CC-MAIN-2020-40
refinedweb
205
67.35
hi I'm new I got some questions about bubble-sort, so I registered here. I'm currently working on this program (bubble-sort) so I looked up some example codes and I found this here, which is practically the right thing I'm looking for. But I have some questions, like

1. How would printf("%2d. Pass: ", i-1); be with cout, since I'm not really familiar with printf as I always worked with cout. My guess would be cout<<("Pass: ", i-1); but then there is no space between each number. For printf("%3d", z[k]); I think it should be cout<<z[k];

2. How would I program this when I want to type numbers myself instead of random numbers.

Code:
#include <iostream.h>
#include <stdlib.h>
#include <time.h>

void bubble_sort(int n, int z[])
{
    int i, j, x, k;
    for (i = 2; i <= n; i++) {
        for (j = n; j >= i; j--)
            if (z[j-1] > z[j]) {
                x = z[j-1];
                z[j-1] = z[j];
                z[j] = x;
            }
        printf("%2d. Pass: ", i-1);
        for (k = 1; k <= 10; k++)
            printf("%3d", z[k]);
        printf("\n");
    }
}

int main()
{
    int i, k, number[11];
    srand(time(NULL));
    for (i = 1; i <= 8; i++)
        number[i] = rand() % 100;
    printf("------ before bubble_sort ------------------------------\n");
    for (k = 1; k <= 8; k++)
        printf("%3d", number[k]);
    cout << "\n\n";
    bubble_sort(8, number);
    printf("------ after bubble_sort ------------------------------\n");
    for (k = 1; k <= 8; k++)
        printf("%3d", number[k]);
    cout << "\n";
}

oh and I get 02293616 or sth. like that on each pass, how can I remove it?
https://www.daniweb.com/programming/software-development/threads/396343/bubble-sort
CC-MAIN-2017-26
refinedweb
263
79.8
Naming JavaScript Functions - 3/9/2014

Remember that anonymous function? Probably not. After all, if it wasn't worth a name, it probably hasn't been used since. But written one lately? Absolutely. They abound, especially in event handlers:

document.addEventListener('DOMContentLoaded', function () {
  console.log('ready!');
});

And iterators:

[1,2,3,4].filter(function (n) {
  return (n % 2) === 1;
});

Most of the time they're, ahem, functionally harmless. But there are good reasons to give them a name.

Reuse

We recognize that the iterator above checks for odd parity. More complicated functions won't be as clear, though. So rather than making a reader work back through the actual implementation, why not describe its behavior with a name?

function isOdd (n) {
  return (n % 2) === 1;
}

[1,2,3,4].filter(isOdd);

This isn't just clearer to read. By providing a name (or assigning to a variable) we've turned a one-off function into something we can reuse throughout a project. If we wanted to write a function for choosing prime numbers from a list, a first step might be to use isOdd to filter out anything divisible by two. Now that it's named, we can do that.

Clarity

Besides allowing us to reuse functions throughout a project, names can also help us understand what they represent. If we can establish a consistent convention we can guess at a glance how a function should be used. Consider the following rules:

- is*, has* (isChocolate, hasFrosting) – truth tests around an object property
- to* (toJSON) – a conversion to another type
- get* (Cake.prototype.getType) – retrieves a property from an object
- set* (Cake.prototype.setType) – sets a property on an object

If we adhere to convention, we will immediately know that we can use isOdd to filter collections of things; based on context we can make a reasonable inference that those things will be numbers and only odd ones will be returned.
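As a hypothetical illustration of those conventions (the cake object and helper functions here are invented for the example):

```javascript
// A made-up `cake` object, with helpers named per the conventions above.
const cake = { type: 'chocolate', frosting: true };

const isChocolate = (c) => c.type === 'chocolate'; // is*: truth test
const hasFrosting = (c) => Boolean(c.frosting);    // has*: truth test
const toJSON      = (c) => JSON.stringify(c);      // to*: conversion
const getType     = (c) => c.type;                 // get*: property access

console.log(isChocolate(cake), hasFrosting(cake), getType(cake));
// -> true true chocolate
```

A reader who has never seen these functions can still predict, from the names alone, which return booleans and which return values.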
Debugging

Stack traces indicate sources of error, but they're infinitely more useful when the functions that are failing have names. For instance, running a purely-anonymous function with node's --stack-trace-limit set to 1:

(function () {
  throw new Error('Whodunnit?');
})();

Produces an unhelpful trace:

Error: Whodunnit?
    at repl:2:7

Contrast with the result once a name has been added:

(function isJudgeDoom () {
  throw new Error('Whodunnit?');
})();

Error: Whodunnit?
    at isJudgeDoom (repl:2:7)

Much better. Each interpreter will present traces slightly differently, but all benefit from more information.

The best course, then, is to err on the side of caution: name early, name often. Do it for clarity, do it for reuse. Do it to support debugging. But unless the function is the most trivial one-off, just do it.
https://rjzaworski.com/2014/03/naming-javascript-functions
CC-MAIN-2022-05
refinedweb
472
57.37
Q: How can I tell how much destination buffer space I'll need for an arbitrary sprintf call? How can I avoid overflowing the destination buffer with sprintf?

A: When the format string being used with sprintf is known and relatively simple, you can sometimes predict a buffer size in an ad-hoc way. If the format consists of one or two %s's, you can count the fixed characters in the format string yourself (or let sizeof count them for you) and add in the result of calling strlen on the string(s) to be inserted. For example, to compute the buffer size that the call

	sprintf(buf, "You typed \"%s\"", answer);

would need, you could write:

	int bufsize = 13 + strlen(answer);

or

	int bufsize = sizeof("You typed \"%s\"") + strlen(answer);

followed by

	char *buf = malloc(bufsize);
	if(buf != NULL)
		sprintf(buf, "You typed \"%s\"", answer);

You can conservatively estimate the size that %d will expand to with code like:

	#include <limits.h>
	char buf[(sizeof(int) * CHAR_BIT + 2) / 3 + 1 + 1];
	sprintf(buf, "%d", n);

This code computes the number of characters required for a base-8 representation of a number; a base-10 expansion is guaranteed to take as much room or less. (The +2 takes care of truncation if the size is not a multiple of 3, and the +1+1 leaves room for a leading - and a trailing \0.) An analogous technique could of course be used for long int, and the same buffer can obviously be used with %u, %o, and %x formats as well.

When the format string is more complicated, or is not even known until run time, predicting the buffer size becomes as difficult as reimplementing sprintf, and correspondingly error-prone (and inadvisable). A last-ditch technique which is sometimes suggested is to use fprintf to print the same text to a temporary file, and then to look at fprintf's return value or the size of the file (but see question 19.12).
(Using a temporary file for this application is admittedly clumsy and inelegant,[footnote] but it's the only portable solution besides writing an entire sprintf format interpreter. If your system provides one, you can use a null or ``bit bucket'' device such as /dev/null or NUL instead of a temporary file.)

If there's any chance that the buffer might not be big enough, you won't want to call sprintf without some guarantee that the buffer will not overflow and overwrite some other part of memory. If the format string is known, you can limit %s expansion by using %.Ns for some N, or %.*s (see also question 12.10).

To avoid the overflow problem, you can use a length-limited version of sprintf, namely snprintf. It is used like this:

	snprintf(buf, bufsize, "You typed \"%s\"", answer);

snprintf has been available in several stdio libraries (including GNU and 4.4bsd) for several years. It has finally been standardized in C99.

As an extra, added bonus, the C99 snprintf provides a way to predict the size required for an arbitrary sprintf call. C99's snprintf returns the number of characters it would have placed in the buffer if there were room, not just how many it did place. Furthermore, it may be called with a buffer size of 0 and a null pointer as the destination buffer. Therefore, the call

	nch = snprintf(NULL, 0, fmtstring, /* other arguments */ );

computes the number of characters required for the fully-formatted string. With that number (nch) in hand, you can then malloc a big-enough buffer and make a second snprintf call to fill it.

Yet another option is the (nonstandard) asprintf function, present in various C libraries including bsd's and GNU's, which formats to (and returns a pointer to) a malloc'ed buffer, like this:

	char *buf;
	asprintf(&buf, "%d = %s", 42, "forty-two");
	/* now buf points to malloc'ed space containing formatted string */

Additional links: sample implementation of asprintf

References: C9X Sec. 7.13.6.6
http://c-faq.com/stdio/sprintfsize.html
CC-MAIN-2014-52
refinedweb
677
57.2
struts2 properties file struts2 properties file How to set properties file in struts 2 ? Struts 2 Format Examples Struts 2 Tutorial Struts 2 File Upload error Struts 2 File Upload error Hi! I am trying implement a file upload using Struts 2, I use this article, but now the server response the error... solve this? Hi Friend, Please visit the following link: File Struts 2 Format Examples the formatting in properties file In Struts 2 the formatting is defined... Struts 2 Format Examples In this section you will learn how to format Date and numbers in Struts 2 Framework. Our struts2 struts2 dear deepak sir plz give the struts 2 examples using applicationresources.properties file Struts 2 File Upload , these error messages are stored in the struts-messsages.properties file... Struts 2 File Upload In this section you will learn how to write program in Struts 2 to upload the file struts2 struts2 Sir when i have run my struts 2 web application,every time i get error " request resources is not available",,,what is this,,,plz help me Properties File IN Struts - Struts Properties File IN Struts Can we break a large property file into small pieces? Suppose we have property file whose size is 64 kb .can we break... the detail along with code and also entry about properties into configuration file struts in this file. # Struts Validator Error Messages errors.required={0...;!-- This file contains the default Struts Validator pluggable validator... messages associated with each validator defined in this file. They should Interceptors in Struts 2 be done simply by removing the entry from the struts.xml file. Struts 2 default.... fileUpload: This interceptor provides support to file upload in struts 2...Interceptors in Struts 2 Interceptors are conceptually analogous to Servlet Struts properties file location - Struts Struts properties file location Hi, Where struts properties file stored in web application. I mean which location. 
Thank u Hi Friend, The struts.properties file can be locate anywhere on the classpath Struts 2 Tutorial ; Struts 2 Format Examples In this section you will learn how to format Date and numbers in Struts 2 Framework. Our Struts 2 Format Examples are very...; Struts 2 File Upload In this section you will learn how Struts 2 Session Scope Struts 2 Session Scope  ... as it is not available with the Struts2 distribution. The struts.xml file...; <struts> <!-- Rose India Struts 2 Tutorials --> Struts2 Validation Problem - Struts in the browser having the example of handling the error in struts 2. http... information on Struts 2 visit to : Validation Problem Hi, How to validate field Struts 2 Interceptors Struts 2 Interceptors Struts 2 framework relies upon Interceptors to do most... part of Struts 2 default stack and are executed in a specific order... and retrieves action or error messages for action that implements Struts 2 : Http Status Error 404 - Struts Struts 2 : Http Status Error 404 Hi All, I'm facing the below... error as shown below. see below error for the details code... --------------------------------------------------------------------------- ERROR(404) : "The requested Error - Struts ----------------------- RoseIndia.Net Struts 2 Tutorial RoseIndia.net Struts 2... to test the examples Run Struts 2 Hello...Error Hi, I downloaded the roseindia first struts example Introduction to Struts 2 Introduction to Struts 2 This section provides you a quick introduction to Struts 2 framework. This section we are discussing the new features, struts 2 basics and architecture Struts 2 Redirect Action Struts 2 Redirect Action In this section, you will get familiar with struts 2 Redirect action and learn to use it in the struts 2 application. Redirect After Post: This post Struts - Struts Struts Hi, I m getting Error when runing struts application. i... 
/WEB-INF/struts-config.xml 1 ActionServlet *.do but i m getting error 2 Date Format Examples Struts 2 Date Format Examples In this tutorial you will learn about Date Format function in Struts 2. We...; <html> <head> <title>Struts 2 Format Struts Struts What is called properties file in struts? How you call the properties message to the View (Front End) JSP Pages Struts 2 double validator Struts 2 double validator The Double validator of Struts 2 Framework checks if the given input is double or not. If the input is not double, it generates the error message. Double Struts 2 Actions Struts 2 Actions In this section we will learn about Struts 2 Actions, which is a fundamental concept in most of the web application frameworks. Struts 2 Action are the important MySQL Struts 2 MySQL In this section, You will learn to connect the MySQL database with the struts 2 application...; <include file="struts-default.xml"/>   Url Validator Struts 2 Url Validator The URLValidator of Struts 2 Framework checks whether the String contained within..., it generates an error message. The error message is supplied Features Struts 2 Features The strut-2... of the general features of the current Apache Strut 2 framework are given below. ... of the browser in HTML, PDF, images or any other. Tags - Tags in Strut 2 Detailed introduction to Struts 2 Architecture Detailed introduction to Struts 2 Architecture Struts 2 Framework Architecture In the previous section we learned... components of Struts 2 framework. How Struts 2 Framework works? Suppose you problems regrading .properties files of formbean class. else it will throw one error msg form .properties file...problems regrading .properties files According to my struts... that why its not showing these error messages Struts 2 Validation (Int Validator) Struts 2 Validation (Int Validator) Struts 2 Framework provides in-built validation functions to validate user.... 
This section discusses all the validation functions available with Struts 1)in struts server side validations u are using programmatically validations and declarative validations? tell me which one is better ? 2) How to enable the validator plug-in file2 Filter not getting initialised in WAS 8.0 "Struts2 filter could not be initialised" The filter properties are set in WAS 8.0 server settings. The required struts 2 jar files are also in the WEB-INF/lib...Struts2 Filter not getting initialised in WAS 8.0 Hi All, Am facing Struts 2 Date Validator Struts 2 Date Validator The Date validator in the Struts 2 Framework checks whether the supplied date lies... 4 : Create a Date validator with in an xml file: The validation.xml format STRUTS STRUTS 1) Difference between Action form and DynaActionForm? 2) How the Client request was mapped to the Action file? Write the code and explain hi in my application number of properties file are there then how can we find second properties file in jsp page Struts 2 RequiredString validator . Then the error message is displayed to user. This example demonstrates how to use Struts 2...Struts 2 RequiredString validator This section discusses RequiredString validator of Struts 2 framework Struts 2 datetimepicker Example ;Struts 2 Format Date Example!</title> <link href="<s... Struts 2 datetimepicker Example In this section we will show you how to develop datetimepicker in struts 2. Struts 2 struts2 -...("Error:" + ioe.getMessage()); } return returnInt; } protected void Struts 2 Login Application Struts 2 Login Application Developing Struts 2 Login Application In this section we are going to develop login application based on Struts 2 Framework. Our current login application The server encountered internal error() - Struts The server encountered internal error() Hello, I'm facing the problem in struts application. Here is my web.xml MYAPP... config 2 action Struts2 Actions . 
However with struts 2 actions you can get different return types other than... name of the action to be executed. Struts 2 processes an action...Struts2 Actions E-mail Validator Struts 2 E-mail Validator The email validator in Struts 2 Framework checks whether a given String... an error message. The error message is supplied between the < STRUTS STRUTS Suppose if you write label message with in your JSP page. But that "add.title" key name was not added in ApplicationResources.properties file? What happens when you run that JSP? What error shows? If it is run Struts2 - Struts Struts2 Hi, I am using doubleselect tag in struts2.roseindia is giving example of it with hardcoded values,i want to use it with dynamic values. Please give a example of it. Thanks Format | Struts 2 File Upload | Struts 2 Resources | Static Parameter... Login Application | Struts 2 | Struts1 vs Struts2 | Introduction... configuration file | Struts 2 Actions | Struts 2 Redirect Action Struts Articles mapping definitions in the struts-config file. 2. Servlet creates... and add some lines to the struts-config.xml file to get this going... the specific format message for the client side and a taglib to handle the error message Still have the same problem--First Example of Struts2 - Struts Class .......... Properties file WebContent ....... Pages ............. Jsp... of struts-config file in this structure put ur file...Still have the same problem--First Example of Struts2 Hi I tried namespace in struts.xml file - Struts in struts.xml file Struts Problem Report Struts has detected an unhandled exception: Messages: There is no Action mapped for namespace / and action name.../struts.properties file. this error i got when i run program please help me  Struts 2 Non-form Tags (UItags) Struts 2 Non-form Tags (UItags) Apache Struts is an open-source framework used to develop Java web applications. In this section, struts 2 non struts - Struts 2 Framework works? Struts 2 framework works? 
Container reads the WEB-INF/web.xml file, which has all... from the web.xml file and configures the Struts 2 environment on the startup...How Struts 2 Framework works? This tutorial explains you the working Struts 2 - Validation - Struts Struts 2 - Validation annotations digging for a simple struts 2 validation annotations example.1.8 Hello World Example example using the latest Struts 2.8.1. We will use the properties files... to configure struts environment in web.xml file. We have provided the necessary... file web.xml: In the web.xml file we are required to configure the Struts filter Exception Handling-Error Messages in Program Exception Handling-Error Messages in Program Hi Friend, I am having... with this. Here is the code with the error messages as Follows: import...[]) throws Exception{ This is where I begin to see problems with error messages My first struts 2 program My first struts 2 program Hi, Please help me for my first struts 2... from one page to another. Details: I am trying my first Struts 2 example...: Struts2 Tutorials Thanks Struts 2.2.1 - Struts 2.2.1 Tutorial Struts 2 hello world application using annotation Running... Implementing Actions in Struts 2 Chaining Actions in Struts... About Struts 2.2.1 Login application Create JSP file Create Why Struts 2 to use all String properties. Simplified testability - Struts 2... Why Struts 2 The new version Struts... of design problem of struts1 framework that has been resolved in the struts 2>
http://www.roseindia.net/tutorialhelp/comment/87545
CC-MAIN-2014-35
refinedweb
1,805
58.18
ink

Ink is inkle's scripting language for writing interactive narrative, both for text-centric games as well as more graphical games that contain highly branching stories. It's designed to be easy to learn, but with powerful enough features to allow an advanced level of structuring. Here's a taster from the tutorial.

- I looked at Monsieur Fogg
* ... and I could contain myself no longer.
  'What is the purpose of our journey, Monsieur?'
  'A wager,' he replied.
  * * 'A wager!'[] I returned.
    He nodded.
    * * * 'But surely that is foolishness!'
    * * * 'A most serious matter then!'
    - - - He nodded again.
    * * * 'But can we win?'
      'That is what we will endeavour to find out,' he answered.
    * * * 'A modest wager, I trust?'
      'Twenty thousand pounds,' he replied, quite flatly.
    * * * I asked nothing further of him then[.], and after a final, polite cough, he offered nothing more to me. <>
  * * 'Ah[.'],' I replied, uncertain what I thought.
  - - After that, <>
* ... but I said nothing[] and <>
- we passed the day in silence.
- -> END

Getting started

Download Inky, our ink editor, and then follow either:

- The basics tutorial if you're non-technical and/or if you'd like to use ink to make a web-based interactive fiction game
- The full tutorial if you want to see everything that ink has to offer.

For those who are very technically-minded, you can also use inklecate directly, our ink command line compiler (and player). To keep up to date with the latest news about ink sign up for the mailing list.

Writing with Unity

- Download the latest ink-unity-integration package, or grab it from the Unity AssetStore, and place it in your project.
- Create a .ink text file such as myStory.ink, containing the text Hello, world!.
- Select the file in Unity, and you should see a Play button in the file's inspector.
- Click it, and you should get an Editor window that lets you play (preview) your story.
- Follow the tutorial: Writing with Ink.

Ink, Inky, ink-unity-integration, inkjs, inklecate, inkle oh my!
- Ink is the core narrative engine itself, written in C#. It includes the code for the compiler. If you're not technical, you don't need to worry about this.
- Inky is our ink editor, which is a text editor with support for playing as you write. If you're just starting out with ink, this is all you need.
- ink-unity-integration is a plugin to allow you to integrate the ink engine with a Unity game. It includes the full Ink engine source.
- inklecate is a command-line compiler for ink. Inky uses it behind the scenes.
- inkjs is a JavaScript port of the ink engine, useful for powering web-based games. This is included when you export a story for web within Inky.
- inkle is the game development studio that created ink
- inklewriter is an unrelated interactive story creation tool that is designed to be easy to use, but is far less powerful. It's possible to export inklewriter stories to ink, but not vice versa.

What you need if you are a:

- Writer: Inky
- Unity game developer: ink-unity-integration plugin. Optionally, Inky if you're reading/writing the ink too.
- Web-game author: Inky

Versioning

The intention is the following:

- Each latest ink/inky/ink-unity-integration release on each Github release page should work together. Ink and Inky version numbering are separate though. You can see which version of the ink engine Inky has in the About box.
- ink / ink-unity-integration should effectively have the same version of the same engine, except that the integration might have additional Unity-specific extra minor releases. Their X.0.0 and 0.Y.0 version numbers should match. The 0.0.Z version number in ink-unity-integration may diverge to reflect Unity-specific changes.
- inkjs is maintained by the community (primarily @y-lohse and @ephread). It's usually one major version behind the main ink engine, but they work hard to catch up after each release!
- The ink engine also has story-format and save-format versions that are internal to the code (see Story.cs and StoryState.cs).
Advanced: Using inklecate on the command line

Download the latest version of inklecate (or build it yourself, see below). Create a text file called myStory.ink, containing the text Hello, world!. On the command line, run the following:

- Mac: ./inklecate -p myStory.ink
- Windows: inklecate.exe -p myStory.ink
- Linux: mono inklecate.exe -p myStory.ink
- To run on Linux, you need the Mono runtime and the Mono System.Core library (for CLI 4.0). If you have access to the debian repository, you can install these using: sudo apt install mono-complete

The -p option uses play mode so that you can see the result immediately. If you want to get a compiled .json file, just remove the -p option from the examples above. Follow the tutorial: Writing with Ink.

Integrating into your game

Full article: see Running Your Ink. For a sample Unity project, see The Intercept.

Ink comes with a C#-based (or JavaScript-based) runtime engine that can load and run a compiled ink story in JSON format. To compile the ink, either export from Inky (File -> Export to JSON), or if you're using Unity, you can use the ink-unity-integration package which will automatically compile your ink for you whenever you edit it either in Inky or in an editor of your choice.

Advanced: You can also use the inklecate command line tool to compile ink stories, or you can call the compiler from C# code yourself.

ink isn't designed as an end-to-end narrative game engine. Rather, it's designed to be flexible, so that it can slot into your own game and UI with ease. Here's a taster of the code you need to get started:

using Ink.Runtime;

// 1) Load story
_story = new Story(sourceJsonString);

// 2) Game content, line by line
while(_story.canContinue)
    Debug.Log(story.Continue());

// 3) Display story.currentChoices list, allow player to choose one
Debug.Log(_story.currentChoices[0].choiceText);
_story.ChooseChoiceIndex(0);

// 4) Back to 2
...
The development of ink

Build Requirements

All Environments:
- .NET Core SDK 3.1 or newer
- Optionally Visual Studio Code

Windows (Optional):
- Visual Studio (e.g. Community edition); required to build the nuget package with multi-targeting of .NET Framework 3.5
- Xamarin, or Unity's own version of MonoDevelop

Mac (Optional):
- Visual Studio for Mac
- Xamarin, or Unity's own version of MonoDevelop

Building with Visual Studio

- Load up the solution file - ink.sln.
- Select the Release configuration and choose Build -> Build All (or Build Solution in Visual Studio).
- The compiler binary should be built in inklecate/bin/Release (or x86), while the runtime engine DLL will be built in ink-engine-dll/bin/Release/ink-engine.dll

Building with command-line

- cd to the project you want to build (e.g., cd inklecate)
- Build using dotnet: dotnet build -c Release
- To run console apps: dotnet run -c Release
- To produce a self-contained executable: dotnet publish -r win-x64 -c Release --self-contained false
- Recommended RIDs for the platform (-r) are: win-x64, linux-x64, and osx-x64

To run the binaries, you need to install .NET Core Runtime 2.2 or newer (included in the SDK).

Need help?

- Discord - we have an active community of ink users who would be happy to help you out. Discord is probably the best place to find the answer to your question.
- GitHub Discussions - Or, you can ask a question here on GitHub. (Note: we used to use Issues for general Q&A, but we have now migrated.)

How to contribute

We'd of course appreciate any bug fixes you might find - feel free to submit a pull request. However, usually we're actively working on a game, so it might take a little while for us to take a look at a non-trivial pull request. Apologies in advance if it takes a while to get a response!

Architectural overview

See the architectural overview documentation for information about the pipeline of the ink engine, and a birds-eye view of the project's code structure.
License

ink is released under the MIT license. Although we don't require attribution, we'd love to know if you decide to use ink in a project! Let us know on Twitter or by email.

Support us!

ink is free forever, but represents multiple years of thought, design, development and testing. Please consider supporting us via Patreon. Thank you, and have fun!
https://curatedcsharp.com/p/ink-is-inkle-ink/index.html
CC-MAIN-2022-40
refinedweb
1,420
67.76
String issue in Java code Sam Thompson Ranch Hand Joined: Jul 05, 2011 Posts: 87 posted Dec 23, 2011 08:19:02 0 Hi all! I am currently writing a simulation for dealing with a deck of 96 cards. 5 cards from the deck are taken initially before the do loop and put face up in a sequence next to the deck (this would be done in real life). What I want the program to do is to pick two cards in the following way per cycle in the do-while loop: (a) pick two cards randomly from the deck (b) pick two cards randomly from the sequence (there are conditions to this, as you will see in the code below) (c) pick one card from the deck and one from the sequence (again, conditions, as you can see in my code below). However, I am having difficulty in getting one particular method to work. It is my pickOneFromDeck() method, which seems to have some trouble with the cardPicked String variable. The compiler requires it to be initialized, which is fine. But whatever or whenever I DO initialize it, instead of placing a card name in the variable, it just puts the initial value (i.e., "empty space" or NULL if I set to null), in and keeps doing this. I want it to have a name of the color card picked in that space. What am I doing incorrectly that's causing this runtime error. 
Here is my code below: package t2r; import java.lang.Math.*; import java.util.*; public class T2R_Monte_Carlo { private int[] _deck = new int[9]; private String[] _colors = new String[9]; private int wildCardOccurrences = 0; private int runs = 0; public String[] cardsChosen; public int[] setDeck() { _deck[0] = 10; _deck[1] = 10; _deck[2] = 10; _deck[3] = 10; _deck[4] = 10; _deck[5] = 10; _deck[6] = 10; _deck[7] = 10; _deck[8] = 16; return _deck; } public void reset(int[] deck) { deck = this.setDeck(); } public void setColors() { _colors[0] = "red"; _colors[1] = "green"; _colors[2] = "blue"; _colors[3] = "yellow"; _colors[4] = "orange"; _colors[5] = "gray"; _colors[6] = "black"; _colors[7] = "white"; _colors[8] = "WILD"; } public int[] getDeck() { int[] deck = new int[9]; deck = this.setDeck(); return deck; } private int getRun() { return runs; } public String[] getColors() { return _colors; } public String[] cardsUp() { String[] colors = _colors; int[] deck = _deck; String[] cardsFaceUp = new String[5]; this.setColors(); int i, recurrences = 0, low = 0, high = 8; int wilds = 0, wildOccurrences = wildCardOccurrences; Random randomNumber = new Random(); int pickACard; try { for (i = 0; i < cardsFaceUp.length; i++) { pickACard = randomNumber.nextInt(high - low + 1) + low; cardsFaceUp[i] = colors[pickACard]; deck[pickACard] = deck[pickACard] - 1; if (cardsFaceUp[i].equals("WILD")) wilds++; if (wilds >= 3) throw new MC_Exception("WILD"); } System.out.println("The cards that have been chosen are:"); for (i = 0; i < cardsFaceUp.length; i++) System.out.println(cardsFaceUp[i]); } catch (MC_Exception problem) { problem.threeWild(); cardsFaceUp = cardsUp(); wildOccurrences++; this.saveWildData(wildOccurrences); } return cardsFaceUp; } private void saveWildData(int occurrences) { int data = occurrences; } public int pickAnOption() { int choice, low = 0, high = 2; choice = low + (int)(Math.random()*(high-low+1)); return choice; } public String pickOneFromDeck(int[] deck, 
String[] cardType) { int wildOrColor, low = 0, high = 9; int pick = low + (int)(Math.random()*(high-low+1)); String cardPicked = cardType[pick]; if (pick < 8) { wildOrColor = 0; cardPicked = cardType[pick]; } else wildOrColor = 1;); } return cardPicked; } public String[] pickFaceUpCard(String[] colors, int[] deck, String[] faceUp) { int i, wildOccurrences = wildCardOccurrences, wildOrColor, low = 0, high = 9, low_2 = 0, high_2 = 5; int pick = low + (int)(Math.random()*(high-low+1)); if (pick < 8) wildOrColor = 0; //color else wildOrColor = 1; //wild int pick_2 = low_2 + (int)(Math.random()*(high_2-low_2+1)); int pick_3 = low + (int)(Math.random()*(high-low+1)); int pick_4 = low + (int)(Math.random()*(high-low+1)); try { //faceUp[pick_2] = "empty slot"; card removed, space at pick_2 empty //boolean equals = faceUp[pick_2].equals(colors[pick_4]); //card replcaed at pick_2's position for (i = 0; i < faceUp.length; i++) { if ("WILD".equals(faceUp[i])) wildOccurrences++; if (wildOccurrences >= 3) throw new MC_Exception("WILD"); }); } } catch (MC_Exception dilemma) { dilemma.threeWild(); wildOccurrences++; this.saveWildData(wildOccurrences); } return faceUp; } private int runCount(int run) { run++; return run; } public double[] calcEvolveProb(int[] deck) { int i, cycle; ArrayList<String> whistleBlower = new ArrayList<String>(); double[] probSequence = new double[9]; cycle = this.getRun(); /*probabilistic expression for calculating a p(X) of any given color card is ((10-c[i])/(110-(c[i]+w[i])), where w[i] is the number of wild cards and c[i] is the number of color cards remaining in the deck*/ try { for (i = 0; i <= probSequence.length; i++) { if (i <= 8) probSequence[i] = (10 - (double)(deck[i]))/(110-(deck[i]+deck[8])); else probSequence[8] = (16 - (double)(deck[8]))/(110-(deck[i]+deck[8])); if (probSequence[i] == 0 || probSequence[8] == 0) { whistleBlower.add(Double.toString(probSequence[i]) + ", " + Integer.toString(cycle)); throw new MC_Exception("zero p(x)"); } } } 
catch (MC_Exception issue) { issue.zeroProbability(); this.zeroProbIncident(whistleBlower); } return probSequence; } private void zeroProbIncident(ArrayList<String> incidents) { ArrayList<String> eventMemory = incidents; } public static void main(String[] args) { T2R_Monte_Carlo monteCarlo = new T2R_Monte_Carlo(); String[] faceUp = new String[5]; String[] cardsChosen = new String[2]; //monteCarlo.setDeck(); monteCarlo.setColors(); int[] deck = new int[9]; deck = monteCarlo.getDeck(); String[] colorCards = new String[9]; colorCards = monteCarlo.getColors(); int i; Scanner userInput = new Scanner(System.in); faceUp = monteCarlo.cardsUp(); System.out.println("Please enter a limit for the simulation."); int cycle = 0, limit = userInput.nextInt(); int low_3 = 0, high_3 = 5; int pick = low_3 + (int)(Math.random()*(high_3-low_3+1)); do { cycle++; int cardDecision = monteCarlo.pickAnOption(); if (cardDecision == 1) { cardsChosen[0] = monteCarlo.pickOneFromDeck(deck, colorCards); cardsChosen[1] = monteCarlo.pickOneFromDeck(deck, colorCards); } else if (cardDecision == 2) { try { cardsChosen = monteCarlo.pickFaceUpCard(colorCards, deck, faceUp); System.out.println("The cards chosen are:"); for (i = 0; i < faceUp.length; i++) { System.out.println(faceUp[i]); } if ("WILD".equals(cardsChosen[0]) && "WILD".equals(cardsChosen[1])) throw new MC_Exception("Can't have two wilds from faceups."); } catch(MC_Exception two_wilds) { two_wilds.cardSingularity(); cardsChosen[1] = null; } } } while (cycle != limit); } } Here is the exception class code if you require it... package t2r; public class MC_Exception extends Exception { public MC_Exception(String message) { super(message); } public void threeWild() { System.out.println("Three wild cards cannot be present face up. Will redistribute" + " cards."); } public void colorEmpty() { System.out.println("There are no more cards of this color. 
Please try again."); } public void zeroProbability() { System.out.println("Incident of zero probability reached."); } public void cardSingularity() { System.out.println("A card after a wild card in faceups has been picked. " + "Wrong action. Action will be undone."); } public void deckExhausted() { System.out.println("Deck has been exhausted. Will reshuffle."); } }

I know it is a lot to ask it seems, but this is the only part of the code that I am having trouble with. Other than that, it would work fine. Thank you so much for your assistance and help everyone. I look forward to hearing from all of you. S.T.

Ireneusz Kordal
Ranch Hand
Joined: Jun 21, 2008
Posts: 423

posted Dec 23, 2011 10:09:45

Hi, In lines 121-124:

int wildOrColor, low = 0, high = 9;
int pick = low + (int)(Math.random()*(high-low+1));
String cardPicked = cardType[pick];

With low = 0 and high = 9, high-low+1 = 10, so the expression Math.random()*(high-low+1) returns values between 0 and 9.999999. 9.99999 cast to (int) gives 9. Since the array cardType has 9 elements (from 0 to 8), sometimes the program aborts with an index out of bounds exception trying to get cell 9 from the array.

Randall Twede
Ranch Hand
Joined: Oct 21, 2000
Posts: 4089

posted Dec 23, 2011 11:01:46

wow. you are good Ireneusz. i already gave up when i didnt see right away where cardPicked was declared

SCJP
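Ireneusz's arithmetic is easy to check outside Java. The following Python sketch (not part of the original thread) mirrors the expression from lines 121-124 and shows that the largest index it can produce is 9, one past the end of a 9-element array:

```python
low, high = 0, 9
span = high - low + 1          # 10 possible values: 0..9

# Math.random() returns doubles in [0.0, 1.0); the largest index the Java
# expression low + (int)(Math.random() * span) can yield is therefore:
max_index = low + int(0.9999999 * span)   # int() truncates like Java's (int) cast

card_type_length = 9           # the cardType array holds indices 0..8
print(max_index)               # 9
print(max_index > card_type_length - 1)   # True -> out of bounds
```

Using Random.nextInt(high - low + 1) + low, as the poster's cardsUp() method already does, avoids the problem, because nextInt's upper bound is exclusive.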
http://www.coderanch.com/t/562572/java/java/String-Java-code
I'm having trouble dealing with WSDL documents that have an <import> in their <definitions> section. Basically, I want to be able to load a WSDL, and any schema that it imports, into a SchemaTypeLoader on which I can call findType(QName). Presently, I can only "see" types that are defined in the WSDL itself -- it doesn't seem to be recursively adding the imported types. It seems to me that this should work... or am I missing something? Currently, I'm loading the WSDL with code that looks a lot like SchemaCompiler.loadTypeSystem (especially the "WSDL" case). Can anyone offer any suggestions?

Thanks,
Jeff

________________________________
From: Jeffrey Crump [mailto:jcrump@sonicsoftware.com]
Sent: Tuesday, July 27, 2004 11:41 AM
To: xmlbeans-user@xml.apache.org
Subject: Type import?

Does XML Beans support WSDL type importing?

<definitions name="Foo" targetNamespace="myNS">
    <import namespace=" <> " schemaLocation="CustomerOrder.xsd" />
    ...
</definitions>

And if so, how do I find the definition of that type in order to create a SchemaTypeLoader? For types that are defined in-line to the WSDL, I'm importing them in the same way the SchemaCompiler does (thanks, Cezar, for the tip). But does XML Beans try to load the external definition? I suppose the alternative is to navigate the WSDL independently and just get a stream to that schema document, but if XML Beans is already importing it, I don't want to duplicate work.

Thanks,
Jeff
http://mail-archives.apache.org/mod_mbox/xml-xmlbeans-dev/200407.mbox/%3C62C6BE6D9F74D14F92B79323DE9C51979ACB3A@MAIL01.bedford.progress.com%3E
NAME
strdup, strndup, strdupa, strndupa - duplicate a string

SYNOPSIS
#include <string.h>

char *strdup(const char *s);
char *strndup(const char *s, size_t n);
char *strdupa(const char *s);
char *strndupa(const char *s, size_t n);

DESCRIPTION
The strdup() function returns a pointer to a new string which is a duplicate of the string s. Memory for the new string is obtained with malloc(3), and can be freed with free(3).

The strndup() function is similar, but copies at most n bytes. If s is longer than n, only n bytes are copied, and a terminating null byte ('\0') is added.

strdupa() and strndupa() are similar, but use alloca(3) to allocate the buffer.

RETURN VALUE
On success, the strdup() function returns a pointer to the duplicated string. It returns NULL if insufficient memory was available, with errno set to indicate the error.

ERRORS
ENOMEM Insufficient memory available to allocate duplicate string.

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
strdup() conforms to SVr4, 4.3BSD, POSIX.1-2001. strndup() conforms to POSIX.1-2008. strdupa() and strndupa() are GNU extensions.

SEE ALSO
alloca(3), calloc(3), free(3), malloc(3), realloc(3), string(3), wcsdup(3)

COLOPHON
This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
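The allocation contract above (strdup() obtains memory with malloc, and the caller must free it) can be demonstrated without writing any C, by calling into libc through Python's ctypes. This is an illustrative sketch, not part of the man page; it assumes a Unix-like system where ctypes.util.find_library can locate the C library:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# strdup returns a malloc'ed char*; keep it as a raw pointer (c_void_p)
# so the very same address can later be handed to free().
libc.strdup.argtypes = [ctypes.c_char_p]
libc.strdup.restype = ctypes.c_void_p
libc.free.argtypes = [ctypes.c_void_p]

p = libc.strdup(b"duplicate me")
copy = ctypes.cast(p, ctypes.c_char_p).value  # read the duplicated bytes
libc.free(p)                                  # the caller owns the memory

print(copy)  # b'duplicate me'
```

Note that strdupa() and strndupa() cannot be exercised this way: they allocate on the caller's stack frame via alloca(3), so the buffer is gone as soon as the call returns.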
https://dyn.manpages.debian.org/unstable/manpages-dev/strdup.3
I have a simple question. Im currently learning C++, I am an experienced C# and Java programmer and have a need to write some lower level programs, so decided to learn C++. Heres the source code to my problem and my question is below.

----------------------Source Code------------------------------------------

#include <iostream>
using namespace std;

class Counter
{
public:
    Counter();
    Counter(int num);
    ~Counter();
    int GetVal() const { return this->cnt; }
    void SetVal(int newNum);
private:
    int cnt;
};

Counter::Counter(int num): cnt(num){}

Counter::~Counter()
{
    cout << "DESTRUCTOR CALLED" << endl;
}

void Counter::SetVal(int num)
{
    this->cnt = num;
}

int main()
{
    Counter myCnt(6);
    cout << "My Cnt Value = " << myCnt.GetVal();
    myCnt.SetVal(7);
    cout << endl << "Count now = " << myCnt.GetVal() << endl;

    Counter *pMyCnt = &myCnt;
    cout << "The address of my count is: " << pMyCnt << " and the value is: " << pMyCnt->GetVal() << endl;
    pMyCnt->SetVal(888);
    cout << "The pointer object value is: " << pMyCnt->GetVal() << endl;
    cout << "The original object value is: " << myCnt.GetVal() << endl;

    try
    {
        delete pMyCnt;
        pMyCnt = 0;
        cout << "Object Pointed used to delete object." << endl;
        cout << "So can we now access it? " << pMyCnt->GetVal() << endl;
    }
    catch(...)
    {
        cout << "Deleting the object through the pointer succeed." << endl;
    }

    try
    {
        myCnt.SetVal(777);
        cout << "Trying to access the original reference to Counter: " << myCnt.GetVal() << endl;
        cout << "The address for myCnt is: " << &myCnt << endl;
    }
    catch(...)
    {
        cout << "Failed.. the objection no longer exists." << endl;
    }

    return 0;
}

----------------------End Source Code--------------------------------------

The problem I have is I use the pointer for Counter to delete the Counter object, yet when I use the Counter variable myCnt later on it still works. Now the pointer is supposed to delete the object from my understanding and the output says it does as the object deconstructor is called.
So why then can I still use the object with myCnt? What am I missing here?

Thats not really a pointer, its a 'reference'. You took the address of a variable which exists on the stack, and while you can modify the contents and do things with the address, you can't delete it this way because its not dynamically allocated. The delete on this address is a mistake, its undefined behavior (your compiler seems to have handled this safely, at least today). 'Stack' or local variables exist until they go out of scope. To dynamically allocate, you would have

pmyCnt = new Counter; //get it dynamically
...
delete pmyCnt; //ok, now after this line, if you access it, it should crash.

Ahh thanks for the answer. That clears it up. Thanks for the answer. Cheers

Your code results in undefined behavior which means that anything can happen, and that its behavior is unpredictable. The problem is that you're calling delete to destroy an object that was allocated on the stack. Such objects never get deleted explicitly -- they are destroyed automatically. Another note: your try blocks don't contain any throw statements so their matching catch() blocks never execute. To test catch() blocks you need to throw an exception from inside the try block.

Danny Kalev

One last thing. If you compile that code using VS 2003 you will see that even though Im trying to delete the object off the stack, the deconstructor is still called. That could lead to some nasty problems as well especially if you allocate memory in your object and then free it up in the deconstructor. Cheers

Originally Posted by Agent_Smith

Don't sweat too hard: the fact that the catch(...) gets called occasionally is due to a long standing bug in Visual C++. The catch(...) should *never* get called if there is no explicit throw statement in your try block. Generally speaking, notice that exceptions are a loose term that means different things.
In C++ parlance, an exception occurs if and only if a throw statement is executed in the program. However, there is another type of "exception" called Structured Exception Handling which is something quite different. Sadly, Microsoft messed things up a few years ago by letting C++ exceptions constructs "interact" with SEH. This is going to be fixed in Visual C++ 2005 (at last!). To summarize, the catch() statements in your code are unreachable code.

Originally Posted by Agent_Smith

One last thing. If you compile that code using VS 2003 you will see that even though Im trying to delete the object off the stack, the deconstructor is still called. Cheers

Which is exactly what "undefined behavior" means: expect the unexpected. I'm not surprised that the destructor (that's the correct technical term) is called: the compiler assumes that the stack object terminates its life like a normal stack object, i.e., by having its destructor called when the object goes out of scope. I suspect that what your code does is essentially corrupt the process's heap. Anyway, imagine what would happen if the destructor did more serious resource releasing operations such as closing files or closing sockets, database connections etc.

Debug and release modes can do some strange things -- they are so different its hard to point to any one thing and say 'this is why it works in debug and not in release'. The fastest way to see a difference is compile with an assembler output (listing file?) and see what the options did to the assembler code that makes one version work and another different. The slow way is to transform a debug into a release one option at a time.

Thats one way to do undefined things, to call the destructor... the problem is not how the compiler handled undefined operations so much as that it "worked". Whenever wrong code works some of the time, you can be fooled into thinking its just a bug in your code.
Its far better that wrong code does not work or even compile, but because of complexity, this cannot always be done by a compiler. Here, to catch the problem, the compiler has to "know" that the pointer points to a local variable instead of a dynamic object, which requires too much knowledge on the compiler's part -- imagine you assign (pointers a and b) a = b; -- and b is a pointer to a local object, as you have -- now the compiler has to know 2 deep that a is a pointer to a reference... how many deep should it look? Almost all of the undefined behaviors in the language are like this (too hard to detect as errors).
http://forums.devx.com/showthread.php?142678-deleting-stack-objects
: update profile update profile coding for update profile: Updating user profile Updating user profile how should i provide user to update his profile with edit option including entered data by user should be shown in jsp page fileds The given code retrieve data from database and display User Registration Action Class and DAO code User Registration Action Class and DAO code... to write code for action class and code for performing database operations... UserRegisterAction.java in the project\WEB-INF\src\java\roseindia\web\struts\action directory Struts Hibernate Spring - Login and User Registration - Hibernate Struts Hibernate Spring - Login and User Registration Hi All, I fallowed instructions that was given in Struts Hibernate Spring, Login and user.... coding for update profile coding for update profile coding for update profile with hibernate integration spring with hibernate integration i want code for update and find operation how can i update and search operations in one method by using...:// spring controller V/S stuts Action - Spring spring controller V/S stuts Action we are going to use spring framework so what is better spring controller or struts action Struts Hibernate Integration In this section we will write Hibernate Struts Plugin Java code... Struts Hibernate  ... and passes it to the action for further processing. Action class uses Hibernate to make Action Configuration - Struts Action Configuration I need a code for struts action configuration in XML spring hibernate spring spring hibernate spring why are all the links in the following page broken.... the following link: Spring Hibernate Thanks Update Profile Update Profile The Update Profile page allows the logged in user to update his/her profile... on the Update Profile links the following update profile pages is displayed. 
Once the user Hibernate giving exception - struts - MySQL - Hibernate Hibernate giving exception - struts - MySQL Hi all, My name..., and using Struts frame work with hibernate and MySQL database. I am getting exection... org.hibernate.dialect.MySQLMyISAMDialect root asif 3 update true code Understanding Spring Struts Hibernate DAO Layer Understanding Spring Struts Hibernate DAO Layer... how Spring Hibernate and Struts will work together to provide best solution for any web based application. Understanding Spring Struts spring hibernate spring hibernate I need to save registration details in a database table through jsp using spring an hibernate....and the fields in the registration jsp are in different tables???can any one help or is there any sample code servlet action not available - Struts servlet action not available hi i am new to struts and i am.... Struts Blank Application action org.apache.struts.action.ActionServlet config /WEB-INF/struts-config.xml 2 action *.do Dynamic-update not working in Hibernate. Dynamic-update not working in Hibernate. Why is dynamic update not working in hibernate? Dynamic-update is not working. It means when you are running your update, new data is added in your table in Is Multiple Actions in Action class - Struts ") ,if(action.equals("update"). Then when to use Action and DispatchAction...Is Multiple Actions in Action class In ActionClass we can use only one action i.e execute(), but in DispatchAction we can use multiple actions.My myJSF,Hibernate and Spring integration code is not working. - Hibernate myJSF,Hibernate and Spring integration code is not working. the code given in this url : i have no action mapped for action - Struts no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld Spring all, the application code does not depend on the Spring APIs unlike the EJB that force to use JNDI and the Struts that force to extend Action. 
Spring promotes...Spring   struts - Struts :// Hope that the above links...struts Hi, I am new to struts.Please send the sample code for login and registration sample code with backend as mysql database.Please send profile display profile display JSP Servlet Search Page 1)search.jsp: <html> <head> </head> <body> <br><br><..." action="..//search"> <table border="0" width="300" align="center" bgcolor An introduction to spring framework . Just as Hibernate attacks CMP as primitive ORM technology, Spring attacks... application code. 2. Spring Context/Application Context: The Spring context... programming to Spring. Using this we can add annotation to the source code spring - Spring spring what is the difference between spring and hibernate and ejb3.0 Hi mamatha, Spring provides hibernate template and it has many...:// Thanks. Amardeep Spring Tutorial be integrated with other Java frameworks and ORM. Integration of Spring with Struts Integration of Spring with JSF Integration of Spring with Hibernate...Spring Tutorial In this section we will read about the Spring framework hibernate code - Hibernate hibernate code How to store a image in mysql or oracle db using struts &hibern Hibernate update Method an update method of Hibernate to update the record of table. Complete Code...Hibernate update Method In this tutorial you will learn about the update method in Hibernate Hibernate's update method saves the modified value how to forget password in spring framework in spring framework with hibernate give me code Please visit the following links: Please Struts-Hibernate-Integration - Hibernate Struts-Hibernate-Integration Hi, I was executing struts hibernate intgeration code the following error has occured. Anyone can give me... the following link: Hope Update value Update value How to update value of database using hibernate ? Hi Samar, With the help of this code, you will see how can update database using hibernate. 
package net.roseindia.DAO; import with hibernate struts with hibernate /SearchTutorial.jsp(4,2) Unable to find setter method for attribute: locale <%@ taglib uri="/tags/struts-bean" prefix="bean" %> <%@ taglib uri="/tags/struts-html" prefix="html" %> < For CRUD application - Spring For CRUD application Hi, Can i have Crud(create ,edit,update,delete the data in database ) code & search the eployee using "id or name" using Spring ,Hibernate and Mysql Thanks Raghavendra...-config.xml file (action mapping is show later in this page). Here is code Struts Action Chaining Struts Action Chaining Struts Action Chaining update update Predict and justify the output of the following code snippet written by the developer to update the Status table: String str = "UPDATE m...://localhost:3306/roseindia", "root", "root"); String str = "UPDATE Status SET Hibernate code - Hibernate Hibernate code firstExample code that you have given for hibernate... true org.hibernate.dialect.MySQLDialect update... inserted in the database from this file. ); The complete code of forgot password action is given below... The password forgot Action is invoked Struts Code - Struts using struts . I am placing two links Update and Delete beside each record . Now I want to apend the id of the record with the Url using html:link Update...Struts Code Hi I executed "select * from example" query Hibernate code problem - Hibernate Hibernate code problem Hai, iam working on simple login application using hibernate in spring. I've used Spring dependency injection too.I.... Please find my src code here... ----------------controller Layer ACTION - AGGREGATING ACTIONS IN STRUTS STRUTS ACTION - AGGREGATING ACTIONS IN STRUTS... of Action classes for your project. The latest version of struts provides classes... action. In this article we will see how to achieve this. Struts provides four code - Struts code How to write the code for multiple actions for many submit buttons. 
use dispatch action Update not working in hibernate. Update not working in hibernate. Update not working in Update Query Hibernate Update Query  ... is the code of our java file (UpdateExample.java), where we will update a field...=? Hibernate: update insurance set insurance_name=?, invested Could not find action or result STRUTS 2 Could not find action or result hiii..i am new to struts 2...;package <action name="fetch" class... not mapping with the action resource my menujsp.jsp code must also be incorrect Complete Hibernate 4.0 Tutorial Spring Hibernate Annotations Hibernate hello world Struts Spring... Hibernate update Method Hibernate persist Method Hibernate... Hibernate Update Query Hibernate Delete Query STRUTS STRUTS 1) Difference between Action form and DynaActionForm? 2) How the Client request was mapped to the Action file? Write the code and explain Hibernate required jar files - Hibernate , Please visit the following link: Download the full code from the above link and extract... to spring-hibernate integration. Thanks Why Struts 2 handling per action, if desired. Easy Spring integration - Struts 2... core interfaces are HTTP independent. Struts 2 Action classes... Why Struts 2 The new version Struts Spring Framework Part III -tests in our data access code, spring comes with a very lightweight 'dataSource... SPRING .... HITTING DATABASE by Farihah Noushene.... Spring comes with a family of data access frameworks that integrates well
http://www.roseindia.net/tutorialhelp/comment/82106
Direct is a platform and language agnostic remote procedure call (RPC) protocol. Ext Direct allows for seamless communication between the client side of an Ext JS application and any server platform that conforms to the specification. Ext Direct is stateless and lightweight, supporting features like API discovery, call batching, and server to client events.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

Ext Direct is partially based on JSON (see RFC 4627), and utilizes HTML Form-based File Uploads (RFC 1867, RFC 2388).

The Server MAY support publishing its API to Clients via an API discovery mechanism. If API discovery is supported, the Server MUST respond to HTTP GET requests at a preconfigured URI, returning a document with a correct content type for the browser to interpret this document as JavaScript code.

The Server API declaration MUST be valid JavaScript code that results in creation of a set of nested Objects assigned to a variable that can later be passed to Ext Direct initialization code in the Client application. It is RECOMMENDED that the code also conforms to the stricter rules of JSON object syntax, for the benefit of non-JavaScript implementations that might try to parse the API declarations as JSON.

An example of API declaration code:

    var Ext = Ext || {};
    Ext.REMOTING_API = {
        "id": "provider1",
        "url": "ext/direct/router",
        "type": "remoting",
        "actions": {
            "Album": [{
                "name": "getAll",
                "len": 0
            }, {
                "name": "add",
                "params": ["name", "artist"],
                "strict": false
            }, {
                "name": "delete",
                "len": 1
            }]
        }
    };

The JavaScript Object of the API declaration MUST contain the following mandatory properties:

- url - The Service URI for this API.
- type - MUST be either "remoting" for Remoting API, or "polling" for Polling API.
- actions - A JavaScript Object listing all Actions and Methods available for a given Remoting API.
MUST be omitted in Polling API declarations.

The API declaration MAY also contain the following OPTIONAL properties:

- id - The identifier for the Remoting API Provider. This is useful when there is more than one API in use.
- namespace - The Namespace for the given Remoting API.
- timeout - The number of milliseconds to use as the timeout for every Method invocation in this Remoting API.

Any other properties are OPTIONAL.

Each Action within the actions property of the API declaration is an Array of Objects that represent Methods. Actions do not have properties in themselves. Action names MAY be nested, i.e. an Action may contain other Actions as well as Methods.

Method declaration MUST have the following properties:

- name - The name of this Method. MUST be unique within its Action.

Each Method is fully qualified by Action and Method names concatenated with the dot character (.), prefixed by an optional Namespace:

    [Namespace.]Foo.Bar.baz

Where Foo is the outer Action name, Bar is the inner Action name, and baz is the Method name.

Method declaration MUST have one of the following mutually exclusive properties that describe the Method's calling convention:

- len - The number of arguments required for Ordered Methods. This number MAY be 0 for Methods that do not accept arguments.
- params - An array of parameters supported by a Named Method. This array MAY be empty.
- formHandler - A JavaScript Boolean value of true indicates that this Method accepts Form submits.

Ordered Methods MUST always conform to their calling convention. When a Remoting Method proxy function is called for an Ordered Method, it MUST be supplied with exactly the number of arguments required, in exactly the same order as required by the Method. If fewer than the required number of arguments is passed, the Router MAY choose to return an Exception without invoking the actual Method.
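Because API declarations are RECOMMENDED to follow strict JSON object syntax, a non-JavaScript client can consume them without a JavaScript engine. The following Python sketch (illustrative only, not part of the specification; the brace-slicing heuristic is an assumption about the declaration's shape) extracts the object literal and parses it with a stock JSON parser:

```python
import json

# A server API declaration as served to the browser (from the example above).
api_js = '''var Ext = Ext || {};
Ext.REMOTING_API = {
    "id": "provider1",
    "url": "ext/direct/router",
    "type": "remoting",
    "actions": {
        "Album": [{ "name": "getAll", "len": 0 },
                  { "name": "add", "params": ["name", "artist"], "strict": false },
                  { "name": "delete", "len": 1 }]
    }
};'''

# Since the object literal is strict JSON, slice out the right-hand side
# of the assignment and parse it directly.
start = api_js.index("{", api_js.index("Ext.REMOTING_API"))
end = api_js.rindex("}")
api = json.loads(api_js[start:end + 1])

print(api["type"])                                   # remoting
print([m["name"] for m in api["actions"]["Album"]])  # ['getAll', 'add', 'delete']
```

A declaration that used unquoted keys or trailing commas would still be valid JavaScript, but this shortcut would fail; that is exactly why the specification recommends the stricter syntax.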
Named Methods MAY use the strict property to control how the arguments will be checked and passed to the Server when a Method is called:

- When strict is set to true, only arguments with names listed in the params array are sent to the Server; any other arguments are discarded. This is the default behavior.
- When strict is set to false, all arguments are passed to the Server. The Router SHOULD pass all arguments to the actual Method.

The Router MAY choose to return an Exception if some of the listed parameters are missing, and omit invoking the actual Method. If the params Array is empty and the strict property is set to false, the Router MUST NOT perform any argument checks and MUST pass all arguments to the invoked Method.

An example of a Named Method with all optional parameters:

"actions": {
    "TestAction": [{
        "name": "named_no_strict",
        "params": [],
        "strict": false
    }]
}

A Method declaration MAY contain the optional metadata property that defines the type of Call Metadata accepted by the Method. If the metadata property is not present in a Method declaration, the Client MUST NOT send Call Metadata to the Server for any invocation of that Method.

The metadata property, if present, MUST be a JavaScript object with the following properties:

- len - The number of arguments required for by-position Call Metadata. This number MUST be greater than 0.
- params - The Array of names of the parameters supported for by-name Call Metadata. This Array MAY be empty.
- strict - A JavaScript Boolean value that controls how Call Metadata arguments are checked.

When present, Call Metadata arguments MUST conform to their calling convention. The Call Metadata calling convention MAY be different from the main Method arguments' calling convention; e.g. an Ordered Method MAY accept by-name Call Metadata, or a Named Method MAY accept by-position Call Metadata. The argument checks are performed the same way as with main Method arguments.
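The strict rules for Named Methods can be illustrated with a short Python sketch. This is an illustrative client-side filter, not normative text; `named_call_data` is a hypothetical helper name:

```python
def named_call_data(declaration, arguments):
    """Build the arguments actually sent for a Named Method call,
    applying the strict rules: strict defaults to true, in which case
    only arguments named in params are sent and the rest are discarded."""
    params = declaration.get("params", [])
    strict = declaration.get("strict", True)   # true is the default
    if strict:
        # only arguments with names listed in params are sent
        return {k: v for k, v in arguments.items() if k in params}
    # strict == false: all arguments are passed to the Server
    return dict(arguments)

decl_strict = {"name": "add", "params": ["name", "artist"]}
decl_loose = {"name": "named_no_strict", "params": [], "strict": False}

print(named_call_data(decl_strict, {"name": "x", "artist": "y", "junk": 1}))
# junk is discarded under the default strict behavior
print(named_call_data(decl_loose, {"anything": "goes"}))
# with params empty and strict false, everything is passed through
```

The same check logic applies to Call Metadata arguments, using the len/params/strict members of the metadata object instead.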
Some examples of Method declarations accepting Call Metadata:

"actions": {
    "TestAction": [{
        "name": "meta1",
        "len": 0,
        "metadata": { "len": 1 }
    }, {
        "name": "meta2",
        "len": 1,
        "metadata": { "params": ["foo", "bar"], "strict": false }
    }, {
        "name": "meta3",
        "params": [],
        "strict": false,
        "metadata": { "len": 3 }
    }, {
        "name": "meta4",
        "params": ["foo", "bar"],
        "metadata": { "params": ["baz", "qux"] }
    }]
}

Declaring a Polling API is OPTIONAL. If the Server implements more than one Event Provider, it is RECOMMENDED to include Polling API declarations along with the Remoting API declaration in the same JavaScript document.

An example of an API declaration with one Remoting API and one Polling API:

var Ext = Ext || {};

Ext.REMOTING_API = {
    "id": "provider1",
    "type": "remoting",
    "url": "ext/direct/router",
    "actions": {
        "User": [{
            "name": "read",
            "len": 1
        }, {
            "name": "create",
            "params": ["id", "username"]
        }]
    }
};

Ext.POLLING_API = {
    "id": "provider2",
    "type": "polling",
    "url": "ext/direct/events"
};

A remoting call is represented by sending a Request object, or multiple Request objects, to a Server. Requests are encoded in JSON and are sent as the raw payload of an HTTP POST request to the Service URI advertised as the url in the API declaration. There MUST NOT be any other data present in the HTTP POST, except a valid JSON document containing one or more Requests. Among the HTTP headers for the POST request, there MUST be a Content-Type header with the value "application/json". The Client MUST use UTF-8 character encoding for the Request document.

A Request is an Object with the following members:

- type - MUST be a string "rpc".
- tid - The transaction id for this Request. MUST be an integer number unique among the Requests in one batch.
- action - The Action that the invoked Method belongs to. MUST be specified.
- method - The Remoting Method that is to be invoked. MUST be specified.
- data - A set of arguments to be passed to the called Remoting Method.
The data member MUST be either null for Methods that accept 0 (zero) parameters, an Array for Ordered Methods, or an Object for Named Methods.

The metadata member is an OPTIONAL set of meta-arguments to be made available to the called Remoting Method, if provided by the Client. If no metadata is associated with the call, this member MUST NOT be present in the Request.

A typical JSON encoded Request object for an Ordered Method may look like this:

{"type":"rpc","tid":1,"action":"Foo","method":"bar","data":[42,"baz"]}

A typical JSON encoded Request object for a Named Method may look like this:

{"type":"rpc","tid":2,"action":"Foo","method":"qux","data":{"foo":"bar"}}

A typical JSON encoded Request object for an Ordered Method with by-name Call Metadata may look like this:

{"type":"rpc","tid":3,"action":"Foo","method":"fred","data":[0],
 "metadata":{"borgle":"throbbe"}}

Remoting Requests MAY be batched, in which case the Requests MUST be sent as an Array of Request Objects with unique tid members:

[
    {"type":"rpc","tid":3,"action":"Foo","method":"frob","data":["foo"]},
    {"type":"rpc","tid":4,"action":"Foo","method":"blerg","data":["qux"]}
]

The Server MUST support Request batching, and SHOULD attempt to return call invocation Results or Exceptions in the same order.

A response to a Remoting call MUST contain either a Result or an Exception for each Request. Responses are encoded in JSON and returned to the Client as the raw HTTP response payload, with a Content-Type header of "application/json" and character encoding of UTF-8. If Requests were sent as a batch, the Server MUST return the corresponding Responses only after all Requests have been processed by the Router, and the Responses SHOULD follow the same order as the original Requests. For each Response, the corresponding tid member of the original Request MUST be passed back unchanged by the Server.

A Result is an Object with the following members:

- type - MUST be a string "rpc".
- result - The data returned by the Method.
The result member MUST be present in the Response object, but MAY be null for Methods that do not return any data.

A typical JSON encoded Array of Response objects may look like this:

[
    {"type":"rpc","tid":1,"action":"Foo","method":"bar","result":0},
    {"type":"rpc","tid":2,"action":"Foo","method":"baz","result":null}
]

An Exception is an Object with the following members:

- type - MUST be a string "exception".
- message - The error message. MUST be present.
- where - An OPTIONAL description of where exactly the exception was raised. MAY contain a stack trace or additional information.

A typical JSON encoded Exception may look like this:

{
    "type": "exception",
    "tid": 3,
    "action": "Foo",
    "method": "frob",
    "message": "Division by zero",
    "where": "... stack trace here ..."
}

A remoting form invocation is represented by submitting an HTML form with an HTTP POST request. The form content MUST conform to the HTML 4.01 specification. The Server MUST support both "application/x-www-form-urlencoded" and "multipart/form-data" content types for backwards compatibility. The Client MAY choose to implement only "multipart/form-data" encoding. There MUST be only one Method invocation per submitted form. The Client MUST use form submission for each Method with the Form Handler calling convention declared in the Remoting API.

The form SHALL contain the following fields:

- extType - MUST be a string "rpc".
- extTID - The transaction id for this Request. MUST be an integer number unique among the Requests in one batch.
- extAction - The Action that the invoked Method belongs to. MUST be specified.
- extMethod - The Remoting Method that is to be invoked. MUST be specified.
- extUpload - A stringified Boolean value ("true" or "false") indicating that file uploads are attached to this form submission.

The form MAY contain the metadata field if there is Call Metadata associated with the given form submission. The form MAY contain other fields in addition to the mandatory ones listed above.
These additional fields MUST be passed to the invoked Method as Named arguments.

When the form is used to upload files, the encoding type MUST be "multipart/form-data".

The Server MUST respond to the form submission with a JSON document containing a valid Response or Exception object as described in sections 4.2.1 and 4.2.2. The document MUST have a content type of "application/json" and UTF-8 character encoding.

When the form request contained one or more file uploads, the Server MUST return a valid HTML document with the correct content type for the browser to interpret this document as HTML. The document MUST have UTF-8 character encoding. The HTML document MUST contain a valid JSON encoded Response or Exception as described in sections 4.2.1 and 4.2.2, enclosed in an HTML <textarea> element.

An example of a file upload response may look like this:

<!DOCTYPE html>
<html>
    <head>
        <title>File upload response</title>
    </head>
    <body>
        <textarea>{JSON RESPONSE HERE}</textarea>
    </body>
</html>

The Server MAY implement an OPTIONAL event polling mechanism. Event polling is performed by periodically making HTTP GET requests to the Server. There MAY be more than one Event Provider per Server; in that case, each Event Provider MUST have a separate Service URI.

An event poll request in its basic form is an HTTP GET request to the Service URI of the polled Event Provider. The Server MUST maintain a list of active Poll Handler methods for every Event Provider, and invoke each Poll Handler method every time a poll request is made. The Server MAY support passing arguments to Poll Handler methods via the HTTP GET request URI, but it is not required.

An event poll response MUST be a valid JSON document with a correct content type for the browser to interpret this document as JSON. The document MUST use UTF-8 character encoding. The event poll response SHOULD contain an array of Event objects. This array MAY be empty if no Events are pending for the given request.
The Server MUST NOT include Exception objects in the response array.

An Event object MUST contain the following properties:

- type - MUST be a string "event".
- name - The Event name. MUST be a string.
- data - The Event data. SHOULD be any valid JSON data.

An example of a typical Event object:

{
    "type": "event",
    "name": "progressupdate",
    "data": {
        "processId": 42,
        "progress": 100
    }
}

Ext Direct makes use of a number of defined terms, among them the global window object and fully qualified Method names of the form [Namespace.]Action.Method.
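To exercise the remoting wire formats described above, here is a small Python sketch of a client's bookkeeping. It is illustrative only and not part of the specification; the helper names are hypothetical:

```python
import itertools
import json

_tid = itertools.count(1)   # tids must be unique within a batch

def make_request(action, method, data, metadata=None):
    """Build one Request object. data is None (encoded as null) for
    zero-argument Methods, a list for Ordered Methods, or a dict for
    Named Methods."""
    request = {"type": "rpc", "tid": next(_tid),
               "action": action, "method": method, "data": data}
    if metadata is not None:   # MUST NOT be present when there is none
        request["metadata"] = metadata
    return request

def pair_responses(responses):
    """Split a batch of Responses into Results and Exceptions, keyed
    by tid so they can be matched back to the original Requests."""
    results, exceptions = {}, {}
    for response in responses:
        if response["type"] == "exception":
            exceptions[response["tid"]] = response["message"]
        else:
            results[response["tid"]] = response["result"]
    return results, exceptions

batch = [make_request("Foo", "frob", ["foo"]),
         make_request("Foo", "blerg", ["qux"])]
payload = json.dumps(batch)   # raw body of the HTTP POST,
                              # Content-Type: application/json

# a batch of Responses as the Server might return them:
responses = [
    {"type": "rpc", "tid": 1, "action": "Foo", "method": "frob", "result": 0},
    {"type": "exception", "tid": 2, "action": "Foo", "method": "blerg",
     "message": "Division by zero"},
]
results, exceptions = pair_responses(responses)
print(results, exceptions)
```

Note that the tid-based pairing is what makes batching safe even when a Server returns Responses out of order.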
http://docs.sencha.com/extjs/5.1.2/guides/backend_connectors/direct/specification.html
Over the past year or so I’ve been digging fairly deeply into curves, mostly into Bezier curves specifically. While digging around, I’ve found many mentions of the De Casteljau algorithm for evaluating Bezier curves, but never much in the way of a formal definition of what the algorithm actually is, or practical examples of it working. Now that I understand the De Casteljau algorithm, I want to share it with you folks, and help there be more useful Google search results for it.

The De Casteljau algorithm is more numerically stable than evaluating Bernstein polynomials, but it is slower. Which method of evaluating Bezier curves is more appropriate depends on your specific usage case, so it’s important to know both.

If you are looking for the mathematical equation of a Bezier curve (the Bernstein form which uses Bernstein basis functions), you have come to the right place, but the wrong page! You can find that information here: Easy Binomial Expansion & Bezier Curve Formulas

Onto the algorithm!

The De Casteljau Algorithm

The De Casteljau algorithm is actually pretty simple. If you know how to do a linear interpolation between two values, you have basically everything you need to be able to do this thing. In short, the algorithm to evaluate a Bezier curve of any degree N is to just linearly interpolate between two curves of degree N-1. Below are some examples to help show some details.

The simplest version of a Bezier curve is a linear curve, which has a degree of 1. It is just a linear interpolation between two points A and B at time t, where t is a value from 0 to 1. When t has a value of 0, you will get point A. When t has a value of 1, you will get point B. For values of t between 0 and 1, you will get points along the line between A and B. The equation for this is super simple and something you’ve probably seen before: P(t) = A*(1-t) + B*t.

The next simplest version of a Bezier curve is a quadratic curve, which has a degree of 2 and control points A, B, C.
A quadratic curve is just a linear interpolation between two curves of degree 1 (aka linear curves). Specifically, you take a linear interpolation between A and B, and a linear interpolation between B and C, and then take a linear interpolation between those two results. That will give you your quadratic curve.

The next version is a cubic curve which has a degree of 3 and control points A, B, C, D. A cubic curve is just a linear interpolation between two quadratic curves. Specifically, the first quadratic curve is defined by control points A, B, C and the second quadratic curve is defined by control points B, C, D.

The next version is a quartic curve, which has a degree of 4 and control points A, B, C, D, E. A quartic curve is just a linear interpolation between two cubic curves. The first cubic curve is defined by control points A, B, C, D and the second cubic curve is defined by control points B, C, D, E.

So yeah, an order N Bezier curve is made by linearly interpolating between two Bezier curves of order N-1.

Redundancies

While simple, the De Casteljau algorithm has some redundancies in it, which is the reason that it is usually slower to calculate than the Bernstein form. The diagram below shows how a quartic curve with control points A, B, C, D, E is calculated via the De Casteljau algorithm.

Compare that to the Bernstein form (where s is just 1-t):

P(t) = A*s^4 + B*4s^3*t + C*6s^2*t^2 + D*4s*t^3 + E*t^4

The Bernstein form removes the redundancies and gives you the values you want with the least amount of moving parts, but it comes at the cost of math operations that can give you less precision in practice, versus the tree of lerps (linear interpolations).

Sample Code

Pretty animations and intuitive explanations are all well and good, but here’s some C++ code to help really drive home how simple this is.
#include <stdio.h>

void WaitForEnter() {
    printf("Press Enter to quit");
    fflush(stdin);
    getchar();
}

float mix(float a, float b, float t) {
    // degree 1
    return a * (1.0f - t) + b * t;
}

float BezierQuadratic(float A, float B, float C, float t) {
    // degree 2
    float AB = mix(A, B, t);
    float BC = mix(B, C, t);
    return mix(AB, BC, t);
}

float BezierCubic(float A, float B, float C, float D, float t) {
    // degree 3
    float ABC = BezierQuadratic(A, B, C, t);
    float BCD = BezierQuadratic(B, C, D, t);
    return mix(ABC, BCD, t);
}

float BezierQuartic(float A, float B, float C, float D, float E, float t) {
    // degree 4
    float ABCD = BezierCubic(A, B, C, D, t);
    float BCDE = BezierCubic(B, C, D, E, t);
    return mix(ABCD, BCDE, t);
}

float BezierQuintic(float A, float B, float C, float D, float E, float F, float t) {
    // degree 5
    float ABCDE = BezierQuartic(A, B, C, D, E, t);
    float BCDEF = BezierQuartic(B, C, D, E, F, t);
    return mix(ABCDE, BCDEF, t);
}

float BezierSextic(float A, float B, float C, float D, float E, float F, float G, float t) {
    // degree 6
    float ABCDEF = BezierQuintic(A, B, C, D, E, F, t);
    float BCDEFG = BezierQuintic(B, C, D, E, F, G, t);
    return mix(ABCDEF, BCDEFG, t);
}

int main(int argc, char **argv) {
    struct SPoint {
        float x;
        float y;
    };

    SPoint controlPoints[7] = {
        { 0.0f, 1.1f },
        { 2.0f, 8.3f },
        { 0.5f, 6.5f },
        { 5.1f, 4.7f },
        { 3.3f, 3.1f },
        { 1.4f, 7.5f },
        { 2.1f, 0.0f },
    };

    // calculate some points on a sextic curve!
    const float c_numPoints = 10;
    for (int i = 0; i < c_numPoints; ++i) {
        float t = ((float)i) / (float(c_numPoints - 1));
        SPoint p;
        p.x = BezierSextic(controlPoints[0].x, controlPoints[1].x, controlPoints[2].x,
                           controlPoints[3].x, controlPoints[4].x, controlPoints[5].x,
                           controlPoints[6].x, t);
        p.y = BezierSextic(controlPoints[0].y, controlPoints[1].y, controlPoints[2].y,
                           controlPoints[3].y, controlPoints[4].y, controlPoints[5].y,
                           controlPoints[6].y, t);
        printf("point at time %0.2f = (%0.2f, %0.2f)\n", t, p.x, p.y);
    }

    WaitForEnter();
}

Here’s the output of the program:

Thanks to wikipedia for the awesome Bezier animations! Wikipedia: Bézier curve
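As an aside (not part of the original post): the C++ above hard-codes one function per degree, but since the algorithm is just "lerp between two curves of one lower degree", it generalizes to any number of control points. Here is a quick Python sketch of the iterative form of that same recursion:

```python
def lerp(a, b, t):
    # linear interpolation, same as mix() in the C++ above
    return a * (1.0 - t) + b * t

def de_casteljau(points, t):
    """Evaluate a Bezier curve of any degree: repeatedly lerp between
    adjacent control points until only one point remains."""
    points = list(points)
    while len(points) > 1:
        points = [lerp(points[i], points[i + 1], t)
                  for i in range(len(points) - 1)]
    return points[0]

# matches BezierQuadratic(0.0, 2.0, 0.5, 0.5) from the C++ above:
print(de_casteljau([0.0, 2.0, 0.5], 0.5))   # 1.125
```

For 2D curves, you would run this once on the x coordinates and once on the y coordinates, exactly as the C++ main() does with BezierSextic.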
https://blog.demofox.org/2015/07/05/the-de-casteljeau-algorithm-for-evaluating-bezier-curves/
Update of /cvsroot/sbcl/sbcl/src/runtime
In directory sc8-pr-cvs8.sourceforge.net:/tmp/cvs-serv24087/src/runtime

Modified Files:
	x86-win32-os.c
Log Message:
1.0.0.3: correct stack start addresses on Windows
 * based on patch by Alastair Bridgewater.
 * add AB to initials glossary and sort it by initials.

Index: x86-win32-os.c
===================================================================
RCS file: /cvsroot/sbcl/sbcl/src/runtime/x86-win32-os.c,v
retrieving revision 1.3
retrieving revision 1.4
diff -u -d -r1.3 -r1.4
--- x86-win32-os.c	7 Apr 2006 11:41:47 -0000	1.3
+++ x86-win32-os.c	30 Nov 2006 17:03:50 -0000	1.4
@@ -42,32 +42,32 @@
 #include "validate.h"

 size_t os_vm_page_size;

-int arch_os_thread_init(struct thread *thread) {
+int arch_os_thread_init(struct thread *thread)
+{
     {
         void *top_exception_frame;
         void *cur_stack_end;
         void *cur_stack_start;
-
+        MEMORY_BASIC_INFORMATION stack_memory;
+
         asm volatile ("movl %%fs:0,%0": "=r" (top_exception_frame));
         asm volatile ("movl %%fs:4,%0": "=r" (cur_stack_end));

-        /*
-         * Can't pull stack start from fs:4 or fs:8 or whatever,
+        /* Can't pull stack start from fs:4 or fs:8 or whatever,
          * because that's only what currently has memory behind
-         * it from being used. Our basic options are to know,
-         * a priori, what the stack size is (1 meg by default)
-         * or to grub the default size out of the executable
-         * header in memory by means of hardcoded addresses and
-         * offsets.
-         *
-         * We'll just assume it's 1 megabyte. Easiest that way.
-         */
-        cur_stack_start = cur_stack_end - 0x100000;
+         * it from being used, so do a quick VirtualQuery() and
+         * grab the AllocationBase. -AB 2006/11/25
+         */
+        if (!VirtualQuery(&stack_memory, &stack_memory, sizeof(stack_memory))) {
+            fprintf(stderr, "VirtualQuery: 0x%lx.\n", GetLastError());
+            lose("Could not query stack memory information.");
+        }
+        cur_stack_start = stack_memory.AllocationBase;

-        /*
-         * We use top_exception_frame rather than cur_stack_end
-         * to elide the last few (boring) stack entries at the
-         * bottom of the backtrace.
+        /* We use top_exception_frame rather than cur_stack_end to
+         * elide the last few (boring) stack entries at the bottom of
+         * the backtrace.
          */
         thread->control_stack_start = cur_stack_start;
         thread->control_stack_end = top_exception_frame;
http://sourceforge.net/p/sbcl/mailman/message/13604073/
Before you start

About this tutorial

In this tutorial, learn how to couple several IBM products to create an infrastructure for application logging. Use IBM solidDB, a fast in-memory database, as a cache on the application side to decouple the application from the logging infrastructure. Use WebSphere MQ to persistently store and transfer messages to WebSphere Message Broker (WMB), where you can analyze and transform the messages into different XML output formats. Finally, store the messages in DB2 for Linux®, UNIX®, and Windows®. The pureXML capabilities make it possible to store the log files in their native XML format and later query and analyze the logs.

Objectives

This tutorial introduces the challenges of application logging, how to use XML in this context, and how to set up an infrastructure that brings application logging into your business. In this tutorial, learn how to work with IBM tools, including IBM solidDB, WebSphere Message Broker, and DB2 for Linux, UNIX, and Windows.

Prerequisites

This tutorial is written for users whose skills and experience are at a beginning to intermediate level. You should have a general familiarity with installing and using software, especially DB2, WebSphere MQ, WebSphere Message Broker, and solidDB.

System requirements

To set up the infrastructure introduced in this tutorial, you need a Windows box (Server 2003, Vista, or Server 2008) with at least 2GB of free disk space, full administrator access to the box, and the ability to reboot the box several times a day. You should not use a production server machine, but a dedicated box where you can safely play around.

Application logging

With legislative changes, like the introduction of the Sarbanes-Oxley Act and its need for detailed logging of activities, as well as recent economic changes, like service orientation and on-demand business, keeping track of who is doing what within an enterprise, and therefore application logging, is becoming more important.
Logging is no longer a feature used just for debugging when something goes wrong inside an application, but a permanent process to make all transactions traceable and accountable. In business-critical applications, like customer databases or ATM terminals, logging is a vital requirement to keep track of all events. Therefore, logs have to be stored reliably and made searchable.

XML is at the core of SOA and Web services. Moreover, it is flexible and thus ideal for log messages, where information, new log types, and applications may be added over time.

Customers usually distinguish between at least two different types of logs: technical logs that capture environment information (what machine, which OS, and so on), and functional logs, which capture what is done. Both log types can also be mixed into a single structure. Logs of both types contain a lot of information; some parts are business-critical, while other parts are just informational. Usually, many rather small log files are produced, one for every operation or step an application performs. Therefore, tens of millions or even more log files per month can accumulate in a single enterprise. Despite the amount, all files need to be processed efficiently, accurately, and without loss. In addition, client applications must not be impacted by log file processing.

Assuming one log file is between 1KB and 20KB in size (about 10KB on average) and you have to deal with up to 10 million log files per day, you'll need roughly 100GB of storage space for uncompressed data for just one day, and approximately 3TB of storage space for uncompressed data for a whole month. Since the clients generating the log files are running on lightweight and specialized hardware, they do not provide storage space for the load of log files they produce. Therefore, you need centralized storage with large capacities, where you can store and analyze the log files. Databases have proven to be the best available storage systems for this type of task.
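As a sanity check on those storage figures, here is the back-of-envelope arithmetic in Python. The 10KB average is an assumption, roughly the midpoint of the stated 1KB to 20KB range:

```python
# Back-of-envelope storage estimate for uncompressed log files
files_per_day = 10_000_000   # up to 10 million log files per day
avg_size_kb = 10             # assumed average; files range from 1KB to 20KB

gb_per_day = files_per_day * avg_size_kb / 1024 / 1024
tb_per_month = gb_per_day * 30 / 1024

print(f"{gb_per_day:.0f} GB/day, {tb_per_month:.1f} TB/month")
# → 95 GB/day, 2.8 TB/month
```

DB2's compression of XML data, discussed later, is what makes volumes like this manageable in practice.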
Database management systems with the ability to natively store and query XML documents will facilitate application logging. Listing 1 shows an example XML file:

Listing 1. Sample XML file

<?xml version="1.0" encoding="US-ASCII"?>
<File>
  <Record>
    <Header version="1">
      <Time>2002-11-15 18:19:17.6</Time>
      <Type>INFORMATION</Type>
      <Id>-471559096676384768</Id>
    </Header>
    <Application>
      <Name>SecurityWebService</Name>
      <Function>GetValue</Function>
      <User>JDoe</User>
      <Result>3171861797959368704</Result>
      <Params>
        <Param>
          <Type>Object</Type>
          <Value>Object</Value>
        </Param>
        <Param>
          <Type>Object</Type>
          <Value>security.ssl</Value>
        </Param>
        <Param>
          <Type>Object</Type>
          <Value>0</Value>
        </Param>
      </Params>
      <CallTime>2004-11-15 16:19:17.7</CallTime>
      <StartTime>2006-10-18 12:18:14.7</StartTime>
      <EndTime>2000-11-16 18:14:16.4</EndTime>
      <ReturnTime>2004-11-12 10:10:12.7</ReturnTime>
    </Application>
    <System>
      <Name>INTRANET01</Name>
      <State>498308015556919296</State>
    </System>
    [..]
  </Record>
</File>

Log shipping

A centralized repository for the logs (a database system, for example) is used to integrate the activities from various applications. The data can be analyzed and the "big picture" across all applications can be created. Having the applications insert their logs directly into the central repository is not feasible for many reasons. To reliably move the log information from the application to the database, a message queue is used. To further decouple the application from the message queue, a small in-memory database system can be used. It also supports buffering messages during peak loads. Since losing log files in case of failure is not tolerable, all systems involved in log shipping must be transactional.

Scenario

Figure 1 shows the sample scenario architecture you'll set up in this tutorial:

Figure 1. The tutorial's architecture

Assume there are multiple client applications generating log files and saving them in the solidDB cache databases.
IBM solidDB is a fast in-memory cache database that is optimized for high performance. Since it keeps all data in memory and does not write any data to disk, except transaction logs for persistence, it is fast but also reliable. A Java tool will then pick the log files up and transfer them into WebSphere MQ. WMB will analyze and transfer the files to the DB2 back-end database.

To simulate application logging clients, this tutorial uses a Java tool that generates and loads XML files into the cache databases. The tool, the Java Load Generator Tool, is included with this tutorial (see Downloads). Running the tool without command line options prints out usage information. This tutorial uses this tool to load messages into solidDB.

To ship log files into the back-end database, this tutorial uses IBM WebSphere Message Broker. It offers transactional and persistent message queues and routing mechanisms to transfer messages from one location to another. Additionally, you can use WebSphere Message Broker to analyze and transform the messages.

Figure 2 shows the message flows introduced in this tutorial:

Figure 2. WebSphere message flows

Messages are read from two queues, holding different message formats. Queue 1 (Q1) will hold messages consisting of one element named Record. Every message will either be routed to another queue (Q3) or to the back-end database. Queue 2 (Q2) will hold File messages that consist of one or more Record elements. Each File message will be split into single Record messages and a summary message per Record. Record messages will then be routed to the back-end database, while Summary messages will be stored in another queue (Q4).

WMB can interact with IBM DB2 to store messages transactionally. That way, no message can be lost due to (network) failures. IBM DB2 for Linux, UNIX, and Windows will finally store all the messages. It uses pureXML to natively store XML files in a dedicated data structure.
That way, queries can be performed efficiently while preserving the original structure. DB2 also supports compression of XML files, which will save a lot of storage space.

Preparation

As a first step, you need to download and install the following software packages:

- WebSphere MQ 7.0
- WebSphere Message Broker 6.1
- WebSphere Message Broker Toolkit V6.1
- solidDB 6.3
- DB2 9.5 Express-C

Each software package should be installed with the default options. This tutorial describes how to set up and configure them in the following sections.

Database configuration

For storing the log files, this tutorial uses a simple one-table approach with the following DDL statements for solidDB and DB2 (for production systems, enhanced features for the physical design could be applied):

Table 1. Database definitions

solidDB at the front end does not require a schema name. For DB2 at the back end, use LOGAPP as the schema name.

Adding new message queues

All messages read from solidDB will be stored on WebSphere MQ. Therefore, you need dedicated queues that hold your messages. As Table 2 shows, you need five new queues (note the prefix AL_ for "Application Logging"):

Table 2. Message queues

Figure 3. A look into WebSphere MQ Explorer

Now you've set up your work environment and are ready to get started with application logging. This tutorial uses WebSphere Message Broker Toolkit, an Eclipse-based tool for developing message flow applications. When you start the application for the first time, you need to set up a workspace folder for all project files.

Figure 4. Selecting a workspace

Now you need to create a new message flow project that holds all flows you develop later. Right-click on the empty project list, and select New > Message Flow Project:

Figure 5. Create new message flow project

A window pops up, asking for the new project's name. Enter logApp as the name for the new project:

Figure 6.
Create new message flow project

Message Broker connection

Next, set up a connection to the Message Broker domain instance you created earlier. Using this connection, you can interact with the Message Broker later on. To do this, right-click on your newly created project, and select New > Other:

Figure 7. New domain connection

In the opening window, look for and select the item Domain Connection:

Figure 8. New domain connection

You now need to enter the name of your Queue Manager. Since we used the default configuration, it is named WBRK61_DEFAULT_QUEUE_MANAGER, using port 2414:

Figure 9. New domain connection

Enter logAppConnection as the connection name. After clicking Finish, confirm that you want to create a new server project in the opening message box:

Figure 10. New domain connection

You are now connected to the Message Broker. This is necessary to deploy projects and run them.

Importing message definitions

Next, let's import the XML Schema definitions you'll be using in Message Broker Toolkit. That way, WMB will be able to recognize and parse messages coming from solidDB. Let's import the two files into WebSphere Message Broker Toolkit.

First, you need to add a Message Set to your project. This set will hold all message definitions. Right-click on the message flow project, and select New > Message Set:

Figure 11. New message set

Next, specify a name for the new message set. Since the workspace does not contain a message set project, you need to also enter a name for the new project. The new project, containing the new message set, will be added to your working set. Name the message set logAppMessages and the project logAppMessageSet:

Figure 12. New message set

You need to specify what types of data the new message set holds. Since you want to process XML data, select XML documents:

Figure 13. New message set

You have now entered all information needed, and the message set is ready to be created.

Figure 14.
New message set

This message set will be the container for all custom message formats you use in this tutorial. You are now ready to import the XML Schema files into the message set. Right-click on your message set project, and select New > Message Definition File From > XML Schema File:

Figure 15. Import message definitions

Next, enter the path to the log file schema documents:

Figure 16. Import message definitions

Finally, choose the data types and messages you want to import. In this case, select all the boxes by clicking on the Select All button:

Figure 17. Import message definitions

You have successfully imported the first XML Schema definition into WebSphere Message Broker. To import the schema LogSummary.xsd to your workspace, repeat the steps above.

You need to modify the message set to support single Record elements in your message. Double-click on the LogFile message definition in your workspace to get the definitions to open in the right pane.

Figure 18. Modifying the message definition

You need to rename RecordType to Record; otherwise, all the messages containing a Record as root element will be named RecordType instead of Record. The renaming is already done in Figure 19:

Figure 19. Modifying the message definition

Now you can use the definitions to create mappings between different formats.

Reading messages from solidDB using Java

First of all, your log messages need to be transferred from the solidDB cache into WebSphere MQ. Generally, there are several ways to interact with databases in WebSphere. But unfortunately, solidDB's ODBC driver does not provide support for WMB, and solidDB's JDBC driver does not yet support distributed transactions. These limitations narrow the choices to a custom Java program for reading messages from solidDB. You also have to make sure to include solidDB's jar file (SolidDriver2.0.jar) in your Java program or in the classpath. Otherwise, you are unable to connect to solidDB.
Listing 4, below, provides a code snippet to connect to solidDB, but before you can connect, you have to correctly fill out the parameters (host, port, user, pass):

Listing 4. Connecting to solidDB

String connString;
connString = "jdbc:solid://" + host + ":" + port + "/" + user + "/" + pass;

Class.forName("solid.jdbc.SolidDriver").newInstance();
Connection connection = DriverManager.getConnection(connString);

Statement statement;
statement = connection.createStatement();

ResultSet resultSet;
resultSet = statement.executeQuery("SELECT doc FROM LOGS");

Listing 5 provides another code snippet to connect to the message queue:

Listing 5. Connecting to WebSphere MQ

String queueManagerName = "WBRK61_DEFAULT_QUEUE_MANAGER";
String queueName = "AL_INQUEUE1"; // or AL_INQUEUE2

MQQueueManager queueManager;
queueManager = new MQQueueManager(queueManagerName);

MQQueue queue;
queue = queueManager.accessQueue(queueName, CMQC.MQOO_OUTPUT, null, null, null);

After you have established connections to the source and the sink of the log messages, you need to loop over the input messages and insert each of them into the queue:

Listing 6. Transferring messages from solidDB to WebSphere MQ

while (resultSet.next()) {
    byte[] message;
    message = resultSet.getBytes(1);

    MQMessage queueMessage;
    queueMessage = new MQMessage();
    queueMessage.correlationId = CMQC.MQCI_NONE;
    queueMessage.messageId = CMQC.MQMI_NONE;
    queueMessage.write(message);

    queue.put(queueMessage, queueMessageOptions);
}

The full Java code for these snippets is included in the Downloads section. After you have the messages in the queue, you might want to do something with them before saving them straight to the back-end database. So let's start with a few examples of how to modify XML messages using WebSphere Message Broker.

Using WMB to analyze and transform messages

In order to use the previously created broker connection and the imported message types, you need to add references to your message flow project.
To select referenced projects, right-click on your message flow project logApp and select Properties to open its properties page. In the opening window, select Project References. Click on the two check boxes on the right: logAppMessageSet and Servers. Figure 20. Message flow project properties You are now ready to create the message flows in WebSphere Message Broker Toolkit. Click on your message flow project, and select New > Message Flow: Figure 21. New message flow Enter logFlow1 as the name for the new flow, and click on Finish: Figure 22. New message flow The new flow will open in the upper right window, so you can add and configure nodes in it. You can define sources, sinks, transformations, and other operations to perform on messages. Routing messages dependent on their content Let's first take a look at how to analyze messages inside the flow and route them dependent on their content. Routing can be used to separate message types. For example, you can forward debug messages to developer machines and business-critical messages, like orders or billing information, to persistent storage. Figure 23 shows the message flows you will set up in this section. An MQInput node will read messages from the queues. A route node will then redirect messages to different sinks, dependent on their content. Some messages will be mapped to a different format using the mapping node. Figure 23. Sample message flow First, insert an MQInput node that reads messages from the message queue and puts them into your flow. In the menu to the left of the window, select WebSphere Message Broker, then drag and drop an MQInput node into the empty window on the right. This will be the source, fetching messages from the queue. You need to specify the name of the queue from which messages should be read. Click on the newly inserted node, and have a look at the properties window at the lower right. In the field Queue name, enter AL_INQUEUE1.
As mentioned, AL_INQUEUE1 will hold messages of type Record. The next node is a ResetContentDescriptor node (see Figure 23). This node is necessary to tell WMB the message content is XML and not just a BLOB. Insert the node and configure it, using the following preferences (as shown in Figure 24): - Select all the check boxes to reset message domain, reset message set, reset message type, and reset message format. - Specify XMLNSC for the message domain. - Specify logAppMessages for the message set. Figure 24. ResetContentDescriptor node Now you can use a route node to inspect the content of the message and route it dependent on its content. Assuming the queue holds messages of type Record, you want to route them to different output terminals, based on whether the value of the header element id is odd or even. After inserting the route node into the flow, right-click on it and use the context menu to rename the output terminals to Odd and Even. Figure 25. Route node context menu Next, add two patterns to the node, telling it how to route messages. Select the route node in the upper panel, and take a look at its properties page: Figure 26. Route node properties page To make your route node's properties page look like that in Figure 26, you have to add two filter patterns to the filter table. Click on Add… to open a new dialog box, where you can enter the first expression: $Body/Record/Header/Id mod 2 = 0. This pattern applies to all messages with even message ids in their header. After adding the first pattern, repeat the steps to add the second pattern, which applies to all odd ids. Second filter pattern: $Body/Record/Header/Id mod 2 = 1. With this configuration, the messages arriving at this node will be routed to one of the output terminals based on their content. You can now connect the two output terminals to different successors.
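For illustration, the two filter patterns amount to a simple parity test on the Header/Id value. Here is a minimal stand-alone sketch of that routing decision (plain Python; the dict is a hypothetical stand-in for the parsed Record message, and the real node evaluates the XPath-like patterns above):

```python
# Sketch of the route node's decision: even Header/Id values go to the
# "Even" terminal, odd values to the "Odd" terminal.
def route(record):
    # 'record' is a hypothetical dict standing in for the parsed message
    msg_id = record["Header"]["Id"]
    return "Even" if msg_id % 2 == 0 else "Odd"

print(route({"Header": {"Id": 42}}))  # -> Even
print(route({"Header": {"Id": 7}}))   # -> Odd
```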
Transforming messages into different formats Next, you'll use a mapping node to transform messages into another format. You need to insert a mapping node into the flow and connect one of the route node's output terminals to it. It does not matter whether you connect the odd or even output terminal. Double-click on the newly inserted node. A new window will open where the input and output message types can be specified: Figure 27. New message mapping Since we expect Record messages coming from this flow, select Record as input type. LogSummary will be the output type, as shown in Figure 27. In the next window, you can define mappings by either dragging and dropping elements from the left (input) side to the right (output) side or by entering the mapping expression for each element. You can also use integrated functions like fn:concat, as shown in the lower pane of Figure 28: Figure 28. New message mapping Finally, you need to insert two MQOutput nodes to write messages back to the queue. Connect the remaining terminal of the route node to one of the output queues, and connect the output terminal from the mapping node to the second output node. Your flow should now look similar to the example in Figure 23. Lastly, you need to configure the MQOutput nodes and specify a destination queue for the messages. Use AL_OUTQUEUE as the destination for the MQOutput node connected to the route node and AL_SUMMARYQUEUE for the one connected to the mapping node. Splitting messages using mapping nodes In this section, use WebSphere Message Broker to split an input message into several output messages and divide the output messages, according to their format, to different output terminals. Let's say you want to split a large file into smaller parts and create a summary for each part while preserving the parts, as shown in Figure 29: Figure 29. Splitting and summarizing a message Figure 30 shows the flow you want to set up: Figure 30.
Splitting and summarizing a message An MQInput node is connected to the already introduced ResetContentDescriptor node, followed by a mapping node (named Split). This is an extension to your previously created flow. The incoming connection from the first flow is visible at the top and attached to the MQOutput node named SUMMARY. You'll insert all nodes according to the picture first. The MQInput node will read messages from AL_INQUEUE2, so you have to make sure to configure this node as well as the ResetContentDescriptor node. The latter will use the same configuration as the first ResetContentDescriptor, so you can easily copy that one. Double-click on the node to open a window where you can enter the source and target message formats, as shown in Figure 31. Select File as the input type, assuming the queue holds messages of that type. And this time, select two messages as mapping/output targets: LogSummary and Record. This way, you can split one input message into multiple output messages. Figure 31. Define message types for mapping Now, for every message that reaches this node, you want to map each of the inner Record elements into its own message and also spawn a summary message (LogSummary) for every Record. Figure 32 and Figure 33 show the mapping. The for expression in the lower screenshot iterates over the Record elements in the input, generating a separate output message for each. Figure 32. Mapping node view Figure 33. Mapping node details After mapping and splitting, there is a problem you have to deal with: both message types, Record and LogSummary, are put on the same output terminal. You want to route these two message types to different flows. Use a JavaCompute node to deal with this issue. After inserting the node into the flow, double-click on it to open an assistant that helps you create a new Java project.
Replace the code in the auto-generated class with the example code in Listing 7. (The code is included in the Downloads section.)

Listing 7. Routing messages using JavaCompute

public class JavaRouteNode extends MbJavaComputeNode {
    public void evaluate(MbMessageAssembly assembly) throws MbException {
        MbOutputTerminal out = getOutputTerminal("out");
        MbOutputTerminal alt = getOutputTerminal("alternate");

        MbMessage message = assembly.getMessage();
        MbElement logMessage = message.getRootElement().getLastChild();

        if (logMessage.getFirstElementByPath("./Record") != null) {
            // Message is a complete log record
            out.propagate(assembly);
        } else if (logMessage.getFirstElementByPath("./LogSummary") != null) {
            // Message is a summary
            alt.propagate(assembly);
        } else {
            throw new IllegalArgumentException();
        }
    }
}

This node is invoked for every message, and it performs two XQueries to decide whether the message is a Record or a LogSummary. Depending on the XQuery result, the message is either sent to the output terminal or the alternate terminal (which is another output terminal). You've now seen how to route, transform, and analyze messages using WMB in several ways. Next, learn how to store some of the messages waiting in the queue into the back-end database.

Saving messages into DB2

Connecting to DB2

To write messages directly from WebSphere MQ to DB2, you first need to set up an ODBC connection to the target database. To add a new ODBC connection, you have to open the ODBC data source configuration in the Windows Control Panel. Click on Add… to create a new ODBC connection to your database: Figure 34. ODBC Data Source Administrator Since you want to connect to your default DB2 instance, select IBM DB2 ODBC DRIVER – DB2COPY1: Figure 35. Creating a new ODBC data source In the next window, enter LOGAPPDB as the data source's name, then click on the Add button: Figure 36.
Creating a new ODBC data source You now need to enter the user name and password WMB will use to connect to the database. Make sure you select the check box to Save password. Figure 37. Creating a new ODBC Data Source In the next pane, enter the database name, its host name, and port. Since WebSphere Message Broker and DB2 reside on the same machine, you can enter localhost as host name. After you click on OK, you can close all the open ODBC windows and return to the WebSphere Message Broker Toolkit. Figure 38. Creating a new ODBC data source Importing the database layout After configuring the database connection, you need to inform WebSphere Message Broker of the database table layout. To do this, you can directly import the database layout from DB2 by adding a connection to the Message Broker Toolkit. In the Broker application development view, switch to the Database Explorer tab and create a new database connection. A window will pop up, where you can specify connection parameters. After filling out the form with our parameters, you can click on Finish. Figure 39. Creating a database connection in WebSphere Message Broker Toolkit You are now connected to the database. Next, you have to import database definitions into the toolkit to enable message mappings to database tables. Right-click on the project list, and select New > Database Definition: Figure 40. New database definition A new window opens. The project list is empty because the project list does not contain any data design projects. Click on New to add a data design project that will hold the database definitions. Name the new project databaseDefinitions, and click on Next: Figure 41. New data design project You do not need to select any references to other projects, so click on Finish: Figure 42. New data design project After creating the data design project, you are taken back to the database definition window. The name of your newly created project is already selected. 
Make sure you select the correct DB2 version: Figure 43. New Database Definition In the next window, select the database connection you created earlier: Figure 44. Creating a database connection in WebSphere Message Broker Toolkit Now, enter the user ID and password to connect to the database: Figure 45. New database definition WMB Toolkit connects to the database and retrieves a list of all schemas. Select the LOGAPP schema: Figure 46. New database definition In the next window, select at least the table definitions: Figure 47. New database definition After importing the table definitions, you can use the tables with message flows. Creating the flow Now, you can create a new message flow that reads messages from the message queue (AL_XMLQUEUE) and stores them into DB2. The name of the flow will be storageFlow. It consists of three nodes, an MQInput node, configured to read messages from AL_XMLQUEUE, followed by a ResetContentDescriptor node, and a DatabaseInsert node. Figure 48 shows the flow layout: Figure 48. Message flow to store messages into DB2 Click on the DatabaseInsert node (named Backend in Figure 48), and switch to its basic properties. Enter the name of the previously created ODBC data source (LOGAPPDB). Double-click on the new DatabaseInsert node. A new window appears, asking you about the input and output message formats. Select Record as the input message type and your database table LOGS as the output target: Figure 49. New message map for the storage flow In the mapping window, you can drag and drop the elements from the left to a table column on the right. Since WMB does not know about the new pureXML features of DB2 and its ability to store XML, you cannot drag a complex type to a column. By doing this, the toolkit would create a submapping, assuming you want to map certain portions of the complex type into a single column. The toolkit expects you to concatenate or summarize sub elements to fit into a single database column. 
To map the entire element with all sub elements to a database column, you need to use a function that serializes the entire document. This can be done using esql:asbitstream($source/Record), as shown in Figure 50: Figure 50. Message map view You are finished developing all the necessary message flow nodes, and the project should be able to run on a Message Broker. Note: Deploying your project into Message Broker is not covered in this tutorial. To get information on how to create a Broker Archive (BAR) and deploy it in Message Broker, please refer to the WebSphere Message Broker Information Center. Conclusion This tutorial has shown you how to set up an infrastructure for application logging using several IBM products. IBM solidDB acts as an in-memory cache to decouple the application from the logging infrastructure. WebSphere MQ and WebSphere Message Broker are used to ship, analyze, and transform XML messages. Finally, DB2 for Linux, UNIX, and Windows and its pureXML feature are used to efficiently store and manage the XML data. There is no single product that offers the full logging infrastructure, but, as shown here, it is possible to build such an application logging infrastructure quickly. XML is at the core because it gives the flexibility to change and add to the message formats without impacting the infrastructure itself. Using DB2 with its SQL/XML and XQuery support, the XML data can be queried for failure analysis or auditing purposes. Downloads Notes - This is a sample ToXgene file used to generate log files. - This is sample code for the WebSphere Message Broker JavaRouteNode. - This is sample code to connect solidDB with WebSphere MQ. - This is a tool to generate and load log files into solidDB and DB2. Resources Learn - "Thinking XML: A glimpse into XML in the financial services industry" (developerWorks, February 2004): Take a look at some best practices for the adoption of XML in the financial services industry.
- "XML Transformation with WebSphere Message Broker V6" (developerWorks, December 2006): See how to use the XMLTransformation node in WebSphere Message Broker. - "Using coordinated transactions with DB2 Type 2 JDBC and WebSphere Message Broker" (developerWorks, March 2007): Implement XA coordinated transactions in a WebSphere Message Broker V6 Java Compute Node accessing a DB2 database (for Java developers). - "IBM WebSphere Developer Technical Journal: Running a standalone Java application on WebSphere MQ V6.0" (developerWorks, October 2006): Learn how to develop a J2SE application that sends and receives messages using WebSphere MQ. - WebSphere Message Broker Information Center: Find detailed instructions on how to complete the tasks that you need to perform to establish and maintain your broker environment. - "XML for DB2 Information Integration" (IBM, July 2004): Design the mapping between XML and relational data, and the other way around, to enable a flexible exchange of information (Redbook). - developerWorks Information Management zone: Learn more about Information Management. Find technical documentation, how-to articles, education, downloads, product information, and more. - Stay current with developerWorks technical events and webcasts. Get products and technologies - DB2 Express-C 9.5: Download a free trial version of DB2 Express-C 9.5. - solidDB 6.3: Download a free trial version of solidDB 6.3. - WebSphere Message Broker 6.1: Download a free trial version of WebSphere Message Broker 6.1. - WebSphere MQ 7.0: Download a free trial version of WebSphere MQ 7.0. - Build your next development project with IBM trial software, available for download directly from developerWorks. Discuss - Participate in the discussion forum. - solidDB forum: Ask questions and discuss issues around solidDB.
http://www.ibm.com/developerworks/data/tutorials/dm-0905applogging/index.html
stash ipad error: cannot close history panel

hi, I just tried stash on my old iPad Pro. I saw some toolbar icons on top of the keyboard. After I clicked that 'H' icon, it showed a history panel in the top right corner of the screen. But then, I found no way to close this history panel, and anywhere else except this panel is unclickable. The only way to exit is to double-press Home and kill the app.

Then I checked this on iPhone, and I found there's a close X icon in the top left corner of the history panel, which I couldn't find on iPad. Known issue?

@shtek same on iPad mini 4. The little triangle at the top of the window shows that the view is presented as a popover, but this presentation does not exist on iPhone.

@shtek If you really need it, change history_present in site-packages/stash/system/shui.py to present the history as a sheet instead of a popover:

def history_present(self, listsource):
    table = ui.TableView()
    listsource.font = self.BUTTON_FONT
    table.data_source = listsource
    table.delegate = listsource
    table.width = 300
    table.height = 300
    table.row_height = self.BUTTON_FONT[1] + 4
    table.present('sheet')
    table.wait_modal()

perfect. thank you
https://forum.omz-software.com/topic/5551/stash-ipad-error-cannot-close-history-panel
To understand this example, you should have knowledge of the following C++ programming topics:

This program takes n number of elements from the user (where n is specified by the user) and stores the data in an array. Then, this program displays the largest element of that array using loops.

#include <iostream>
using namespace std;

int main()
{
    int i, n;
    float arr[100];

    cout << "Enter total number of elements(1 to 100): ";
    cin >> n;
    cout << endl;

    // Store numbers entered by the user
    for(i = 0; i < n; ++i)
    {
        cout << "Enter Number " << i + 1 << ": ";
        cin >> arr[i];
    }

    // Loop to store the largest number in arr[0]
    for(i = 1; i < n; ++i)
    {
        // Change < to > if you want to find the smallest element
        if(arr[0] < arr[i])
            arr[0] = arr[i];
    }

    cout << "Largest element = " << arr[0];

    return 0;
}

Output

Enter total number of elements: 8
Enter Number 1: 23.4
Enter Number 2: -34.5
Enter Number 3: 50
Enter Number 4: 33.5
Enter Number 5: 55.5
Enter Number 6: 43.7
Enter Number 7: 5.7
Enter Number 8: -66.5
Largest element = 55.5

This program takes n number of elements from the user and stores them in the array arr[]. To find the largest element, the first two elements of the array are compared and the larger of the two is placed in arr[0]. Then, the first and third elements are compared and the larger of the two is placed in arr[0]. This process continues until the first and last elements are checked. After this process, the largest element of the array will be in arr[0].
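The arr[0]-accumulator scan described above is language-independent; the same idea can be sketched in a few lines of Python (an illustration of the algorithm, not part of the original program):

```python
def largest(values):
    # Mirror the C++ loop: keep the running maximum in position 0,
    # comparing it against each later element in turn.
    arr = list(values)  # copy so the caller's list is untouched
    for i in range(1, len(arr)):
        if arr[0] < arr[i]:
            arr[0] = arr[i]
    return arr[0]

print(largest([23.4, -34.5, 50, 33.5, 55.5, 43.7, 5.7, -66.5]))  # -> 55.5
```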
https://cdn.programiz.com/cpp-programming/examples/array-largest-element
Contents - Before using Thrift for Objective-C - Getting started - Objective-C specific Notes Before using Thrift for Objective-C Supported Platforms Thrift supports the following platforms, among others. - Mac OS X 10.5 or later - iOS 4 or later cocoa represents the above platforms. The runtime library and the generated code adapt themselves to the platforms automatically. Thus, please use cocoa for them. % thrift --gen cocoa idl.thrift Installation Support of ARC Since iOS 5 and Mac OS 10.7, Objective-C supports Automatic Reference Counting (ARC) in addition to the conventional object life cycle management by retain/release (non-ARC). Thrift 0.8.0 does not support ARC. If you want to use it within an app in ARC mode, you need to compile the Thrift runtime library in non-ARC mode by specifying the -fno-objc-arc flag. Use 0.9.0-dev or later if you need to use Thrift in ARC mode. It supports both ARC and non-ARC. Both the runtime library and the generated Objective-C code can be compiled in ARC mode and in non-ARC mode. They automatically adapt themselves to the compilation modes without any switch. When you modify Thrift Use retain_stub instead of retain and use release_stub instead of release inside the runtime library. The Thrift compiler should generate retain_stub/release_stub instead of retain/release. See this for detail. How to Link the Runtime Library You need to link the runtime library of Thrift with your project. The location of the runtime library: thrift-x.x.x/lib/cocoa/src There are two options. - If your project is small, you can add the source files of the runtime library into your project. - Otherwise, if you use Thrift within several sub projects, you must make a framework from the source files of the runtime library. Since making a framework is a bit complicated and exceeds the scope of this guide, please consult other sites. We expect somebody will contribute a script that makes a framework automatically.
The rest of this guide assumes we directly add the runtime library into our sample project and we don't make the framework. Getting started This tutorial will walk you through creating a sample project, which is a bulletin board application using Thrift. It consists of an iOS client app written in Objective-C and a Java server. It is assumed that you have a basic knowledge of Objective-C and Xcode to complete this tutorial. Note that the client runs on Mac OS with small modifications. Acknowledgment: A part of this tutorial was borrowed from newacct's tutorial. Sample Download You can download the sample project from here. It includes: - The whole myThriftApp project - The runtime library in the thrift/ directory from 0.8.0-dev (You should replace this with the latest one.) - idl.thrift - Server.java and BulletinBoard.java in the gen-java/ directory Requirements Make sure that your system meets the requirements as noted in ThriftRequirements. - Thrift 0.9.0+ (0.8.0-dev+) - iOS 4+ or Mac OS X 10.5+ - Xcode 4.2+ The following are required for the Java server, but not for the Objective-C client. - Java 1.4+ (already installed in Mac OS X 10.5+) - SLF4J Creating the Thrift file We will use a simple Thrift IDL file, myThriftApp/idl.thrift. This defines a bulletin board service. You can upload your message and date using the add() method. The get() method returns a list of the struct so that we can demonstrate how to send/receive an array of structs in Objective-C.

// idl.thrift
struct Message {
  1: string text,
  2: string date
}

service BulletinBoard {
  void add(1: Message msg),
  list<Message> get()
}

- Run the thrift compiler to generate the stub files (e.g. gen-cocoa/idl.h and gen-cocoa/idl.m).

thrift --gen java idl.thrift
thrift --gen cocoa idl.thrift

Create the Objective-C client app

The Objective-C client is a simple app that allows the user to fill out a text field and add/get messages from the server. - Create a new iOS single view project. Product name is myThriftApp.
Check to Use Automatic Reference Counting - Add generated files. Right-click on myThriftApp and select Add files to "myThriftApp". Choose "gen-cocoa". - Add the runtime library. Right-click on myThriftApp and select Add files to "myThriftApp". Choose "thrift-x.x.x/lib/cocoa/src". Rename the group from src to thrift. - Set up the header search path in build settings. Always Search User Path: YES Framework Search Paths: add $(SRCROOT) and $(inherited) Copy the following text to ViewController.h

#import <UIKit/UIKit.h>

@class BulletinBoardClient;

@interface ViewController : UIViewController <UITextFieldDelegate> {
    BulletinBoardClient *server;
}

@property (strong, nonatomic) IBOutlet UITextField *textField;
@property (strong, nonatomic) IBOutlet UITextView *textView;

- (IBAction)addPressed:(id)sender;

@end

Open the ViewController_iPhone.xib Place a Text Field, a Round Rect Button, and a Text View. - Connect the delegate of the Text Field to File's Owner. Connect the Text Field to the textField variable in ViewController.h. Connect the Button to the addPressed: method in ViewController.h. Give the button the title add. Connect the Text View to the textView variable in ViewController.h. Clear the content of the Text View.
Copy the following code into ViewController.m

#import <TSocketClient.h>
#import <TBinaryProtocol.h>
#import "ViewController.h"
#import "idl.h"

@implementation ViewController

@synthesize textField;
@synthesize textView;

- (void)viewDidLoad
{
    [super viewDidLoad];
    // ...
}

- (void)viewDidUnload
{
    [self setTextField:nil];
    [self setTextView:nil];
    [super viewDidUnload];
}

- (IBAction)addPressed:(id)sender
{
    Message *msg = [[Message alloc] init];
    msg.text = textField.text;
    msg.date = [[NSDate date] description];
    [server add:msg]; // send data

    NSArray *array = [server get]; // receive data
    NSMutableString *s = [NSMutableString stringWithCapacity:1000];
    for (Message *m in array)
        [s appendFormat:@"%@ %@\n", m.date, m.text];
    textView.text = s;
}

- (BOOL)textFieldShouldReturn:(UITextField*)aTextField
{
    [aTextField resignFirstResponder];
    return YES;
}

@end

- This code creates a new transport object that connects to localhost:7911; it then creates a protocol using the strict read and strict write settings. These are important, as the Java server has strictRead off and strictWrite on by default. In the iOS app they are both off by default. If you omit these parameters, the two objects will not be able to communicate.

Creating the Java Server

- cd into the gen-java directory - make sure that your classpath is properly set up. You will need to ensure that ".", libthrift.jar, slf4j-api, and slf4j-simple are in your classpath.
create the file BulletinBoardImpl.java

import org.apache.thrift.TException;
import java.util.List;
import java.util.ArrayList;

class BulletinBoardImpl implements BulletinBoard.Iface {
    private List<Message> msgs;

    public BulletinBoardImpl() {
        msgs = new ArrayList<Message>();
    }

    @Override
    public void add(Message msg) throws TException {
        System.out.println("date: " + msg.date);
        System.out.println("text: " + msg.text);
        msgs.add(msg);
    }

    @Override
    public List<Message> get() throws TException {
        return msgs;
    }
}

- create a file called 'Server.java'

import java.io.IOException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.protocol.TBinaryProtocol.Factory;
import org.apache.thrift.server.TServer;
import org.apache.thrift.server.TServer.Args;
import org.apache.thrift.server.TSimpleServer;
import org.apache.thrift.transport.TServerTransport;
import org.apache.thrift.transport.TServerSocket;
import org.apache.thrift.transport.TTransportException;

public class Server {
    private void start() {
        try {
            BulletinBoard.Processor processor = new BulletinBoard.Processor(new BulletinBoardImpl());
            TServerTransport serverTransport = new TServerSocket(7911);
            TServer server = new TSimpleServer(new Args(serverTransport).processor(processor));
            System.out.println("Starting server on port 7911 ...");
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        Server srv = new Server();
        srv.start();
    }
}

- compile the classes

javac *.java

This server implements the service BulletinBoard on port 7911. The service simply stores Message structures in a List and returns the list when requested.

Running

- run the server

java Server

Build and run myThriftApp in the iPhone simulator. Since it tries to connect to localhost, you need to run it in the iPhone simulator rather than on an iPhone. - Put some text in the text field and push the add button.
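Stripped of the Thrift plumbing, the service semantics are just append-and-return. A minimal Python stand-in for BulletinBoardImpl (illustrative only; no Thrift involved, and the class shape is ours):

```python
class BulletinBoard:
    """In-memory stand-in mirroring BulletinBoardImpl: add() stores a
    message, get() returns every message stored so far."""

    def __init__(self):
        self.msgs = []

    def add(self, text, date):
        self.msgs.append({"text": text, "date": date})

    def get(self):
        return list(self.msgs)  # hand back a copy of the stored list

board = BulletinBoard()
board.add("hello", "2009-05-01")
board.add("world", "2009-05-02")
print(len(board.get()))  # -> 2
```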
Objective-C specific Notes Method signature When a service with multiple arguments is compiled, argument names are omitted.

// IDL
service TestService {
    void method(1: i32 value1, 2: i32 value2),
}

void method:(int)value1 :(int)value2;
// [server method:24 :33];

Attributes Struct members can be accessed via attributes of an instance.

// IDL
struct Person {
    1: string name,
    2: string city,
}

Person *p = [[Person alloc] init];
p.name = @"HIRANO";
p.city = @"Tsukuba";

Initialization Since 0.9.0 (0.9.0-dev), initialization of struct and const is supported.

// IDL
struct Person {
    1: string name = "your name",
    2: string city = "your city",
}

const list<Person> MyFamily = [{name: "Satoshi", city: "Tsukuba"},
                               {name: "Akiko", city: "Tokyo"}]

String type string is converted into UTF-8 encoding before sending. The UTF-8 encoding is converted to NSString* after receiving. Note that not all language versions convert the encoding automatically. For example, Python does not do encoding conversion. (Thus, you need to do the conversion by yourself.) Binary type Thrift for Objective-C supports the binary type as NSData*.

// IDL
struct Person {
    1: string name,
    2: binary photo,
}

service TestService {
    void register(1: Person person),
}

Person *p = [[Person alloc] init];
p.name = @"HIRANO Satoshi";
NSImage *image = [NSImage imageNamed:@"hirano.png"]; // load an image
NSData *tiffData = [image TIFFRepresentation]; // obtain data
p.photo = tiffData;
[server register:p];

Collection types The Objective-C version supports the collection types list, set, and map as NSArray, NSSet, and NSDictionary.
// IDL
struct TestData {
  list<string> listData,
  set<string> setData,
  map<string, string> mapData,
}

TestData *t = [[TestData alloc] init];
t.listData = [NSArray arrayWithObjects:@"a", @"b", nil];
t.setData = [NSSet setWithObjects:@"e", @"f", nil];
t.mapData = [NSDictionary dictionaryWithObjectsAndKeys:@"name", @"HIRANO", @"city", @"Tsukuba", nil];

Client Side Transports

You can mainly use the socket transport and the HTTP transport. Here is a client side example for the socket transport.

#import <TSocketClient.h>
#import <TBinaryProtocol.h>

- (void) connect {
    //];

Here is a client side example for the HTTP transport. You may use https. You can connect to Google App Engine, for example.

#import <THTTPClient.h>
#import <TBinaryProtocol.h>

- (void) connect {
    NSURL *url = [NSURL URLWithString:@""];
    // url = [NSURL URLWithString:@""];

    // Talk to a server via HTTP, using a binary protocol
    THTTPClient *transport = [[THTTPClient alloc] initWithURL:url];
    TBinaryProtocol *protocol = [[TBinaryProtocol alloc] initWithTransport:transport strictRead:YES strictWrite:YES];
    server = [[TestServiceClient alloc] initWithProtocol:protocol];
}

Asynchronous Client

An asynchronous operation means that an RPC call returns immediately without waiting for the completion of the server operation. It is different from a oneway operation: an asynchronous operation continues in the background to wait for completion, and it is possible to receive a return value. This feature plays a very important role in GUI-based apps, where you don't want to block for a long time when a user pushes a button.

Unlike the Java version, the Objective-C version does not support asynchronous operations directly. However, it is possible to write asynchronous operations using NSOperationQueue. The basic usage of NSOperationQueue looks like this; your async block is executed in a background thread.

[asyncQueue addOperationWithBlock:^(void) {
    // your async block is here.
    int val = [remoteServer operation];
}];

There are some points to be considered.
- Your async blocks are run on asyncQueue, an instance of NSOperationQueue.
- Strong objects may not be accessed from a block. Since self is also a strong object, we need to avoid accessing self within an async block. We use weakSelf, a weak reference to self.
- GUI operations must be done in the main thread. For example, uilabel.text = @"foo"; must be done in the main thread, and you may not write it in the above async block. A block that handles the GUI is added to the mainQueue, which represents the main thread. We use weakSelf2 in the nested async block.
- Exception handling is also needed.

Here are the fragments of the ThriftTestViewController class.

// IDL
service TestService {
  i32 sum(1: i32 value1, 2: i32 value2),
}

@interface ThriftTestViewController : UIViewController {
    IBOutlet UILabel *msg;
}
@property (nonatomic, retain) UILabel *msg;
@property (nonatomic, strong) TestServiceClient *server;
@property (nonatomic, strong) NSOperationQueue *asyncQueue;
@property (nonatomic, strong) NSOperationQueue *mainQueue;

- (void)viewDidLoad {
    asyncQueue = [[NSOperationQueue alloc] init];
    [asyncQueue setMaxConcurrentOperationCount:1]; // serial
    mainQueue = [NSOperationQueue mainQueue];      // for GUI, DB

    NSURL *url = [NSURL URLWithString:@""];

    // Talk to a server via HTTP, using a binary protocol
    THTTPClient *transport = [[THTTPClient alloc] initWithURL:url];
    TBinaryProtocol *protocol = [[TBinaryProtocol alloc] initWithTransport:transport strictRead:YES strictWrite:YES];

    // Use the service defined in profile.thrift
    server = [[TestServiceClient alloc] initWithProtocol:protocol];
    NSLog(@"Client init done %@", url);
}

- (void)doCalc {
    __unsafe_unretained ThriftTestViewController *weakSelf = self;
    [asyncQueue addOperationWithBlock:^(void) {
        __unsafe_unretained ThriftTestViewController *weakSelf2 = weakSelf;
        @try {
            weakSelf.msg.text = nil;
            int result = [weakSelf.server sum:24 :32];
            [weakSelf.mainQueue addOperationWithBlock:^(void) {
                weakSelf2.msg.text = [NSString stringWithFormat:@"%d", result];
            }];
        } @catch (TException *e) {
            NSString *errorMsg = e.description;
            NSLog(@"Error %@", errorMsg);
            [weakSelf.mainQueue addOperationWithBlock:^(void) {
                weakSelf2.msg.text = errorMsg;
            }];
        }
    }];
}
https://wiki.apache.org/thrift/ThriftUsageObjectiveC-new
I am working on a driver that uses PCI DMA transfers from system memory to the PCI device. On the 2.4.18 kernel it worked OK, but now when it is recompiled for 2.4.20 it doesn't.

Digging into things I have found that the function virt_to_phys() has been changed from:

    return PHYSADDR(address)

to:

    return (unsigned long)address - PAGE_OFFSET

where PAGE_OFFSET is 0x80000000, and where PHYSADDR would AND the address against 0x1FFFFFFF.

As far as I can tell the problem comes from pci_alloc_consistent doing:

    ret = UNCAC_ADDR(ret)

which converts a 0x8xxx_xxxx address to 0xAxxx_xxxx, and then when you pass this 0xAxxx_xxxx address through virt_to_phys() you get an address of the form 0x2xxx_xxxx. This 0x2xxx_xxxx is passed to the DMA controller as the physical address to which it must read/write data, and because it is 0x2xxx_xxxx and not 0x0xxx_xxxx an exception occurs.

At first I just tried AND'ing out the 0xA... like PHYSADDR used to do. With that change I no longer get the exception, but the driver does not DMA the data across - it just sits there.

I read DMA-mapping.txt and it says virt_to_phys() will be phased out and should not be used, but it doesn't elaborate any further (like how you should do it now).

So after that long intro, my question is: anybody know where I'm going wrong and how to fix things? Also, any tips on what drivers to look at for good examples would be appreciated. TIA
http://www.linux-mips.org/archives/linux-mips/2003-05/msg00204.html
Every program must have a function in the global namespace called main, which is the main program. This function must return type int. The C++ environment calls main; your program must never call main. The main function cannot be declared inline or static. It can be called with no arguments or with two arguments:

    int main()

or:

    int main(int argc, char* argv[])

The argc parameter is the number of command-line arguments, and argv is an array of pointers to the command-line arguments, which are null-terminated character strings. By definition, argv[argc] is a null pointer. The first element of the array (argv[0]) is the program name or an empty string.

Static objects at namespace scope can have constant or dynamic initial values. Those of POD type with constant values are initialized by constant data before the program starts; those with dynamic values are initialized by code when the program begins. When, exactly, the objects are initialized is implementation-defined. It might happen before main is called, or it might be after. You should avoid writing code that depends on the order in which static objects are initialized. If you cannot avoid it, you can work around the problem by defining a class that performs the required initialization and defining a static instance of your class. For example, you can guarantee that the standard I/O stream objects are created early so they can be used in the constructor of a static object. See <ios> in Chapter 13 for more information and an example.

A local static object is initialized when execution first reaches its declaration. If the function is never called, or if execution never reaches the declaration, the object is never initialized. When main returns or when exit is called (see <cstdlib> in Chapter 13), static objects are destroyed in the reverse order of their construction, and the program terminates. All local, static objects are also destroyed.
If a function that contains a local, static object is called during the destruction of static objects, the behavior is undefined. The value returned from main is passed to the host environment. You can return 0 or EXIT_SUCCESS (declared in <cstdlib>) to indicate success, or EXIT_FAILURE to tell the environment that the program failed. Other values are implementation-defined. Some environments ignore the value returned from main; others rely on the value.
http://etutorials.org/Programming/Programming+Cpp/Chapter+5.+Functions/5.5+The+main+Function/
In my code I used the greedy algorithm in order to use the minimum amount of coins. For example: if I must return $0.41, the minimum number of coins I can use is 4: 1 × 0.25; 1 × 0.10; 1 × 0.05; 1 × 0.01.

#include <stdio.h>
#include <cs50.h>

int main(void)
{
    printf("Enter the sum, that you want to return you:");
    float sum = GetFloat();
    float quaters = 0.25;
    float dime = 0.10;
    float nickel = 0.05;
    float penny = 0.01;
    int count_q = 0, count_d = 0, count_n = 0, count_p = 0;

    while (sum < 0) {
        printf("Incorrect, enter the positive float number");
        sum = GetFloat();
    }

    while (sum > 0) {
        if (sum - quaters >= 0) {
            sum -= quaters;
            count_q += 1;
        }
        else if ((sum - quaters < 0) && (sum - dime >= 0)) {
            sum -= dime;
            count_d += 1;
        }
        else if ((sum - dime < 0) && (sum - nickel >= 0)) {
            sum -= nickel;
            count_n += 1;
        }
        else if (sum - nickel < 0) {
            sum -= penny;
            count_p += 1;
        }
    }

    printf("The number of quaters: %i\n", count_q);
    printf("The number of dimes: %i\n", count_d);
    printf("The number of nickels: %i\n", count_n);
    printf("The number of pennies: %i\n", count_p);
}

With input 1.12 the program prints:

Enter the sum, that you want to return you:1.12
The number of quaters: 4
The number of dimes: 1
The number of nickels: 0
The number of pennies: 3

To my understanding, there is no bug in your code in the strictest sense, as the reasoning on which the implementation is based (a greedy algorithm) is correct. You are most likely experiencing rounding errors due to repeated subtraction, since you use float, the single-precision floating type, to represent your values. Perhaps, if you change float to double in your code, the output will be as expected for your example input. However, this only pushes the boundary of the limitation further out. It would be better to internally represent the amount of money in pennies as an int.
Note that, when first confronted with the fact that floating point representations are inaccurate, I believed that the impossibility to represent some values and accumulation of rounding errors would be an issue only when you absolutely do some rocket science calculations, but would never be relevant for what I considered to be layman's calculations. However, this is not the case.
https://codedump.io/share/oOolIVRQKfvB/1/greedy-algorithm-with-coins-in-c
Class that calculates the "surflet" features for each pair in the given pointcloud. More... #include <pcl/features/ppf.h> Class that calculates the "surflet" features for each pair in the given pointcloud. Please refer to the following publication for more details: B. Drost, M. Ulrich, N. Navab, S. Ilic Model Globally, Match Locally: Efficient and Robust 3D Object Recognition 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 13-18 June 2010, San Francisco, CA PointOutT is meant to be pcl::PPFSignature - contains the 4 values of the Surflet feature and in addition, alpha_m for the respective pair - optimization proposed by the authors (see above) Definition at line 75 of file ppf.h. Empty Constructor. Definition at line 48 of file ppf.hpp. References pcl::Feature< PointInT, PointOutT >::feature_name_.
https://pointclouds.org/documentation/classpcl_1_1_p_p_f_estimation.html
This guide explains how to set up and use a service account to access the Google Chat REST API. First, it walks you through how to create a service account. Then, it demonstrates how to write a script that uses the service account to authenticate with the Chat API and post a message in a Chat space. Service accounts let Chat apps send asynchronous messages to Google Chat, which lets them: - Notify users when a long-running background job finishes running. - Alert users that a server has gone offline. - Ask a customer support person to tend to a newly opened customer case. When authenticated with a service account, to get data about or perform actions in a Chat space, Chat apps must have membership in the space. For example, to list members of a space, or to create a message in a space, the Chat app has to itself be a member of the space. If your Chat app needs to access user data or perform actions on a user's behalf, authenticate with user credentials instead. To learn more about when Chat apps require authentication and what kind of authentication to use, see Types of required authentication in the Chat API authentication and authorization overview. Prerequisites To run the example in this guide, you need the following prerequisites: Python - Python 3.6 or greater - The pip package management tool - A Google Workspace account with access to Google Chat. - A Google Cloud project with the Chat API enabled. To create a project and enable an API, refer to Create a project and enable the API. - A published Chat app with membership in a Chat space. To create and publish a Chat app, see Build a Google Chat app with Cloud Functions. 
Step 1: Install the Google client library If you haven't already installed the Google client libraries for your language of choice, run the following command in your command line interface: Python pip3 install --upgrade google-api-python-client google-auth-httplib2 google-auth-oauthlib oauth2client You can use any language supported by our client libraries. Step 2: Create a service account and private key in Google Cloud Console Create a service account with a private key that your Chat app will use to access Google APIs. Create a service account: To create a service account, follow these steps: - Open the Google Cloud console. - At the top-left, click Menu > IAM & Admin > Service Accounts. - Click Create service account. - Fill in the service account details, then click Create and continue. - Optional: Assign roles to your service account to grant access to your Google Cloud project's resources. For more details, refer to Granting, changing, and revoking access to resources. - Click Continue. - Optional: Enter users or groups that can manage and perform actions with this service account. For more details, refer to Managing service account impersonation. - Click Done. The service account appears on the service account page. Next, create a private key for the service account. Create a private key To create a private key for the service account, follow these steps: - Open the Google Cloud console. - At the top-left, click Menu > IAM & Admin > Service Accounts. - Select your service account. - Click Keys > Add keys > Create new key. - Select JSON, then click Create. Your new public/private key pair is generated and downloaded to your machine as a new file. This file is the only copy of this key. For information about how to store your key securely, see Managing service account keys. - Click Close. For more information about service accounts, see service accounts in the Google Cloud IAM documentation. 
Step 3: Apply credentials to HTTP request headers

Next, apply the necessary credential headers to all requests made by an httplib2.Http instance. The following code snippet shows how to create a credential object, using a private key like the one created in step 2:

Python

from oauth2client.service_account import ServiceAccountCredentials

SCOPES = ['']
CREDENTIALS = ServiceAccountCredentials.from_json_keyfile_name(
    'service_account.json', SCOPES)

Step 4: Build a service endpoint and call the Chat API

Use a client library to call the Google Chat REST API. The following code snippet creates a service endpoint to the Chat REST API by creating an HTTP client and authorizing it with the service account credentials:

Python

from httplib2 import Http
from apiclient.discovery import build

chat = build('chat', 'v1', http=CREDENTIALS.authorize(Http()))

Next, create a new message with space.name as the parent and a JSON payload in the message body, and call the execute() method:

Python

result = chat.spaces().messages().create(
    # Replace {space} with a space name.
    # Obtain the space name from the spaces resource of Chat API,
    # or from a space's URL.
    parent='spaces/{space}',
    body={'text': 'Test message'}).execute()

print(result)

Run the complete example

The following code authenticates with the Chat REST API using a service account, then posts a message to a Chat space. It simply combines the snippets from steps 3 and 4 above:

Python

from httplib2 import Http
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build

SCOPES = ['']
CREDENTIALS = ServiceAccountCredentials.from_json_keyfile_name(
    'service_account.json', SCOPES)

chat = build('chat', 'v1', http=CREDENTIALS.authorize(Http()))

result = chat.spaces().messages().create(
    parent='spaces/{space}',  # replace {space} with a space name
    body={'text': 'Test message'}).execute()

print(result)

To run the sample, save the code in a file named chat_create_message.py, then execute the following command from the command line:

Python

python chat_create_message.py

Your script makes an authenticated request to the Chat REST API, which responds by posting a message in a Chat space as a Chat app.

Troubleshoot the example

This section describes common issues that you might encounter while attempting to run this sample.
You are not permitted to use this app When running chat_create_message.py, you might receive an error that says: <HttpError 403 when requesting{space}/messages?alt=json returned "You are not permitted to use this app". Details: "You are not permitted to use this app"> This error message means that the Chat app associated with the service account you created doesn't have permission to post Chat messages in the Chat space you are trying to post to. To resolve the error, add the Chat app associated with the service account to the Chat space specified in chat_create_message.py. Next step Learn what else Chat API can do by reviewing the Chat API reference documentation.
https://developers.google.com/chat/api/guides/auth/service-accounts?hl=pt-br
Simplifying namespace declarations

Discussion in 'XML' started by Nicolas George, Sep 18, 2007.
http://www.thecodingforums.com/threads/simplifying-namespace-declarations.538275/
Fundamental use cases for porting iPhone and Android applications to Qt

Web content can be shown with the WebView QML element by setting its url property. Note that the QtWebKit import is also required.

import QtQuick 1.0
import QtWebKit 1.0

WebView {
    height: 640
    width: 480
    url: ""
}

To get a scrolling webview, put the WebView element inside a Flickable. For a step-by-step introduction to Qt Game Enabler, see this guide.

Simple sound effects can be played with QSound:

QSound::play("mysounds/bells.wav");

Qt also provides the Video QML element for video playback.

Application settings can be stored with QSettings; all details are handled automatically:

QSettings settings("CompanyName", "ApplicationName");
settings.setValue("store/price", 13);
...
int price = settings.value("store/price").toInt();

More complex data can be serialized into a binary form using QDataStream. Serialization of C++'s basic data types, such as char, short, int and char*, is automatic. Serialization of more complex data is accomplished by breaking up the data into these primitive units.

An XML format for data storage is available with QtXml, which provides a stream reader and writer for XML documents, and C++ implementations of SAX and DOM. The most powerful data storage method is QtSql.
http://developer.nokia.com/community/wiki/index.php?title=Fundamental_use_cases_for_porting_iPhone_and_Android_applications_to_Qt&diff=105913&oldid=105901
I mounted my Teensy 3.6 in a breadboard now, and I'm getting the same problem. Both with my own code and with the FastLED example demoreel100 EDIT: I found the problem. The problem was that I... I can do that tomorrow (time for me to go to bed at the moment). Both the Teensy and my LED strips are connected to the same PSU though, so they should have common ground Hello. I am trying to run my LED programs on a Teensy 3.6. They have usually been working fine with Teensy 3.2, but on the Teensy 3.6 the colors are wrong, and the LEDs flicker. I have an octo... I know about FastLEDs ability for 16 outputs, which is what I hope to use. I am almost only animating FastLED-effects, and only taking in data in the form of OSC-messages from my ipad that I use to... What I want want to do is 16 strips with 150 LEDs each. Would it be better then to use 8 outputs, and chain two and two strips you think? My main reason for not wanting to do that is that I have to... What I'm thinking about is using FastLEDs functionality for multiple outputs, and then just wiring the output pins of the Teensy to input pins on the octo boards. That should work right? Unless... I would like to to have more than 8 parallel LEDstrip outputs, and I'm wondering if it's possible to connect two octo boards to one Teensy 3.6? Allright! I've checked the video, and I think I understand it a little better now. After watching the video I even think the peak module is better suited for the thing I'm trying to do now! And... Hello! I am making a LED installation that is reacting to audio. The idea is that I want to be able to control what frequency the LEDs should react to live, using Touch OSC. I've "wired up" an... Hi! I'm having a trouble figuring out how to make this happen, or even how to google it properly.
I have an animation that makes a "wave" move through an matrix of LEDs. I can adjust the angle of... This is something I haven't thought about, but there's definitely some of the arrays that could be defined as constants. I've figured it out with help from the creator of FastLED. He replied on FastLEDs google+ group. His guess was that my gigantic arrays with LED data ended up overlapping some of the memory allocated... I can do that of course: #define USE_OCTOWS2811 #include<OctoWS2811.h> #include<FastLED.h> #include <SLIPEncodedSerial.h> #include <OSCMessage.h> String readString; I'm suddenly facing an issue where my program running on a Teensy 3.2 stops when it reaches a FastLED.show(); I know this because I moved a serial print through my code, and my print does not get... Yes it does. I've used it as recently as two days ago. Do you know if there are any programmable/addressable UV LEDs out there? Yes, I did! I'm going to tweak the numbers som more to see if I can get the dimming even smoother :) Thank you for your tips! I ended up using your method luni, with some modifications. I'm pretty happy with the result :) Hey! I've made myself a 5x5 LED matrix. I'm trying to make an animation that starts in the middle and expands out in a circle. Could anyone help me get started on the math behind painting a circle... Cool, thanks! I will test this when I get home from work tonight! :) Oh, sorry. I didn't post it because it's basically just the Spectrum Analyzer example that comes with FastLED or Teensyduino (I don't remember which) But here it is: // LED Audio Spectrum... Hello! Yesterday I go the spectrum analyzer working with the LEDs I've installed in my roof at home. It was very cool to watch the music visually unfold above me :) Now I have to questions: 1.... System requirements from enviral-design.com says: But Touch Designer, the software GeoPix is made in, is Mac compatible according to their web page. 
I've got to say I think it's really cool that you went back to my post and figured this out! This problem, I realise, was way beyond me, but I find it really interesting to read about what was... Hey Lucas! Really cool program! I've watched all the introduction videos you've made, and I'm going to try out geopix for a couple of projects I have. I'd like to make it work with a Teensy 3.2... I'm making a lamp, and I wanted to be able to turn the lamp on by simply putting my hand on top of it. I'm starting to think this is not possible with the touchRead-function of the Teensy. On an... I'm not sure how to achieve this though. I tried putting a copper tape, connected to ground, under the copper tape I'm using as sensor. I separated them from each other with a normal piece of tape... I've just started experimenting with the touchRead()-function on a TeensyLC in order to make touch buttons. I have a setup with some copper tape that I put on a piece of wood, with a cable soldered... I'm having a hard time believing that, because then the data wires for all 7 LED strips would've had to be broken. And it happened at the same time. I haven't tested out any simple code yet. I... I've just recently run into a strange problem that I'm not able to solve :/ In my basement I have 7x 5m Neopixel strips from Adafruit in my ceiling. They are controlled with a Teensy 3.2 with an... I figured it out! It was even more trivial than uint32_t vs uint16_t. This statement was never true: if (Index_LEDi[i] > TotalSteps) Changing it to this fixed the problem: if (Index_LEDi[i]... I tried changing it to both uint32_t and unsigned long, but that didn't change anything. I've managed to get som serial output, and I can see that the values of the array NextBlink_LEDi[] never... Here is the complete program. The part that I'm having troubles with is only a small part of the code. #include "FastLED.h" /////////// CANDLE PARAMETERS \\\\\\\\\\ // The data-in pin of... Hey. I have a setup with 5 LEDs. 
I want these to light up at random intervals, and then fade out. My approach is to have an array (NextBlink_LEDi[5]). It has one value for each LED, indicating the... Hey. I'm trying to debug my code by adding a Serial.println. When I do my code stops running (or so it seems). My setup is a Teensy LC controlling 5 LEDs. I want these LEDs to light up at random... Go for it! =) I've added some more modes and parameters to my setup. You can see a demo here: I'll gladly answer questions if you have any =) I'm not actually using my computer for anything except debugging. This: "Serial.begin(9600); //Teensy <=> Computer " is only for sending a line of text to the serial monitor of my computer... Alright!! I figured it out! :D The problem was in the receiving end of the SLIP packets, on the Teensy. I had 'if(!msgIN.hasError())' inside the loop: 'while(!SLIPSerial.endofPacket())' What... I'm really not an expert on protocols and packets, so I'm not 100% sure why SLIP is needed to send packets from the ESP to the Teensy. But I think it's because OSC is an application layer level... I've run into another problem instead. When I try to read the value of an OSC message it always comes back as 0. But when I only use the ESP (in other words, when I do not send the packet from the... I have an ESP8266 breakout connected to a Teensy 3.2 The ESP is set up to receive OSC messages over UDP and then send them to the Teensy over serial with SLIP. The teensy is set up to output the... Ok. I'm looking into this as we speak. I've come to the conclusion that there are not actually errors in the OSC messages. The reason I thought so is because I was doing 'if(msgIN.hasError())'... Hey Adrian. Thanks for the reply. I've been a little busy lately so haven't had time to look more at it. Next week I have more time, so I'll test some more and come back here with a reply then :) You can find it as part of the OSC library from CNMAT that I'm using: Hello. 
I've connected a Huzzah ESP8266 breakout to a Teensy 3.2 through serial. I'm trying to make the ESP receive OSC messages over UDP, and send them over serial (with SLIP serial) to the Teensy,... Yes, the plan is to use a single teensy. I'll see if I manage to reverse engineer the code and get it working :) Thanks for the tip! I want to wirelessly controll an LED installation I have at home. I'm using a Teensy 3.2 and Octows2811 adapter. I've been looking at the pinout for octo and cc3000 and they seem to use some of the...
https://forum.pjrc.com/search.php?s=39ff8037719a619a73127a4b9bf91952&searchid=5019334
Transactions are begun with the DbEnv::txn_begin() method. Once you have completed all of the operations that you want to include in the transaction, you must commit the transaction using the DbTxn::commit() method. If, for any reason, you want to abandon the transaction, you abort it using DbTxn::abort().

The following example opens a database, obtains a transaction handle, and then performs a write operation under its protection. In the event of any failure in the write operation, the transaction is aborted and the database is left in a state as if no operations had ever been attempted in the first place.

// Note: the flag declarations were truncated in the original text;
// these are the standard flags for a transactional environment,
// plus DB_AUTO_COMMIT for the database open.
u_int32_t env_flags = DB_CREATE | DB_INIT_TXN | DB_INIT_LOCK |
                      DB_INIT_LOG | DB_INIT_MPOOL;
u_int32_t db_flags = DB_CREATE | DB_AUTO_COMMIT;

Db *dbp = NULL;
const char *file_name = "mydb.db";
const char *keystr = "thekey";
const char *datastr = "thedata";
std::string envHome("/export1/testEnv");

DbEnv myEnv(0);
try {
    myEnv.open(envHome.c_str(), env_flags, 0);
    dbp = new Db(&myEnv, 0);

    // Open the database. Note that we are using auto commit for
    // the open, so the database is able to support transactions.
    dbp->open(NULL,       // Txn pointer
              file_name,  // File name
              NULL,       // Logical db name
              DB_BTREE,   // Database type (using btree)
              db_flags,   // Open flags
              0);         // File mode. Using defaults

    Dbt key, data;
    key.set_data((void *)keystr);
    key.set_size((strlen(keystr) + 1) * sizeof(char));
    data.set_data((void *)datastr);
    data.set_size((strlen(datastr) + 1) * sizeof(char));

    DbTxn *txn = NULL;
    myEnv.txn_begin(NULL, &txn, 0);
    try {
        dbp->put(txn, &key, &data, 0);
        txn->commit(0);
    } catch (DbException &e) {
        std::cerr << "Error in transaction: " << e.what() << std::endl;
        txn->abort();
    }
} catch (DbException &e) {
    std::cerr << "Error opening database and environment: "
              << file_name << ", " << envHome << std::endl;
    std::cerr << e.what() << std::endl;
}

try {
    if (dbp != NULL)
        dbp->close(0);
    myEnv.close(0);
} catch (DbException &e) {
    std::cerr << "Error closing database and environment: "
              << file_name << ", " << envHome << std::endl;
    std::cerr << e.what() << std::endl;
    return (EXIT_FAILURE);
}
return (EXIT_SUCCESS);
}

In order to fully understand what is happening when you commit a transaction, you must first understand a little about what DB is doing with the logging subsystem. Logging causes all database write operations to be identified in logs, and by default these logs are backed by files on disk. These logs are used to restore your databases in the event of a system or application failure, so by performing logging, DB ensures the integrity of your data.

Moreover, DB performs write-ahead logging. This means that information is written to the logs before the actual database is changed: all write activity performed under the protection of the transaction is noted in the log before the transaction is committed. Be aware, however, that the database maintains logs in memory. If you are backing your logs on disk, the log information will eventually be written to the log files, but while the transaction is ongoing the log data may be held only in memory. By default, the log information for a transaction is flushed to disk when you call DbTxn::commit().

Notice that committing a transaction does not necessarily cause data modified in your memory cache to be written to the files backing your databases. To perform database activities under the control of a new transaction, you must obtain a fresh transaction handle.
http://idlebox.net/2011/apidocs/db-5.2.28.zip/gsg_txn/CXX/usingtxns.html
NAME
pmem_is_pmem(), pmem_map_file(), pmem_unmap() - check persistency, create and delete mappings

SYNOPSIS
#include <libpmem.h>

int pmem_is_pmem(const void *addr, size_t len);
void *pmem_map_file(const char *path, size_t len, int flags, mode_t mode, size_t *mapped_lenp, int *is_pmemp);
int pmem_unmap(void *addr, size_t len);

DESCRIPTION
Most pmem-aware applications will take advantage of higher level libraries that alleviate the need for the application to call into libpmem directly. Application developers that wish to access raw memory mapped persistence directly (via mmap(2)) and that wish to take on the responsibility for flushing stores to persistence will find the functions described in this section to be the most commonly used.

The pmem_is_pmem() function detects if the entire range [addr, addr+len) consists of persistent memory. Calling this function with a memory range that originates from a source different than pmem_map_file() is undefined.

The implementation of pmem_is_pmem() requires a non-trivial amount of work to determine if the given range is entirely persistent memory. For this reason, it is better to call pmem_is_pmem() once when a range of memory is first encountered, save the result, and use the saved result to determine whether pmem_persist(3) or msync(2) is appropriate for flushing changes to persistence. Calling pmem_is_pmem() each time changes are flushed to persistence will not perform well.

The pmem_map_file() function creates a new read/write mapping for a file. If PMEM_FILE_CREATE is not specified in flags, the entire existing file path is mapped, len must be zero, and mode is ignored. Otherwise, path is opened or created as specified by flags and mode, and len must be non-zero. pmem_map_file() maps the file using mmap(2), but it also takes extra steps to make large page mappings more likely. On success, pmem_map_file() returns a pointer to the mapped area.
If mapped_lenp is not NULL, the length of the mapping is stored into *mapped_lenp. If is_pmemp is not NULL, a flag indicating whether the mapped file is actual pmem, or if msync() must be used to flush writes for the mapped range, is stored into *is_pmemp.

The flags argument is 0 or the bitwise OR of one or more of the following file creation flags:

PMEM_FILE_CREATE - Create the file named path if it does not exist. len must be non-zero and specifies the size of the file to be created. If the file already exists, the file size will be extended or truncated to len.

The remaining flags modify the behavior of pmem_map_file() when PMEM_FILE_CREATE is specified.

PMEM_FILE_EXCL - If specified in conjunction with PMEM_FILE_CREATE, and path already exists, then pmem_map_file() will fail with EEXIST. Otherwise, has the same meaning as O_EXCL on open(2), which is generally undefined.

PMEM_FILE_SPARSE - When specified in conjunction with PMEM_FILE_CREATE, create a sparse (holey) file using ftruncate(2) rather than allocating it using posix_fallocate(3). Otherwise ignored.

PMEM_FILE_TMPFILE - Create a mapping for an unnamed temporary file. Must be specified with PMEM_FILE_CREATE. len must be non-zero, mode is ignored (the temporary file is always created with mode 0600), and path must specify an existing directory name. If the underlying file system supports O_TMPFILE, the unnamed temporary file is created in the filesystem containing the directory path; if PMEM_FILE_EXCL is also specified, the temporary file may not subsequently be linked into the filesystem (see open(2)). Otherwise, the file is created in path and immediately unlinked.

The path can point to a Device DAX. In this case only the PMEM_FILE_CREATE and PMEM_FILE_SPARSE flags are valid, but they are both ignored. For Device DAX mappings, len must be equal to either 0 or the exact size of the device.

To delete mappings created with pmem_map_file(), use pmem_unmap(). The pmem_unmap() function deletes all the mappings for the specified address range, and causes further references to addresses within the range to generate invalid memory references.
It will use the address specified by the parameter addr, where addr must be a previously mapped region. pmem_unmap() will delete the mappings using munmap(2).

RETURN VALUE
The pmem_is_pmem() function returns true only if the entire range [addr, addr+len) consists of persistent memory. A true return from pmem_is_pmem() means it is safe to use pmem_persist(3) and the related functions to make changes durable for that memory range. See also CAVEATS.

On success, pmem_map_file() returns a pointer to the memory-mapped region and sets *mapped_lenp and *is_pmemp if they are not NULL. On error, it returns NULL, sets errno appropriately, and does not modify *mapped_lenp or *is_pmemp.

On success, pmem_unmap() returns 0. On error, it returns -1 and sets errno appropriately.

NOTES
On Linux, pmem_is_pmem() returns true if the entire range was mapped directly from Device DAX (/dev/daxX.Y) without an intervening file system, or if the MAP_SYNC flag of mmap(2) is supported by the file system on Filesystem DAX.

CAVEATS
The result of a pmem_is_pmem() query is only valid for mappings created using pmem_map_file(). For other memory regions, in particular those created by a direct call to mmap(2), pmem_is_pmem() always returns false, even if the queried range is entirely persistent memory.

BUGS
Not all file systems support posix_fallocate(3). pmem_map_file() will fail if PMEM_FILE_CREATE is specified without PMEM_FILE_SPARSE and the underlying file system does not support posix_fallocate(3).

SEE ALSO
creat(2), ftruncate(2), mmap(2), msync(2), munmap(2), open(2), pmem_persist(3), posix_fallocate(3) and libpmem(7)
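When pmem_is_pmem() reports false for a range, stores must be flushed with msync(2) rather than pmem_persist(3). The map/modify/flush/unmap cycle can be illustrated with Python's standard mmap module; note this is an ordinary file-backed mapping used purely as an analogy, not libpmem itself:

```python
import mmap
import os
import tempfile

# Map a small file read/write, store through the mapping, then flush with
# mmap.flush(), which uses msync(2) under the hood -- the path a mapping
# that is NOT true pmem must take to make stores durable.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 4096)              # size the backing file first
    with mmap.mmap(fd, 4096) as m:      # analogous to pmem_map_file()
        m[0:5] = b"hello"               # store through the mapping
        m.flush()                       # analogous to the msync() flush path
    # leaving the "with" block unmaps, analogous to pmem_unmap()
    with open(path, "rb") as f:
        print(f.read(5))                # b'hello'
finally:
    os.close(fd)
    os.unlink(path)
```

On true persistent memory, libpmem would let you replace the msync-style flush with the cheaper pmem_persist(3) path, which is exactly the decision pmem_is_pmem() exists to inform.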
https://pmem.io/pmdk/manpages/linux/master/libpmem/pmem_is_pmem.3/
Why is the del statement in Python unable to delete data referenced by variables?

The intent of the del statement, with regard to the deletion of a variable name, is to remove the binding of that name from the local or global namespace. The statement is not intended to physically delete the data on disk or in memory that is referenced by the variable.

For example, code is written that executes the following tasks:

1. Use a variable to point to an input table
2. Use a variable to point to an output table
3. Check if the output table exists, and if so, delete it from disk
4. Copy the input table to the location of the output table

In the above requirements, the del statement cannot be used to delete the data on disk - it is not like the 'Delete' key on a keyboard that removes a file. If the code is run in this manner, as shown in the following image, a NameError is raised when referencing a variable whose name was unbound by the call to the del statement.

To delete the data on disk, use a function designed to delete files from disk or memory. In this case, use one of the functions provided by Esri's ArcPy module, such as arcpy.management.Delete(). In scenarios where the data being deleted is an ordinary file managed by the operating system, standard Python functions such as os.remove() can be used to delete the data.

In summary, the del statement in Python is not designed to physically remove data from disk. As per the Python documentation, 'Deletion of a name removes the binding of that name from the local or global namespace, depending on whether the name occurs in a global statement in the same code block. If the name is unbound, a NameError exception will be raised.'

Official Python Documentation: Section 6.5 - The del statement
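The namespace behaviour described in the article can be demonstrated directly (the variable names below are invented for illustration):

```python
# del removes the NAME BINDING, not the underlying object or any file on disk.
data = [1, 2, 3]
alias = data          # a second name bound to the same list object

del data              # unbinds the name 'data' only

print(alias)          # [1, 2, 3] -- the object still exists via 'alias'

try:
    print(data)       # the name 'data' is now unbound
except NameError as err:
    print("NameError:", err)
```

The list object survives because `alias` still references it; only the binding of the name `data` was removed, and referencing the unbound name raises NameError, exactly as the quoted documentation states.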
https://support.esri.com/en/technical-article/000012415
SYNOPSIS
nn [ options ] [ newsgroup | +folder | file ]...
nn -g [ -r ]
nn -a0 [ newsgroup ]...

DESCRIPTION
Net news is a world-wide information exchange service covering numerous topics in science and everyday life. Topics are organized in news groups, and these groups are open for everybody to post articles on a subject related to the topic of the group. Nn is a `point-and-shoot' net news interface program, or a news reader for short (not to be confused with the human news reader). When you use nn, you can decide which of the many news groups you are interested in, and you can unsubscribe from those which don't interest you. nn will let you read the new (and old) articles in each of the groups you subscribe to, using a menu-based article selection prior to reading the articles in the news group.

When a news group is entered, nn will locate all the presently unread articles in the group, and extract their sender, subject, and other relevant information. This information is then rearranged, sorted, and marked in various ways to give it a pleasant format when it is presented on the screen. This will be done very quickly, because nn uses the NOV database via the NNTP XOVER command.

The news server to use can be overridden by setting the environment variable $NNTPSERVER to the name of the system (such as news.newserver.com), or by setting the variable nntp-server (on the command line only, since it is looked at before the init file), as "nntp-server=news.some.domain". If you use multiple servers, you probably want to set the nn-directory and newsrc variables on the command line to alternate names as well, since some of the data files are server dependent. If you are using a slow tcp link (such as ppp over a modem) and NNTP, see the NOTES section at the end of this manual.

When the article menu appears on the screen, nn will be in a mode called selection mode.
In this mode, the articles which seem to be interesting can be selected by single keystrokes (using the keys a-z and 0-9). When all the interesting articles among the ones presently displayed have been selected, the space bar is hit, which causes nn to enter reading mode. In reading mode, each of the selected articles will be presented. You use the space bar to go on to the next page of the current article, or to the next article. Of course, there are all sorts of commands to scroll text up and down, skip to the next article, respond to an article, decrypt an article, and so on. When all the selected articles in the current group have been read, the last hit on the space bar will cause nn to continue to the next group with unread articles, and enter selection mode on that group.

FREQUENTLY USED OPTIONS
nn accepts a lot of command line options, but here only the frequently used options are described. Options can also be set permanently by including appropriate variable settings in the init file described later. All options are described in the section on Command Line Options towards the end of this manual. The frequently used command line options are:

- -a0 - Catch up on unread articles and groups. See the section "Catch up" below.
- -g - Prompt for the name of a news group or folder to be entered (with completion).
- -r - Used with -g to repeatedly prompt for groups to enter.
- -lN - Print only the first N lines of the first page of each article before prompting to continue. This is useful on slow terminals and modem lines to be able to see the first few lines of longer articles.
- -sWORD - Collect only articles which contain the string WORD in their subject (case is ignored). This is normally combined with the -x and -m options to find all articles on a specific subject.
- -s/regexp - Collect only articles whose subject matches the regular expression regexp. This is normally combined with the -x and -m options to find all articles on a specific subject.
- -nWORD or -n/regexp - Same as -s except that it matches on the sender's name instead of the article's subject. This is normally combined with the -x and -m options to find all articles from a specific author. It cannot be mixed with the -s option!
- -i - Normally searches with -n and -s are case independent. Using this option, the case becomes significant.
- -m - Merge all articles into one `meta group' instead of showing them one group at a time. This is normally used together with the -x and -s options to get all the articles on a specific subject presented on a single menu (when you don't care about which group they belong to). When -m is used, no articles will be marked as read.
- -x[N] - Present (or scan) all (or the last N) unread as well as read articles. When this option is used, nn will never mark unread articles as read (i.e. .newsrc is not updated).
- -X - Read/scan unsubscribed groups also. Most useful when looking for a specific subject in all groups, e.g. nn -mxX -sSubject all
- news.group or file or +folder - If none of these arguments are given, all subscribed news groups will be used. Otherwise, only the specified news groups and/or files will be collected and presented.

In specifying a news group, the following `meta notation' can be used: If the news group ends with a `.' (or `.all'), all subgroups of the news group will be collected, e.g. comp.sources. If a news group starts with a `.' (or `all.'), all the matching subgroups will be collected, e.g. .sources.unix. The argument `all' identifies all (subscribed) news groups.

COMMAND INPUT
In general, nn commands consist of one or two key-strokes, and nn reacts instantly to the commands you give it; you don't have to enter return after each command (except where explicitly stated). Some commands have more serious effects than others, and therefore nn requests you to confirm the command. You confirm by hitting the y key, and reject by hitting the n key.
Some `trivial' requests may also be confirmed simply by hitting space. For example, to confirm the creation of a save file, just hit space, but if one or more directories also have to be created, you must enter y.

Many commands will require that you enter a line of text, e.g. a file name or a shell command. If you enter space as the first character on a line, the line will be filled with a default value (if one is defined). For example, the default value for a file name is the last file name you have entered, and the default shell command is your previous shell command. You can edit this default value as well as a directly typed text, using the following editing commands. The erase, kill, and interrupt keys are the keys defined by the current tty settings. On systems without job control, the suspend key will be control-Z while it is the current suspend character on systems with job control.

- erase - Delete the last character on the line.
- delete-word (normally ^W) - Delete the last word or component of the input.
- kill - Delete all characters on the line.
- interrupt and control-G - Cancel the command which needs the input.
- suspend - Suspend nn if supported by the system. Otherwise, spawn an interactive shell.
- return - Terminate the line, and continue with the command.

Related variables: erase-key, flow-control, flush-typeahead, help-key, kill-key, word-key.

BASIC COMMANDS
There are numerous commands in nn, and most of them can be invoked by a single keystroke. The descriptions in this manual are based on the standard bindings of the commands to the keys, but it is possible to customize these using the map command described later. For each of the keystroke commands described in this manual, the corresponding command name will also be shown in curly braces, e.g. {command}. The following commands work in both selection mode and in reading mode. The notation ^X means `control X':

- ? {help} - Help. Gives a one page overview of the commands available in the current mode.
- ^L {redraw} - Redraw screen.
- ^R {redraw} - Redraw screen (Same as ^L).
- ^P {message} - Repeat the last message shown on the message line. The command can be repeated to successively show previous messages (the maximum number of saved messages is controlled via the message-history variable.)
- ! {shell} - Shell escape. The user is prompted for a command which is executed by your favorite shell (see the shell variable). Shell escapes are described in detail later on.
- Q {quit} - Quit nn. When you use this command, you neither lose unread articles in the current group nor the selections you might have made (unless the articles are expired in the meantime of course).
- V {version} - Print release and version information.
- :command {command} - Execute the command by name. This form can be used to invoke any of nn's commands, also those which cannot be bound to a key (such as :coredump), or those which are not bound to a key by default (such as post and unshar).

Related and basic variables: backup, backup-suffix, confirm-auto-quit, expert, mail, message-history, new-group-action, newsrc, quick-count.

SELECTION MODE
In selection mode, the screen is divided into four parts: the header line showing the name of the news group and the number of articles, the menu lines which show the collected articles - one article per line, the prompt line where you enter commands, and the message line where nn prints various messages to you. Each menu line begins with an article id which is a unique letter (or digit if your screen can show more than 26 menu lines). To select an article for reading, you simply enter the corresponding id, and the menu line will be high-lighted to indicate that the article is selected. When you have selected all the interesting articles on the present menu, you simply hit space.
If there are more articles collected for the current group than could be presented on one screenful of text, you will be presented with the next portion of articles to select from. When you have had the opportunity to select among all the articles in the group, hitting space will enter reading mode. If no articles have been selected in the current group, hitting space will enter selection mode on the next news group, or exit nn if the current group was the last news group with unread articles. It is thus possible to go through ALL unread articles (without reading any of them) just by hitting space a few times.

The articles will be presented on the menu using one of the following layouts:

- 0: x Name......... Subject.............. +123
- 1: x Name......... 123 Subject..............
- 2: x 123 Subject...................................
- 3: x Subject...........................................
- 4: x Subject........................................

Here x is the letter or digit that must be entered to select the article, Name is the real name of the sender (or the mail address if the real name cannot be found), Subject is the contents of the "Subject:" line in the article, and 123 is the number of lines in the article. Layout 0 and 1 are just two ways to present the same information, while layout 2 and 3 are intended for groups whose articles have very long subject lines, e.g. comp.sources. Layout 4 is a hybrid between layout 1 and 3. It will normally use layout 1, but it will use layout 3 (with a little indentation) for menu lines where the subject is longer than the space available with layout 1. Layout 1 is the default layout, and an alternative menu line layout is selected using the -L option or by setting the layout variable. Once nn is started, the layout can be changed at any time using the " key {layout}. The Name is limited to 16 characters, and to make maximum use of this space, nn will perform a series of simplifications on the name, e.g.
changing first names into initials, removing domain names from mail addresses (if the real name is not found) etc. It does a good job, but some people on the net put weird things into the From: field (or actually into their password file) which result in nn producing quite cryptic, and sometimes funny "names".

On a usual 80 column terminal, the Subject is limited to about 60 characters (75 in layout 3) and is thus only an approximation to the actual subject line, which may be much longer. To get as much as possible out of this space, Re: prefixes (in various forms) are recognized and replaced by a single `>' character (see the re-layout variable). Since articles are sorted according to the subject, two or more adjacent articles may share the same subject (ignoring any `>'s). In this case, only the first article will show the subject of the article; the rest will only show the `>' character in the subject field (or a `-' if there is no `>' at the beginning of the line). A typical menu will thus only show each subject once, saving a lot of time in scanning the news articles.

If consolidated menus (see section below) are enabled, adjacent articles sharing the same subject will be shown with a single line on the menu corresponding to the first of the articles. The number of articles with the same subject will be shown as a bracketed number in front of the subject, e.g. with layout 1:

x Name......... 123 [4] Subject..............

For further information see the section on consolidated menus below.

Related variables: collapse-subject, columns, confirm-entry, confirm-entry-limit, entry-report-limit, fsort, kill, layout, limit, lines, long-menu, re-layout, repeat, slow-mode, sort, sort-mode, split, subject-match-limit, subject-match-offset, subject-match-parts, subject-match-minimum.

ARTICLE ATTRIBUTES
While nn is running and between invocations, nn associates an attribute with each article on your system.
These attributes are used to differentiate between read and unread articles, selected articles, articles marked for later treatment, etc. Depending on how nn is configured, these attributes can be saved between invocations of nn, or some of them may only be used while nn is running. The attribute is shown on the menu using either a single character following the article id or by high-lighting the menu line, depending on the attribute and the capabilities of the terminal. You can also change the attributes to your own taste (see the attributes variable).

The attribute of an article can be changed explicitly using the selection mode commands described below, or it will change automatically for example when you have read or saved a selected article. If a command may change any article attributes, it will be noted in the description of the command. The following descriptions of the attributes will only mention the most important commands that may set (or preserve) the attribute. The following attributes may be associated with an article:

- read - Menu attribute "." - indicates that the article has been read or saved. When you leave the group, these articles will be marked permanently read, and are not presented the next time you enter the group.
- seen - Menu attribute "," - indicates that the article is unread, but that it has been presented on a menu. Depending on how nn is configured, these articles will automatically be marked read when you leave the group, they may remain seen, or they may just be unread the next time you enter the group (see the auto-junk-seen, confirm-junk-seen, and retain-seen-status variables). Only the commands continue (space) and read-skip (X) will mark unread articles on the current (or all) menu pages as seen when they are used. Other commands that scroll through the menu pages or enter reading mode will let unread articles remain unread.
- unread - Menu attribute " " - indicates an unread article.
These articles were unread when you entered the group, and they may remain unread when you leave the group, unless they have been marked seen by the command that you used to leave the group or enter reading mode.

- selected - Menu line high-lighted (or menu attribute "*") - indicates that you have selected the article. If you leave the group, the selected articles will remain selected the next time you enter the group. When you have read a selected article, the attribute will automatically change to read.
- auto-selected - These articles have the same appearance as selected articles on the menu, and the only difference is that these articles have been selected automatically via the auto-selection facility rather than manually by you. Very few commands differentiate between these attributes and if they do, it is explicitly stated in this manual. The main difference is that these articles are only marked as unread when you leave the group (supposing they will also be auto-selected the next time the group is entered). This simplifies the house-keeping between invocations of nn.
- leave - Menu attribute "+" - indicates that the article is marked for later treatment by the leave-article (l) command. These articles may be selected (on demand) when you have read all selected articles in a group. However, if you do not select them then immediately, they are stored as the leave-next attribute described below.
- leave-next - Menu attribute "=" - indicates that the article is marked for later treatment by the leave-next (L) command. This is a permanent attribute, which will remain on the article until you either read the article, change the attribute, or it is expired. So assigning this attribute to an article will effectively keep it unread until you do something. If the variable select-leave-next is set, nn will ask whether these articles should be selected on entry to a group (but naturally, doing so will change the leave-next attribute to select).
- cancelled - Menu attribute "#" - indicates that the article has been cancelled. This is mainly useful when tidying a folder; it is set by the cancel (C) command, and can be cleared by any command that changes attributes, e.g. you can select and deselect the article.
- killed - Menu attribute "!" - indicates that the article has been killed (e.g. by the K {kill-select} command). Killed articles are immediately removed from the menu, so you should not normally see articles with this attribute. If you do, report it as a bug!

The attributes are saved in two files: .newsrc (read articles) and .nn/select (other attributes). Plain unread articles are saved by not occurring in either of these files. Both files are described in more detail later on.

Related variables: attributes, auto-junk-seen, confirm-junk-seen, retain-seen-status, select-leave-next.

SELECTION MODE COMMANDS
The primary purpose of the selection mode is of course to select the articles to be read, but numerous other commands may also be performed in this mode: saving of articles in files, replying and following up on articles, mailing/forwarding articles, shell escapes etc. As described above, the selected articles are marked either by showing the corresponding menu line in standout mode (reverse video), or if the terminal does not have this capability, by placing an asterisk (*) after the selection letter or digit.

Most commands which are used to select articles will work as toggle commands. If the article is not already selected, the command will put the selected attribute on the article(s), independent of the previous attribute. Otherwise, the article(s) will be deselected and marked unread. Consequently, any article can be marked unread simply by selecting and deselecting it. During selection, the cursor will normally be placed on the article following the last article whose attribute was changed (initially the first article).
The article pointed out by the cursor is called the current article, and the following commands work relative to the current article and cursor position.

- abc...z 01..9 {article N} - The article with the given identification letter or digit is selected or deselected. The following article becomes the current article. If the variable auto-select-subject is set, all articles with the same subject as the given article are selected.
- Select or deselect the current article and move the cursor to the next article.
- , {line+1} - Move the cursor to the next article. You can use the down arrow as well.
- / {line-1} - Move cursor to previous article. You can use the up arrow as well.
- * {select-subject} - Select or deselect all articles with same subject as current article. This will work across several menu pages if necessary.
- -x {select-range} - Select or deselect the range of articles between the current article and the article specified by x. For example you can select all articles from e to k by simply typing e-k.

The following commands may change the attributes on all articles on the current menu page, or on all articles on all menu pages.

- @ {select-invert} - Reverse selections. All selected articles on the current page are deselected, and vice-versa. (Use the find command to select all articles.)
- ~ {unselect-all} - Deselect all auto-selected articles in the group (this works across all menu pages). If the command is executed twice, the selected articles will also be deselected.
- + {select-auto} - Perform auto-selections in the group (see the section on "auto kill/select" below).
- = {find} - Prompts for a regular expression, and selects all articles on the menu (all pages) which match the regular expression. Depending on the variable select-on-sender, matching is performed against the subject (default) or the sender of the articles. An empty answer (= return) will reuse the previous expression. Example: The command = .
return will select all articles in the group.

- J {junk-articles} - This is a very versatile command which can be used to set an attribute on a subset of the articles on the menu. The full functionality of the junk-articles command is described in a separate section below.
- L {leave-next} - This is a specialized version of the generic J {junk-articles} command to set the leave-next attribute on a subset of the articles on the menu. It is also described further below.

The following commands move between the pages belonging to the same news group when there are more articles than will fit on a single page. These commands will not change any article attributes.

- > {page+1} - Goto next menu page.
- < {page-1} - Goto previous menu page, or to last menu page if on first menu page.
- $ {page=$} - Goto last menu page.
- ^ {page=1} - Goto first menu page.

The following commands are used to enter reading mode for the selected articles, and to move between news groups (in selection mode). They may change article attributes if noted below.

- space {continue} - Continue to next menu page, or if on last menu page, read the selected articles. If no articles have been selected, continue to the next news group. The unread articles on the current menu page will automatically be marked seen.
- return {continue-no-mark} - Identical to the continue command, except that the unread articles on the current menu page will remain unread. (The newline key has the same effect).
- Z {read-return} - Enter reading mode immediately with the currently selected articles. When all articles have been read, return to selection mode in the current group. It will mark selected articles read as they are read, but unread articles are not normally changed (this can be controlled with the variable marked-by-read-return).
- X {read-skip} - Mark all unmarked articles seen on all menu pages (or the pages defined by the marked-by-read-skip variable), and enter reading mode immediately with the currently selected articles.
As the selected articles are read, they are marked read. When all selected articles have been read, nn will enter selection mode in the next news group. When no articles are selected, it goes directly to the next group. This can be used to skip all the articles in a large news group without having to go through all the menu pages.

If you don't want to read the current group now, but want to keep it for later, you can use the following commands, which will only mark seen and read articles as read. Currently selected articles will still be selected the next time you enter the group. None of these commands will change any attributes themselves (by default).

- N {next-group} - Go forward to the next group in the presentation sequence. If the variable marked-by-next-group is set, articles on the menu can optionally be marked seen.
- P {previous} - Go back to the previous group. This command will enter selection mode on the last active group (two P commands in sequence will bring you to the current group). If there are still some unread articles in the group, only those articles will be shown. Otherwise, all the articles which were unread when nn was invoked will be shown marked with the read attribute (which can be changed as usual).

As described in the "Article Attributes" section, the read and seen articles will normally be marked read when you leave the group, and these articles are not shown the next time you enter the group. In all releases prior to release 6.4, it was impossible to have individual articles in a group marked unread when you left a group, and the default behaviour of release 6.4 onwards will closely match the traditional behaviour. This means that the seen and read articles are treated alike for most practical purposes with the default variable settings.
If you don't like nn to silently mark the seen articles read, you can set the variable confirm-junk-seen to get nn to prompt you for confirmation before doing this, or you can unset the variable auto-junk-seen to simply keep the seen articles for the next time you enter the group. You then have to use the J {junk-articles} command to mark articles read. Using return {continue-no-mark} will also allow you to keep articles unread rather than marking them seen when scrolling through the menu pages and entering reading mode. If this is your preferred reading style, you can remap space to this command. Related variables: auto-junk-seen, auto-preview-mode, auto-select-subject, case-fold-search, confirm-auto-quit, confirm-entry, confirm-junk-seen, marked-by-next-group, marked-by-read-return, marked-by-read-skip, retain-seen-status, select-on-sender.
CONSOLIDATED MENUS
Normally, nn will use one menu line for each article, so if there are many articles with identical subjects, each menu page will only contain a few different subjects. To have each subject occur only once on the menu, nn can operate with consolidated menus by setting the variable consolidated-menu. When consolidated menus are used, nn operates with two kinds of subjects: open and closed. An open subject is a subject which is shown in the traditional way with one menu line for each article with the given subject. In other words, when consolidated menus are not used, all subjects are open (by default). A closed subject is a multi-article subject which is presented by a single menu line. This line will be the normal menu line for the first (oldest) article with the subject, but with the subject field annotated with a bracketed number showing the number of articles with that subject, e.g.
a Kim F. Storm 12 [4] Future plans for nn
b.Kim F. Storm 43 [3] More plans for nn
In this example, there are four unread articles with subject `a' of which the first is posted by me and has 12 lines.
The rest of the articles are hidden, and will only be shown on request. The `.' marker on subject `b' shows that all three articles within that subject have been read (or seen). To select (or deselect) ALL the articles within a closed subject, simply select the article shown on the menu; this will automatically select (or deselect) the rest (see auto-select-closed). When all the unread articles within a closed subject are selected, the menu line will be high-lighted. If you want to view the individual articles in a subject (maybe to select individual articles), you can open the subject with the commands: - (x - Open subject x on menu. - (( - Open current subject. When you have completed viewing the opened subject, you can close it again using the commands: - )x - Close subject x on menu (x is any article with the subject). - )) - Close current subject. In the basic layout of the menu line for a closed subject as shown above, ALL articles in the closed subject are supposed to be in one of the following states: - unread - The menu line is not high-lighted. - selected - Menu line is fully high-lighted (if all UNREAD are selected). - read/seen - There is a `.' (read attribute) following the article id. If none of these cases apply, i.e. there is a mixture of unread, selected, and seen/read articles, the bracketed number will have one of the following formats: - [U:T] - There are U unread articles of T total (U<T). - [S/T] - There are S selected articles of T total (S<U=T). - [S/U:T] - There are S selected of U unread of T total (S<U<T). If there are any selected articles (S>0), the information between the brackets will be high-lighted (to show that something is selected, but not all the unread articles). Notice: Consolidated menus only work with the `subject' and `lexical' sorting methods. Variables related to consolidated menus are: auto-select-closed, consolidated-menu, counter-delim-left, counter-delim-right, counter-padding, save-closed-mode.
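The bracketed-counter rules above can be sketched in a few lines of Python. This is a hedged illustration only: the function name is hypothetical, and the handling of the plain cases (all unread, all read, or all unread selected) is an assumption based on the description, not taken from the nn source.

```python
def closed_counter(total, unread, selected):
    """Sketch of the bracketed counter shown for a closed subject.

    Plain case: [T] when the subject is uniformly unread, read/seen,
    or has all its unread articles selected (assumption).
    Mixed cases, per the manual:
      [U:T]   U unread of T total (U < T)
      [S/T]   S selected of T total (S < U == T)
      [S/U:T] S selected of U unread of T total (S < U < T)
    """
    if selected in (0, unread) and unread in (0, total):
        return f"[{total}]"
    if selected == 0:                        # [U:T]
        return f"[{unread}:{total}]"
    if unread == total:                      # [S/T]
        return f"[{selected}/{total}]"
    return f"[{selected}/{unread}:{total}]"  # [S/U:T]
```

For the `[4]` subject in the example above, reading one article and selecting one of the remaining three would change the counter to `[1/3:4]` under this reading of the rules.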
THE JUNK-ARTICLES AND LEAVE-NEXT COMMANDS
The J {junk-articles} command is a very flexible command which offers the following submenus: - Mark Read - This submenu allows you to mark articles read. - Unmark - This submenu allows you to mark articles unread. - Select - This submenu allows you to select articles based on their attribute. - Kill - This submenu allows you to mark articles read and remove them from the menu based on their attribute. The L {leave-next} command is an extension of the J command with a fifth menu: - Leave - This menu allows you to mark articles for later handling with the leave-next attribute which will keep the article unread until you explicitly change the attribute (e.g. by reading it) or it is expired. For each of these submenus, nn will list the most plausible choices you may use, but all of the following answers can be used at all submenus. When you have entered a choice, nn will afterward ask whether the change should be made to all menu pages or only the current page. - J - Show next submenu. - L - Change attribute on all leave articles. - N - Change attribute on all leave-next articles. - R - Change attribute on all read articles. - S - Change attribute on all seen articles. - U - Change attribute on all unmarked (i.e. unread) articles. - A - Change attribute on all articles no matter their current attribute. - * - Change attribute on all selected articles on the current page. - + - Change attribute on all selected articles on all pages. - a-z0-9 - Change attribute on one or more specific articles on the current page. You end the list of articles by a space or by using one of the other choices described above. - . - Change attribute on current article. - , / - Move the current article down or up the menu without changing any attributes.
READING MODE COMMANDS
In reading mode, the selected articles are presented one page at a time.
To get the next page of an article, simply hit space, and when you are on the last page of an article, hit space to get to the next selected article. Articles are normally marked read when you go to the next article, while going back to the menu, quitting nn, etc. will retain the attribute on the current article. When you are on the last page of the last article, hit space to enter selection mode on the next group (or the current group if reading mode was entered using the Z command). To read an article, the following text scrolling commands are available: - space {continue} - Scroll one page forward or continue with the next article or group as described above. - backspace / delete {page-1} - Go one page backwards in article. - d {page+1/2} - Scroll one half page forward. - u {page-1/2} - Go one half page backwards. - return {line+1} - Scroll one line forward in the article. - tab {skip-lines} - Skip over lines starting with the same character as the last line on the current page. This is useful to skip over included text or to the next file in a shell archive. - ^ {page=1} - Move to the first page (excluding the header) of the article. - $ {page=$} - Move to the last page of the article. - gN - Move to line N in the article. - /regexp {find} - Search forward for the regular expression regexp in the article. If a matching text is found, it will be high-lighted. - . {find-next} - Repeat search for last regular expression. - h {page=0} - Show the header of the article, and continue from the top of the article. - H {full-digest} - If the current article is extracted from a digest, show the entire digest article including its header. Another H command will return to the current subarticle. - D {rot13} - Turn rot13 (caesar) decryption on and off for the current article, and redraw current page. If the article is saved while it is decrypted on the screen, it will be saved in decrypted form as well! - c {compress} - Turn compression on and off for the current article and redraw current page.
With compression turned on, multiple spaces and tabs are shown as a single space. This makes it much easier to read right justified text which separates words with several spaces. (See also the compress variable.) The following commands are used to move among the selected articles. - n {next-article} - Move to next selected article. This command skips the rest of the current article, marks it read, and jumps directly to the first page of the next selected article (or to the next group if it was the last selected article). - l {leave-article} - Mark the current article with the leave attribute and continue with the next selected article. When all the selected articles in the current group have been read, these left over articles can be automatically selected and shown once more, or the treatment can be postponed to the next time you enter the group. This is particularly useful if you see an article which you may want to respond to unless one of the following articles is already saying what you intended to say. - L {leave-next} - Mark the current article with the leave-next attribute and continue with the next selected article. - p {previous} - Goto previous article. - k {next-subject} - Kill subject. Skips rest of current article, and all following articles with the same subject. The skipped articles are marked read. To kill a subject permanently use the K command. - * {select-subject} - Show next article with same subject (even if it is not selected). This command will select all following articles with the same subject as the current article (similar to the `*' command in selection mode). This can be used to select only the first article on a subject in selection mode, and then select all follow-ups in reading mode if you find the article interesting. - a {advance-article} - Goto the following article on the menu even if it is not selected.
This command skips the rest of the current article and jumps directly to the first page of the next article (it will not skip to the next group if it is the last article). The attribute on the current article will be restored, except for the unread attribute which will be changed to seen. - b {back-article} - Goto the article before current article on the menu even if it is not selected. This is similar to the a command, except for the direction. The following commands perform an immediate return from reading mode to selection mode in the current group or skip to the next group. - = {goto-menu} - Return to selection mode in the current group (think of = as the "icon" of the selection menu). The articles read so far will be marked read. - N {next-group} - Skip the rest of the selected and unread articles in the current group and go directly to the next group. Only the read (and seen) articles in the current group are marked as read. - X {read-skip} - Mark all articles in the current group as read and go directly to the next group. (You will be asked to confirm this command.) Related variables: case-fold-search, charset, compress, data-bits, date, header-lines, mark-overlap, monitor, overlap, scroll-clear-page, stop, trusted-escape-codes, wrap-header-margin.
PREVIEWING ARTICLES IN SELECTION MODE
In selection mode, it is possible to read a specific article on the menu without entering reading mode for all the selected articles on the menu. Using the commands described below will enter reading mode for one article only, and then return to the menu mode immediately after (depending on the setting of the preview-continuation variable). If there are more than 5 free lines at the bottom of the menu screen, nn will use that space to show the article (a minimal preview window can be permanently allocated with the window variable). Otherwise, the screen will be cleared to show the article.
After previewing an article, it will be marked read (if the preview-mark-read variable is set), and the following article will become the current article. - %x {preview} - Preview article x. - %% {preview} - Preview the current article. When the article is being shown, the following reading mode commands are very useful: - = {goto-menu} - Skip the rest of the current article, and return to menu mode. - n {next-article} - Skip the rest of the current article, and preview the next article. - l {leave-article} - Mark the article as selected (!) on the menu for handling later on. Then skip the rest of the current article, and preview the next article. - %y {preview} - Preview article y. If the variable auto-preview-mode is set, just hitting the article id in menu mode will enter preview mode on the specified article. Related variables: auto-preview-mode, min-window, preview-continuation, preview-mark-read, window.
SAVING ARTICLES
The following commands are used to save articles in files, unpack archives, decode binaries, etc. It is possible to use the commands in both reading mode to save the current article and in selection mode to save one or more articles on the menu. The saved articles will be appended to the specified file(s) followed by an empty line each. Both files and directories will be created as needed. When an article has been saved in a file, a message reporting the number of lines saved will be shown if the save-report variable is set (default on). - S {save-full} - Save articles including the full article header. - O {save-short} - Save articles with a short header containing only the name of the sender, the subject, and the posting date of the article. - E {save-header} - Save only the header of the articles. - W {save-body} - Write article without a header. - :print {print} - Print article. Instead of a file name, this command will prompt for the print command to which the current article will be piped.
The default print command is specified at compile time, but it can be changed by setting the printer variable. The output will be identical to that of the O command. - :patch {patch} - Send articles through patch(1) (or the program defined in the patch-command variable). Instead of a file name, you will be prompted for the name of a directory in which you want the patch command to be executed. nn will then pipe the body of the article through the patch command. The output from the patch process will be shown on the screen and also appended to a file named Patch.Result in the patch directory. - :unshar {unshar} - Unshar articles. You will be prompted for the name of a directory in which you want nn to unshar the articles. nn will then pipe the proper parts of the article body into a Bourne Shell whose working directory will be set to the specified directory. During the unpacking, the normal output from the unshar process will appear on the screen, and the menu or article text will be redrawn when the process is finished. The output is also appended to a file named Unshar.Result in the unshar directory. The file specified in unshar-header-file (default "Unshar.Headers") in the unshar directory will contain the header and initial text (before the shar data) from the article. You can use the `G' {goto-group} command to look at the Unshar.Headers file. - :decode {decode} - Decode uuencoded articles into binary files. You will be prompted for the name of a directory in which you want nn to place the decoded binary files (the file names are taken from the uuencoded data). nn will combine several articles into single files as needed, and you can even decode unrelated packages (into the same directory) with one decode command. To be able to decode a binary file which spans several articles, nn may have to ignore lines which fail the normal sanity checks on uuencoded data instead of treating them as transmission errors. 
Consequently, it is strongly recommended to check the resulting decoded file using the checksum which is normally contained in the original article. (Actually, you are also supposed to do this after decoding with a stand-alone uudecode program). The header and initial information in the decoded articles are saved in the file specified in decode-header-file (default "Decode.Headers") in the same directory as the decoded files. If decode-skip-prefix is non-null, :decode will attempt to ignore up to that many characters on each line to find the encoded data. This is particularly useful in some binaries groups where files are both uuencoded and packed with shar; nn will ignore the prefix added to each line by shar, and thus be able to unshar, concatenate, and decode multi-part postings automatically. In reading mode, the following keys can also be used to invoke the save commands: - s - Same as S. - o - Same as O. - w - Same as W. - P - Same as :print. The save commands will prompt for a file name which is expanded according to the rules described in the section on file name expansion below. For each group, it is possible to specify a default save file in the init file, either in connection with the group presentation sequence or in a separate save-files section (see below). If a default save file is specified for the group, nn will show this on the prompt line when it prompts for the file name. You can edit this name as usual, but if you kill the entire name immediately, nn will replace the default name with the last file name you entered. If you kill this as well, nn will leave you with a blank line. If the quick-save variable is set, nn will only prompt for a save file name when the current article is inside a folder; otherwise, the default save file defined in the init file will be used unconditionally. If the file (and directories in the path) does not exist, nn will ask whether the file (and the directories) should be created. 
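The decode-skip-prefix search described above can be illustrated with a hedged Python sketch. The helper name and the exact validity test are assumptions (not taken from the nn source); the sketch relies only on the uuencode format fact that a data line starts with a character encoding a byte count of 1-45, followed by four encoded characters per three bytes. Terminator lines (count 0) are deliberately not matched here.

```python
def find_uu_data(line, skip_prefix=0):
    """Try ignoring up to skip_prefix leading characters until the rest
    of the line looks like a non-empty uuencoded data line. Returns the
    offset where the data starts, or None if no offset works."""
    for off in range(skip_prefix + 1):
        rest = line[off:].rstrip("\n")
        if not rest:
            continue
        n = ord(rest[0]) - 32              # byte count encoded in first char
        if 1 <= n <= 45 and len(rest) - 1 >= (n + 2) // 3 * 4:
            return off
    return None
```

With a shar wrapper that prefixes each line with "X ", setting the skip limit to 2 lets the search step past the prefix and find the `M` (45-byte) data line behind it.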
If the file name contains an asterisk, e.g. part*.shar, nn will save each of the articles in uniquely named files constructed by replacing the asterisk by numbers from the sequence 1, 2, 3, etc. The format of the string that replaces the * can be changed with the save-counter variable, and the first number to use can be changed via save-counter-offset. In selection mode, nn will prompt you for the identifier of one or more articles you want to save. When you don't want to save more articles, just hit space. The saved articles will be marked read. If you enter an asterisk `*' when you are prompted for an article to save, nn will automatically save all the selected articles on the current menu page and mark them read. Likewise, if you enter a plus `+', nn will save all the selected articles on all menu pages and mark them read. This is very useful to unpack an entire package using the :unshar and :decode commands. It can also be used in combination with the save selected articles feature to save a selection of articles in separate, successively numbered files. But do not confuse these two concepts! The S* and S+ commands can be used to save the selected articles in a single file as well as in separate files, and the save in separate files feature can also be used when saving individual articles, either in selection mode or in article reading mode. When articles are saved in a file with a full or partial header, any header lines in the body of the article will be escaped by a tilde (e.g. ~From: ...) to enable nn to split the folder into separate articles. The escape string can be redefined via the embedded-header-escape variable. Articles can optionally be saved in MAIL or MMDF compatible format by setting the mail-format and mmdf-format variables. These variables only specify the format used when creating a new folder, while appending to an existing folder will be done in the format of the folder (unless folder-format-check is false).
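The "replace the asterisk with the next free number" behaviour can be sketched as follows. This is a hedged illustration: unique_save_name is a hypothetical helper, and in nn itself the format of the substituted number is governed by save-counter while the starting number comes from save-counter-offset.

```python
import os

def unique_save_name(pattern, directory=".", start=1):
    """Replace the '*' in pattern with the first number, counting from
    start, that yields a file name not already present in directory."""
    n = start
    while True:
        name = pattern.replace("*", str(n), 1)
        if not os.path.exists(os.path.join(directory, name)):
            return name
        n += 1
```

So if part1.shar already exists in the target directory, saving with the pattern part*.shar would produce part2.shar, then part3.shar, and so on.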
Related variables: confirm-append, confirm-create, decode-header-file, decode-skip-prefix, default-save-file, folder-save-file, edit-patch-command, edit-print-command, edit-unshar-command, folder, folder-format-check, mail-format, mmdf-format, patch-command, printer, quick-save, save-counter, save-counter-offset, save-report, suggest-default-save, unshar-command, unshar-header-file.
FOLDER MAINTENANCE
When more than one article is saved in a folder, nn is able to split the folder, and each article in the folder can be treated like a separate article. This means that you can save, decode, reply, follow-up, etc. just as with the original article. You can also cancel (delete) individual articles in a folder using the normal C {cancel} command described later. When you quit from the folder, you will then be given the option to remove the cancelled articles from the folder. The original folder is saved in a file named `BackupFolder~' in the .nn directory (see the backup-folder-path variable) by renaming or copying the old folder as appropriate. When the folder has been compressed, the backup folder will be removed unless the variable keep-backup-folder is set. If all articles in a folder are cancelled, the folder will be removed or truncated to zero length (whatever is allowed by directory and file permissions). In this case no backup folder is retained even when keep-backup-folder is set! If the variable trace-folder-packing is set, nn will show which articles are kept and which are removed as the folder is rewritten. Folders are rewritten in the format of the original folder, i.e. the mail-format and mmdf-format variables are ignored. Related variables: backup-folder-path, keep-backup-folder, trace-folder-packing.
FILE NAME EXPANSION
When the save commands prompt for a file name, the following file name expansions are performed on the file name you enter: - +folder - The + is replaced by the contents of the folder variable (default value "~/News/") resulting in the name of a file in the folder directory. Examples: +emacs, +nn, +sources/shar/nn - + - A single plus is replaced by the expansion of the file name contained in the default-save-file variable (or by folder-save-file when saving from a folder). - ~/file - The ~ is replaced by the contents of the environment variable HOME, i.e. the path name of your home directory. Examples: ~/News/emacs, ~/News/nn, ~/src/shar/nn - ~user/file - The ~user part is replaced by the user's home directory as defined in the /etc/passwd file. - |command-line - Instead of writing to a file, the articles are piped to the given shell (/bin/sh) command-line. Each save or write command will create a separate pipe, but all articles saved or written in one command (in selection mode) are given as input to the same shell command. Example: | pr | lp. This will print the articles on the printer after they have been piped through pr. It is possible to create separate pipes for each saved article by using a double pipe symbol in the beginning of the command, e.g. || cd ~/src/nn ; patch The following symbols are expanded in a file name or command: - $F - will be expanded to the name of the current group with the periods replaced by slashes, e.g. rec/music/synth. - $G - will be expanded to the name of the current group. - $L - will be expanded to the last component of the name of the current group. You may use this to create default save file names like +src/$L in the comp.sources groups. - $N - will be expanded to the (local) article number, e.g. 1099. In selection mode it is only allowed at the end of the file name! - $(VAR) - is replaced by the string value of the environment variable VAR.
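The expansion rules above can be sketched in Python. This is a hedged illustration of the described behaviour, not nn's implementation: the function name and parameters are hypothetical, and the $(VAR) form is only approximated here by the $VAR/${VAR} syntax that os.path.expandvars understands.

```python
import os

def expand_save_name(name, group, artnum, folder="~/News/"):
    """Sketch of nn's save file name expansion; group is the current
    news group name and artnum its local article number."""
    if name.startswith("+"):                       # +file -> folder/file
        name = folder + name[1:]
    name = (name.replace("$F", group.replace(".", "/"))
                .replace("$G", group)
                .replace("$L", group.rsplit(".", 1)[-1])
                .replace("$N", str(artnum)))
    return os.path.expandvars(os.path.expanduser(name))
```

For example, with group rec.music.synth and article 1099, the pattern +$F/$N would expand to ~/News/rec/music/synth/1099, and +src/$L in comp.sources.unix would expand to ~/News/src/unix.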
Using these symbols, a simple naming scheme for `default folder name' is +$G which will use the group name as folder name. Another possibility is +$F/$N. As mentioned above, you can also instruct nn to save a series of files in separate, unique files. All that is required is that the file name contains an asterisk, e.g. +src/hype/part*.shar. This will cause each of the articles to be saved in separate, unique files named part1.shar, part2.shar, and so on, always choosing a part number that results in a unique file name (i.e. if part1.shar already existed, the first article would be saved in part2.shar, the next in part3.shar, and so on). Related variables: default-save-file, folder, folder-save-file, save-counter, save-counter-offset.
FILE AND GROUP NAME COMPLETION
When entering a file name or a news group name, a simple completion feature is available using the space, tab, and ? keys. Hitting space anywhere during input will complete the current component of the file name or group name with the first available possibility. If this possibility is not the one you want, keep on hitting space until it appears. When the right completion has appeared, you can just continue typing the file or group name, or you can hit tab to fix the current component, and get the first possibility for the next component, and then use space to go through the other possible completions. The ? key will produce a list of the possible completions of the current component. If the list is too long for the available space on screen, the key can be repeated to get the next part of the list. The current completion can be deleted with the erase key. The default value for a file name is the last file name you have entered, so if you enter a space as the first character after the prompt, the last file name will be repeated (and you can edit it if you like). In some cases, a string will already be written for you in the prompt line, and to get the default value in these cases, use the kill key.
This also means that if you neither want the initial value, nor the default value, you will have to hit the kill key twice to get a clean prompt line. Related variables: comp1-key, comp2-key, help-key, suggest-default-save.
POSTING AND RESPONDING TO ARTICLES
In both selection mode and reading mode you can post new articles, post follow-ups to articles, send replies to the author of an article, and you can send mail to another user with the option of including an article in the letter. In reading mode, a response is made to the current article, while in selection mode you will be prompted for an article to respond to. The following commands are available (the lower-case equivalents are also available in reading mode): - R {reply} - Reply through mail to the author of the article. This is the preferred way to respond to an article unless you think your reply is of general interest. - F {follow} - Follow-up with an article in the same newsgroup (unless an alternative group is specified in the article header). The distribution of the follow-up is normally the same as the original article, but this can be modified via the follow-distribution variable. - M {mail} - Mail a letter or forward an article to a single recipient. In selection mode, you will be prompted for an article to include in your letter, and in reading mode you will be asked if the current article should be included in the letter. You will then be prompted for the recipient of the letter (default recipient is yourself) and the subject of the letter (if an article is included, you may hit space to get the default subject which is the subject of the included article). The header of the article is only included in the posted letter if it is forwarded (i.e. not edited), or if the variable include-full-header is set. - :post {post} - Post a new article to any newsgroup.
This command will prompt you for a comma-separated list of newsgroups to post to (you cannot enter a space because space is used for group name completion as described below). If you enter ? {help-key} as the first key, nn will show you a list of all available news groups and their purpose. While paging through this list, you can enter q to quit looking at the list. You can also enter / followed by a regular expression (typically a single word) which will cause nn to show a (much shorter) list containing only the lines matching the regular expression. Normally, you will be prompted for the distribution of the article with the default taken from default-distribution, but this can be changed via the post-distribution variable. Generally, nn will construct a file with a suitable header, optionally include a copy of the article in the file with each non-empty line prefixed by a `>' character (except in mail mode), and invoke an editor of your choice (using the EDITOR environment variable) on this file, positioning you on the first line of the body of the article (if it knows the editor). When you have completed editing the message, it will compare it to the unedited file, and if they are identical (i.e. you did not make any changes to the file), or it is empty, the operation is cancelled.
Otherwise you will be prompted for an action to take on the constructed article (enter the first letter followed by return, or just return to take the default action):
a)bort c)c e)dit h)old i)spell m)ail p)ost r)eedit s)end v)iew w)rite 7)bit Action: (post article)
You now have the opportunity to perform one of the following actions: - a - throw the response away (will ask for confirmation). - c - mail a copy of a follow-up to the poster of the article. - e - edit the file again. - h - hold the response for later completion. - i - run an (interactive) spell-checker on the text. - m - mail a (blind) copy to a specified recipient. - n - same as abort (no, don't post). - p - post the article (same as send). - r - throw away the edited text and edit the original text. - s - send the article or letter. - v - view the article (through the pager). - w - append it to a file (before you send it). - y - confirm the default answer (e.g. yes, post it). - 7 - strip the high-order bit from all characters in the message. If you have selected a 7-bit character set (this is determined by the values of the charset and data-bits variables), nn will not allow you to post an article or send a letter whose body contains characters with the high-order bit set. It will warn you after you have first edited the message and disable the c)c, m)ail, p)ost, s)end and y)es actions. You can then either e)dit the message to delete those characters, use 7)bit to strip the high-order bits, a)bort the message, or h)old it and select an 8-bit character set from nn. To complete an unfinished response saved by the h)old command, simply enter any response action, e.g. R {reply}. This will notice the unfinished response and ask you whether you want to complete it now. Only one unfinished response can exist at a time. Notice that the $A environment variable may no longer be valid as a path to the original article when the response is completed.
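The effect of the 7)bit action is simple to state precisely: every byte of the message body has its high-order bit cleared. A one-function Python sketch (the function name is hypothetical):

```python
def strip_to_7bit(body: bytes) -> bytes:
    """Clear the high-order bit of every byte, as the 7)bit action does."""
    return bytes(b & 0x7F for b in body)
```

Note that this is a lossy transformation: an 8-bit character such as 0xE9 (é in ISO 8859-1) becomes 0x69 (i), which is why nn offers e)dit and h)old as alternatives.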
If your message contains 8-bit characters, the charset variable is not set to "unknown" and the message does not already have a MIME-Version or Content-XXX header, nn will add the following headers to your message before sending it: MIME-Version: 1.0 Content-Type: text/plain; charset=charset Content-Transfer-Encoding: 8bit It must be noted that sending 8-bit characters over the current news and mail networks is risky at best; although large parts of the network will pass through such characters unchanged, high-order bits may occasionally be stripped. Although the MIME standard provides solutions for this by encoding the characters, this is not yet supported by nn. Adding the above headers is an interim solution that is compatible with current practice and is much better than just sending the message without any hints about the character set used. Related variables: append-signature-mail, append-signature-post, charset, data-bits, default-distribution, follow-distribution, post-distribution, edit-response-check, editor, include-art-id, include-full-header, included-mark, mail-header, mail-record, mail-script, mailer, mailer-pipe-input, news-header, news-record, news-script, orig-to-include-mask, pager, query-signature, record, response-check-pause, response-default-answer, save-counter, save-counter-offset, save-report, spell-checker.
JUMPING TO OTHER GROUPS
By default nn will present the news groups in a predefined sequence (see the section on Presentation Sequence later on). To override this sequence and have a look at any other group, the G {goto-group} command, available in both selection and reading mode, enables you to move freely between all the newsgroups. Furthermore, the G command enables you to open folders and other files, to read old articles you have read before, and to grep for a specific subject in a group. It is important to notice that normally the goto command is recursive, i.e.
a new menu level is created when the specified group or folder is presented, and when it has been read, nn will continue the activity in the group that was presented before the goto command was executed. However, if there are unread articles in the target group, you can avoid entering a new menu level by using the j reply described below. The current menu level (i.e. the number of nested goto commands) will be shown in the prompt line as "<N>" (in reverse video). The goto command is very powerful, but unfortunately also a little bit tricky at first sight, because the facilities it provides depend on the context in which the command is used.

When executed, the goto command will prompt you for the name of the newsgroup, folder, or file to open. It will use the first letter you enter to distinguish these possibilities:
- return - An empty answer is equivalent to the current newsgroup.
- letter - The answer is taken to be the name of a newsgroup. If a news group with the given name does not exist, nn will treat the answer as a regular expression and locate the first group in the presentation sequence (or among all groups) whose name matches the expression.
- + - The answer is taken to be the name of a folder. If only `+' is entered, it is equivalent to the default save file for the current group.
- / or ./ or ~/ - The answer is taken to be the name of a file, either relative to the current directory, relative to your home directory, or an absolute path name for the file.
- % - In reading mode, this reply corresponds to reading the current article (and splitting it as a digest). In selection mode, it will prompt for an article on the menu to read.
- @ - This choice is equivalent to the archive file for the current group.
- = and number - These answers are equivalent to the same answers described below applied to the current group (e.g. G return = and G = are equivalent).
Specifying a folder, a file, or an article (with %) will cause nn to treat the file like a digest and split it into separate articles (not physically!) which are then presented on a menu in the usual way, allowing you to read or save individual subarticles from the folder. When you enter a group name, nn will ask you how many articles in the group you want to see on the menu. You can give the following answers:
- a number N - In this case you will get the newest N articles in the group, or if you specified the current group (by hitting return to the group name prompt or entering the number directly), you will get that many extra articles included on the same menu (without creating a new menu level).
- j - This answer can only be given if there are unread articles in the group. It will instruct nn to jump directly to the specified group in the presentation sequence without creating a new menu level.
- u - This instructs nn to present the unread articles in the group (if there are any). If you have already read the group (in the current invocation of nn), the u answer will instruct nn to present the articles that were unread when you entered nn.
- a - This instructs nn to present all articles in the group.
- sword or =word - This instructs nn to search all articles in the group, but only present the articles containing the word word in the subject. Notice that case is ignored when searching for the word in the subject lines.
- nword - Same as the s form except that it searches for articles where the sender name matches word.
- eword - Same as the s form except that it searches for articles where either the subject or the sender name matches word.
- /regexp - When the first character of the word specified with the s, n, and e forms is a slash `/', the rest of the input is interpreted as a regular expression to search for. Notice that regular expression matching is case insensitive when case-fold-search is set (default).
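The s, n, and e search forms above can be summarised in a small sketch (hypothetical Python, not nn's implementation; the article field names "subject" and "from" are assumptions made for the example):

```python
import re

def goto_search_match(article, form, word):
    """Sketch of the s/n/e search forms of the goto command.

    form 's' matches on the subject, 'n' on the sender name, and 'e'
    on either.  A word starting with a slash is treated as a regular
    expression; matching ignores case, as when case-fold-search is set.
    """
    fields = {"s": [article["subject"]],
              "n": [article["from"]],
              "e": [article["subject"], article["from"]]}[form]
    if word.startswith("/"):
        pattern = re.compile(word[1:], re.IGNORECASE)
        return any(pattern.search(f) for f in fields)
    # Plain words match as case-insensitive substrings.
    return any(word.lower() in f.lower() for f in fields)
```

For example, `G rec.music.misc return = floyd` would match every article whose subject contains "floyd" in any case.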
- return - The meaning of an empty answer depends on the context: if there are unread articles in the specified group, the unread articles will be presented; otherwise all articles in the group will be included in the menu. If you specified the current group, and the menu already contains all the available articles, nn will directly prompt for a word to search for in the subject of all articles (the prompt will be an equal sign.)
When the goto command creates a new menu level, nn will not perform auto kill or selection in the group. You can use the + command in menu mode to perform the auto-selections. There are three commands in the goto family, plus two commands that are used while an A or B command is in progress:
- G {goto-group} - This is the general goto command described above.
- B {back-group} - Backup one or more groups. You can hit this key one or more times to go back in the groups already presented (including those without new articles); when you have found the group you are looking for, hit space to enter it.
- A {advance-group} - Advance one or more groups. This command is similar to the B command, but operates in the opposite direction.
- N {next-group} - When used within an A or B command, it skips forward to the next group in the sequence with unread articles or which has previously been visited.
- P {previous} - When used within an A or B command, it skips backwards to the preceding group in the sequence with unread articles or which has previously been visited.
Once you have entered an A or B command, you can freely mix the A, B, P, and N commands to find the group you want, and you can also use the G command to be prompted for a group name.
To show the use of the goto command, some typical examples of its use are given below:
Present the unread articles in the dk.general group:
G dk.general return u
Jump directly to the gnu.emacs group and continue from there:
G gnu.emacs return j
Include the last 10 READ articles in the current group menu:
G 10 return
Find all articles in rec.music.misc on the subject Floyd:
G rec.music.misc return = floyd return
Open the folder +nn:
G +nn return
Split the current article as a digest (in reading mode):
G %
Related variables: case-fold-search, default-save-file, folder-save-file

AUTOMATIC KILL AND SELECTION

When there is a subject or an author which you are either very interested in, or find completely uninteresting, you can easily instruct nn to auto-select or auto-kill articles with specific subjects or from specific authors. These instructions are stored in a kill file, and the most common types of entries can be created using the following command:
- K {kill-select} - Create an entry in your personal kill file. The contents of the entry are specified during a short dialog that is described in detail below. This command is available in both selection and reading mode.
Entries in the kill file may apply to a single newsgroup or to all newsgroups. Furthermore, entries may be permanent or they may be expired a given number of days after their entry. To increase performance, nn uses a compiled version of the kill file which is read in when nn is invoked. The compiled kill file will automatically be updated if the normal kill file has been modified. The following dialog is used to build the kill file entry:
- AUTO (k)ill or (s)elect (CR => Kill subject 30 days) - If you simply want nn to kill all articles with the subject of the current article (in reading mode) or a specific article (which nn will prompt for in selection mode), just hit return.
This will cause nn to create an entry in the kill file to kill the current (or specified) subject in the current group for a period of 30 days (which should be enough for the discussion to die out). You can control the default kill period, or change it into a "select" period via the default-kill-select variable. If this "default behaviour" is not what you want, just answer either k or s to kill or select articles, respectively, which will bring you on to the rest of the questions. - AUTO SELECT on (s)ubject or (n)ame (s) - (The SELECT will be substituted with KILL depending on the previous answer). Here you specify whether you want the kill or select to depend on the subject of the article (s or space), or on the name of the author (n). - SELECT NAME: - (Again SELECT may be substituted with KILL and SUBJECT may replace NAME). You must now enter a name (or subject) to select (or kill). In reading mode, you may just hit return (or %) to use the name (or subject) of the current article. In selection mode, you can use the name (or subject) from an article on the menu by answering with % followed by the corresponding article identifier. When the name or subject is taken from an article (the current or one from the menu), nn will only select or kill articles where the name or subject matches the original name or subject exactly including case. If the first character typed at the prompt is a slash `/', the rest of the line is used as a regular expression which is used to match the name or subject (case insensitive). Otherwise, nn will select or kill articles which contain the specified string anywhere in the name or subject (ignoring case). - SELECT in (g)roup `dk.general' or in (a)ll groups (g) - You must now specify whether the selection or kill should apply to the current group only (g or space) or to all groups (a). 
- Lifetime of entry in days (p)ermanent (30) - You can now specify the lifetime of the entry, either by entering a number specifying the number of days the entry should be active, or p to specify the entry as a permanent entry. An empty reply is equivalent to 30 days.
- CONFIRM SELECT .... - Finally, you will be asked to confirm the entry, and you should especially note the presence or absence of the word exact which specifies whether an exact match applies for the entry.
Related variables: default-kill-select, kill.

THE FORMAT OF THE KILL FILE

The kill file consists of one line for each entry. Empty lines and lines starting with a # character are ignored. nn automatically places a # character in the first position of expired entries when it compiles the kill file. You can then edit the kill file manually from time to time to clean out these entries. Each line has the following format:
[expire time :] [group name] : flags : string [: string]...
Permanent entries have no expire time (in which case the colon is omitted as well!). Otherwise, the expire time defines the time (as a time_t value) when the entry should be expired. The group name field can have three forms:
- news.group.name - If it is the name of a single news group (e.g. comp.unix), the entry applies to that group only.
- /regular expression - If it starts with a slash `/' followed by a regular expression (e.g. /^news\..*), the entry applies to all groups whose names are matched by the regular expression.
- empty - An empty group field will apply the entry to all groups.
The flags field consists of a list of characters which identifies the type of entry, and the interpretation of each string field. When used, the flag characters must appear in the order in which they are described below:
- ~ (optional) When this flag is present on any of the entries for a specific group, it causes all entries which are not auto-selected to be killed.
This is a simple way to say: I'm interested in this and that, but nothing else.
- + or ! (optional) Specify an auto-select + or an auto-kill ! entry, respectively. If neither is used, the article is neither selected nor killed, which is useful in combination with the `~' flag.
- > (optional) - When used with a subject (flag s), the kill entry only matches follow-ups to that subject (i.e. where the Subject: line starts with Re:). For example, to kill all "Re:"'s in rec.humor use the following kill entry: rec.humor:!>s/:.
- < (optional) - When used with a subject (flag s), the kill entry only matches base articles with that subject (i.e. where the Subject: line does not start with Re:). For example, to kill all articles asking for help (but not follow-ups) in the tex group, add this to your kill file: comp.text.tex:!<s/:^HELP
- n or s or a (mandatory) Specify whether the corresponding string applies to the name n or to the subject s of an article. If flag a is used, the corresponding string is ignored (but must be present), and the entry applies to articles with a non-empty References: line.
- / (optional) Specifies that the corresponding string is a regular expression which the sender or subject is matched against. If not specified, a simple string match is performed using the given string.
- = (optional) Specifies that the match against the name or subject is case sensitive. Furthermore, when regular expression matching is not used, the name or subject must be of the same length as the string to match. Otherwise, the match will be case insensitive, and a string may occur anywhere in the name or subject to match.
- | or & (mandatory if multiple strings) If more than one string is specified, the set of flags corresponding to each string must be separated by either an or operator `|' or an and operator `&'. The and operator has a higher precedence than the or operator, e.g. a complex match expression a|b&c|d will succeed if either of a, b&c, or d matches.
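The precedence rule for `|' and `&' can be made concrete with a sketch (hypothetical Python, not nn's compiled-kill-file code): the expression is split into and-groups at each `|', and the entry matches if any and-group is fully satisfied.

```python
def kill_expression_matches(terms, ops):
    """Evaluate a multi-string kill entry such as a|b&c|d.

    terms -- one boolean per string, whether that string matched
    ops   -- the operators ('|' or '&') between consecutive strings
    The and operator binds tighter than the or operator, so a|b&c|d
    succeeds if a, (b and c), or d matches.
    """
    groups, current = [], [terms[0]]
    for op, term in zip(ops, terms[1:]):
        if op == "&":
            current.append(term)      # extend the current and-group
        else:
            groups.append(current)    # '|' starts a new and-group
            current = [term]
    groups.append(current)
    return any(all(group) for group in groups)
```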
The string field in the entry is the name, subject, or regular expression that will be matched against the name or subject of each article in the group (or all groups). Colons and backslashes must be escaped with a backslash in the string.
Example 1: Auto-select articles from `Tom Collins' (exact) on subject `News' in all groups:
:+n=&s:Tom Collins:News
Example 2: Kill all articles which are neither from `Tom' nor `Eve' in some.group. Select only articles from Eve:
some.group:~n:Tom
some.group:+n:Eve
The second example can also be written as a single entry with an or operator (in this case, the select/kill attribute only applies to the succeeding strings):
some.group:~n|+n:Tom:Eve
To remove expired entries, to "undo" a K command, and to make the more advanced entries with more than one string, you will have to edit the kill file manually. To recompile the file, you can use the :compile command. When you invoke nn, it will also recompile the kill file if the compiled version is out of date.

SHELL ESCAPES

The ! commands available in selection and reading mode are identical in operation (with one exception). When you enter the shell escape command, you will be prompted for a shell command. This command will be fed to the shell specified in the shell variable (default loaded from the SHELL environment variable or /bin/sh) after the following substitutions have been performed on the command:
- File name expansion - The earlier described file name expansions will be performed on all arguments.
- $G - will be substituted with the name of the current news group.
- $L - will be substituted with the last component of the name of the current news group.
- $F - will be substituted with the name of the current news group with the periods replaced by slashes.
- $N - will be substituted with the (local) article number (only defined in reading mode).
- $A - is replaced by the full path name of the file containing the current article (only defined in reading mode).
- % - Same as $A.
- $(VAR) - is replaced by the string value of the environment variable VAR.
When the shell command is completed, you will be asked to hit any key to continue. If you hit the ! key again, you will be prompted for a new shell command. Any other key will redraw the screen and return you to the mode you came from. Related variables: shell, shell-restrictions.

MISCELLANEOUS COMMANDS

Below are more useful commands which are available in both selection and reading modes.
- U {unsub} - Unsubscribe to the current group. You will not see this group any more unless you explicitly request it. If the variable unsubscribe-mark-read is set, all articles in the group will be marked read when you unsubscribe. If the variable keep-unsubscribed is not set, the group will be removed from .newsrc. If you are not subscribing to the group, you will be given the possibility to resubscribe to the group! This may be used in connection with the G command to resubscribe to a group.
- C {cancel} - Cancel (delete) an article in the current group or folder. Cancelling articles in a folder will cause the folder to be rewritten when it is closed. In selection mode, you will be prompted for the identifier of the article to cancel. Normal users can only cancel their own articles. See also the section on folder maintenance.
- Y {overview} - Provide an overview of the groups with unread articles.
- " {layout} - Change menu layout in selection mode. The menu will be redrawn using the next layout (cycling through ..., 2, 3, 4, 0, 1, ...).
Most of the commands in nn are bound to a key and can be activated by a single keystroke. However, there are a few commands that cannot be bound to a key directly. As shown in the keystroke command descriptions, all commands have a name, and it is possible to activate a command by name with the extended command key (:). Hitting this key will prompt you for the name of a command (and parameters).
For example, an alternative to hitting the R key to reply to an article is to enter the extended command :reply followed by return. The :post and :unshar commands described earlier can also be bound to a key. The complete list of commands which can be bound to keys is provided in the section on Key Mappings below. The following extended commands cannot be bound to a key, mainly because they require additional parameters on the prompt line, or because it should not be possible to activate them too easily. - :admin - Enter administrative mode. This is identical in operation to the nnadmin(1M) program. - :bug - Prepare and send a bug report to the nn-bugs mailing address. - :cd [ directory ] - Change current working directory. If the directory argument is not provided, nn will prompt for it. - :clear - Clear the screen (without redraw). This may be useful at the beginning of the init file (possibly guarded by "on program nn"), or in some macros. - :compile - Recompile the kill file. This is not necessary under normal operation since nn automatically compiles the file on start-up if it has changed, but it can be used if you modify the kill file while nn is suspended. - :coredump - Abort with a core dump. For debugging purposes only. - :define macro - Define macro number macro as described in the Macro Definition section below. If macro is omitted, the next free macro number will be chosen. - :dump table - Same as the :show command described below. - :help [ subject ] - Provide online help on the specified subject. If you omit the subject, a list of the available topics will be given. - :load [ file ] - Load the specified file. If the file argument is omitted, the init file is reloaded. The sequence part (if present) is ignored. - :local variable [ value ] - Make the variable local to the current group. Subsequent changes to the variable will only be effective until the current group is left. If a value is specified, it will be assigned to the local variable. 
To assign a new value to a boolean variable, the values on and off must be used.
- :lock variable - Lock the specified variable so it cannot be modified.
- :man - Call up the online manual. The manual is presented as a normal folder with the program name in the `From' field and the section title in the `subject' field. All the normal commands related to a folder work for the online manual as well, e.g. you can save and print sections of the manual.
- :map arguments - This is the command used for binding commands to the keys. It is fully described in the Key Mapping section below.
- :mkdir [ directory ] - Create the directory (and the directories in its path). It will prompt for a directory name if the argument is omitted.
- :motd - Show the message of the day (maintained by the news administrator in the file "motd" in the lib directory). This file is automatically displayed on start-up whenever it changes if the motd variable is set.
- :pwd - Print the path name of the current working directory on the message line.
- :q - Has no effect besides redrawing the screen if necessary. If an extended command (one which is prefixed by a :) produces any output requiring the screen to be redrawn, the screen will not be redrawn immediately if the variable delay-redraw is set (useful on slow terminals). Instead another : prompt is shown to allow you to enter a new extended command immediately. It is sufficient to hit return to redraw the screen, but it has been my experience that entering q return in this situation happens quite often, so it was made a no-op.
- :q! - Quit nn without updating the .newsrc file.
- :Q - Quit nn. This is equivalent to the normal Q command.
- :rmail - Open your mailbox (see the mail variable) as a folder to read the incoming messages. This is not a full mail interface (depending on the nn configuration, you may not be able to delete messages, add cc: on replies, etc), but it can give you a quick glance at new mail without leaving nn.
- :set variable [ value ] - Set a boolean variable to true or assign the value to a string or integer variable. The :set command is described in detail in the section on VARIABLES.
- :sh - Suspend nn, or if that is not possible, spawn an interactive shell.
- :show groups mode - Show the total number or the number of unread articles in the current group, depending on mode: all (list the number of unread articles in all groups including groups which you have unsubscribed to), total (list the total number of articles in all existing groups), sequence (list unread groups in presentation sequence order), subscr (list all subscribed groups), unsub (list unsubscribed groups only). Any other mode results in a listing of the number of unread articles in all subscribed groups including those you have suppressed with the `!' symbol in the group presentation sequence. To get just the currently unread groups in the presentation sequence, use the `Y' {overview} command.
- :show kill - Show the kill entries that apply to the current group and to all groups.
- :show rc [ group ] - Show the .newsrc and select file entries for the current or the specified group.
- :show map [ mode ] - Show the key bindings in the current or specified mode.
- :sort [ mode ] - Reorder the articles on the menu according to mode or, if omitted, to the default sort-mode. The following sorting modes are available: arrival: list articles by local article number, which will be the same as the order in which they arrived on the system (unless groups are merged); subject: articles with identical subjects are grouped and ordered by the age of the oldest article in the group; lexical: subjects in lexicographical order; age: articles ordered by posting date only; sender: articles ordered by sender's name.
- :toggle variable - Toggle a boolean variable.
- :unread [ group ] [ articles ] - Mark the current (or specified) group as unread.
If the articles argument is omitted, the number of unread articles in the group will be set to the number of unread articles when nn was invoked. Otherwise, the argument specifies the number of unread articles.
- :unset variable - Set a boolean variable to false or clear an integer variable.
- :x - Quit nn and mark all articles in the current group as read!
Related variables: backup, bug-report-address, delay-redraw, keep-unsubscribed, unsubscribe-mark-read, mail, pager, sort-mode.

CATCH UP

If you have not read news for some time, there is probably more news than you can cope with. With the option -a0, nn will put you into catch-up mode. The first question you will get is whether to catch up interactively or automatically. If you instruct nn to catch up automatically, it will simply mark all articles in all groups as read, thus bringing you completely up-to-date. If you choose the interactive mode, nn will locate all groups with unread articles, and for each group it will prompt you for an action to take on the group. An action is selected using a single letter followed by return. The following actions are available:
- y - Mark all articles as read in the current group.
- n - Do not update the group (this is the default action if you just hit return).
- r - Enter reading mode to read the group.
- U - Unsubscribe to the group.
- ? - Give a list of actions.
- q - Quit. When you quit, nn will ask whether the rest of the groups should be updated unconditionally or whether they should remain unread.

VARIABLES AND OPTIONS

It is possible to control the behaviour of nn through the setting (and unsetting) of the variables described below. There are several ways of setting variables:
- Through command line options when nn is invoked.
- Through assignments on the command line when nn is invoked.
- Through global set commands in the init file.
- Through set or local commands executed from entry macros.
- Through the :set extended command when you run nn.
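Command line assignments have the form variable=value and may be mixed freely with ordinary options. How such an argument list might be split can be sketched as follows (hypothetical Python for illustration; nn's real parser is written in C and the exact rule used here is an assumption):

```python
def split_command_line(argv):
    """Separate variable=value assignments from other arguments.

    An argument is treated as an assignment when it contains '=' and
    does not start with '-' (an option).  Anything else is left for
    normal option and group-name parsing.
    """
    assignments, rest = {}, []
    for arg in argv:
        if "=" in arg and not arg.startswith("-"):
            name, value = arg.split("=", 1)  # split on the first '=' only
            assignments[name] = value
        else:
            rest.append(arg)
    return assignments, rest
```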
There are four types of variables:
- Boolean variables
- Integer variables
- String variables
- Key variables
Boolean variables control a specific function in nn, e.g. whether the current time is shown in the prompt line. A boolean variable is set to true with the command:
set variable
and it is set to false with either of the following (equivalent) commands:
unset variable
set novariable
You can also toggle the value of a boolean variable using the command:
toggle variable
For example:
set time
unset time
set notime
toggle time
Integer variables control an amount, e.g. the size of the preview window, or the maximum number of articles to read in each group. They are set with the following command:
set variable value
In some cases, not setting an integer value has a special meaning, for example, not having a minimal preview window or reading all articles in the groups no matter how many there are. The special meaning can be re-established by the following command:
unset variable
For example:
set window 7
unset limit
String variables may specify directory names, default values for prompts, etc. They are set using the command:
set variable string
Normally, the string value assigned to the variable starts at the first non-blank character after the variable name and ends with the last non-blank character (excluding comments) on the line. To include leading or trailing blanks, or the comment start symbol, #, in the string, they must be escaped using a backslash `\', e.g. to set included-mark to the string " # ", the following assignment can be used:
set included-mark \ \#\ # blank-#-blank
To include a backslash in the string, it must be duplicated `\\'. A backslash may also be used to include the following special characters in the string: \a=alarm, \b=backspace, \e=escape, \f=form-feed, \n=new-line, \r=return, \t=tab. Key variables control the keys used to control special functions during user input such as line editing and completion.
They are set using the command:
set variable key-name
A variable can be locked, which makes further modification of the variable impossible:
lock variable
This can be used in the setup init file, which is loaded unconditionally, to enforce local conventions or restrictions. For example, to fix the included-mark variable to the string ">", the following commands can be placed in the setup file:
set included-mark >
lock included-mark
Some variables only make sense when set on the command line, since they are examined early in startup, before the init files are read. The syntax for setting variables on the command line is:
variable=value
The value may need to be quoted if it contains white space or special characters. Such assignments can be intermixed with other options, and are examined prior to other argument parsing. The current variable settings can be shown with the :set command:
- :set (without arguments) - This will give a listing of the variables which have been set in either the init file or interactively.
- :set all - This will give a listing of all variables. Modified variables will be marked with a `*' and local variables will be marked with a `>'. A locked variable is marked with a `!'.
- :set /regexp - This will give a listing of all variables whose name matches the given regular expression.
- :set partial-name space - The space (comp1-key) key will complete the variable name as usual, but as a side effect it will display the variable's current value in the message line.
Variables are global by default, but a local instantiation of the variable can be created using the :local command. The local variable will overlay the global variable as long as the current group is active, i.e. the global variable will be used again when you exit the current group.
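The overlay behaviour of :local can be pictured as a two-level lookup table (a hypothetical Python sketch of the semantics, not nn's actual data structure):

```python
class VariableTable:
    """Global variables with an optional per-group local overlay.

    A :local variable shadows the global one while the current group
    is active; leaving the group discards all local instantiations.
    """
    def __init__(self):
        self.globals = {}
        self.locals = {}

    def set_global(self, name, value):
        self.globals[name] = value

    def set_local(self, name, value=None):
        # With no value given, the local starts out equal to the global.
        self.locals[name] = value if value is not None else self.globals.get(name)

    def get(self, name):
        # The local overlay takes precedence while it exists.
        return self.locals[name] if name in self.locals else self.globals.get(name)

    def leave_group(self):
        self.locals.clear()  # the global values take effect again
```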
The initial value of the local variable will be the same as the global variable, unless a new value is specified in the :local command:
:local variable [ value ]
The following variables are available:
- also-full-digest (boolean, default false) - When a digest is split, the digest itself is not normally included on the menu, and as such the initial administrative information is not available. Setting also-full-digest will cause the (unsplit) digest to be included on the menu. These articles are marked with a @ at the beginning of the subject.
- also-subgroups (boolean, default true) - When set, a group name in the presentation sequence will also cause all the subgroups of the group to be included, for example, comp.unix will also include comp.unix.questions, etc. When also-subgroups is not set, subgroups are only included if the group name is followed by a `.' in which case the main group is not included, i.e. `comp.unix' is not included when `comp.unix.' is specified in the presentation sequence, and vice-versa. Following a group name by an asterisk `*', e.g. comp.unix*, will include the group as well as all subgroups independently of the setting of also-subgroups.
- append-signature-mail (boolean, default false) - When false, it is assumed that the .signature file is automatically appended to responses sent via E-mail. If true, .signature will be appended to the letter (see query-signature).
- append-signature-post (boolean, default false) - When false, it is assumed that the .signature file is automatically appended to posted articles. If true, .signature will explicitly be appended to posted articles (see query-signature).
- attributes symbols (string, default ....) - Each element in this string represents a symbol used to represent an article attribute when displayed on the screen. See the section on Marking Articles and Attributes.
- auto-junk-seen (boolean, default true) - When set, articles which have the seen attribute (,) will be marked read when the current group is left. If not set, these articles will still be either unread or marked seen the next time the group is entered (see also confirm-junk-seen and retain-seen-status).
- auto-preview-mode (boolean, default false) - Enables Auto Preview Mode. In this mode, selecting an article on the menu using its article id (letter a-z) will enter preview mode on that article immediately. Furthermore, the `n' {next-article} command will preview the next article on the menu only if it has the same subject as the current article; otherwise, it will return to the menu with the cursor placed on the next article. The continue command at the end of the article and the `=' {goto-menu} command return to the menu immediately as usual.
- auto-read-mode-limit N (integer, default 0) - When operating in auto reading mode, nn will auto-select all unread articles in the group, skip the article selection phase, and enter reading mode directly after entry to the group. Auto reading mode is disabled when auto-read-mode-limit is zero; it is activated unconditionally if the value is negative, and conditionally if the value is greater than zero and the number of unread articles in the current group does not exceed the given value.
- auto-select-closed mode (integer, default 1) - Normally, selecting a closed subject (usually in consolidated menu mode) will select (or deselect) all unread articles with the given subject (or all articles if they are all read). This behaviour can be changed via the value of this variable as follows: 0: select only the first article with the subject (shown on menu); 1: select only the unread articles with the subject; 2: select all available articles with the subject.
- auto-select-rw (boolean, default false) - If set, the subject of an article read or posted is automatically used for subsequent auto-selecting (if not already selected).
This is the most efficient way to see your own posts automatically.

- auto-select-subject (boolean, default false) - When set and an article is selected from the menu using its article id (a-z), all articles on the menu with the same subject will automatically be selected as well.

- backup (boolean, default true) - When set, a copy of the initial .newsrc and select files will be saved the first time they are changed. nn remembers the initial contents of these files internally, so the backup variable can be set at any time if not set on start-up.

- backup-folder-path file (string, default "BackupFolder~") - When removing deleted articles from a folder, this variable defines the name of the file where a (temporary) copy of the original folder is saved. If the file name doesn't contain a `/', the file will be located in the .nn directory. Otherwise the file name is used directly as the relative or full path name of the backup file. If possible, the old folder will be renamed to the backup folder name; otherwise the old folder is copied to the backup folder.

- backup-suffix suffix (string, default ".bak") - The suffix appended to file names to make the corresponding backup file name (see backup).

- bug-report-address address (string, default [email protected]) - The mail address to which bug reports created with the :bug command are sent.

- case-fold-search (boolean, default true) - When set, string and regular expression matching will be case independent. This applies to all commands matching on names or subjects, except in connection with auto-kill and auto-select where the individual kill file entries specify this property.

- charset charset (string, default "us-ascii") - The character set in use on your terminal. Legal values are "us-ascii", "iso-8859-X", where X is a nonzero digit, and "unknown". Setting this variable also sets the data-bits variable to the default bit width of the character set (7 for "us-ascii" and "unknown", 8 for the "iso-8859-X" sets).
The value of this variable also determines whether nn allows 8-bit characters in the body of articles being posted and letters being mailed (unless the value is "unknown", in which case this is determined by the value of the data-bits variable). If necessary, nn will add extra headers to the message indicating its character set.

- check-group-access (boolean, default false) - When set, nn will check a group's readability before showing the menu for that group. Normally, this is not necessary since all users traditionally have access to all news groups. Setting (and locking) this variable may be used to limit access to a news group via the permissions and ownership of the group's spool directory (this will only work for non-NNTP sites).

- collapse-subject offset (integer, default 25) - When set (non-negative), subject lines which are too long to be presented in full on the menus will be "collapsed" by removing a sufficient number of characters from the subject starting at the given offset in the subject. This is useful in source groups where the "Part (01/10)" string sometimes disappears from the menu. When not set (or negative), the subjects are truncated.

- columns col (integer, default screen width) - This variable contains the screen width, i.e. character positions per line.

- comp1-key key (key, default space) - The key which gives the first/next completion, and the default value when nn is prompting for a string, e.g. a file name.

- comp2-key key (key, default tab) - The key which ends the current completion and gives the first completion for the next component when nn is prompting for a string, e.g. a file name.

- compress (boolean, default false) - This variable controls whether text compression (see the compress command) is turned on or off when an article is shown. The compression can still be toggled for the current article with the compress command key.
- confirm-append (boolean, default false) - When set, nn will ask for confirmation before appending an article to an existing file (see also confirm-create).

- confirm-auto-quit (boolean, default false) - When set, nn will ask for confirmation before quitting after having read the last group. If not confirmed, nn will recycle the presentation sequence looking for groups that were skipped with the `N' {next-group} command. But it will not look for new articles that have arrived since the invocation of nn.

- confirm-create (boolean, default true) - When set, nn will ask for confirmation before creating a new file or directory when saving or unpacking an article (see also confirm-append).

- confirm-entry (boolean, default false) - When set, nn will ask for confirmation before entering a group with more than confirm-entry-limit unread articles (on the first menu level). It is useful on slow terminals if you don't want to wait until nn has drawn the first menu to be able to skip the group. Answering no to the "Enter?" prompt will cause nn to skip to the next group without marking the current group as read. If you answer by hitting interrupt, nn will ask the question "Mark as read?" which allows you to mark the current group as read before going to the next group. If this second question is also answered by hitting interrupt, nn will quit immediately.

- confirm-entry-limit articles (integer, default 0) - Specifies the minimum number of unread articles in a group for which the confirm-entry functionality is activated.

- confirm-junk-seen (boolean, default false) - When set, nn will require confirmation before marking seen articles as read when auto-junk-seen is set.

- confirm-messages (boolean, default false) - In some cases, nn will sleep one second (or more) when it has shown a message to the user, e.g. in connection with macro debugging. Setting confirm-messages will cause nn to wait for you to confirm all messages by hitting any key.
(It will show the symbol <> to indicate that it is awaiting confirmation.)

- consolidated-manual (boolean, default false) - When set, the online manual will be presented with one menu line for each program in the nn package.

- consolidated-menu (boolean, default false) - When set, nn will automatically close all multi-article subjects on entry to a group, so that each subject only occurs once on the menu page.

- counter-delim-left (string, default "[") - The delimiter string output to the left of the article counter in a closed subject's menu line.

- counter-delim-right (string, default "] ") - The delimiter string output to the right of the article counter in a closed subject's menu line.

- counter-padding pad (integer, default 5) - On a consolidated menu, the subjects may not be very well aligned because the added [...] counters have varying length. To (partially) remedy this, all counters (and subjects without counters) are prefixed by up to pad spaces to get better alignment. Increasing it further may yield practically perfect alignment at the cost of less space for the subject itself.

- cross-filter-seq (boolean, default true) - When set, cross posted articles will be presented in the first possible group, i.e. according to the current presentation sequence (cross-post filtering on sequence). The article is automatically marked read in the other cross posted groups unless you unsubscribe to the first group in which it was shown before reading the other groups. Likewise, it is sufficient to leave the article unread in the first group to keep it for later handling. If not set, cross-postings are shown in the first group occurring on the Newsgroups: line which the user subscribes to (i.e. you let the poster decide which group is most appropriate to read his posting).

- cross-post (boolean, default false) - Normally, nn will only show cross-posted articles in the first subscribed group on the Newsgroups: line.
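Taken together, the consolidated-menu and counter-* variables control how closed subjects are displayed. A hypothetical init-file fragment (the delimiter and padding choices are only an illustration, and the on/off value syntax for booleans may vary by nn version):

```
set consolidated-menu on
set counter-delim-left "<"
set counter-delim-right "> "
set counter-padding 7
```

With settings like these, a closed subject's menu line would show its article counter as <...> instead of the default [...], padded a little wider for alignment.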
When cross-post is set, nn will show cross-posted articles in all subscribed groups to which they are posted.

- cross-post-limit N (integer, default 0) - If this variable is set to a value other than 0, then any articles posted to more than N newsgroups are automatically skipped. A value of 5 is pretty good for discarding ``spam'' articles.

- data-bits bits (integer, default 7) - When set to 7, nn will display characters with the 8th bit set using a meta-notation M-7bit-char. If set to 8, these characters are sent directly to the screen (unless monitor is set). Setting the charset variable also sets this variable to the default bit width of the character set. It also controls whether keyboard input is 7 or 8 bits, and thus whether key maps contain 127 or 255 entries. See the key mapping section for more details. If the charset has value "unknown", the value of data-bits also determines whether nn allows 8-bit characters in the body of articles being posted and letters being mailed (this is normally determined directly by the charset variable).

- date (boolean, default true) - If set, nn will show the article posting date when articles are read.

- debug mask (integer, default 0) - Look in the source if you are going to use this.

- decode-header-file file (string, default "Decode.Headers") - The name of the file in which the header and initial text of articles decoded with the :decode command is saved. Unless the file name starts with a `/', the file will be created in the same directory as the decoded files. The information is not saved if this variable is not set.

- decode-skip-prefix N (integer, default 2) - When non-zero, the :decode command will automatically skip up to N characters at the beginning of each line to find valid uuencoded data. This allows nn to automatically decode (multi-part) postings which are both uuencoded and packed with shar.
- default-distribution distr (string, default "world") - The distribution to use as the default suggestion when posting articles using the follow and post commands if the corresponding follow-distribution or post-distribution variable contains the default option.

- default-kill-select [1]days (number, default 30) - Specifies the default action for the K {kill-select} command if the first prompt is answered by return. It contains the number of days to keep the kill or select entry in the kill file (1-99 days). If it has the value days+100 (e.g. 130), it denotes that the default action is to select rather than kill on the subject for the specified period.

- default-save-file file (string, default +$F) - The default save file used when saving articles in news groups where no save file has been specified in the init file (either in a save-files section or in the presentation sequence). It can also be specified using the abbreviation "+" as the file name when prompted for a file name, even in groups with their own save file.

- delay-redraw (boolean, default false) - Normally, nn will redraw the screen after extended commands (:cmd) that clear the screen. When delay-redraw is set, nn will prompt for another extended command instead of redrawing the screen (hit return to redraw).

- echo-prefix-key (boolean, default true) - When true, hitting a prefix key (see the section on key mapping below) will cause the prefix key to be echoed in the message line to indicate that another key is expected.

- edit-patch-command (boolean, default true) - When true, the :patch command will show the current patch-command and give you a chance to edit it before applying it to the articles.

- edit-print-command (boolean, default true) - When true, the print command will show the current printer command and give you a chance to edit it before printing the articles. Otherwise the articles are just printed using the current printer command.
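As a sketch, the saving and kill-file defaults above might be combined in the init file like this (the folder path is only an example; + expands to the folder directory):

```
set folder ~/News
set default-save-file +$F
set default-kill-select 130
```

With default-kill-select at 130 (days+100), answering the first K {kill-select} prompt with return would default to selecting, rather than killing, on the subject for 30 days.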
- edit-response-check (boolean, default true) - When editing a response to an article, it normally makes no sense to send the initial file prepared by nn unaltered, since it is either empty or only contains included material. When this variable is set, exiting the editor without having changed the file will automatically abort the response action without confirmation.

- edit-unshar-command (boolean, default false) - When true, the :unshar command will show the current unshar-command and give you a chance to edit it before applying it to the articles.

- editor command (string, default not set) - When set, it will override the current EDITOR environment variable when editing responses and new articles.

- embedded-header-escape string (string, default '~') - When saving an article to a file, header lines embedded in the body of the article are escaped using this string to make it possible for nn to split the folder correctly afterwards. Header lines are not escaped if this variable is not set.

- enter-last-read-mode mode (integer, default 1) - Normally, nn will remember which group is active when you quit, and offer to jump directly to this group when you start nn the next time. This variable is used to control this behaviour. The following mode values are recognized:
0: Ignore the remembered group (r.g.).
1: Enter r.g. if the group is unread (with user confirmation).
2: Enter r.g. or first unread group after it in the sequence (w/conf).
3: Enter r.g. if the group is unread (no confirmation).
4: Enter r.g. or first unread group after it in the sequence (no conf).

- entry-report-limit articles (integer, default 300) - Normally, nn will just move the cursor to the upper left corner of the screen while it is reading articles from the database on entry to a group. For large groups this may take more than a fraction of a second, and nn can then report what it is doing.
If it must read more articles than the number specified by this variable, nn will report which group and how many articles it is reading.

- erase-key key (key, default tty erase key) - The key which erases the last input character when nn is prompting for a string, e.g. a file name.

- expert (boolean, default false) - If set, nn will use slightly shorter prompts (e.g. not tell you that ? will give you help), and be a bit less verbose in a few other cases (e.g. not remind you that posted articles are not available instantly).

- expired-message-delay pause (integer, default 1) - If a selected article is found to have been expired, nn will normally give a message about this and sleep for a number of seconds specified by this variable. Setting this variable to zero will still make nn give the message without sleeping afterwards. Setting it to -1 will cause the message not to be shown at all.

- flow-control (boolean, default true) - When set, nn will turn on xon/xoff flow-control before writing large amounts of text to the screen. This should guard against lossage of output, but in some network configurations it has had the opposite effect, losing several lines of the output. This variable is always true on systems with CBREAK capabilities which can do single character reads without disabling flow control.

- flush-typeahead (boolean, default false) - When true, nn will flush typeahead prior to reading commands from the keyboard. It will not flush typeahead while reading parameters for a command, e.g. file names etc.

- folder directory (string, default ~/News) - The full pathname of the folder directory which will replace the + in folder names. It will be initialized from the FOLDER environment variable if it is not set in the init file.

- folder-format-check (boolean, default true) - When saving an article with a full or partial header in an existing folder, nn will check the format of the folder to be able to append the article in the proper format.
If this variable is not set, folders are assumed to be in the format specified via the mmdf-format and mail-format variables, and articles are saved in that format without checking. Otherwise, the *-format variables are only used to determine the format for new folders.

- folder-save-file file (string, default not set) - The default save file used when saving articles from a folder.

- follow-distribution words (string, default see below) - This variable controls how the Distribution: header is constructed for a follow-up to an original article. Its value is a list of words selected from the following list:

[ [ always ] same ] [ ask ] [ default | distribution ]

This is interpreted in two steps:
- First the default distribution is determined. If same is specified and the original article has a Distribution: header, that header is used. Else if default is specified (or distribution is omitted), the value of default-distribution is used. And finally, if only a distribution (any word) is specified, that is used as the default.
- Then if ask is specified, the user will be asked to confirm the default distribution or provide another distribution. However, if always (and same) is specified, and the default was taken from the original article's distribution, the original distribution is used without confirmation.

The default value of follow-distribution is always same default, i.e. use either the original distribution or the default-distribution without confirmation in either case.

- from-line-parsing strictness (integer, default 2) - Specifies how strictly nn must parse a "From " line in a folder to recognize it as a mail format message separator line. The following strictness values determine whether a line starting with "From " will be recognized as a separator line:
0: Always.
1: Line must have at least 8 fields.
2: Line must contain a valid date and time (ctime style).
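For example, to always be prompted for the distribution on follow-ups, with the original article's distribution as the suggested default, the init file might contain this sketch:

```
set follow-distribution ask same
```

Here ask forces confirmation, and same makes the original article's Distribution: header the default when present; without same, the value of default-distribution would be suggested instead.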
- fsort (boolean, default true) - When set, folders are sorted alphabetically according to the subject (and age). Otherwise, the articles in a folder will be presented in the sequence in which they were saved.

- guard-double-slash (boolean, default false) - Normally, when entering a file name, entering two slashes `//' in a row (or following a slash by a plus `/+') will cause nn to erase the entire line and replace it with the `/' (or `+'). On some systems, two slashes are used in network file names, and on those systems guard-double-slash can be set; that will cause nn to require three slashes in a row to clear the input.

- header-lines list (string, no default) - When set, it determines the list of header fields that are shown when an article is read instead of the normal one line header showing the author and subject. See the full description in the section on Customized Article Headers below.

- help-key key (key, default ?) - The key which ends the current completion and gives a list of possible completions for the next component when nn is prompting for a string, e.g. a file name.

- ignore-re (boolean, default false) - If set, articles with subjects already seen in a previous invocation of nn or another newsreader - and not auto-selected - are automatically killed. A great way to read even less news!

- ignore-xon-xoff (boolean, default false) - Normally, nn will ignore ^S and ^Q in the input from the terminal (if they are not handled in the tty driver). Setting this variable will treat these characters as normal input.

- include-art-id (boolean, default false) - The first line in a response with included material normally reads "...somebody... writes:" without a reference to the specific article from which the quotation was taken (this is found in the References: line). When this variable is set, the line will also include the article id of the referenced article: "In ...article... ... writes:".
- include-full-header (boolean, default false) - When set, the mail (M) command will always include the full header of the original article. If it is not set, it only includes the header when the article is forwarded without being edited.

- include-mark-blank-lines (boolean, default false) - When set, the included-mark is placed on blank lines in included articles. Otherwise, blank lines are left blank (to make it easy to delete whole paragraphs with `d}' in vi and `[email protected] M-] C-W' in emacs).

- included-mark string (string, default ">") - This string is prefixed to all lines in the original article that are included in a reply or a follow-up. (Now you have the possibility to change it, but please don't. Lines with a mixture of prefixes like : orig-> <> } ] #- etc. are very difficult to comprehend. Let's all use the standard, folks! And hack inews if it is the 50% rule that bothers you.)

- inews shell-command (string, default "INEWS_PATH -h") - The program which is invoked by nn to deliver an article to the news transport. The program will be given a complete article including a header containing the newsgroups to which the article is to be posted. See also inews-pipe-input. It is not used when cancelling an article!

- inews-pipe-input (boolean, default true) - When set, the article to be posted will be piped into the inews program. Otherwise, the file containing the article will be given as the first (and only) argument to the inews command.

- initial-newsrc-file file (string, default '.defaultnewsrc') - Defines the name of a file which is used as the initial .newsrc file for new users. The name may be a full path name, or as the default a file name which will be looked for in a number of places: in the standard news lib directory (where it can be shared with other news readers), in nn's lib directory, and in the database directory.
Groups which are not present in the initial .newsrc file will be automatically unsubscribed provided new-group-action is set to a value allowing unsubscribed groups to be omitted from .newsrc.

- keep-backup-folder (boolean, default false) - When set, the backup folder (see backup-folder-path) created when removing deleted articles from a folder is not removed. Notice that a backup folder is not created if all articles are removed from a folder!

- keep-unsubscribed (boolean, default true) - When set, unsubscribed groups are kept in .newsrc. If not set, nn will automatically remove all unsubscribed groups from .newsrc if tidy-newsrc is set. See also unsubscribe-mark-read.

- kill (boolean, default true) - If set, nn performs automatic kill and selection based on the kill file.

- kill-debug (boolean, default false) - When set, nn will display a trace of the auto-kill/select process on entry to a group. It is automatically turned off if `q' is entered as the answer to a "hit any key" prompt during the debug output.

- kill-key key (key, default tty kill key) - The key which deletes the current line when nn is prompting for a string, e.g. a file name.

- kill-reference-count N (integer, default 0) - When this variable is non-zero, all articles which have N or more references on the References: line (corresponding to the number of >>'s on the menu line) will be auto-killed if they are not auto-selected (or preserved) via an entry in the kill file. It should probably not be used globally for all groups, but can be set on a per-group basis via the entry macros.

- layout number (integer, default 1) - Set the menu layout. The argument must be a number between 0 and 4.

- limit max-articles (integer, default infinite) - Limit the maximum number of articles presented in each group to max-articles. The default is to present all unread articles no matter how many there are.
When this variable is set, only the most recent max-articles articles will be presented, but all the articles will still be marked as read. This is useful to get up-to-date quickly if you have not read news for a long period.

- lines lin (integer, default screen height) - This variable contains the screen height, i.e. number of lines.

- long-menu (boolean, default false) - If set, nn will not put an empty line after the header line and an empty line before the prompt line; this gives you two extra menu lines.

- macro-debug (boolean, default false) - If set, nn will trace the execution of all macros. Prior to the execution of each command or operation in a macro, it will show the name of the command or the input string or key stroke at the bottom of the screen.

- mail file (string, default not set) - file must be a full path name of a file. If defined, nn will check for arrival of new mail every minute or so by looking at the specified file.

- mail-alias-expander program (string, default not set) - When set, aliases used in mail responses may be expanded by the specified program. The program will be given the completed response in a file as its only argument, and the aliases should be expanded directly in this file (of course the program may use temporary files and other means to expand the aliases as long as the result is stored in the provided file). Notice: currently there are no alias expanders delivered with nn. Warning: Errors in the expansion process may lead to the response not being sent.

- mail-format (boolean, default false) - When set, nn will save articles in a format that is compatible with normal mail folders. Unless folder-format-check is false, it is only used to specify the format used when new folders are created. This variable is ignored if mmdf-format is set.
- mail-header headers (string, default not set) - The headers string specifies one or more extra header lines (separated by semi-colons `;') which are added to the header of mail sent from nn using the reply and mail commands. For example:

set mail-header Reply-To: [email protected];Organization: TI - DK

To include a semicolon `;' in a header, precede it by a backslash (which must be doubled because of the conventions for entering strings).

- mail-record file (string, default not set) - file must be a full path name of a file. If defined, all replies and mail will be saved in this file in standard mailbox format, i.e. you can use your favourite mailer (and nn) to look at the file.

- mail-script file (string, default not set) - When set, nn will use the specified file instead of the standard aux script when executing the reply and mail commands.

- mailer shell-command (string, default REC_MAIL) - The program which is invoked by nn to deliver a message to the mail transport. The program will be given a complete mail message including a header containing the recipient's address. See also mailer-pipe-input.

- mailer-pipe-input (boolean, default true) - When set, the message to be sent will be piped into the mailer program. Otherwise, the file containing the message will be given as the first (and only) argument to the mailer command.

- marked-by-next-group N (integer, default 0) - Specifies the number of (unmarked) articles on the menu marked seen by the N {next-group} command in selection mode. See marked-by-read-skip for possible values of N.

- marked-by-read-return N (integer, default 0) - Specifies the number of (unmarked) articles on the menu marked seen by the Z {read-return} command in selection mode. See marked-by-read-skip for possible values of N.

- marked-by-read-skip N (integer, default 4) - Specifies the number of (unmarked) articles on the menu marked seen by the X {read-skip} command in selection mode.
The following values of N are recognized:
0: No articles are marked seen.
1: Current page is marked seen.
2: Previous pages are marked seen.
3: Previous and current pages are marked seen.
4: All pages are marked seen.

- mark-overlap (boolean, default false) - When set, nn will draw a line (using the underline capabilities of the terminal if possible) to indicate the end of the overlap (see the overlap variable).

- mark-overlap-shading (boolean, default false) - When set, nn will shade overlapping lines (see the overlap variable) using the attributes defined by the shading-on and shading-off variables (or if not set, with the underline attribute). This is typically used to give overlapping lines a different colour on terminals which have this capability.

- menu-spacing mode (integer, default 0) - When mode is a non-zero number as described below, nn will add blank lines between the lines on the menu to increase readability at the cost of presenting fewer articles on each page. The following values of mode are recognized:
0: Don't add blank lines between menu lines.
1: Add a blank line between articles with different subjects.
2: Add a blank line between all articles.

- merge-report-rate rate (integer, default 1) - When nn is invoked with the -m option (directly or via nngrab), a status report of the merging process is displayed and updated on the screen every rate seconds. The report contains the time used so far and an estimate of the time needed to complete the merge.

- message-history N (integer, default 15) - Specifies the maximum number, N, of older messages which can be recalled with the ^P {message} command.

- min-window size (integer, default 7) - When the window variable is not set, nn will clear the screen to preview an article if there are less than size unused lines at the bottom of the menu screen.

- mmdf-format (boolean, default false) - When set, nn will save articles in MMDF format.
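A minimal init-file sketch showing how the menu spacing and overlap marking described above might be configured (values chosen only for illustration):

```
set menu-spacing 1
set mark-overlap on
set overlap 3
```

This adds a blank menu line between different subjects, underlines the last overlapping line when paging, and carries three lines of overlap from one article page to the next.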
Unless folder-format-check is false, it is only used to specify the format used when new folders are created.

- monitor (boolean, default false) - When set, nn will show all characters in the received messages using a "cat -v" like format. Otherwise, only the printable characters are shown (default).

- motd (boolean, default true) - When set, nn will display the message of the day on start-up if it has changed since it was last shown. The message is taken from the file "motd" in the lib directory. It can also be shown (again) using the :motd command.

- multi-key-guard-time timeout (integer, default 2) - When reading a multi-key sequence from the keyboard, nn will expect the characters constituting the multi-key to arrive "quickly" after each other. When a partial multi-key sequence is read, nn will wait (at least) timeout tenths of a second for each of the following characters to arrive to complete the multi-key sequence. If the multi-key sequence is not completed within this period, nn will read the partial multi-key sequence as individual characters instead. This way it is still possible to use for example the ESC key on a terminal with vt100 like arrow keys. When nn is used via an rlogin connection, you may have to increase the timeout to get reliable recognition of multi-keys.

- new-group-action action (integer, default 3) - This variable controls how new groups are treated by nn. It is an integer variable, and the following values can be used. Some of these actions (marked with an *) will only work when keep-unsubscribed is set, since the presence of a group in .newsrc is the only way to recognize it as an old group:

0) Ignore groups which are not in .newsrc. This will obviously include new groups, and therefore you must explicitly add any new groups that you care about (by editing the .newsrc file, or using the G menu command and then subscribing to the group).
When NNTP is being used, this setting prevents the active.times data from being read from the server; this can be helpful when using a slow link, since the data can often be hundreds of KBytes long.

1*) Groups not in .newsrc are considered to be new, and are inserted at the beginning of the .newsrc file.

2*) Groups not in .newsrc are considered to be new, and are appended to the end of the .newsrc file.

3) New groups are recognized via a time-stamp saved in the file .nn/LAST and in the database, i.e. it is not dependent on the groups currently in .newsrc. The new groups are automatically appended to .newsrc with subscription. Old groups not present in .newsrc will be considered to be unsubscribed.

4) As 3, but the user is asked to confirm that the new group should be appended to .newsrc. If rejected, the group will not be appended to .newsrc, and thus be regarded as unsubscribed.

5) As 4, except that the information is stored in a format compatible with the rn news reader (.rnlast). This needs to be tested!

- new-style-read-prompt (boolean, default true) - When set, the reading mode prompt line includes the group name and the number of selected articles in the group.

- news-header headers (string, default not set) - The headers string specifies one or more extra header lines (separated by semi-colons `;') which are added to the header of articles posted from nn using the follow and post commands. See mail-header for an example.

- news-record file (string, default not set) - Save file for follow-ups and postings. Same rules and format as the mail-record variable.

- news-script file (string, default not set) - When set, nn will use the specified file instead of the standard aux script when executing the follow and post commands.

- newsrc file (string, default "~/.newsrc") - Specifies the file used by nn to register which groups and articles have been read. The default setting corresponds to the .newsrc file used by other news readers.
Notice that nn release 6.4 onwards does allow some articles to be marked read and some articles marked unread, and thus no longer messes up .newsrc for other news readers! Also see nntp-server. - nn-directory directory (string, default "~/.nn") - It only makes sense to set this variable on the command line, e.g. "nn-directory=$HOME/.nn2" since it is looked at before the init file is read. It must be set to a full pathname. Usually set when using multiple servers; see newsrc above and nntp-server below. - nntp-cache-dir directory (string, default "~/.nn") - When NNTP is used, nn needs to store articles temporarily on disk. This variable specifies which directory nn will use to hold these files. The default value may be changed during configuration. This variable can only be set in the init file. - nntp-cache-size size (integer, default 10, maximum 10) - Specifies the number of temporary files in the nntp cache. The default and maximum values may be changed during configuration. - nntp-debug (boolean, default false) - When set, a trace of the nntp related traffic is displayed in the message line on the screen. - nntp-server hostname or filename (string) - It only makes sense to set this variable on the command line, e.g. "nntp-server=news.some.domain", since it is looked at before the init file is read. If you use multiple servers, you probably want to set the nn-directory and newsrc variables on the command line to alternate names as well, since some of the data files are server dependent. - old [max-articles] (integer, default not set) - When old is set, nn will present (or scan) all (or the last max-articles) unread as well as read articles. While old is set, nn will never mark any unread articles as read. - old-packname (boolean, default false) - When set, nn displays names identically to nn-6.6.5 (and earlier). Only set this if you have a large number of entries in your killfile that no longer work due to the new behaviour.
Note that in the long run, this option will go away, so it's best to update your killfile rather than set this. - orig-to-include-mask N (integer, default 3) - When replying to an article, nn will include some of the header lines which may be used to construct a proper mail address for the poster of the original article. These addresses are placed on Orig-To: lines in the reply header and will automatically be removed before the letter is sent. This variable specifies which headers from the article are included; its value N is the sum of the following values: 1: Reply-To: 2: From: 4: Path: - overlap lines (integer, default 2) - Specifies the number of overlapping lines from one page to the next when paging through an article in reading mode. The last line from the previous page will be underlined if the terminal has that capability. - pager shell-command (string, default $PAGER) - This is the pager used by the :admin command (and nnadmin) when it executes certain commands, e.g. grepping in the Log file. - patch-command shell-command (string, default "patch -p0") - This is the command which is invoked by the :patch command. - post-distribution words (string, default see below) - This variable controls how the Distribution: header is constructed when posting an original article. Its value is a list of words selected from the following list: [ ask ] [ default | distribution ] This is interpreted in two steps: - First the default distribution is determined. If default is specified (or distribution is omitted), the value of default-distribution is used. Otherwise, the specified distribution (any word) is used as the default. - Then if ask is specified, the user will be asked to confirm the default distribution or provide another distribution. The default value of post-distribution is ask default, i.e. use the default-distribution with confirmation from the user. 
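For example (the distribution name `local' here is only an illustration), the following init file line makes nn suggest `local' as the default distribution and ask the user for confirmation before posting:

```
set post-distribution ask local
```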
- preview-continuation cond (integer, default 12) - This variable determines whether the following article should be automatically shown when previewing an article and the next-article command is used, or continue is used at the end of the article. The following values can be used: 0 - never show the next article (return to the menu). 1 - always show the next article (use 'q' to return to the menu). 2 - show the next article if it has the same subject as the current article, else return to the menu. The value should be the sum of two values: one for the action after using continue on the last page of the article, plus ten times the value for the action performed when the next-article command is used. - preview-mark-read (boolean, default true) - When set, previewing an article will mark the article as read. - previous-also-read (boolean, default true) - When set, going back to the previously read group with P {previous} will include articles read in the current invocation of nn even if there are still unread articles in the group. - print-header-lines fields (string, default "FDGS") - Specifies the list of header fields that are output when an article is printed via the :print command and print-header-type is 1 (short header). The fields specification is described in the section on Customized Article Headers below. - print-header-type N (integer, default 1) - Specifies what kind of header is printed by the :print command, corresponding to the three save-* commands: 0 prints only the article body (no header), 1 prints a short header, and 2 prints the full article header. - printer shell-command (string, default is system dep.) - This is the default value for the print command. It should include an option which prevents the spooler from echoing a job-id or similar to the terminal to avoid problems with screen handling (e.g. lp -s on System V). - query-signature (boolean, default ...)
- Will cause nn to require confirmation before appending the .signature file to out-going mail or news if the corresponding append-sig-... variable is set. - quick-count (boolean, default true) - When set, calculating the total number of unread articles at start-up is done by simply subtracting the first unread article number from the total number of articles in each group. This is very fast, and fairly accurate, but the result may be a bit too large. If not set, each line in .newsrc will be interpreted to count every unread article, thus giving a very accurate number. This variable is also used by nncheck. - quick-save (boolean, default false) - When set, nn will not prompt for a file name when an article is saved (unless it belongs to a folder). Instead it uses the save file specified for the current group in the init file or the default save file. - re-layout N (integer, default 0) - Normally on the menu, nn will prefix the subject with a number of `>'s corresponding to the number of references on the References: line. The re-layout variable may be set to use a different prefix on the subjects: 0: One `>' per reference is shown (default). 1: A single `>' is shown if the Subject contains Re:. 2: The number of references is shown as `n>'. 3: A single Re: is shown. 4: If there are any references, layout 0 is used; otherwise layout 1. - re-layout-read N (integer, default -1) - When the header-lines variable is not set, or contains the "*" field specifier, a line similar to the menu line will be used as the header of the article in reading mode, including the sender's name and the article's subject. When this variable is negative, the subject on this header line will be prefixed according to the re-layout variable. Otherwise, it will define the format of the "Re:" prefix to be used instead of the re-layout used on the menu. - read-return-next-page (boolean, default false) - When set, the Z {read-return} command will return to the next menu page rather than the current menu page.
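As a sketch of the re-layout settings described above, the following init file lines show the reference count as `n>' on the menu and, via the negative re-layout-read value, reuse that prefix in reading mode (the values are merely illustrative):

```
set re-layout 2
set re-layout-read -1
```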
- record file (string, no default) - Setting this pseudo variable will set both the mail-record and the news-record variables to the specified pathname. - repeat (boolean, default false) - When set, nn will not eliminate duplicated subject lines on menus (I cannot imagine why anyone should want that, but....) - repeat-group-query (boolean, default false) - When set, invoking nn with the -g option will always repeat the query for a group to enter until you quit explicitly. (Same as setting the -r option permanently). - report-cost (boolean, default true) - This variable is ignored unless nn is running with accounting enabled (see nnacct). When set, nn will report the cost of the current session and the total on exit. - response-check-pause pause (integer, default 2) - Specifies the number of seconds to wait after posting an article to see whether the action *might* have failed. Some commands run in the background and may thus not have completed during this period, so even when nn says "Article posted", it may still fail (in which case you are informed via mail). - response-default-answer action (string, default "send") - The default action to be taken when hitting return to the "response action" prompt (abort, edit, send, view, write). If it is unset, no default action is defined. - retain-seen-status (boolean, default false) - Normally, seen articles will just be unread the next time the group is entered (unless they were marked read by auto-junk-seen). If retain-seen-status is set, the seen attribute on the articles will survive to the next time the group is entered. (This is not recommended because it may result in very large select files). - retry-on-error times (integer, default 0) - When set, nn will try the specified number of times to open an article before reporting that the article does not exist any more. This may be necessary in some network environments. - save-closed-mode mode (integer, default 13) - When saving an article in selection mode (i.e. 
by selecting it from the menu), nn will simply save the specified article if the article's subject is open. When the selected menu entry is a closed subject, the save-closed-mode variable determines how many articles among the closed articles should be saved: 0: save root article (the one on the menu) only 1: save selected articles within subject 2: save unread (excl selected) articles within subject 3: save selected+unread articles within subject 4: save all articles within subject. If `10' is added to the above values, nn will not save the selected subject immediately; instead it will ask which articles to save using the above value as the default answer. - save-counter format (string, default "%d") - This is the printf-format which nn uses to create the substitution string for the trailing * in save file names. You can set this to more complex formats if you like, but be sure that it will produce different strings for different numbers. An alternative format which seems to be popular is ".%02d". - save-counter-offset N (integer, default 0) - Normally, file names created with the part.* form will substitute the * with successive numbers starting from one. Setting this variable will cause these numbers to start from N+1. - save-header-lines fields (string, default "FDNS") - Specifies the list of header fields that are saved when an article is saved via the O {save-short} command. The fields specification is described in the section on Customized Article Headers below. - save-report (boolean, default true) - When set, a message reporting the number of lines written is shown after saving an article. Since messages are shown for a few seconds, this may slow down the saving of many articles (e.g. using the S* command). - scroll-clear-page (boolean, default true) - Determines whether nn clears the screen before showing each new page of an article.
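Tying together the two save-counter variables above: with the popular alternative format, a save file name like `part*' expands to part.01, part.02, and so on (the exact quoting here is illustrative):

```
set save-counter ".%02d"
set save-counter-offset 0
```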
- scroll-last-lines N (integer, default 0) - Normally, nn will show each new page of an article from the top of the screen (with proper marking of the overlap). When this variable is set to a negative value, nn will scroll the text of the new pages from the bottom of the screen instead. If it is set to a positive value, nn will show pages from the top as usual, but switch to scrolling when there are less than the specified number of lines left in the article. - select-leave-next (boolean, default false) - When set, you will be asked whether to select articles with the leave-next attribute on entry to a group with left over articles. - select-on-sender (boolean, default false) - Specifies whether the find (=) command in article selection mode will match on the subject or the sender. - shading-on code... (control string, default not set) - Specifies the escape code to be sent to the terminal to cause "shading" of the following output to the screen. This is used if the mark-overlap-shading is set, and by the `+' attribute in the header-lines variable. - shading-off code... (control string, default not set) - Specifies the escape code to be sent to the terminal to turn off the shading defined by shading-on. Shading will typically be done by changing the foreground colour, e.g.
on term ti924-colour
set shading-on ^[ [ 3 2 m
set shading-off ^[ [ 3 7 m
set mark-overlap-shading
unset mark-overlap
end
- shell program (string, default $SHELL) - The shell program used to execute shell escapes. - shell-restrictions (boolean, default false) - When set (in the init file), nn will not allow the user to invoke the shell in any way, including saving on pipes. It also prevents the user from changing certain variables containing commands. - show-purpose-mode N (integer, default 1) - Normally, nn will show the purpose of a group the first time it is read, provided a purpose is known. By setting this variable, this behaviour can be changed as follows: 0: Never show the purpose.
1: Show the purpose for new groups only. 2: Show the purpose for all groups. When NNTP is being used, a setting of 0 prevents the newsgroups purpose data from being read from the server; this can be helpful when using a slow link, since the data can often be hundreds of KBytes long. - sign-type (string, default pgp) - What program nn will use to sign messages via the Sign command. Only pgp and gpg are currently valid. - silent (boolean, default false) - When set, nn won't print the logo or "No News" if there are no unread articles. Only useful to set in the init file or with the -Q option. - slow-mode (boolean, default false) - When set, nn will cut down on the screen output to give better response time at low speed. Normally, nn will use standout mode (if possible) to mark selected articles on the menu, but when slow-mode is set, nn will just put an asterisk `*' next to the article identifier on selected articles. Also, when slow-mode is set, nn will avoid redrawing the screen in the following cases: After a goto-group command an empty menu is shown (hit space to make it appear), and after responding to an article, only the prompt line is shown (use ^L to redraw the screen). To avoid redrawing the screen after an extended command, set the delay-redraw variable as well. - slow-speed speed (integer, default 1200) - If the terminal is running at this baud rate or lower, the on slow (see the section on init files) condition will be true, and the on fast will be false (and vice-versa). - sort (boolean, default true) - When set, nn will sort articles according to the current sort-mode on entry to a group. Otherwise, articles will be presented in order of arrival. If not set on entry to a menu for merged groups, the articles from each group will be kept together on the menu. If sort is unset while merged groups are presented on the menu, the articles will be reordered by local article number (which may not keep articles from the same group together).
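Drawing on the on slow condition mentioned under slow-speed above, a low-speed setup in the init file might combine these variables as follows (the stop value of 10 is only a suggestion):

```
on slow
set slow-mode
set delay-redraw
set stop 10
end
```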
- sort-mode mode (integer, default 1) - The default sort algorithm used to sort the articles on entry to a news group. It is a numeric value corresponding to one of the sorting methods described in connection with the :sort command: 0 - arrival (ordered by article number) 1 - subject (subjects ordered after age of first article) 2 - lexical (subjects in lexicographical order) 3 - age (articles ordered after posting date only) 4 - sender (articles ordered after sender's name) - spell-checker shell-command (string, default not set) - When set, responses can be checked for spelling mistakes via the (i)spell action. The command to perform the spelling is given the file containing the full article including header as its only argument. If the spell checker can fix spelling mistakes, it must apply the changes directly to this file. - split (boolean, default true) - When set, digests will automatically and silently be split into sub-articles which are then handled transparently as normal articles. Otherwise, digests are presented as one article (which you can split on demand with the G command). - stop lines (integer, default not set) - When stop is set, nn will only show the first lines lines of each article before prompting you to continue. This is useful on slow terminals and modem lines to be able to see the first few lines of longer articles (and skipping the rest with the n command). - subject-match-limit length (integer, default 256) - Subjects will be considered identical if their first length characters match. Setting this uncritically to a low value may cause unexpected results! - subject-match-offset offset (integer, default 0) - When set to a positive number, that many characters at the beginning of the subject will be ignored when comparing subjects for ordering and equality purposes. - subject-match-parts (boolean, default false) - When set, two subjects will be considered equal if they are identical up to the first (differing) digit.
Together with the subject-match-offset variable, this can be used in source groups where the subject often has a format like: vXXXXXX: Name of the package (Part 01/04) Setting subject-match-offset to 8 and subject-match-parts to true will make nn consider all four parts of the package having the same subject (and thus be selectable with `*'). Notice that changing the subject-match-... variables manually will not have an immediate effect. To reorder the menu, an explicit :sort command must be performed. These variables are mainly intended to be set using the :local command in on entry macros for source and binary groups (entry macros are evaluated before the menu is collected and sorted). - subject-match-minimum characters (integer, default 4) - When set to a positive number, that many characters at the beginning of the subject must match before the subject-match-parts option comes into effect. This is important, because the part matching causes the rest of the line to be ignored after the first digit pair is discovered. This begins after any subject-match-offset has been applied. - suggest-default-save (boolean, default true) - When set, nn will present the default-save-file when prompting for a save file name in a group without a specific save file, or folder-save-file when saving from a folder. When not set, no file name is presented, and to use the default save file, a single + must be specified. - tidy-newsrc (boolean, default false) - When set, nn will automatically remove lines from .newsrc which represent groups not found in the active file or unsubscribed groups if keep-unsubscribed is not set. - time (boolean, default true) - When set, nn will show the current time in the prompt line. This is useful on systems without a sysline (1) utility. - trace-folder-packing (boolean, default true) - When set, a trace of the retained and deleted messages is printed when a folder is rewritten.
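For the subject-match package example above, the corresponding extended commands (typed at the : prompt, or placed in an entry macro using :local as recommended) would be something like the following; the trailing :sort is needed because changing the subject-match-... variables has no immediate effect:

```
:set subject-match-offset 8
:set subject-match-parts
:sort
```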
- trusted-escape-codes codes (string, default none) - When set to a list of one or more characters, nn will trust and output an escape character in an article if it is followed by one of the characters in the list. For example, to switch to or from kanji mode, control codes like "esc $" and "esc ( J" may be present in the text. To allow these codes, use the following command: set trusted-escape-codes ($ You can also set it to all to pass all escape codes through to the screen. Notice that nn assumes that all characters (including esc) output to the screen occupy one column. - unshar-command shell-command (string, default "/bin/sh") - This is the command which is invoked by the :unshar command. - unshar-header-file file (string, default "Unshar.Headers") - The name of the file in which the header and initial text of articles unpacked with the :unshar command is saved. Unless the file name starts with a `/', the file will be created in the same directory as the unpacked files. The information is not saved if this variable is not set. Setting it to "Unshar.Result" will cause the headers and the results from the unpacking process to be merged in a meaningful way (unless mmdf-format is set). - unsubscribe-mark-read (boolean, default true) - When set, unsubscribing from a group will automatically mark all current articles read; this is recommended to keep the size of .newsrc down. Otherwise, unread articles in the unsubscribed groups are kept in .newsrc. If keep-unsubscribed is false, this variable has no effect. - update-frequency (integer, default 1) - Specifies how many changes need to be done to the .newsrc or select files before they are written back to disk. The default setting causes .newsrc to be updated every time a group has been read. - use-editor-line (boolean, default true) - Most editors accept arguments of the form: editor [-arguments] +n filename where editor is the name of the editor, and n is the line number to put the cursor upon entering the file.
If use-editor-line is false, it will not add the "+n" to the arguments. - use-path-in-from (boolean, default false) - When mail-format is set, saved articles will be preceded by a specially formatted "From " line: From origin date Normally, the origin will be the name of the news group where the article appeared, but if use-path-in-from is set, the contents of the "Path:" header will be used as the origin. - use-selections (boolean, default true) - When set, nn uses the selections and other article attributes saved last time nn was used. If not set, nn ignores the select file. - visible-bell (boolean, default true) - When set, nn will flash the screen instead of "ringing the bell" if the visible bell (flash) capability is defined in the termcap/terminfo database. - window size (integer, default not set) - When set, nn will reserve the last size lines of the menu screen for a preview window. If not set, nn will clear the screen to preview an article if there are less than min-window lines at the bottom of the screen. As a side effect, it can also be used to reduce the size of the menus, which may be useful on slow terminals. - word-key key (key, default ^W) - The key which erases the last input component or word when nn is prompting for a string, e.g. the last name in a path name. - wrap-header-margin size (integer, default 6) - When set (non-negative), the customized header fields specified in header-lines will be split across several lines if they don't fit on one line. When size is greater than zero, lines will be split at the first space occurring in the last size columns of the line. If not set (or negative), long header lines will be truncated if they don't fit on a single line.

CUSTOMIZED ARTICLE HEADER PRESENTATION

Normally, nn will just print a (high-lighted) single line header containing the author, subject, and date (optional) of the article when it is read.
By setting the header-lines variable as described below, it is possible to get a more informative multi line header with optional high-lighting and underlining. The header-lines variable is set to a list of header line identifiers, and the customized headers will then contain exactly these header lines in the specified order. The same specifications are also used by the :print and save-short commands via the print-header-lines and save-header-lines variables. The following header line identifiers are recognized in the header-lines, print-header-lines, and save-header-lines variables:
A Approved:
a Spool-File: (path of spool file containing the article)
B Distribution:
C Control:
D Date:
d Date-Received:
F From:
f Sender:
G Newsgroup: (current group)
g Newsgroup: (current group if cross-posted or merged)
I Message-Id:
K Keywords:
L Lines:
N Newsgroups:
n Newsgroups: (but only if cross posted)
O Organization:
P Path:
R Reply-To:
S Subject:
v Save-File: (the default save file for this article)
W Followup-To:
X References:
x Back-References:
Y Summary:
The 'G' and 'g' fields will include the local article number if it is known, e.g. Newsgroup: news.software.nn/754 The following special symbols are recognized in the header-lines variable (and ignored otherwise): Preceding the identifier with an equal sign "=" or an underscore "_" will cause the header field contents to be high-lighted or underlined. A plus sign "+" will use the shading attribute defined by shading-on and shading-off to high-light the field contents. If no shading attribute is defined it will underline the field instead. Including an asterisk "*" in the list will produce the standard one line header at that point.
Example: The following setting of the header-lines variable will show the author (underlined), organization, posting date, and subject (high-lighted) when articles are read: set header-lines _FOD=S

COMMAND LINE OPTIONS

Some of the command line options have already been described, but below we provide a complete list of the effect of each option by showing the equivalent set, unset, or toggle command. Besides the options described below, you can set any of nn's variables directly on the command line via an argument of the following format: variable=value To set or unset a boolean variable, the value can be specified as on or off (t and f will also work). Notice that the init files are read before the options are parsed (unless you use the -I option). Therefore, the options which are related to boolean variables set in the init file will toggle the value set there, rather than the default value. Consequently, the meaning of the options is also user-defined. The explanations below describe the effect related to the default setting of the variables, with the `reverse' effect in square brackets. - -aN {set limit N} - Limit the maximum number of articles presented in each group to N. This is useful to get up-to-date quickly if you have not read news for a longer period. - -a0 - Mark all unread articles as read. See the full explanation at the beginning of this manual. - -B {toggle backup} - Do not [do] backup the rc file. - -d {toggle split} - Do not [do] split digests into separate articles. - -f {toggle fsort} - Do not [do] sort folders according to the subject (present the articles in a folder in the sequence in which they were saved). - -g - Prompt for the name of a news group or folder to be entered. - -i {toggle case-fold-search} - Normally searches with -n and -s are case independent. Using this option, the case becomes significant. - -I - Do not read the init file. This must be the first option!! The global setup file is still read.
- -Ifile-list - Specifies an alternate list of init files to be loaded instead of the standard global and private init files. The list is a comma-separated list of file names. Names which do not contain a `/' are looked for in the ~/.nn directory. An empty element in the list is interpreted as the global init file. The list of init files must not be separated from the -I option by blanks, and it must be the first option. Example: The default behaviour corresponds to using -I,init (first the global file, then the file ~/.nn/init). The global setup file is still read as the first init file independently of the -I option used. - -k {toggle kill} - Do not [do] perform automatic kill and selection of articles. - -lN {set stop N} - Stop after printing the first N lines of each article. This is useful on slow terminals. - -L[f] {set layout f} - Select alternative menu layout f (0 to 4). If f is omitted, menu layout 3 is selected. - -m {no corresponding variable} - Merge all articles into one `meta group' instead of showing them one group at a time. When -m is used, no articles will be marked as read. - -nWORD - Collect only articles which contain the string WORD in the sender's name (case is ignored). If WORD starts with a slash `/', the rest of the argument is used as a regular expression instead of a fixed string. - -N {no corresponding variable} - Disable updating of the rc file. This includes not recording that groups have been read or unsubscribed to (although nn will think so until you quit). - -q {toggle sort} - Do not [do] sort the articles (q means quick, but it isn't any quicker in practice!) - -Q {toggle silent} - Quiet mode - don't [do] print the logo or "No News" messages. - -r {toggle repeat-group-query} - Make -g repeat query for a group to enter. - -sWORD - Collect only articles which contain the string WORD in their subject (case is ignored).
If WORD starts with a slash `/', the rest of the argument is used as a regular expression instead of a fixed string. - -S {toggle repeat} - Do not [do] eliminate duplicated subject lines on menus. - -T {toggle time} - Do not [do] show the current time in the prompt line. - -w[N] {set window N} - Reserve N lines of the menu screen for a preview window. If N is omitted, the preview window is set to 5 lines. - -W {toggle confirm-messages} - [Don't] Wait for confirmation on all messages. - -x[N] {set old N} - Present (or scan) all (or the last N) unread as well as read articles. This will never mark unread articles as read. - -X {no corresponding variable} - Read/scan unsubscribed groups also. Most useful when looking for a specific subject in all groups, e.g. nn -mxX -sSubject all

MACRO DEFINITIONS

Practically any combination of commands and key strokes can be defined as a macro which can be bound to a single key in menu and/or reading mode. The macro definition must specify a sequence of commands and key strokes as if they were typed directly from the keyboard. For example, a string specifying a file name must follow a save command. This manual does not give a complete specification of all the input required by the various commands; it is recommended to execute the desired command sequence from the keyboard prior to defining the macro to get the exact requirements of each command. Although it is possible to define temporary macros interactively using the :define command, macro definitions are normally placed in the init file. Macros are numbered from 0 to 100, i.e. it is possible to define a total of 101 different macros (implicit macros defined with the map command use internal numbers from 101 to 200). To define macro number M, the following construction is used (the line breaks are mandatory):
define M
body
end
The body consists of a sequence of tokens separated by white space (blanks or newlines). However, certain tokens continue to the end of the current line.
The following tokens may occur in the macro body: - Empty lines and text following a # character (preceded by white space) are ignored. - Command Names - Any command name listed in the key mapping section can be included in a macro causing that command to be invoked when the macro is executed. - Extended Commands - All the extended commands which can be executed through the command command (normally bound to the : key) can also be executed in a macro. An extended command starts with a colon (:) and continues to the end of the current line. Example: :show groups total - Key Strokes - A key stroke (which is normally mapped into a command depending on the current mode) is specified as a key name enclosed in single quotes. Examples (A-key, left arrow key, RETURN key): 'A' 'left' '^M' - Shell Commands - External commands can be invoked as part of a macro execution. There are two forms of shell command invocations available depending on whether a command may produce output or require user input, or it is guaranteed to complete without input or output to the terminal. The difference is that in the latter case, nn does not prepare the terminal to be used by another program. When the command completes, the screen is not redrawn automatically; you should use the redraw command to do that. The two forms are:
:!echo this command uses the terminal
:!!echo this command does not > /tmp/file
- Strings - Input to commands prompting for a string, e.g. a file name, can be specified in a macro as a double quoted string. Example (save without prompting for a file name): save-short "+$G" - Conditionals - Conditionals may occur anywhere in a macro; a conditional is evaluated when the macro is executed, and if the condition is false the rest of the current line is ignored.
The following conditionals are available:
?menu    True in menu mode
?show    True in reading mode
?folder  True when looking at a folder
?group   True when looking at a news group
?yes     Query user, true if answer is yes
?no      Query user, true if answer is no
Example (stop macro execution if user rejects to continue):
prompt "continue? " ?no break
In addition to these conditionals, it is possible to test the current value of boolean and integer variables using the following form:
?variable=value
This conditional will be true (1) if the variable is an integer variable whose current value is the one specified, or (2) if the variable is a boolean variable which is either on or off. Examples:
?layout=3 :set layout 1
?monitor=on break
?sort=off :sort age
- break - Terminate macro execution completely. This includes nested macros. Example (stop if looking at a folder): ?folder break
- return - Terminate execution of the current macro. If the current macro is called from another macro, execution of that macro continues immediately.
- input - Query the user for a key stroke or a string, for example a file name. Example (prompt the user for a file name in the usual way): save-short input
- yes - Confirm unconditionally if a command requires confirmation. It is ignored if the command does not require confirmation. Example (confirm creation of new files): save-short "+$G" yes
- no - Terminate execution of the current macro if a command requires confirmation; otherwise ignore it. If neither yes nor no is specified when a command requires confirmation, the user must answer the question as usual - if the user confirms the action, execution continues normally; otherwise the execution of the current macro is terminated. Example (do not create new files): save-short "+$L/misc" no
- prompt string - Print the string in the prompt line (highlighted). The string must be enclosed in double quotes.
Example: prompt "Enter recipient name"
When the macro terminates, the original prompt shown on entry to the macro will automatically be redrawn. If this is not desirable (e.g. if the macro goes from selection to reading mode), the redrawing of the prompt can be disabled by using a prompt command with an empty string (""). Example:
prompt "Enter reading mode?"   # old prompt is saved
?no return                     # and old prompt is restored
read-skip                      # changes the prompt
prompt ""                      # so forget old prompt
- echo string - Display the string in the prompt line for a short period. Example: ?show echo "Cannot be used in reading mode" break
- puts string-to-end-of-line - The rest of the line is output directly to the terminal without interpretation.
- macro M - Invoke macro number M. The maximum macro nesting level is five (this also catches macro loops).
I use the following macro to quickly save all the selected files in a file whose name is entered as usual. It also works in reading mode (saving just the current article).
define 1
    :unset save-report
    save-short input yes
    ?menu '+'
    :set save-report
end

KEY MAPPINGS

The descriptions of the keys and commands provided in this manual reflect the default key mappings in nn. However, you can easily change these mappings to match your personal demands, and it is also possible to remap keys depending on the terminal in use. Permanent remapping of keys must be done through the init file, while temporary changes (for the duration of the current invocation of nn) can be made with the :map command. The binding and mapping of keys are controlled by four tables:
- The multikey definition table - This table is used for mapping multicharacter key sequences into single characters. By default the table contains the mappings for the four cursor keys, and there is room for 10 user-defined multikeys. The fourteen multikeys are named: up, down, right, left (the four arrow keys), and #0 through #9 for the user-defined keys.
Multikey #i (where i is a digit or an arrow key name) is defined using the following command:
map #i key-sequence
where the sequence is a list of 7-bit character names (see below) separated by spaces. For example, if the HOME key sends the sequence ESC [ H, you can define multikey #0 to be the home key using the command:
map #0 ^[ [ H
- The input key mapping table - All characters that are read from the keyboard will be mapped through the input mapping table. Consequently, you can globally remap one key to produce any other key value. By default all keys are mapped into themselves. An entry in the input key mapping table to map input-key into new-key is made with the command:
map key input-key new-key
For example, to make your ESC key function as interrupt you can use the command:
map key ^[ ^G
- The selection mode key binding table - This table defines for each key which command should be invoked when that key is pressed in selection mode, i.e. when the article menu is shown. The command to bind a key to a command in selection mode is:
map menu key command
For example, to have the HOME key defined as multikey #0 above bound to the select command, the following command is used:
map menu #0 select
To remap a key to select a specific article on the menu (which the `a' through `z' keys do by default), the command must be specified as `article N' where N is the entry number on the menu counted from zero (i.e. a=0, b=1, ..., z=25, 0=26, ..., 9=35). For example, to map `J' to select article `j', the following command is used:
map menu J article 9
- The reading mode key binding table - This table defines for each key which command should be invoked when that key is pressed in reading mode, i.e. when the article text is shown.
The command to bind a key to a command in reading mode is:
map show key command
In addition to the direct mappings described above, the following variations of the map command are available:
- User defined keymaps - Additional keymaps can be defined using the command:
make map newmap
This will create a new keymap which can be initialized using normal map commands, e.g.
map newmap key command
To activate a user-defined keymap, it must be bound to a prefix key:
map base-map prefix-key prefix newmap
When used, the prefix key itself does not activate a command; instead, it requires another key to be entered and then executes the command bound to that key in the keymap which is bound to the prefix key. For example, to let the key sequence "^X i" execute macro number 10 in both modes, the following commands can be used:
make map ctl-x
map ctl-x i macro 10
map both ^X prefix ctl-x
- Mapping keys in both modes - Using the pseudo-keymap `both', it is possible to map a key to a command in both selection and reading mode at once. For example, to map the home key to macro number 5 in both modes, the following command can be used:
map both #0 macro 5
- Aliasing - A key can also be mapped directly to the command currently bound to another key. Later remapping of the other key will not change the mapping of the `aliased' key. This is done using the following command:
map keymap new-key as old-key
- Binding macros to keys - A previously defined macro can be bound to a key using the command:
map keymap key macro macro-number
- Implicit macro definitions - An implicit macro can also be defined directly in connection with the map command:
map keymap key ( body... )
Keys and character names are specified using the following notation:
- C - A single printable character represents the key or character itself.
- ^C - This notation represents a control key or character. DEL is written as ^?
- 125, 0175, 0x7D - Characters and keys can be specified by their ordinal value in decimal, octal, and hexadecimal notation.
- up, down, right, left - These names represent the cursor keys.
- #0 through #9 - These symbols represent the ten user-defined multikeys.
If the variable data-bits is 7, key maps can specify binding of all keys in the range 0x00 to 0x7F, and the 8th bit will be stripped in all keyboard input. If the variable data-bits is 8, the 8th bit is not cleared, and key maps are extended to allow binding of keys in the range 0xA0 to 0xFE (corresponding to the national characters defined by the ISO 8859 character sets). Binding commands to these keys can be done either by using their numeric value, or by directly specifying the 8 bit character in the map command, e.g.
map menu 0xC8 macro 72
map key %
To show the current contents of the four tables, the following versions of the :map command are available:
- :map - Show the current mode's key bindings.
- :map menu - Show the selection mode key bindings.
- :map show - Show the reading mode key bindings.
- :map # - Show the multikey definition table.
- :map key - Show the input key mapping table.

STANDARD KEY BINDINGS

Below is a list of all the commands that can be bound to keys, either in selection mode, in reading mode, or both. For each command the default command key bindings in both modes are shown. If the key is not bound in one of the modes, but it can be bound, the corresponding part will just be empty. If the command cannot be bound in one of the modes, that mode will contain the word nix.

Function            Selection mode   Reading mode
advance-article     nix              a
advance-group       A                A
article N           a-z0-9           nix
back-article        nix              b
back-group          B                B
cancel              C                C
command             :                :
compress            nix              c
continue            space            space
continue-no-mark    return           nix
decode
find                =                /
find-next           nix              .
follow              F                f F
full-digest         nix              H
goto-group          G                G
goto-menu           nix              =
junk-articles       J                nix
kill-select         K                K
layout              "                nix
leave-article       nix              l
leave-next          L                L
line+1              , down           return
line-1              /                nix
macro M
mail                M                m M
message             ^P               ^P
next-article        nix              n
next-group          N                N
next-subject        nix              k
nil
overview            Y                Y
page+1              >                nix
page+1/2            nix              d
page-1              <                delete backspace
page-1/2            nix              u
page=0              nix              h
page=1              ^                ^
page=$              $                $
patch
preview             %                %
previous            P                p
print                                P
quit                Q                Q
read-return         Z                nix
read-skip           X                X
redraw              ^L ^R            ^L ^R
reply               R                r R
rot13               nix              D
save-full           S                s S
save-short          O                o O
save-header         E                e E
save-body           W                w W
select              .                nix
select-auto         +                nix
select-invert       @                nix
select-range        -                nix
select-subject      *                *
shell               !                !
skip-lines          nix              tab
unselect-all        ~                nix
unshar
unsub               U                U
version             V                V

See the descriptions of the default bindings for a description of the commands. The pseudo command nil is used to unbind a key.

THE INIT FILES

The init files are used to customize nn's behaviour to local conventions and restrictions and to satisfy each user's personal taste. Normally, nn reads up to. The init file is parsed one line at a time. If a line ends with a backslash `\', the backslash is ignored, and the following line is appended to the current line. The init file may contain the following types of commands (and data):
- Empty lines and lines with a # character as the first non-blank character are ignored. Except where # has another meaning defined by the command syntax (e.g. multi-keys are named #n), trailing comments on input lines are ignored.
- Variable settings - You can set (or unset) all the variables described earlier to change nn's behaviour permanently. The set and unset commands you can use in the init file have exactly the same format as the :set and :unset commands described earlier (except that the : prefix is omitted). Variables can also be locked via the lock command; this is typically done in the setup file to enforce local policies.
- Key mappings - You can use all the versions of the map command in the init file.
- Macro Definitions - You can define sequences of commands and key strokes using the define...end construction, which can then be bound to single keys with the map command.
- Load terminal specific files - You can load a terminal specific file using the command:
load file
The character @ in the file will be replaced by the terminal type defined in the TERM environment variable. nn silently ignores the load command if the file does not exist (so you don't have to have a specific init file for terminals which do not require remapping). If the file is not specified by an absolute pathname, it must reside in your ~/.nn directory. Examples:
# load local customizations
load /usr/lib/nninit
# load personal terminal specific customizations
load [email protected]
- Switch to loading a different init file - You can skip the rest of the current init file and start loading a different init file with the following command:
chain file
If this occurs in the private or global init file, the chained init file may contain a sequence part which will replace the private or global presentation sequence respectively.
- Stop loading current init file - You can skip the rest of the current init file with the following command:
stop
- Give error messages and/or terminate - If an error is detected in the init file, the following commands can be used to print an error message and/or terminate execution:
error fatal error message...
Print the message and terminate execution.
echo warning message...
Print the message and continue.
exit [ status ]
Terminate nn with the specified exit status, or 0 if omitted.
- Change working directory of nn - You can use the cd command to change the working directory whenever you enter nn. Example:
# Use folder directory as working directory inside nn
cd ~/News
- Command groups - The init file can contain groups of commands which are executed under special conditions.
The command groups are described in the section on command groups below.
- One or more save-files sections - A save-files section is used to assign default save files to specific groups:
save-files
group-name (pattern) file-name
...
end
The group name (patterns) and save file names are specified in the same way as in the presentation sequence (see below). Example:
save-files
news* +news/$L
comp.sources* /u/src/$L/
end
- The news group presentation sequence - The last part of the init file may specify the sequence in which you want the news groups to be presented. This part starts with the command sequence and continues to the end of the init file. Both init files may contain a presentation sequence. In this case, the global sequence is appended to the private sequence.

COMMAND GROUPS

Command groups may only occur in the init file, and they provide a way to have series of commands executed at certain points during news reading. From release 6.4 onwards, these possibilities are still rather rudimentary, and a mixture of normal init file syntax and macro syntax is used depending on whether the command group is only executed on start-up or several times during the nn session. A command group begins with the word on and ends with the word end. The following command groups are conditionally executed during the parsing of the init file if the specified condition is true. They may also have an optional else part which is executed if the condition is false:
on condition
commands
[ else
commands ]
end
The following conditional command groups may be used in the init file to be executed at start-up:
- on [ test ] - The commands (init file syntax) in the group are executed only if the specified test is true. A shell is spawned to execute the command "[ test ]", so all the options of the test(1) command are available.
For example, to unset the flow-control variable if the tty is a pseudo-tty, the following conditional can be used:
on [ -n "`tty | grep ttyp`" ]
unset flow-control
end
- on !shell command - The command group is executed if the given shell command exits with 0 status (success). Care should be taken that the command does not produce any output, e.g. by redirecting its output to /dev/null. For example, to prevent people from reading news if the load is above a specific level, the following conditional might be placed in the global setup file:
on !load-above 5
error load is too high, try again later.
end
- on `shell command` string... - The command group is executed if the first output line from executing the specified shell command is listed among the specified string values. The shell command can be omitted on subsequent occurrences of this conditional, in which case the output from the last shell command is used. For example, the following conditional can be used to switch to an init file which has a limited sequence for news reading during working hours, evenings, and nights:
on `date +%H` 9 10 11 12 13 14 15 16
chain init.work
end
on `` 17 18 19 20 21
chain init.evening
else
chain init.night
end
- on `` string... - This is equivalent to the previous form except that instead of executing a shell command, the output from the previous shell command is used.
- on $variable [ value ] - If no value strings are specified, the command group is executed if the given variable is defined in the environment. Otherwise, the command group is executed only if the value of the variable occurs in the value list. For example, if you want nn to look for mail in whatever $MAIL is set to - if it is set - you can use the following code:
on $MAIL
set mail $(MAIL)
end
- on slow - The commands (init file syntax) in the group are executed only if the current terminal output speed is less than or equal to the baud rate set in the slow-speed variable.
This can be used to optimize the user-interface for slow terminals by setting suitable variables:
on slow
set confirm-entry
set slow-mode
set delay-redraw
unset visible-bell
set compress
unset header-lines
set stop 5
set window 10
end
- on fast - Same as on slow except that the commands are only executed when the terminal is running at a speed above the slow-speed value.
- on term term-type... - The commands are executed if one of the term-type names is identical to the value of the TERM environment variable.
- on host host-name... - The commands are executed if the local host's name occurs in the host-name list.
- on program program-name... - The commands are executed if the current program (nn, nncheck, etc.) is in the program-name list.
The following on command groups are really macros which may be executed during nn's normal processing, and as such they cannot have an else part.
- on entry [ group list ] - These commands (macro format!) are executed every time nn enters a news group. If a group list is not specified, the commands are associated with all groups which don't have their own entry macro specified in the group sequence. Otherwise, the entry macro will be associated with the groups in the list. The group list is specified using the meta-notations described in the presentation sequence section. All `:' commands at the beginning of the command group are executed before nn collects the articles in the group, so it is possible to set or unset variables like cross-post and auto-read-mode-limit before any articles are collected and the menu is (not) shown. The non-`:' commands, and `:' commands that follow a command of another type, will be executed immediately after the first menu page is presented. The execution of a `:' command can be postponed by using a double `::' as the command prefix.
on entry comp.sources* alt.sources
:set cross-post on               # set before collection
:local auto-read-mode-limit -1   # set before showing menu
::unset cross-post               # set after collection
end
- on start-up - These `:' commands (macro format!) are executed on start-up just before nn enters the first news group. However, postponed commands (i.e. non-`:' commands) will not be executed until the first group is shown (it works like an entry macro).

GROUP PRESENTATION SEQUENCE

News groups are normally presented in the sequence defined in the system-wide init file in nn's library directory. You can personalize the presentation sequence by specifying an alternative sequence in the private init file. The sequence in the private init file is used before the global presentation sequence, and need only describe the deviations from the default presentation sequence. The presentation sequence must start with the word
sequence
followed by a list of the news group names in the order you want them to be presented. The group names must be separated by white space. The sequence list must be the last part of the init file (the parsing of commands from the init file stops when the word sequence is encountered). You may use a full group name like "comp.unix.questions", or just the name of a main group or subgroup, e.g. "comp" or "comp.unix". However, if "comp" precedes "comp.unix.questions" in the list, this subgroup will be placed in the normal alphabetic sequence during the collection of all the "comp" groups. Groups which are not explicitly mentioned in any of the sequence files will be placed after the mentioned groups, unless `!!' is used and it has not been disabled (as described below). Each group name may be followed by a file or folder name (which must start with one of `/', `~', or `+') which will specify the default save file for that group (and its subgroups). A single `+' following the group name is an abbreviation for the last save file name used.
For example, the following two sequences are equivalent:
group1 +file
group2 +file
group3 +file

group1 +file
group2 +
group3 +
When an article is saved, the default save name will be used as the initial contents of the file name prompt for further editing. It therefore does not need to be a complete file name (unless you use the quick save mode). Each group name may also be associated with a so-called entry action. This is basically an (unnamed) macro which is invoked on entry to the group (following the same rules as the `on entry' command group related to :set and :unset commands). The entry action begins with a left parenthesis `(' and ends with a right parenthesis `)' on an otherwise empty line:
comp.sources. +src/$L/ (
:set cross-post
)
The last entry action can be repeated by specifying an empty set of parentheses, e.g.
comp.unix. +unix ()
The entry action of a preceding group in the sequence can be associated with the current group(s) by specifying the name of the group in the parentheses instead of the commands, e.g.
comp.unix. +unix (comp.sources.unix)
A macro can also be associated with the entry action by specifying its number in the same way as the group name above, e.g.
rec.music. +music (30)
Notice that it is the current definition of the macro which is associated with the group, so if the macro is later redefined with the `:define' command, it will not have any effect on the entry action. Group names can be specified using the following notations:
- group.name - Append the group (if it exists) to the presentation sequence list. If also-subgroups is set (default), all subscribed subgroups of the group will be included as well (if there are any). Examples: "comp", "comp.unix", "comp.unix.questions". If the group does not exist (e.g. "comp"), the subgroups will be included even when also-subgroups is not set, i.e. "comp" is equivalent to "comp.".
- group.name. - Append the subgroups of the specified group to the presentation sequence.
The group itself (if it exists) is not included. Examples: "comp.", "comp.unix.".
- .group.name - Append the groups whose name ends with the specified name to the sequence. Example: ".test".
- group.name* - Append the group and its subgroups to the presentation sequence list (even when also-subgroups is not set). Example: "comp.unix*".
The following meta notation can be used in a sequence file. The group.name can be specified using any of the forms described above:
- ! groups - Completely ignore the group or groups specified unless they are already in the presentation sequence (i.e. have been explicitly mentioned earlier in the sequence).
- !:code groups - Ignore a selection of groups based on the given code letter (see below), unless they are already included in the sequence. Notice that these forms only exclude groups from the presentation sequence, i.e. they do not include the remaining groups at this point; that must be done explicitly elsewhere.
- !:U groups - Ignore unsubscribed groups, i.e. if they are neither new, nor present and subscribed in .newsrc. This is useful to ignore a whole hierarchy except for a few groups which are explicitly mentioned in .newsrc, and still see new groups as they are created.
- !:X groups - Ignore unsubscribed and new groups, i.e. if they are not currently present and subscribed in .newsrc. This is useful to ignore a whole hierarchy except for a few groups which are explicitly mentioned in .newsrc. New groups in the hierarchy are ignored unless `NEW' occurs earlier in the sequence.
- !:O groups - Ignore old groups, i.e. unless they are new. This is useful to ignore a whole hierarchy but still see new groups which are created in the hierarchy (it might become interesting some day). Individual groups can still be included in the sequence if they are specified before the `!:O' entry.
- !:N groups - Ignore new groups in the hierarchy.
- !! - Stop building the presentation sequence.
This eliminates all groups that are not already in the presentation sequence.
- NEW - This is a pseudo group name which matches all new groups; you could place this symbol early in your presentation sequence to see new groups `out of sequence' (to attract your attention to them).
- RC - This is a pseudo group name which matches all groups occurring in the .newsrc file. It will cause the groups in .newsrc to be appended to the presentation sequence in the order in which they are listed in .newsrc.
- RC:number - Similar to the RC entry, but limited to the first number lines of the .newsrc file. Example: RC:10 (use 10 lines of .newsrc).
- RC:string - Similar to the RC entry, but limited to the lines up to (and including) the first line (i.e. group) starting with the given string. For example: RC:alt.sources
- < group.name - Place the group (and its subgroups) at the beginning of the presentation sequence. Notice that each `<' entry will place the group(s) at the beginning of the current sequence, i.e. < A < B < C will generate the sequence C B A.
- > group.name - Place the group (and its subgroups) after all other groups that are and will be entered into the presentation sequence.
- @ - Disable the `!!' command. This can be included in the personal presentation sequence if the global sequence file contains a !! entry (see example 1 below).
- % .... % - Starts and ends a region of the sequence where it is possible to include groups which have been eliminated earlier. This may be useful to alter the sequence of some groups; e.g. to place comp.sources.bugs after all other source groups, the following sequence can be used:
! comp.sources.bugs
comp.sources*
% comp.sources.bugs %
Example 1: A company where ordinary users should only read the local news groups and ignore the rest (including new news groups, which are otherwise always subscribed to initially) can use the following global presentation sequence:
general
follow
! local.test
local
!!
The "expert" users in the company must put the @ command somewhere in their private sequence to avoid losing news groups which they have not explicitly mentioned in their init file.
Example 2: This is the global sequence for systems with heavy news addicts who set up their own sequences anyway.
# all must read the general news first
< general
# test is test, and junk is junk,
# so it is placed at the very end
> test
> .test
> junk
# this is the standard sequence which everybody may
# change to their own liking
local          # our local groups
dk             # the Danish groups
eunet.general  # to present it before eunet.followup
eunet          # the other European groups
comp           # the serious groups
news           # news on news
sci            # other serious groups
rec            # not really that important (don't quote me)
misc           # well, it must be somewhere
# the groups that are not listed above go here
Notice the use of comments in the sequence, where they are allowed at the end of non-empty lines as well.
Example 3: My own presentation sequence (in the init file) simply lists my favourite groups and the corresponding default save files:
sequence
!:U alt*       # ignore unsubscribed alt groups
news.software.nn +nn
comp.sys.ti* +ti/$L
NEW            # show new groups here
news*
rec.music.synth +synth/
comp.emacs*,gnu.emacs +emacs/misc
comp.risks +risks
eunet.sources +src/unix/
comp.sources* +src/$L/
The presentation sequence is not used when nn is called with one or more news group names on the command line; it is thus possible to read ignored groups (on explicit request) without changing the init file. (Of course, you can also use the G command to read ignored groups.)

MERGING NEWS GROUPS

The third example above contains the following line:
comp.emacs*,gnu.emacs +emacs/misc
This is the syntax used to merge groups. When two or more groups are merged, all new articles in these groups are presented together as if they were one group. To merge groups, their names must be listed together in the sequence, and only separated by a single comma.
To merge the groups resulting from a single group pattern (e.g. comp.emacs*), the group pattern must be followed by a comma and a blank (e.g. comp.emacs*, ...). Merged groups are presented as the first group in the "list", and the word "MERGED" will be shown after the group name. The Y {overview} command will still show merged groups as individual groups, but they will be annotated with the symbol `&' on the first of the groups, and a `+' on the rest of the groups. In the current version, the concept of the current group in connection with merged groups is a bit fuzzy. This should only be noticeable with the G command, which will take the most recently used group among the merged groups as the current group. So things like G = ... may not always work as expected.

ENVIRONMENT

The following environment variables are used by nn:
EDITOR - The editor invoked when editing replies, follow-ups, and composing mail. nn knows about the following editors: vi, ded, GNU emacs, and micro-emacs, and will try to position the cursor on the first line following the header, i.e. after the blank line which must not be deleted! If an article has been included, the cursor is placed on the first line of the included text (to allow you to delete sections easily).
LOGNAME - This is taken as the login name of the current user. It is used by nn to return failed mail. If it is not defined, nn will use the value of USER, or if that is not defined either, nn will use the call `who am i' to get this information. If all attempts fail, the failed mail is dropped in the bit bucket.
PAGER - This is used as the initial value of the pager variable.
SHELL - This is the shell which is spawned if the system cannot suspend nn, and it will be used to execute the shell escapes.
TERM - The terminal type.
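As a brief, hedged illustration (the specific values below are examples chosen for this sketch, not defaults or recommendations), the environment variables described above might be set from a Bourne-style shell before starting nn:

```shell
# Hypothetical pre-launch setup for nn; every value here is an example.
EDITOR=vi; export EDITOR     # editor invoked for replies and follow-ups
PAGER=more; export PAGER     # initial value of nn's pager variable
LOGNAME=${LOGNAME:-`id -un`}; export LOGNAME  # login name, used to return failed mail
echo "$EDITOR $PAGER"
```

Variables left unset simply fall back to the lookup order described above (e.g. LOGNAME, then USER, then `who am i').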
NOTES

When NNTP is being used over a slow link (as with the ppp protocol and a modem), it may be desirable to suppress the retrieval of the information about new newsgroups, and their purpose, since they can be hundreds of KBytes in size. To do this, the new-group-action and show-purpose-mode variables should be set to 0 in your init file. See the descriptions of those variables for more info. Unfortunately, the list of active newsgroups is still fetched, since nn uses it to determine which groups to check for new articles. Even this could be avoided, but the cost would be checking for new articles in every group, which might well be slower overall, although startup would be faster.

FILES

~/.newsrc - The record of read articles.
~/.nn/select - The record of selected and seen articles.
~/.nn/init - Personal configuration and presentation sequence.
~/.nn/kill - The automatic kills and selections.
~/.nn/KILL.COMP - The compiled kill file.
~/.nn/LAST - The time stamp of the last new news group we have seen.
~/.nn/NEXTG - Active group last time nn was quit.
~/.nn/.param - Parameter file for the aux script.
$lib/setup - System-wide setup - always read first.
$lib/init - System-wide setup and presentation sequence.
$lib/aux - The response edit and send script.
$lib/routes - Mapping rules for mail addresses (on non-domain systems).
$db/* - The news data base.
/etc/termcap - Terminal data base [BSD].
/usr/lib/terminfo/* - Terminal data base [SysV].
/usr/local/lib/nntp_server - Name of the remote nntp server, if not changed by setting the environment variable NNTPSERVER or the nntp-server variable on the command line.
The names $lib and $db are the directories used for the auxiliary files and the news data base respectively. Their name and location is defined at compile time. Common choices are /usr/local/lib/nn or /usr/lib/news/nn for $lib and /usr/spool/nn or /usr/spool/news/.nn for $db.

ORIGINAL AUTHOR

Kim F. Storm, Texas Instruments A/S, Denmark

CURRENT MAINTAINER

Michael T Pins [email protected]

The NNTP support was designed and implemented by René Seindal, Institute of Datalogy, University of Copenhagen, Denmark. The news.software.nn group is used for discussion on all subjects related to the nn news reader. This includes, but is not limited to, questions, answers, ideas, hints, information from the development group, patches, etc.
https://manpages.org/nn
I don’t know about you, but as someone who lives on planet Earth, I believe that the climate crisis is an issue that needs to be solved. Personally, I would like to solve it through building more renewables as I outline on my climate blog. Since the climate crisis has been a pressing issue for the last, oh I don’t know, 50 years? 60 years? I’d expect that there’d be a good amount of news about climate related topics. I didn’t know, but I decided to find out. To do this, I used the New York Times Archive API and pulled the headlines of articles from 2008 to 2021. For a detailed explanation of how to use the New York Times Archive API to download archived news headlines, please see How to Download Archived News Headlines. For those of you uninterested in the code, skip directly to the findings.

Checking Titles for Mentions of Climate

At this point, I’m going to assume that you’ve already downloaded all the data from the NY Times archive API into JSON format as I outlined in the article above. The first thing we’ll do for our project to track mentions of climate in the news over time is create a function that will extract the headlines for each month. To get started we’ll have to import the json library and the month_dict we created in the link on how to download archived titles above. It’s a relatively simple dictionary with the month number as the key and the month name as the value.

```python
import json
from archive import month_dict
```

Now let’s make a function that checks each headline for mentions of the climate. This function will take two parameters, the year and the month that we’re interested in. The first thing we’ll do is open up our file and store the JSON information into a dictionary titled entries. We’ll enclose this in a try/except block just in case the file doesn’t exist. Next, we’ll save the length of our entries as total_headlines, which represents the total number of news headlines in that month.
From here we’ll start off our count, which I’m storing in a variable called cc for “climate count”, at 0. As we loop through each headline in the entry, if the headline contains the word “climate” we’ll increment cc by one. At the end, we’ll return a tuple of the total number of headlines and the count of the number of headlines that contain the word “climate”.

```python
# checks headlines for climate change
def cc_finder(year, month):
    filename = f"{year}/{month_dict[month]}.json"
    try:
        with open(filename, "r") as f:
            entries = json.load(f)
    except:
        print("No such file")
        return
    # get every headline
    # check if it has climate change in it
    total_headlines = len(entries)
    # print(total_headlines)
    cc = 0
    for entry in entries:
        headline = entry['headline']['main']
        if "climate" in headline.lower():
            cc += 1
    # print(cc)
    return (total_headlines, cc)
```

Alright, now that we’ve created a function to return the number of articles per month and the number of article headlines containing the word “climate” over time, let’s graph our findings. By the way, it’s not necessary to return this as a tuple; we could also just return the proportion, but I’ve chosen to return it as a tuple because I’d like to actually see the numerical comparisons as well. Now let’s take a look at how we can graph our findings.

Create the Function to Graph How Often Climate is Mentioned in the News

We’ll define a function, graph_cc, that stands for “graph climate count”. I graph three figures here, but if you only care about the first one, simply replace the ratio.show() line with plt.show() and feel free to delete the rest of the code. I’ve already downloaded all the data for the years 2008 through November (so far) of 2021. Since we’re only a few days into November, let’s keep in mind that the article counts for November will be low.
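As a quick sanity check, the logic of cc_finder can be exercised end-to-end against a tiny hand-made fixture. Everything below is self-contained: the month_dict entry and the directory layout are stand-ins mirroring the article's setup, not the real NYT data.

```python
import json
import os

# Stand-in for the article's month_dict (assumed shape: month number -> month name).
month_dict = {10: "October"}

def cc_finder(year, month):
    filename = f"{year}/{month_dict[month]}.json"
    try:
        with open(filename, "r") as f:
            entries = json.load(f)
    except OSError:
        print("No such file")
        return
    total_headlines = len(entries)
    cc = 0
    for entry in entries:
        headline = entry['headline']['main']
        if "climate" in headline.lower():
            cc += 1
    return (total_headlines, cc)

# Build a three-headline fixture and run the counter over it.
os.makedirs("2020", exist_ok=True)
with open("2020/October.json", "w") as f:
    json.dump([
        {"headline": {"main": "Climate Summit Opens"}},
        {"headline": {"main": "Local Sports Roundup"}},
        {"headline": {"main": "New climate report released"}},
    ], f)

total, cc = cc_finder(2020, 10)
print(total, cc)  # → 3 2
```

The lowercasing in the comparison is what lets both "Climate" and "climate" count as matches.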
Also, since it is around COP26 right now, we should expect a higher ratio and count for climate articles, and we’ll come back to this again in a few months to investigate if the numbers are artificially inflated or not.

In our function we’ll start by creating a list of years that will contain the years 2008 through 2021. I’ve defined four lists: xs is the x values that represent months since January 2008 (starting at 0), ys is the y values that are in ratio form, ys_total is the y values for the total number of articles in a month, and ys_cc is the values for the total number of articles containing the word ‘climate’ in that month. We also have a variable called months_since_2008 that we increment every time we loop through a month.

Then we set up a nested for loop to loop through each month for all the years we defined in our year list and get all the x and y values we need. Finally, we plot each of our findings. Notice that I end with an input() statement; that’s to keep the program running long enough to actually see the plots.
```python
import matplotlib.pyplot as plt  # needed for the plots below

def graph_cc():
    years = list(range(2008, 2022))
    xs = []
    ys = []
    ys_total = []
    ys_cc = []
    months_since_2008 = 0
    for year in years:
        for i in range(1, 13):
            if year == 2021 and i > 11:
                continue
            total, cc = cc_finder(year, i)
            ratio = cc / total
            xs.append(months_since_2008)
            months_since_2008 += 1
            ys.append(ratio)
            ys_total.append(total)
            ys_cc.append(cc)

    ratio = plt.figure(1)
    plt.plot(xs, ys)
    plt.xlabel("Months since January 2008")
    plt.ylabel("Proportion of News Headlines about Climate")
    plt.title("Climate in the News Over Time")
    plt.show()

    total = plt.figure(2)
    plt.plot(xs, ys_total)
    plt.xlabel("Months since January 2008")
    plt.ylabel("Total Number of Articles per Month")
    plt.title("Number of NY Times Articles per Month over Time")
    total.show()

    cc = plt.figure(3)
    plt.plot(xs, ys_cc)
    plt.xlabel("Months since January 2008")
    plt.ylabel("Total Number of Articles Mentioning Climate")
    plt.title("Number of NY Times Articles Mentioning Climate per Month over Time")
    cc.show()

    input()
```

Climate In the News Over Time: Graphed Findings

Once we run this we should see the following plot (I’ll leave the other two to the appendix).

This is both disheartening and quite interesting. We can see that climate has gotten almost no mentions in the news since 2008 up until literally October of 2021. The average is around 0.004 over this time when we include the last couple months. That’s INSANE! That means, on average, less than half a percent of news (NY Times anyway) articles in the last 13 years have mentioned climate. THIS IS THE MOST IMPORTANT ISSUE OF OUR GENERATION! I’ve included the last two graphs after this, but like WHAT IN THE WORLD?? HOW? WHY? We need to focus more on climate change and how to fight it, and for that, we’ll need the media’s help.

Appendix (the other two images)

These other two images are kind of interesting too, but I wanted to wrap up after showing the ratio of climate news to total news because that’s CRAZY to me.
The number of NY Times news articles per month has been trending down over time, who knew? Also, there was a weird dip between 2010 and 2012; I wonder why? I’ll have to do some snooping to find out. One slightly positive note is that even though the total number of articles per month has been trending down, the number of climate articles has remained relatively consistent and is even currently trending up!
https://pythonalgos.com/climate-mentions-in-the-news-shockingly-low/
The environment

As with any other shell, marcel has environment variables. In Python terms, these variables exist in a Python namespace that is made available to marcel commands. So, for example, you can examine the value of the HOME variable as follows: The parentheses delimit the Python expression HOME. That symbol is located in marcel's namespace (the environment), and the value is printed.

You can create or modify environment variables by using conventional assignment syntax. The following example assigns the integer 123 to the variable x, and then prints out the value of the variable.

Your environment variables can store structured types too. For example, you can assign a list: [1, 'two', 4-1] is a list, and by enclosing this expression in parentheses, the value of the list can be assigned to the variable x.

Some Python types, like lists, can be modified, and this can be done through marcel. For example, to append to x's list:

Finally, environment variables can be deleted:
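The inline command examples appear to have been stripped from this page when it was extracted. The session below is a reconstructed sketch of what the examples likely look like; the prompt, the sample values, and the exact surface syntax are assumptions and should be checked against marcel's own documentation. (The deletion example is omitted because I am not certain of marcel's syntax for it.)

```
$ (HOME)
/home/alice

$ x = 123
$ (x)
123

$ x = ([1, 'two', 4-1])
$ (x)
[1, 'two', 3]

$ (x.append('four'))
$ (x)
[1, 'two', 3, 'four']
```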
https://www.marceltheshell.org/the-environment
I had been working on the Puppet Editor Services project for a while and I needed something different, but not too different. I use a lot of PowerShell, so it was time to give back to the community and help make the PowerShell extension better. I went through the issue list and one of them caught my eye:

Collapsible/expandable Functions, Regions, Comment blocks, and Comment based help blocks

It seems that the code folding isn’t great, and a new VS Code feature, syntax code folding, would help with that. Also it was a popular request from the community so it was definitely a wanted feature! This blog post is about my journey to creating the PowerShell VS Code Extension Syntax Folder. It won’t contain deep dives into all of the code, but more how I arrived at the solution and some of the code used to do that.

Initial thoughts on the solution

According to the API documentation the folding provider just returns an array of zero or more FoldingRange objects:

```typescript
class FoldingRange {
    start: number;
    end: number;
    kind?: FoldingRangeKind;  // one of: Comment, Imports, Region
}
```

(Post blog note: the kind is optional so you don’t have to set it.)

So each object has a start and end line number, and then a number representing what type of range it is. VS Code uses this for commands like Fold all comment regions. Now that I knew what information I needed to extract, how would I get that? After doing work on the Puppet Syntax files I knew that I could use the textmate grammar files to parse a PowerShell script into its grammar tokens. These tokens could then be used to figure out where the code could be folded. And because @omniomi had done a lot of work on the PowerShell syntax files I had a lot of confidence that they could be parsed correctly.

The first solution

Reference - Github Pull Request

Making Hello World

I firstly created a static folding provider, whereby it didn’t query anything, but just returned a static list of folding regions.
I don’t have the code available for this but it looked a little like this:

src/main.ts

```typescript
import { FoldingFeature } from "./features/Folding";
...
extensionFeatures = [
    ...
    new FoldingFeature(documentSelector),
];
```

src/features/Folding.ts

```typescript
export class FoldingProvider implements vscode.FoldingRangeProvider {
    public async provideFoldingRanges(
        document: vscode.TextDocument,
        context: vscode.FoldingContext,
        token: vscode.CancellationToken,
    ): Promise<vscode.FoldingRange[]> {
        return [new vscode.FoldingRange(4, 6, 3)];
    }
}
...
export class FoldingFeature implements IFeature {
    private foldingProvider: FoldingProvider;

    constructor(documentSelector: DocumentSelector) {
        this.foldingProvider = new FoldingProvider();
        vscode.languages.registerFoldingRangeProvider(documentSelector, this.foldingProvider);
    }

    public dispose(): any {
        return undefined;
    }

    public setLanguageClient(languageclient: LanguageClient): void {
        return undefined;
    }
}
```

The Folding.ts file has a folding Provider (FoldingProvider) and Feature (FoldingFeature) class. The Provider class generates the Folding Ranges. In this case I’m using a static list (return [new vscode.FoldingRange(4, 6, 3)];) which generates a single range from line 5 to 7 (line numbers start at zero in the API) as a comment (3 = Comment range). The Feature class registers the provider within VS Code. In the main.ts file, we create the folding feature when the extension starts up. This is the standard template used in the VS Code PowerShell extension:

Extension --> Feature --> Provider

Source - Loading the grammar

The next thing to do was load the textmate grammar parsing library. In VS Code this comes from the vscode-textmate npm module. However loading it was a little difficult. Fortunately someone had already come across this, and had posted a solution in VS Code Issue 46281.
All I did was adapt this code into the Feature class, and we now had a function called getCoreNodeModule which would load vscode-textmate … However, this could only load the module at runtime. This meant I didn’t have access to any of the typescript typings, even though they existed, which was annoying.

Source - Finding grammar file

Now I needed the PowerShell Textmate grammar file. Unfortunately this file isn’t actually distributed in this extension, it comes vendored directly into VS Code itself. VS Code does have the ability to query loaded extensions so we could go through all of the extensions, looking for the one that contributes a powershell grammar file.

```typescript
private powerShellGrammarPath(): string {
    // Go through all the extension packages and search for PowerShell grammars,
    // returning the path to the first we find
    for (const ext of vscode.extensions.all) {
        if (!(ext.packageJSON && ext.packageJSON.contributes && ext.packageJSON.contributes.grammars)) {
            continue;
        }
        for (const grammar of ext.packageJSON.contributes.grammars) {
            if (grammar.language !== "powershell") {
                continue;
            }
            return path.join(ext.extensionPath, grammar.path);
        }
    }
    return undefined;
}
```

Source - Creating tokens

Lastly, now that we had the grammar file and the grammar parser, we could parse a text document into a series of grammar tokens using the tokenizeLine function.

Grammar Tokens

So what do the tokens look like? Given a simple file:

```powershell
function New-VSCodeShouldFold {
<#
.SYNOPSIS
Displays a list of WMI Classes based upon a search criteria
.EXAMPLE
Get-WmiClasses -class disk -ns root\cimv2"
#>
```

When you tokenize the document you get the following tokens. A token is a startIndex, endIndex and array of scopes. Note that the text column doesn’t actually exist on the token, but I added it so you can see what the token is referring to.
… Converting tokens to folding regions

Braces and parentheses

If you look at the example above you can see that the brace character ({) has a particular scope name: punctuation.section.braces.begin.powershell. In fact this was also true for the closing brace and for parentheses. So to find the foldable regions we need to go through all of the tokens looking for a beginning token, and then continue looking for an ending token. This would give a simple token pair. But that wouldn’t be enough, as the folding regions work with line numbers, not document indexes. Fortunately the VS Code document object has a handy helper for this, positionAt, where you pass in an offset or index and it returns a Position object which has a line property. Now we had all the information we needed, however there was one problem: what about nested regions? For example:

```
$scriptblock = {      <---- There should be folding here
    $hash = @{        <---- And folding here
        'key' = 'value'
    }
}
```

In this case I used a stack to keep track of the state as it processed the tokens. Whenever it encountered a starting token I added the token to the stack, and when it found an ending token I popped a token off of the stack.

```
$scriptblock = {   (1)  <---- PUSH 1
    $hash = @{     (2)  <---- PUSH 2
        'key' = 'value'
    }              (3)  <---- POP 2
}                  (4)  <---- POP 1
```

So (2) and (3) will be paired, and (1) and (4) will be paired. Because the detection code was extracted into a generic method called matchScopeElements, I could very easily add detection for both braces and parentheses. And if I ever needed it for other tokens, it would be trivial to add them too.
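The push/pop pairing described above is language-independent. Here is a minimal Python sketch of the same idea; the function name and the (line, scope) token shape are illustrative, not the extension's actual TypeScript API:

```python
def match_pairs(tokens, begin_scope, end_scope):
    """Pair begin/end tokens using a stack, so nested regions pair correctly.

    `tokens` is a list of (line_number, scope) tuples; returns (start, end) pairs,
    innermost regions first.
    """
    stack, pairs = [], []
    for line, scope in tokens:
        if scope == begin_scope:
            stack.append(line)                  # remember where this region opened
        elif scope == end_scope and stack:
            pairs.append((stack.pop(), line))   # close the innermost open region
    return pairs

# Tokens for the nested $scriptblock / $hash example above.
tokens = [
    (0, "punctuation.section.braces.begin.powershell"),   # $scriptblock = {
    (1, "punctuation.section.braces.begin.powershell"),   #   $hash = @{
    (3, "punctuation.section.braces.end.powershell"),     #   }
    (4, "punctuation.section.braces.end.powershell"),     # }
]
print(match_pairs(tokens,
                  "punctuation.section.braces.begin.powershell",
                  "punctuation.section.braces.end.powershell"))
# → [(1, 3), (0, 4)]
```

The inner `$hash` block (lines 1-3) pairs before the outer `$scriptblock` block (lines 0-4), exactly as the PUSH/POP trace shows.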
```typescript
// Find matching braces { -> }
this.matchScopeElements(
    tokens,
    "punctuation.section.braces.begin.powershell",
    "punctuation.section.braces.end.powershell",
    vscode.FoldingRangeKind.Region,
    document)
    .forEach((match) => { matchedTokens.push(match); });

// Find matching parentheses ( -> )
this.matchScopeElements(
    tokens,
    "punctuation.section.group.begin.powershell",
    "punctuation.section.group.end.powershell",
    vscode.FoldingRangeKind.Region,
    document)
    .forEach((match) => { matchedTokens.push(match); });
```

Here Strings

PowerShell Here strings are multi-line string literals that can either be bounded by @' .... '@ or @" .... "@. They are a little trickier than the braces because there are no start or stop tokens. Instead the starting, ending and middle tokens will all contain the scope string.quoted.single.heredoc.powershell (or string.quoted.double.heredoc.powershell for the double quoted here string). So we are looking for contiguous (non-breaking) groups of tokens. For example, for a PowerShell script:

```powershell
...
$I = @"
double quoted herestring
"@
Write-Host $I
...
```

It would have the following tokens. In the example above, as we process the tokens in order, we store the starting token when we first see the string.quoted.double.heredoc.powershell scope. Then, we check the subsequent tokens to make sure they have the required scope. When we find a token that doesn’t have the required scope, we know this is the end of the block. We can then convert the start and end token to line numbers. I created a generic function called matchContiguousScopeElements, which takes a list of tokens and a scope name, and returns a list of lines where the contiguous block starts and ends.
```typescript
// Find contiguous here strings @' -> '@
this.matchContiguousScopeElements(
    tokens,
    "string.quoted.single.heredoc.powershell",
    vscode.FoldingRangeKind.Region,
    document)
    .forEach((match) => { matchedTokens.push(match); });

// Find contiguous here strings @" -> "@
this.matchContiguousScopeElements(
    tokens,
    "string.quoted.double.heredoc.powershell",
    vscode.FoldingRangeKind.Region,
    document)
    .forEach((match) => { matchedTokens.push(match); });
```

There are three types of comments, and each type of comment required a different technique to detect:

Block Comments

```powershell
<#
Block Comment
#>
```

Region Blocks

```powershell
#region Region blocks
$something = 'value'
#endregion
```

Contiguous Line Comments

```powershell
# Line Comment Block
# Line Comment Block
# Line Comment Block
```

Block Comments

The Block Comments are the easiest to detect as they have a start and stop token scope: punctuation.definition.comment.block.begin.powershell and punctuation.definition.comment.block.end.powershell. In this case we can use the matchScopeElements function that we created for the braces and parentheses detection.

```typescript
// Find matching block comments <# -> #>
this.matchScopeElements(
    tokens,
    "punctuation.definition.comment.block.begin.powershell",
    "punctuation.definition.comment.block.end.powershell",
    vscode.FoldingRangeKind.Comment,
    document)
    .forEach((match) => { matchedTokens.push(match); });
```

Region Blocks

Detecting the region blocks was a little more difficult because they had no unique scope name. They are just line comments as far as the grammar parser is concerned. So to do this I instead chose to parse all of the tokens and extract all of the comment lines that start with region or endregion. I created a helper function called extractRegionScopeElements which does the following:

- Find all of the tokens which are a line comment
- For these tokens, only select line comments which start at the beginning of a line, e.g.
$foo = 'bar' # region will not match.
- Now for these tokens, if the line comment text starts with region then return a new token with a scope of custom.start.region. If the line comment text starts with endregion then return a new token with a scope of custom.end.region.

Once I had these new tokens, I could then, again, use the matchScopeElements function to match the beginning and end of regions.

```typescript
// Find matching comment regions #region -> #endregion
this.matchScopeElements(
    this.extractRegionScopeElements(tokens, document),
    "custom.start.region",
    "custom.end.region",
    vscode.FoldingRangeKind.Region,
    document)
    .forEach((match) => { matchedTokens.push(match); });
```

Contiguous Line Comments

Contiguous line comments were the most difficult. As well as detecting line comments, it also needed to ensure the line comments were not broken up:

```
# Comment Block                            |-- This is the first block
# Comment Block                            |
$x = 'This will break the comment block'
# Comment Block                            |-- This is the second block
# Comment Block                            |
```

To do this, I created the matchContiguousScopeElements helper function:

- For each token, find a line comment
- If the next token is also a line comment, then continue processing. If not, then this is the end of the comment block, and the start and end tokens are returned as a match

Adding a setting to disable the syntax folder

While the syntax folder was probably going to work really well, it did need an option to turn it off. This meant adding a new configuration option called powershell.codeFolding.enable using the following in package.json:

```json
"powershell.codeFolding.enable": {
    "type": "boolean",
    "default": true,
    "description": "Enables syntax based code folding. When disabled, the default ..."
},
```

I then needed to add some new interfaces to the settings.ts file for the new setting name. And then finally, in the folding feature file, if the setting is enabled then the provider is created and registered; otherwise it doesn’t register a provider.
And then the tests started failing …

Not long after the initial PR was merged, the integration tests I created started failing, specifically right after VS Code 1.25.0 was released. Fortunately Keith Hill found it fairly quickly, and as luck would have it, the vscode-textmate node module had a major version jump from 3 to 4, which of course had breaking changes, which broke the Folding Provider. Keith Hill raised an initial Pull Request which I then took and added some extra fixtures. And in no time it was fixed … only to then find another problem, which turned out to be a bad Typescript Promise, which Keith and I fixed quickly too. Community collaboration For The Win.

First release !!

Soon after this the folding provider was released in version 1.8.0!

#Powershell add-in for @code got updated, and folding got a much better grasp of PS syntax. My biggest gripe fixed. Whoever that did that, I salute you :-)— James O'Neill (@jamesoneill) July 12, 2018

And then the bug reports started …

Not too long after, the bug reports started coming in …

Fix code folding on CRLF documents

Issue - Github Issue #1417
Source - Pull Request

This was an interesting problem. The initial issue came in as only Here Strings were not folding correctly, and after some trial and error, I found that changing the PowerShell script from CRLF to LF line endings fixed the issue. So it turns out, how a Regular Expression engine interprets the end of line anchor ($) changes depending on the engine. That is to say, some engines see CRLF as a line ending whereas some, like NodeJS in VS Code, do not! Also it appeared I was using the vscode-textmate tokeniser incorrectly, and I should’ve guessed this by the name. To convert text into grammar tokens I was calling tokenizeLine; not tokenizeDocument or tokenizeString: a line. Going through the VS Code codebase, I found other instances where things were being tokenised per line, not per document.
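The anchor behaviour behind this bug is easy to reproduce. Python's re engine happens to treat line endings the same way NodeJS does in multiline mode (only `\n` ends a line, so a stray `\r` sits between the text and the anchor), which makes it a neutral demonstration:

```python
import re

# A pattern expecting a double-quoted here-string opener at end-of-line.
pattern = re.compile(r'@"$', re.MULTILINE)

lf_text = '$I = @"\ntext\n"@\n'
crlf_text = '$I = @"\r\ntext\r\n"@\r\n'

print(bool(pattern.search(lf_text)))    # → True
print(bool(pattern.search(crlf_text)))  # → False: the \r sits between @" and the line end
```

A common fix for this class of bug is to allow an optional carriage return before the anchor, e.g. `@"\r?$`.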
So I changed the tokeniser code to tokenise per line, and still return the same tokens as if it was the entire document. And I also added tests for LF and CRLF files to make sure the Folding Provider returned the same regions no matter the line ending. During this I also noticed I didn’t actually test for the double quoted here strings, only the single quoted ones, so I added a quick test for these as well.

Make region folding case insensitive and strict whitespace

Issue - Github Issue #1428
Source - Pull Request

Another oversight was that I was only testing with lower case region and endregion in region blocks. But you can specify Region and EndRegion as well, similar to what was defined in the original folding regular expressions in VS Code. I fixed this in the Folding Provider by adding the case insensitive matcher (.../i) to the regular expression. I also noticed that the regular expression I used to detect regions allowed white space between the hash and the text; for example, # region would be a valid starting region. However the original folding regular expression did not; in fact it required no whitespace at all, that is, only #region would be detected as a foldable region. And yet another oversight was that I wasn’t detecting regions which started at the leftmost edge; I could only detect regions which were indented by at least one space. I fixed this by changing the empty line detection to use ^\s*$ instead of ^\s+$. And yet again, I added tests for all of these errors.
Fix detecting contiguous comment blocks and regions

Issue - Github Issue #1437
Source - Pull Request

During fixing the other issues I stumbled upon a different issue (always the way!). If I had a script with the following text, I expected the folding regions to be as so:

```
# Comment Block 1     --+-- Folding Line 1-3
# Comment Block 1       |
# Comment Block 1     --+
#region               --+-- Folding Line 4-9
# Comment Block 2       |  --+-- Folding Line 5-7
# Comment Block 2       |    |
# Comment Block 2       |  --+
$something = $true      |
#endregion            --+
```

However when I ran the Folding Provider it actually had the following regions:

```
# Comment Block 1     --+-- Folding Line 1-7
# Comment Block 1       |
# Comment Block 1       |
#region                 |  --+-- Folding Line 4-9
# Comment Block 2       |    |
# Comment Block 2       |    |
# Comment Block 2     --+    |
$something = $true           |
#endregion                 --+
```

Because the region comment blocks also appeared as line comments, they were being interpreted incorrectly. If there were blank lines or other content before the #region then the folding was correct. To fix this issue, I first refactored the region detection because it was too complex. I simplified the detection and changed the region detection regular expressions to be more like the original VS Code definitions. This made the code easier to maintain in the future; if the region folding changed in VS Code, then the new regular expressions could just be copied directly into the extension. The refactor resulted in one less line of code, but far more readable code. Now I could make the changes to fix the original issue. I created a new regular expression which could detect a line comment, but only if it wasn’t a region begin or end directive:

```
/\s*#(?!region\b|endregion\b)/i
```

And yet again, I added tests for this scenario.

Second Release !!

🎉 #PowerShell for VS @code 1.8.2 released! 🎉— Tyler Leonhardt (@TylerLeonhardt) July 27, 2018
🔸Region folding fix 🔸Trailing Whitespace PSSA rule fix 🔸Better "Find references" support for variables 🔸 Misc 🐛 ☠️ Thanks to @GlennSarti, @CBergmeister & @r_keith_hill!!

So far, only a few minor bugs have come in, but the folder is mostly running as it should!
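The behaviour of that negative lookahead can be checked directly. The Python translation below behaves the same way as the JavaScript original for these cases:

```python
import re

# Line comment matcher that refuses #region / #endregion directives.
line_comment = re.compile(r'\s*#(?!region\b|endregion\b)', re.IGNORECASE)

print(bool(line_comment.match("# Comment Block 1")))  # → True: a plain line comment
print(bool(line_comment.match("#region")))            # → False: a region directive
print(bool(line_comment.match("#EndRegion")))         # → False: case-insensitive match
print(bool(line_comment.match("#regional")))          # → True: \b prevents false positives
```

The `\b` word boundary is what keeps a comment like `#regional` from being mistaken for a directive, while the `(?!...)` lookahead rejects the real `#region`/`#endregion` markers.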
Wrapping up

This was a fun little experiment but it did take a lot longer than what I originally thought!! The full list of Pull Requests is at Github.

Lessons learnt

- TESTS ARE IMPORTANT and will save your ass.
- If you don’t know Typescript, things take twice as long. That’s ok because learning takes time, but something to keep in mind.
- TESTS ARE IMPORTANT and will save your ass.
- Putting up my code early gave the maintainers (Tyler, Rob and Keith) plenty of time to comment and shape the direction of this complex feature. Even though there were a lot of comments (159 at last count), it made it easier for them to finally press the “merge” button because they understood it much better, instead of me just throwing up a Pull Request and going “Ta Da”. Communication is important.
- Take the time to document your functions and code. It helps the project maintainers AND your future self when you come back to it a few weeks later.
- Repeat after me: TESTS ARE IMPORTANT and will save your ass.
https://glennsarti.github.io/blog/powershell-syntax-folder/
Asked by: word add-in system.io.packaging

Question

Hello everyone, I have installed the openxmlsdk and added a reference to my word add-in project, but I couldn’t get the Package class in my project. I need to implement this:

```csharp
using (Package package = Package.Open("mydocument.docx"))
{
}

string relID = "rId1";
PackageRelationship imagerelationship = mainPart.GetRelationship(relID);
```

in my project:

```csharp
namespace wordplugin
{
    public partial class ThisAddIn
    {
        const string wordmlNamespace = "";
        const string relationshipNamespace = "";

        private void ThisAddIn_Startup(object sender, System.EventArgs e)
        {
            this.Application.DocumentOpen += new Microsoft.Office.Interop.Word.ApplicationEvents4_DocumentOpenEventHandler(Application_DocumentOpen);
        }

        void Application_DocumentOpen(Microsoft.Office.Interop.Word.Document Doc)
        {
            // implement the package class to retrieve the .xml file using the
            // relationship id "context" and retrieve the cx:docid and cx:scope
            // in a message box here, so that when a file is opened using Word it
            // checks for the context.xml file with the relationship id and
            // displays the content in a message box.
        }
    }
}
```

My relationship id is:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<Relationship Id="context" Target="docProps/context.xml" Type="" /></Relationships>
```

My .xml file content is:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
```

All replies

If you are getting an exception, it is likely because Word has the file locked for reading. Since System.IO.Packaging would be invoked after Word has already loaded the file, it would not be possible for your Add-In to take an additional lock on the file. Could you give some more data about what the task is you’re looking to accomplish? It appears you’re trying to get your own XML data island out of the document by using the OPC format. Is that a correct assumption?

Hi Art Leonard, yeah, you are right. I am embedding my own .xml file with the .docx.
So I need to create an add-in that checks each document, when it is opened with Word, for whether my .xml file is there. If it is there, then I need to retrieve some node data and display it; for example, as I specified before, the cx:docid from the .xml file. Or is there some other way to read the .xml file from the document, other than this packaging class? Thank you.
https://social.msdn.microsoft.com/Forums/en-US/fc7fdc1a-faea-4f77-8310-c9dfbcdb2941/word-addin-systemiopackaging?forum=oxmlsdk
pqDataRepresentation is the superclass for a display for a pqPipelineSource, i.e. More...

#include <pqDataRepresentation.h>

pqDataRepresentation is the superclass for a display for a pqPipelineSource, i.e. the input for this display proxy is a pqPipelineSource. This class manages the linking between the pqPipelineSource and pqDataRepresentation.

Definition at line 52 of file pqDataRepresentation.h.

Get the source/filter of which this is a display.

Returns the input pqPipelineSource's output port to which this representation is connected.

Returns the data information for the data coming into the representation as input.

Returns the temporal data information for the input. This can be a very slow process. Use with extreme caution!!!

Returns the represented data information. Depending on the representation this may differ from the input data information, e.g. if the representation shows an outline of the data, then this method will return the information about the polydata forming the outline, not the input dataset.

Get the data bounds for the input of this display. Returns if the operation was successful.

Returns the lookuptable proxy, if any. Most consumer displays take a lookup table. This method provides access to the lookup table, if one exists.

Returns the pqScalarsToColors object for the lookup table proxy, if any. Most consumer displays take a lookup table. This method provides access to the lookup table, if one exists.

Returns the data size for the full-res data. This may trigger a pipeline update to obtain correct data sizes.

This is a convenience method to return the first representation for the upstream filter/source in the same view as this representation. This is only applicable if this representation is connected to a data-filter which has a valid input.

Fired when the representation proxy fires the vtkCommand::UpdateDataEvent.

Fired to indicate that the "LookupTable" property (if any) on the representation was modified.
Signal fired to indicate that the "ColorArrayName" property (if any) on the representation was modified. This property controls the scalar coloring settings on the representation. Signal fired to indicate that the rendering attribute arrays properties (Normals, TCoords, Tangents) were modified. These properties control the shading and texture mapping. Slot to update the lookup table if the application setting to reset it on visibility changes is on. Overridden to set the VisibilityChangedSinceLastUpdate flag. called when input property on display changes. We must detect if (and when) the display is connected to a new proxy. Use this method to initialize the pqObject state using the underlying vtkSMProxy. This needs to be done only once, after the object has been created. Reimplemented from pqProxy. Definition at line 181 of file pqDataRepresentation.h.
https://kitware.github.io/paraview-docs/latest/cxx/classpqDataRepresentation.html
CC-MAIN-2022-05
refinedweb
434
52.26
Python Decorator

In this blog, we will learn about Python decorators and understand each and every concept with the help of examples. If you have not read the previous Python blog, be sure to read Python generators, which create an iterator object that generates values on the fly. To better understand Python decorators, you should have a good knowledge of the basics of inner (nested) functions in Python and closures in Python.

What is a decorator in Python? In English terms, a decorator means a thing that is used for decoration. In Python too, a decorator is used to decorate a function. A decorator takes a function and returns the decorated function to us. First let's understand how we can decorate a function using an inner function and by passing a function as an argument. After this example, we will learn an easier way to do it.

Python inner function as a Decorator

The code below implements a decorator using a Python inner function.

def greet(func):
    def wish():
        print(f"Hello, How are you? {func()}")
    return wish

def naming():
    return "HTD"

naming = greet(naming)
naming()

Let's understand the program. The greet function takes a function and implements an inner function, which simply prints a greeting and executes the function passed as a parameter (func). The greet function then returns the inner function (wish in this case). Next, we create a different function which we want to decorate using the greet function (naming in this case). In the next step, naming is reassigned to the result of calling greet with the naming function as an argument. Then we execute the naming function. And hence we get a decorated function.

Hello, How are you? HTD

This process was long and also a bit confusing. Let's learn an easier way to do the same using Python decorators.

Python Decorator

The code below is an example of a Python decorator.

def greet(func):
    def wish():
        print(f"Hello, How are you? {func()}")
    return wish

@greet
def naming():
    return "Hack The Developer"

naming()

Hello, How are you? Hack The Developer

This time also we had to create two functions: one the decorator function and the other the function to be decorated.

@greet
def naming():
    return "Hack The Developer"

In the previous section, we assigned a function to another to achieve the decoration functionality. But this time we use the @ symbol with the decorator function above the function to be decorated.

Multiple Python Decorators

We can also use multiple Python decorators to decorate a single function. Example:

def morning(func):
    def wish():
        print(f"{func()}")
    return wish

def greet(func):
    def wish():
        print(f"Hello, How are you? {func()}")
        return "Greeting Done"
    return wish

@morning
@greet
def naming():
    return "HTD"

naming()

The decorator that is closest to the function to be decorated is applied first, and then the ones above it. In this case, the @greet decorator is applied first and then the @morning decorator. Let's see the output:

Hello, How are you? HTD
Greeting Done

Hope you like it!
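As an aside added here (not part of the original post), a quick way to see the stacking order is that applying @morning above @greet is just shorthand for nesting the calls by hand:

```python
def morning(func):
    def wish():
        print(f"{func()}")
    return wish

def greet(func):
    def wish():
        print(f"Hello, How are you? {func()}")
        return "Greeting Done"
    return wish

def naming():
    return "HTD"

# @morning above @greet is equivalent to this nested call:
decorated = morning(greet(naming))
decorated()
# prints:
# Hello, How are you? HTD
# Greeting Done
```

The inner decorator wraps the function first, so its wish runs (and prints) before the outer decorator's wish prints the returned "Greeting Done".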
https://hackthedeveloper.com/python-decorator/
CC-MAIN-2021-43
refinedweb
509
55.95
#include <deal.II/algorithms/theta_timestepping.h> A little structure, gathering the size of a timestep and the current time. Time stepping schemes can use this to provide time step information to the classes actually performing a single step. The definition of what is considered "current time" depends on the scheme. For an explicit scheme, this is the time at the beginning of the step. For an implicit scheme, it is usually the time at the end. Definition at line 45 of file theta_timestepping.h. The current time. Definition at line 48 of file theta_timestepping.h. The current step size times something. Definition at line 50 of file theta_timestepping.h.
https://www.dealii.org/developer/doxygen/deal.II/structAlgorithms_1_1TimestepData.html
CC-MAIN-2020-34
refinedweb
109
61.12
Directory Listing.
all of paso now lives in its own namespace.
paso: starting to polish ..
Only switch on useElementsOnFace by default where we actually support it.
saveVTK: Reviewed and changed parts of this function due to several issues. Confirmed that it's still working using finley/test/python/generate_dumps.py (manually inserted saveVTK) with and without MPI on 1 and 4 processors. Changes include:
- removed unnecessary USE_VTK check - we are not using any VTK functions
- fixed writing of Rec9 & Hex27 connectivity data in MPI mode
- fixed some cases where the file was not closed and memory was not freed
- replaced many instances of fprintf by fputs (simple strings)
- some minor optimizations by moving or initializing variables and combining if-clauses
- got rid of ~300 lines of code and switched to consistent indentation, making the file a bit more readable!
Updating svn:ignore properties.
finley: Constify two variables...
https://svn.geocomp.uq.edu.au/escript/trunk/finley/?pathrev=5105&sortby=log&view=log
CC-MAIN-2019-43
refinedweb
147
55.64
Thanks a lot for this mini guide. It was very helpful. Everything works as expected, but I keep getting this warning when doing a normal "manage.py runserver" in Eclipse while the PyDev remote debug server is running:

----------------------------
PYDEV DEBUGGER WARNING:
sys.settrace() should not be used when the debugger is being used. This may cause the debugger to stop working correctly. If this is needed, please check: to see how to restore the debug tracing back correctly.
Call Location:
File "/Applications/eclipse/plugins/org.python.pydev.debug_1.3.10/pysrc/pydevd.py", line 743, in settrace
sys.settrace(debugger.trace_dispatch)
----------------------------

Despite the warning, things are working very well so it's not really a big deal. Thanks again!

Comment by Duc Nguyen — November 26, 2007 @ 12:53 am |

You can run manage.py with the "run as python run" option, not "debug as python run". This warning won't appear anymore. "Debug as …" is only needed when you want to set a breakpoint in manage.py or in the code executed before the auto reload thread. In my original post, everything is OK in Eclipse, but not on the command line. I made some modifications to make manage.py runserver work well in command line mode:

def inReloadThread():
    """Is manage.py called in the reload thread or not?
    @return True if we are in the reload thread.
    """
    return os.environ.get("RUN_MAIN") == "true"

if settings.DEBUG and (command == "runserver" or command == "testserver"):
    # Make the pydev debugger work with auto reload.
    usePydevd = True
    try:
        import pydevd
    except ImportError:
        if not inReloadThread():
            print ("PYDEVD DEBUG DISABLED: "
                   "You must add org.python.pydev.debug.pysrc "
                   "to your PYTHONPATH for debugging in Eclipse.\n\n")
        usePydevd = False
    if usePydevd:
        from django.utils import autoreload
        m = autoreload.main
        def main(main_func, args=None, kwargs=None):

Please enjoy this. I am very happy this code is helpful for you.

Comment by bear330 — December 1, 2007 @ 5:55 am |

Hi, I was unable to get this to work.
Where did you place the code in manage.py? In the first line, 'if settings.DEBUG and (command == "runserver" or command == "testserver"):', there is no "command" in manage.py. I got rid of this (since I was only running this from Eclipse). After that, it would not connect to the debugger because the port is randomized each time, but the code is using a hardcoded value. I remedied this with the following hack:

import os
import re

# HACK: Get remote debugger port from ppid
fd = os.popen('ps wwo command -p %d' % os.getppid())
re_port = re.compile(r'--port\s*(\d+)\s*')
port = int(re_port.search(fd.readlines()[1]).groups()[0])

I finally got it running with this, but now I get the same warning as comment #1 and none of my breakpoints work. The debugger works fine if I use --noreload with the default manage.py, but I would really like to get this working with autoreload.

Comment by impulse — December 8, 2007 @ 10:24 am |

Ignore my previous comment. I wasn't using the PyDev extensions remote debugger. It's working now.

Comment by impulse — December 8, 2007 @ 11:29 am |

works like a charm :) Thanks a lot!

Comment by Duc Nguyen — December 13, 2007 @ 7:38 am |

very interesting, but I don't agree with you
Idetrorce

Comment by Idetrorce — December 16, 2007 @ 3:03 am |

I desperately want to get this working, but I'm having trouble repeating everyone's success from almost a year ago. First, I have the same confusion as Duc Nguyen: I don't know where to drop in this code. When I put it at the top level of my manage.py there is no "command" variable defined. Second, even when I leave out the conditional and always run this code, I get the error "No module named pydevd" when it tries to execute the "import pydevd" line. I swear I have this properly in my PYTHONPATH. Indeed, from the Python console, I can import pydevd without any problems. Any hints for either problem?

Comment by digi — September 29, 2008 @ 4:24 pm |

I am very sorry to put a vague code here.
The command variable is:

if len(sys.argv) > 1:
    command = sys.argv[1]

And you must make pydevd available for your project. In my .pydevproject file, I add it as an external source folder:

<pydev_pathproperty name="org.python.pydev.PROJECT_EXTERNAL_SOURCE_PATH">
<path>C:\Program Files\Eclipse 3.4\plugins\org.python.pydev.debug_1.3.20\pysrc</path>
</pydev_pathproperty>

You can set this up on the Properties -> PyDev - PYTHONPATH page or simply put these lines in your .pydevproject file. If you still have any problem, feel free to let me know.

Comment by bear330 — September 30, 2008 @ 11:24 am |

Hi bear330, I have the following in my .pydevproject file:

/home/nicta/eclipse/plugins/org.python.pydev.debug_1.6.5.2011020317/pysrc

However, when I run my script with the following code at the beginning, it still says I don't have pysrc in my PYTHONPATH.

####################
REMOTE_DBG = True
# append pydev remote debugger
if REMOTE_DBG:
    # Make pydev debugger works for auto reload.
    # Note pydevd module need to be copied in XBMC\system\python\Lib\pysrc
    try:
        import pysrc.pydevd as pydevd
        # stdoutToServer and stderrToServer redirect stdout and stderr to eclipse console
        pydevd.settrace('localhost', stdoutToServer=True, stderrToServer=True)
    except ImportError:
        sys.stderr.write("Error: " +
                         "You must add org.python.pydev.debug.pysrc to your PYTHONPATH.")
        sys.exit(1)
####################

My Eclipse is Eclipse SDK Version: 3.6.1, Build id: M20100909-0800. Any idea?

Comment by peizhao — February 18, 2011 @ 11:03 pm |

sorry, the debug code should change to

import pydevd

it works now.

Comment by peizhao — February 18, 2011 @ 11:15 pm

Belated thanks for the clarification! This definitely helps me understand your solution.

Comment by Chris DiGiano — October 31, 2008 @ 5:57 pm |

On Ubuntu the pydevd source folder is located at
$HOME/.eclipse/org.eclipse.sdk.ide/updates/eclipse/plugins/org.python.pydev.debug_1.3.24/pysrc/

Comment by MIkko Ohtamaa — December 29, 2008 @ 10:44 am |

[…] should be.
I've uploaded that snippet to Django snippets. It was originally posted here in 2007, so I've copied it to Django snippets in case that post […]

Pingback by Django+Eclipse with Code Complete Screencast « vlku.com — June 11, 2009 @ 12:16 am |

Thanks for the hack. I had to make a few changes to get this to work with Django 1.1 (since the "inner_run" method now takes no arguments as opposed to the earlier system where it took the *args and **kwargs). I posted the changes at

Comment by Raja — December 6, 2009 @ 5:07 am |

Hi all, I have been following along with this. Having updated my manage.py code to reflect "if len(sys.argv) > 1: command = sys.argv[1]", I now get an error related to sitecustomise.py when I run the debug:

sys.path.extend(paths_removed)
AttributeError: 'NoneType' object has no attribute 'path'

coming specifically from line 148. I'm pretty sure that I have pydevd correctly set up and have it attached to my PYTHONPATH okay. I'm running Python 2.6. Has anybody any ideas? Thanks in advance, G (pretty new to all this)

Comment by Gerry — December 9, 2009 @ 6:08 pm |

I am receiving "NameError: name 'sys' is not defined". Here is my manage.py:

#!/usr/bin/env python
from django.core.management import execute_manager
try:
    import settings # Assumed to be in the same directory.
except ImportError:
    import sys
    sys.stderr.write("Error: Can't find the file 'settings__":
if len(sys.argv) > 1:
    command = sys.argv
execute_manager(settings)

Comment by eros — October 12, 2011 @ 12:58 am |

[…] How to debug django web application with autoreload.: How to debug django web application with autoreload in Eclipse pydev plugin. […]

Pingback by Django resources | 岭南六少 - 一朵在LAMP架构下挣扎的云 — October 17, 2011 @ 8:27 am |
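Editorial note, not part of the comment thread: the RUN_MAIN check that the comments above rely on can be isolated into a tiny standalone sketch. The only assumption, taken from the thread itself, is that Django's autoreloader sets the RUN_MAIN environment variable to "true" in the child process it spawns.

```python
import os

def in_reload_thread():
    """Return True if manage.py is running inside the autoreload child process."""
    return os.environ.get("RUN_MAIN") == "true"

# Simulate the environment of the autoreload child process:
os.environ["RUN_MAIN"] = "true"
print(in_reload_thread())  # True
```

Guarding the pydevd warning with this check, as bear330 does above, keeps the parent (watcher) process quiet while still reporting a missing pydevd in the process that actually serves requests.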
https://bear330.wordpress.com/2007/10/30/how-to-debug-django-web-application-with-autoreload/
CC-MAIN-2015-22
refinedweb
2,535
66.64
Yes, it would resolve the problem. Another question is how to implement that. But how can you ensure non-replaceable CSS to be *non-replaceable*? The standard mechanism uses StyleSheet.addStyleSheet(), but you can't prevent anyone from using removeStyleSheet() for removing any of them. Additionally, it is possible to install a new StyleSheet right into HTMLDocument. The only way I see so far is to create a base static StyleSheet which will be looked up in View implementations in cases where no CSS rules are found in default StyleSheet associated with the View. Any way Views or Elements should be tweaked to just this feature. Nothing more comes into my mind... And I don't like this solution very much. Regards, Alexey. 2007/8/22, Zakharov, Vasily M <vasily.m.zakharov@intel.com>: > Alexey, > > This is really not good, both Harmony and RI seem doing wrong here. > > Is there a way to specify the element behavior in Swing Java code so > that that behavior could be overridden with style? > > Maybe creating the base, non-replaceable CSS and specifying such things > there would resolve this problem. > > Vasily > > > -----Original Message----- > From: Alexey Ivanov [mailto:alxey.ivanov@gmail.com] > Sent: Wednesday, August 22, 2007 12:00 PM > To: dev@harmony.apache.org > Subject: Re: [classlib][swing][html] CSS is used to specify HTML tag > behavior > > Andrey, > > I don't think that it ignores -- I know it honors them with some > exceptions. I talked about attributes special case for <b> tag. Try > this code: > > <b style="font-weight: normal">not bold</b> > > In Harmony it would be rendered with bold font despite this style rule. > > Additionally, > <html><head><style type="text/css"> > b { font-weight: normal } > </style></head><body> > <b>also not bold</b> > </body></html> > > would be rendered in normal font weight, *not bold*, in all the > browsers whereas both Harmony and RI would render it bold. > > That's what I talked about. 
> > 2007/8/22, Pavlenko, Andrey A <andrey.a.pavlenko@intel.com>: > > I think the best solution would be to merge the default CSS with > user's > > CSS. > > > > Alexey, why do you think Harmony ignores the style attribute of HTML > > elements? I've just created a simple test and it works fine for me: > > > > import javax.swing.JEditorPane; > > import javax.swing.JFrame; > > > > public class SwingTest { > > > > public static void main(String[] args) { > > JFrame f = new JFrame(); > > f.add(new JEditorPane("text/html", > > "<center style='color: red'>Hello > > world!</center>")); > > f.pack(); > > f.setVisible(true); > > } > > } > > > > -----Original Message----- > > From: Alexey Ivanov [mailto:alxey.ivanov@gmail.com] > > Sent: Wednesday, August 22, 2007 11:29 AM > > To: dev@harmony.apache.org > > Subject: Re: [classlib][swing][html] CSS is used to specify HTML tag > > behavior > > > > Vasily, > > > > Yes, I agree with your considerations too. > > > > I guess we should create a mechanism to add another base.css that > > cannot be removed. I pointed to > > javax.swing.text.html.HTMLDocument.initDefaultCharacterAttributes > > which serves as such mechanism currently. We should find a better > > approach here because user should be able to override the defaults > > specified there as well. At least browsers allow this but neither > > Harmony nor RI does. Harmony is even worse since it ignores the style > > specified the style attribute on HTML element. However, I don't think > > there are many users who'd want to change that but it's a nice feature > > to have, so that Harmony behaves closer to HTML browsers and thus > > provides better HTML support than RI does. > > > > Regards, > > Alexey. > > > > P.S. I knew all browser provided the feature to user style sheet but > > had never tried it. Additionally this feature is quite hidden in > > options dialog boxes. 
> > > > 2007/8/22, Zakharov, Vasily M <vasily.m.zakharov@intel.com>: > > > Alexey, > > > > > > Thank you for your attention to this issue. I'm not sure it's a bug > > that > > > needs to be fixed (RI does the same, right), but I think it's an > > > important issue that worth keeping record of. > > > > > > I fully agree with all your considerations about using default > > > stylesheet for specyfying default document look. But there's look > > > (fonts, colors, sizes, weight, margins, padding etc.) and there's > the > > > specified element behavior. For example, <center> tag is expected to > > > center text by default - and in our implementation (and in RI) it > > > doesn't. It seems strange, uncomfortable and confusing to me, and > > > probably to the users who would like to replace the default > stylesheet > > > with their own. Making a user write "center { text-align: center}" > in > > > his stylesheet is strange indeed. And by the way, IE allows > replacing > > > the default stylesheet easily, and <center> tag works normally after > > > that. > > > > > > I'm not suggesting to remove the replace default stylesheet feature, > > and > > > I'm not suggesting to merge the existing default stylesheet with > > user's > > > one. I'm only suggesting to make some (not all) default stylesheet > > > declarations (specifying core elements behavior) actual even if > > default > > > stylesheet is removed. > > > > > > Particularly, I would expect <strong> behaving the same as <b> and > > <em> > > > the same as <i>, and <center> tag actually centering the text. Sure, > I > > > don't care about font sizes, margins etc. 
> > > > > > Vasily > > > > > > > > > -----Original Message----- > > > From: Alexey Ivanov [mailto:alxey.ivanov@gmail.com] > > > Sent: Wednesday, August 22, 2007 10:38 AM > > > To: dev@harmony.apache.org > > > Subject: [classlib][swing][html] CSS is used to specify HTML tag > > > behavior > > > > > > Hello everyone, > > > > > > There was created JIRA issue HARMONY-4662 [1], which says that > > > specifying the default presentation of HTML tags using CSS. I > believe > > > this issue should be discussed here. > > > > > > In short, Harmony implementation of HTML support has default.css > file > > > which describes the default presentation of HTML elements. This > > > implementation is similar to that of RI. On the other hand, > programmer > > > can remove (or disable) this style sheet. It leads to the situation > > > where almost all tags look like a plain text. > > > > > > In my opinion, using CSS in this situation is the right thing to do. > > > 1) We can easily make adjustments to the way HTML document looks by > > > default. > > > 2) It gives application developers freedom for their application. > > > Developer can create their own default style sheet and easily > replace > > > the default one shipped with Harmony. Replacing is more effective > than > > > just overriding because Harmony-provided style sheet will be > excluded > > > from style resolution chain, which, in its turn, will free memory > and > > > will save time. > > > 3) I believe all modern browsers (Internet Explorer, Firefox, Opera) > > > use the similar methods to specify the default look of HTML. The > only > > > difference is that you cannot remove their default style sheet so > > > easily as in Swing. > > > > > > Other opinions, comments? > > > > > > Regards, > > > Alexey. > > > > > > > > > [1] > > > > > > Other links of interest: > > > Sample style sheet suggested by W3C: > > > > > > Index of HTML 4.01 elements: > > > > > > > > >
http://mail-archives.apache.org/mod_mbox/harmony-dev/200708.mbox/%3C1ab70d5f0708220121j1b6c2ba5mf7b6a1eff01c5fdd@mail.gmail.com%3E
CC-MAIN-2014-52
refinedweb
1,112
57.57
On Fri, 8 Jan 2010 16:01:46 -0800 (PST)
Linus Torvalds <torvalds@linux-foundation.org> wrote:
>
> On Sat, 9 Jan 2010, Rafael J. Wysocki.
>
> Hmm. I get the feeling that perhaps the of the drm_driver callbacks
> was very much intentional, and that the code presumably wants to be
> called purely through the PCI layer, and not through the "drm class"
> logic at all?
>
> Your patch seems like it would always execute the silly class suspend
> even though we explicitly don't want to. And a much nicer fix would
> seem to register the thing properly as a PCI driver even if you don't
> then use KMS.
>
> So it looks to me like the problem is that drm_init() will register
> the driver as a real PCI driver only if
>
>     driver->driver_features & DRIVER_MODESET
>
> and otherwise it does that very odd "stealth mode manual scanning"
> thing which doesn't register it as a proper PCI driver.
>
> So could we instead make that "disable KSM" _just_ disable the mode
> setting part, not disable the "I'm a real driver" part?

This is the minimal fix I think (totally untested):

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index a0a2cad..1364c3e 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -541,6 +541,11 @@ static int __init i915_init(void)
 	driver.driver_features &= ~DRIVER_MODESET;
 #endif
 
+	if (!(driver.driver_features & DRIVER_MODESET)) {
+		driver.suspend = i915_suspend;
+		driver.resume = i915_resume;
+	}
+
 	return drm_init(&driver);
 }

-- 
Jesse Barnes, Intel Open Source Technology Center
https://lkml.org/lkml/2010/1/8/307
CC-MAIN-2014-10
refinedweb
252
57.57
Hmm, good question. The term "dictionary-like" doesn't actually have a definitive meaning does it? In my case, the methods I was expecting to find were: __getitem__, __setitem__, __delitem__, get, has_key, items, keys, setdefault, update, values However, if util.FieldStorage is meant to be "treated like a dictionary" then I can imagine when someone will expect all the other methods too. This makes me think that if FieldStorage is going to contain some kind of internal dictionary to store the actual data, perhaps these other dict methods should be called directly on it: ie. def __getattr__(self, attr): return getattr(self.dictionary, attr) or possibly it could be re-written to inherit from dict directly? If this is going to become a pain though, I'll live with the inconvenience in my code of swapping between objects when necessary :-) Brian > -----Original Message----- > From: Graham Dumpleton [mailto:grahamd at dscpl.com.au] > Sent: 26 October 2006 12:48 > To: Brian Bird > Cc: mod_python at modpython.org > Subject: Re: [mod_python] util.FieldStorage > > Yes, but how dictionary like is the issue that I don't see anyone giving > an answer for. Ie., out of: > > >>> dir({}) > ['_'] > > what do you expect to be provided and work? Not all of them make > sense. Knowing which you think should work would be helpful. > >. > > Graham > > On 26/10/2006, at 9:32 PM, Brian Bird wrote: > > > The benefit of being more dictionary-like is just convenience. As > > util.FieldStorage doesn't have all the dictionary methods then I > > usually > > just convert it to a real dictionary. > > > > However, I'd like to also keep the getfirst and getlist methods, which > > would mean writing my own class to inherit from dict and converting > > between them. 
I can't easily inherit util.FieldStorage because all my > > code has standalone unittests which can't import _apache > > > > It's not the end of the world if util.FieldStorage isn't updated, > > but it > > would make my life a lot simpler ;-) I may try and have a go at > > updating > > it - is there any documentation on the best way to provide > > patches/unittests for ModPython? > > > > Brian > > > > > >> -----Original Message----- > >> From: Graham Dumpleton [mailto:grahamd at dscpl.com.au] > >> Sent: 26 October 2006 11:12 > >> To: Brian Bird > >> Cc: mod_python at modpython.org 'mod_python at modpython.org' > >> Subject: Re: [mod_python] util.FieldStorage > >> > >> Yes, FieldStorage in 3.3 is broken in as much as add_field() if > >> called > >> after the first time that the 'dictionary' attribute is accessed in > >> some way, > >> does not result in that value ending up in the dictionary. Thus it is > >> not > >> visible to __getitem__(), has_key() etc. > >> > >> Something else to fix. :-( > >> > >> Graham > >> > >> On 26/10/2006, at 7:56 PM, Graham Dumpleton wrote: > >> > >>> The util.FieldStorage implementation in mod_python 3.3 is quite a > > bit > >>> different to what is used in older versions of mod_python and which > >>> you based your changes upon. Thus your changes are not compatible > >>> with the new code base. > >>> > >>> Some have talked about making the new implementation of the > >>> FieldStorage class more dictionary like than it is, but no one has > >>> stepped up to provide any changes. > >>> > >>> To be honest I haven't actually looked into how FieldStorage is > >>> implemented as someone outside of the core developers provided > >>> the updated code and it was integrated by someone other than > >>> myself. Looking at the code now, I actually think it might be a bit > >>> broken. I'll have to do some playing with the code. > >>> > >>> BTW, what benefits do you think you get if it is more dictionary > > like? 
> >>> I am not sure I understand the reason for making such changes in > >>> the first place. > >>> > >>> Graham > >>> > >>> On 26/10/2006, at 7:14 PM, Brian Bird wrote: > >>> > >>> _______________________________________________ > >>> Mod_python mailing list > >>> Mod_python at modpython.org
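The delegation idea Brian proposes in the thread — forwarding unknown attribute lookups to an internal dictionary so that get(), keys(), items() and friends all work, while still keeping extra methods such as getfirst() and getlist() — can be sketched without mod_python at all. The class below is a hypothetical stand-in, not mod_python's actual FieldStorage:

```python
# Standalone sketch of the delegation pattern discussed above: forward
# dict methods to an internal dictionary via __getattr__, while still
# providing extra methods like getfirst()/getlist(). "FieldStorageLike"
# is a made-up stand-in, not mod_python's real FieldStorage class.
class FieldStorageLike:
    def __init__(self):
        self.dictionary = {}

    def add_field(self, name, value):
        # Each field name maps to a list of submitted values.
        self.dictionary.setdefault(name, []).append(value)

    def getfirst(self, name, default=None):
        values = self.dictionary.get(name)
        return values[0] if values else default

    def getlist(self, name):
        return self.dictionary.get(name, [])

    def __getattr__(self, attr):
        # Anything not defined here (get, keys, items, values, ...) is
        # looked up on the internal dict, as suggested in the thread.
        return getattr(self.dictionary, attr)

    def __getitem__(self, name):
        return self.dictionary[name]
```

With this, fs.keys(), fs.get("name") and the rest of the dict API come for free from the internal dictionary, which is the behaviour the thread is asking for.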
http://modpython.org/pipermail/mod_python/2006-October/022395.html
CC-MAIN-2018-39
refinedweb
635
65.42
UR::Namespace - Manage collections of packages and classes In a file called MyApp.pm: use UR; UR::Object::Type->define( class_name => 'MyApp', is => 'UR::Namespace', ); Other programs, as well as modules in the MyApp subdirectory, can now put use MyApp; in their code, and they will have access to all the classes and data under the MyApp tree. A UR namespace is the top-level object that represents your data's class structure in the most general way. After use-ing a namespace module, the program gets access to the module autoloader, which will automatically use modules on your behalf if you attempt to interact with their packages in a UR-y way, such as calling get(). Most programs will not interact with the Namespace, except to use its package. my @class_metas = $namespace->get_material_classes(); Return a list of the UR::Object::Type class metadata objects that exist in the given Namespace. Note that this uses File::Find to find *.pm files under the Namespace directory and calls UR::Object::Type->get($name) for each package name to get the autoloader to use the package. It's likely to be pretty slow. my @class_names = $namespace->get_material_class_names() Return just the names of the classes produced by get_material_classes. my @data_sources = $namespace->get_data_sources() Return the data source objects it finds defined under the DataSource subdirectory of the namespace. my $path = $namespace->get_base_directory_name() Returns the directory path where the Namespace module was loaded from. UR::Object::Type, UR::DataSource, UR::Context
http://search.cpan.org/~sakoht/UR-0.39/lib/UR/Namespace.pm
CC-MAIN-2016-44
refinedweb
245
54.42
Timeline 01/06/12: - 03:40 Changeset [76328] by - added a test for ticket #5325 - 03:39 Changeset [76327] by - fix for ticket #5325 - 02:02 Changeset [76326] by - Added new use test case (using a uuid as the key in a std::map). - 00:17 Ticket #6362 (Anonymous enum in bessel_ik.hpp causes problems with GCC 4.4) created by - This is a problem with GCC 4.4, and maybe earlier: discussed in GCC bugs … 01/05/12: - 18:18 Changeset [76325] by - Explorations on tree models and collapsability - 17:31 Changeset [76324] by - Fix pp-logic. Refs #6359. - 17:29 Ticket #6156 (fenv.hpp broken on clang+glibc) closed by - fixed: (In [76323]) Fix Clang workaround. Fixes #6156. - 17:29 Changeset [76323] by - Fix Clang workaround. Fixes #6156. - 17:27 Changeset [76322] by - Fix Intel-win ICU library names. - 17:27 Ticket #6361 (integer overflow in boost::chrono::process_real_cpu_clock::now() under ...) created by - As of 1.48 Boost.Chrono contains code below for … - 15:37 Ticket #6360 (Patch for "begin/end ambiguity") created by - Patch for ticket #6357. Added ADL barrier namespace suggested by Nathan … - 15:24 Changeset [76321] by - Type Traits Introspection was added to svn. - 13:52 Changeset [76320] by - Relax permissions test to reflect reality, particularly on the Sandia test … - 13:33 Ticket #6359 (Intel Composer XE 2011 (12.0) fails to build Boost.Regex with ICU support) closed by - fixed: (In [76319]) Add workaround for Intel-12.1 on Windows. Fixes #6359. - 13:33 Changeset [76319] by - Add workaround for Intel-12.1 on Windows. Fixes #6359. - 12:09 Ticket #6193 (lexical_cast overflow processing does not always work correctly) closed by - fixed: (In [76318]) Fixes #6193 - 12:09 Changeset [76318] by - Fixes #6193 - 10:10 Changeset [76317] by - Fix Intel-12.1 failures on Win32. 
- 06:14 Changeset [76316] by - merge [76315] from trunk - 06:10 Changeset [76315] by - doc feedback from Thomas Heller 01/04/12: - 23:43 Changeset [76314] by - Added information about libraries that have appeared in recent boost … - 23:43 Changeset [76313] by - Remove bogus comment, PGIC is defined in recent PGI compilers. - 23:42 Changeset [76312] by - Fix typo in pgi compiler version. - 23:41 Ticket #1326 (Unable to check graph isomorphism using LEDA adapter) reopened by - ad 1: well, LEDA 6.3 uses the LEDA namespace. I guess we should stick to … - 23:33 Ticket #375 (LEDA graph adaptors do not handle hidden nodes properly) reopened by - I will take a look at it. - 23:30 Ticket #373 (LEDA graph adaptors for undirected graphs) reopened by - Documentation: … - 23:22 Ticket #6359 (Intel Composer XE 2011 (12.0) fails to build Boost.Regex with ICU support) created by - === Environment === * Intel C++ Composer XE 2011 Update 8 for Windows … - 23:11 Changeset [76311] by - Container, Locale, and Move were added in Boost 1.48.0 - 22:59 Changeset [76310] by - context review is a mini-review. - 22:33 Changeset [76309] by - Switch is now marked Orphaned. Context review is ongoing. - 20:40 Ticket #6358 (Documentation) created by - I want to use boost numeric conversion. I NEED to use boost numeric … - 18:56 Changeset [76308] by - Add file missed in last commit. - 17:28 Changeset [76307] by - Refactor tests to make better use of separate file compilation and reduce … - 17:13 Changeset [76306] by - Update to GCC support: support up to 128-byte alignments. - 16:41 Ticket #3218 (string_algo algorithms are quite slow in some popular compiler/OS/hardware ...) closed by - invalid: Same thing on Windows; the bulk of the time is spent doing the … - 16:06 Ticket #6132 (lexical_cast with Source = void* broken in 1.48.0) closed by - fixed: (In [76305]) Fixes #6132 Fixes #6182 - 16:06 Ticket #6182 (lexical_cast: invalid application of 'sizeof' to incomplete type ...) 
closed by - fixed: (In [76305]) Fixes #6132 Fixes #6182 - 16:06 Changeset [76305] by - Fixes #6132 Fixes #6182 - 14:58 Ticket #6357 (Resolve ambiguity for unqualified call to begin/end) created by - On 12/16/2011 10:44 AM, Nathan Ridge wrote: > > Hello, > > I am running …[…] - 14:55 Changeset [76304] by - Tree and table view on an Sql-model. - 14:53 Ticket #4889 (path locale-related functions are not thread-safe) closed by - fixed: (In [76303]) Fix #4889, #6320, Locale codecvt_facet not thread safe on … - 14:53 Ticket #6320 (race condition in boost::filesystem::path leads to crash when used in ...) closed by - fixed: (In [76303]) Fix #4889, #6320, Locale codecvt_facet not thread safe on … - 14:53 Changeset [76303] by - Fix #4889, #6320, Locale codecvt_facet not thread safe on Windows. Move … - 12:57 Ticket #6356 ([boost::serialization] Document the use of ...) created by - The boost::serialization library provides the … - 10:58 Changeset [76302] by - Merged changesets 75594,75601 from trunk, incorrect used of … - 04:46 Changeset [76301] by - applied patches from #5908 01/03/12: - 23:52 Ticket #6355 (Typo at static assert documentation) created by - I noticed typo at page … - 22:31 Changeset [76300] by - Thread: Added new v2 files - 22:03 Changeset [76299] by - Merged changeset 76271 - 21:50 Changeset [76298] by - Thread: Updated Jamfiles to take care of Boost.Chrono, Boost.System and … - 21:45 Changeset [76297] by - Thread: Added doc related to a lot of tickets mainly the time related … - 21:25 Changeset [76296] by - Thread: Added test related to tickets - 21:23 Changeset [76295] by - Threads: Added a lot of unit tests - 21:12 Changeset [76294] by - Thread Towards #6273 - Add cv_status enum class and use it on the … - 20:58 Changeset [76293] by - merged rev. 
75641,75859 (bugfixes: incorrect setting of current state at … - 18:01 Ticket #5640 (serialization vector backward compatibility problem) closed by - fixed: (In [76292]) Attempting to fix #5640 - 18:00 Changeset [76292] by - Attempting to fix #5640 - 17:31 Changeset [76291] by - Thread fixed Bugs: * [@ #2309] … - 17:27 Changeset [76290] by - Applied patch for #4657 - 17:07 Ticket #6354 (PGI: Compiler threading support is not turned on) created by - The following tester is reporting this compile error […] Maybe the … - 16:48 Ticket #4839 (boost_thread will not build with MinGW-w64 and bjam on Windows 7) closed by - worksforme: Closed as no one disagree :) - 14:44 Ticket #6353 (memory layout specifiers doc clarification) created by - … - 14:23 Ticket #6352 (bootstrap.sh error on MIPSPRO) closed by - fixed: Fixed in #75609. - 10:26 Ticket #6352 (bootstrap.sh error on MIPSPRO) created by - line 628: The types of operands "char" and "char *" are incompatible. … - 08:58 Ticket #6351 (Better JSON parser) created by - JSON parser in property_tree doesn't parse numbers and bools as their … - 07:44 Ticket #6350 (MINGW Build missing mingw.jam) created by - If I try to build boost on Windows7 with mingw-shell: […] then b2 and … Note: See TracTimeline for information about the timeline view.
https://svn.boost.org/trac/boost/timeline?from=2012-01-05T12%3A29%3A24-05%3A00&precision=second
CC-MAIN-2017-09
refinedweb
1,124
59.74
Difference between revisions of "E4/Builds" Revision as of 09:58, 22 June 2011 do not just take whatever is in HEAD at the time of running the build - given the amount of changes produced by the committers, this is not predictable enough. Instead, each team needs to make a build submission, which basically amounts to tagging their projects at a known good state with a tag that then gets put into the team's map file. The map files are located in so-called releng projects. These projects are checked out from HEAD by the build system, followed by checking out the referenced projects using the specified tags. This means that if a team forgets to submit to the integration build, the build will just check out whatever was submitted last time. To make a build submission, one team member would first check out (or update) all the team's projects, and run their tests locally. Then, to tag all changed projects, and updating the map file to contain those tags, they would use the releng tools (see below for installation instructions). By using the releng tools instead of tagging and updating the map file manually, the process is made less error-prone, and as an additional benefit, change log information will be generated for you at the same time. The change log is produced by searching for bugzilla ids in commit comments, and doing a lookup in the Bugzilla system to retrieve the bugs' descriptions. Sending the change log information to the mailing list makes everyone aware of what the team has worked on, and can help later if you find regressions and would like to be able to map back to what has been changed and why. Plug-in Versioning Our current release, e4 0.11, has all of our plugins marked as (Incubation) and new plugins should have a version 0.11.0.qualifier. Now that HEAD is open for our 0.12 release we are following the standard Version Numbering guidelines. 
For example, for existing bundles: - if you contribute bug fixes for a bundle, please increment the number to 0.9.1.qualifier - if you change the public API or public classes (to add new API or to refactor API for the 0.12 release), please increment the number to 0.10.0.qualifier For new bundles, please mark them as 0.12.0.qualifier When we graduate bundles so they are ready for Eclipse 4.0 they will retain their <1.0 version. When we vet the API and promote it, they will change their version number to 1.0.0.qualifier.7 and/or 4.1. If you cannot find it, try. Releng tools are in the eclipse updates site now that Helios is released and can be installed from the director app as well: eclipse/eclipse -application org.eclipse.equinox.p2.director \ -consoleLog -noSplash \ -repository \ -installIU org.eclipse.releng.tools.feature.group (a ppc). It also contains the PDE build directory, e4/releng/org.eclipse.e4.builder/builder/general, that builds our master feature. Our PDE build directory also contains modified code/XML to support using p2 to run our automated tests. metacity --display=:8.0 --replace --sm-disable Where auth.cfg contains at least: localhost Build Requirements The build is currently controlled from the masterBuild.sh script. There are a number of dependencies that make this a linux only build, and the machine has to be set up the same way as build.eclipse.org. This section will capture the requirements/process so that all of the needed steps are executed. See bug releng - Run a build from any platform for any discussions about this section. Setup We have a set of variables that are needed in order to make the system go. In general/build.properties We also code/re-generate a number of properties in the PDE builder. Manual Setup A lot of the directory structures must be set up before you can run a build. Perhaps the directory structure needs to be flattened so that it can be more easily created. 
The basebuilder (eclipse install used to run the build) and org.eclipse.e4.builder project have to be on disk somewhere. A build updates them to the latest required versions and links to them using symlinks so that they can be used to run the build, drive the build, and provide the testing instructions. The runnable repo is used for the build as well as to produce the platform zips and install supporting bundles for the automated tests. The untransformed repo location must be created, and populated with repos downloaded by hand. Currently, the eclipse SDK repo must be hand rolled as well. It would be nice to have the runnable repo populated by downloading the needed zipped repos. The other possibility is to use the repo2runnable task directly and not through repoBaseLocation. Then multiple repo locations can be specified either as local files or URLs (to zipped repos or p2 repos). There must be at least 3 JREs/JDKs installed on a machine used to build, 1.4.2, 1.5, and 1.6. A bundles BREE instructs PDE build which set of libraries to use during compile. Steps The steps we go through to run a build are controlled by the masterBuild.sh script. CVS The resources, ui, and swt teams now have plugins in their team area in the e4 project. Our project is under :pserver:anonymous@dev.eclipse.org:/cvsroot/eclipse - module e4
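The version-bump rules quoted in the Plug-in Versioning section (bug fixes increment the third segment; API changes increment the second segment and reset the third) can be expressed as a tiny helper. This is only an illustration of the stated policy, not part of the Eclipse releng tooling:

```python
# Illustrative sketch (not Eclipse tooling) of the plug-in versioning
# policy described above: bug fixes bump the third ("service") segment;
# public-API changes bump the second ("minor") segment and reset service.
def next_version(version, change):
    """version is like '0.9.0.qualifier'; change is 'bugfix' or 'api'."""
    major, minor, service, qualifier = version.split(".")
    if change == "bugfix":
        service = str(int(service) + 1)
    elif change == "api":
        minor, service = str(int(minor) + 1), "0"
    else:
        raise ValueError("change must be 'bugfix' or 'api'")
    return ".".join((major, minor, service, qualifier))
```

For example, a bug fix takes 0.9.0.qualifier to 0.9.1.qualifier, while an API change takes it to 0.10.0.qualifier, matching the guideline above.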
http://wiki.eclipse.org/index.php?title=E4/Builds&diff=257612&oldid=160582
CC-MAIN-2016-50
refinedweb
920
64.1
Coding for forgot password Coding for forgot password coding for forgot password Please go through the following links: Here you will find a jsf The password forgot Action is invoked...); The complete code of forgot password action is given below Developing Forgot Password Form password action, the forgot password action is responsible for forwarding... and password and forgot password action action will compare the form value... Developing Forgot Password Form   how to forget password in spring framework the following links: Please.../ Spring portlet with Hibernate Spring portlet with Hibernate Hi All, Could you please help me integrate the code for Forgot Password using spring portlet, hibernate withn MYSQL... link: Thanks forgot password i want to proper code for forgot password, means how to send password into email id if i forgot my password import java.io.*; import javax.mail.*; import javax.mail.internet.*; // important is displayed. With the help of this screen user can retrieve their password. Forgot... i want to develop a code for when user clicks on forgot password then the next page should be enter his mobile no then the password must be sent to his mobile no...! Thanks in advance Nag Raj coding for forgot password coding for forgot password coding for forgot password Forgot Password of Application Forgot Password of Application through servlet and jsp Struts Projects by combining all the three mentioned frameworks e.g. Struts, Hibernate and Spring... that can be used later in any big Struts Hibernate and Spring based.... 
Understanding Spring Struts Hibernate DAO Layer Forgot Password of Application i want sample code for Forgot Password Please visit the following link: spring hibernate encrypted password In this section, you will learn about encrypted password in spring integration with struts 2.0 & spring 2.5 - Framework integration with struts 2.0 & spring 2.5 Hi All, The total integration is Client (JSP Page) --- Struts 2.0--Spring 2.5 --- Hibernate 3.0--MySQL... for more information. spring controller V/S stuts Action - Spring spring controller V/S stuts Action we are going to use spring framework so what is better spring controller or struts action... (/HibernateMyfaces/) is not available. so, plz help me out. the process i have.... so, plz help me out. plz give the steps to run this program.(step-by-step spring hibernate - Hibernate with hibernate? Hi friend, For solving the problem Spring with Hibernate visit to : the following link: with hibernate Hi, I need the sample code for user registration using spring , hibernate with my sql. Please send the code as soon spring hibernate spring hibernate I need to save registration details in a database table through jsp using spring an hibernate....and the fields in the registration jsp are in different tables???can any one help or is there any sample code no action mapped for action - Struts no action mapped for action Hi, I am new to struts. I followed...: There is no Action mapped for action name HelloWorld Servlet action is currently unavailable - Struts Servlet action is currently unavailable Hi, i am getting the below error when i run the project so please anyone can help me.. HTTP Status 503 - Servlet action is currently unavailable Struts 2.2.1 - Struts 2.2.1 Tutorial and testing the example Advance Struts Action Struts Action... 
Development in Struts 2.2.1 application JUnit Using Spring mock objects...Struts 2.2.1 - Struts 2.2.1 Tutorial The Struts 2.2.1 framework is released struts struts <p>hi here is my code can you please help me to solve...; <p><html> <body></p> <form action="login.do"> <pre> user name:<input type="text" name="uname"/> help - Struts help, thans! information: struts.xml HelloWorld.jsp... is: No configuration found for the specified action: 'HelloWorld' in namespace: '/'. Form action defaulting to 'action' attribute's literal value. 2000-8 Login form using Jsp in hibernate - Hibernate ://... To hibernate, I'm facing problem in My project(JSP with hibernate).. My login form is working but the problem is ,when i enter correct user name and password The Complete Spring Tutorial will show you how you can integrate struts, spring and hibernate in your web... services, Schedulers, Ajax, Struts, JSF and many other frameworks. The Spring... framework with the help of many example codes. Spring 4 Tutorials Struts Dispatch Action Example Struts Dispatch Action Example Struts Dispatch Action... function. Here in this example you will learn more about Struts and learn to use it in the struts 2 application. Redirect After Post: This post Spring - Spring friend, I am sending you a link. This link will help you. Please visit spring rmi - Log4J should be noted. Please help me out. Thanks in advance. Hi friend, http...spring rmi HI, iam using eclipse for developing spring based rmi Struts Action Chaining Struts Action Chaining Struts Action Chaining Difference between Struts and Spring To know the difference between Struts and Spring, we must first explain... is specified in Struts-config.xml file. It than creates an instance of this action... by the Action component. View displays the result. 
Integrating JSF Spring and Hibernate Integrating JSF Spring and Hibernate Integrating JSF, Spring and Hibernate This article explains integrating JSF (MyFaces), Spring and Hibernate to build real struts struts hi i would like to have a ready example of struts using"action class,DAO,and services" so please help me... The requested resource (/HibernateMyfaces/) is not available. so, plz help... Why Struts 2 handling per action, if desired. Easy Spring integration - Struts 2... core interfaces are HTTP independent. Struts 2 Action classes... Why Struts 2 The new version Struts Action Class Struts Action Class What happens if we do not write execute() in Action forget password forget password can i get coding for forgot password in jsp, need using javamail also cannot.. what should i do?? Thx Struts   DataSource in hibernate. ; Generally, in spring hibernate integration or struts hibernate integration we set data source class for integration. Here is a example of spring hibernate...; <!-- Spring Web Mapping End--> <!-- Spring Hibernate Mapping
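Several of the threads above ask how to mail something to the user after a "forgot password" click. As a language-neutral illustration (the threads themselves use Java, JavaMail and Spring), here is a Python sketch that composes such a mail. Note that it sends a one-time reset link rather than the stored password, which is the usual practice, and that every concrete name in it (the user store, addresses, URL) is made up:

```python
# Illustrative Python sketch of the "forgot password" flow discussed in
# the threads above. All names (users dict, addresses, reset URL) are
# made up; no mail is sent until the message is handed to an SMTP client.
import secrets
from email.message import EmailMessage

users = {"brian@example.com": {"name": "Brian"}}  # stand-in user store

def build_reset_mail(address):
    if address not in users:
        return None  # unknown address: nothing to send
    token = secrets.token_urlsafe(16)  # one-time reset token
    users[address]["reset_token"] = token
    msg = EmailMessage()
    msg["To"] = address
    msg["From"] = "noreply@example.com"
    msg["Subject"] = "Password reset"
    msg.set_content(
        "Use this link to choose a new password:\n"
        f"https://example.com/reset?token={token}\n"
    )
    # A real application would now pass msg to
    # smtplib.SMTP(host).send_message(msg).
    return msg
```

The token is stored against the user so the reset page can verify it later; mailing the password itself, as some of the posts ask for, is generally discouraged.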
http://www.roseindia.net/tutorialhelp/comment/89694
CC-MAIN-2014-23
refinedweb
1,005
67.04
NAME
ccze - A robust log colorizer, plugin infrastructure
SYNOPSIS
#include <ccze.h>

/* Plugin support */
typedef void (*ccze_plugin_startup_t) (void);
typedef void (*ccze_plugin_shutdown_t) (void);
typedef int (*ccze_plugin_handle_t) (const char *str, size_t length, char **rest);

CCZE_DEFINE_PLUGIN (name, type, desc);
CCZE_DEFINE_PLUGINS (plugins...);

/* Display */
void ccze_addstr (ccze_color_t col, const char *str);
void ccze_newline (void);
void ccze_space (void);
void ccze_wordcolor_process_one (char *word, int slookup);

/* Helpers */
ccze_color_t ccze_http_action (const char *method);
void ccze_print_date (const char *date);

/* Command line */
char **ccze_plugin_argv_get (const char *name);
const char *ccze_plugin_name_get (void);
DESCRIPTION
This manual page attempts to outline the internals of CCZE plugins: how they work, how they are implemented, and how to add new ones. There are four required entry points in a plugin: a startup, a shutdown and a handler routine (more on these later), and an informational structure. The startup function must be of type ccze_plugin_startup_t. This is called right after the module is loaded. Its purpose is to initialise all kinds of module-specific global variables, such as the regular expressions. The shutdown function is its counterpart: this is used to deallocate any memory reserved by the startup code. The core part of a plugin is the handler, of type ccze_plugin_handle_t. This does the actual coloring. The string to process is passed in the str argument, its length in length. The third argument, rest, is a pointer to a string. Unlike the first two, this argument is used only for output. When a handler has processed a string, it must return a non-zero value; in case it could not process it, the handler must return zero. If the string could be processed only partially, the part which was deemed unknown by the handler must be passed back in the rest variable.
The fourth part, although the smallest, is the most important. Without this, the module is useless: it cannot be loaded. This part tells CCZE what the startup, shutdown and handler functions are called. To encourage good style, the little details of this structure will not be disclosed in this manual page. Instead, the helper macro, CCZE_DEFINE_PLUGIN, will be explained. CCZE_DEFINE_PLUGIN is the macro to use if one wants to make the plugin loadable. Its first argument is an unquoted string: the name of the plugin. The second argument is the type of the plugin; it can be FULL, PARTIAL or ANY. The last argument is a short description of the plugin. It is assumed that the three functions mentioned earlier are called ccze_name_setup, ccze_name_shutdown and ccze_name_handle, respectively. A FULL plugin is one that accepts raw input, untouched by any other plugin before, and processes it. On the other hand, a PARTIAL plugin relies on previous ones preprocessing the input. For example, syslog is a full plugin, on which ulogd, a partial plugin, relies. The syslog plugin processes the raw input from the logfile and adds colour to most of it, save for the actual message sent by a process; that is left to subsequent plugins, like ulogd. An ANY plugin is one that can act as both other types. With CCZE_DEFINE_PLUGINS one can place more than one plugin into one shared object. There are two other helper functions, ccze_plugin_argv_get and ccze_plugin_name_get. One can pass arguments to CCZE plugins, and these are the functions to retrieve them. While ccze_plugin_name_get returns the name of the current plugin, ccze_plugin_argv_get returns a NULL-terminated array, with each entry containing an argument.
The most important one is ccze_addstr, which takes a color (see ccze.h for a list of supported color tags) and a string, and displays it appropriately. The ccze_space and ccze_newline functions emit a space and a newline, respectively. Our last function, ccze_wordcolor_process_one passes word to the word colourising engine. If the second argument, slookup is non-zero, the engine will perform service lookups (like getent and friends). HELPER METHODS We only have two helper methods: ccze_print_date, which simply prints out the date in the appropriate colour, and ccze_http_action, which given a HTTP method, returns the associated colour, in a format suitable for ccze_addstr.
EXAMPLE
#include <ccze.h>
#include <stddef.h>
#include <string.h>

static char **ccze_foo_argv;

static int
ccze_foo_handle (const char *str, size_t length, char **rest)
{
  int i = 1;

  if (strstr (str, "foo"))
    {
      ccze_addstr (CCZE_COLOR_GOODWORD, str);
      return 1;
    }

  while (ccze_foo_argv[i])
    {
      if (strstr (str, ccze_foo_argv[i]))
        {
          ccze_addstr (CCZE_COLOR_GOODWORD, str);
          return 1;
        }
      i++;
    }

  return 0;
}

static void
ccze_foo_startup (void)
{
  ccze_foo_argv = ccze_plugin_argv_get (ccze_plugin_name_get ());
}

static void
ccze_foo_shutdown (void)
{
}

CCZE_DEFINE_PLUGIN (foo, PARTIAL, "Partial FOO coloriser.");
SEE ALSO
ccze(1)
AUTHOR
ccze was written by Gergely Nagy <algernon@bonehunter.rulez.org>, based on colorize by Istvan Karaszi <colorize@spam.raszi.hu>.
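The handler contract described above — return non-zero when the string was handled, zero otherwise, with any unprocessed tail handed back through rest — can be modelled outside C. The following Python sketch imitates the dispatch idea with stand-in "syslog"/"ulogd" handlers; it is illustrative only and not ccze's actual implementation:

```python
# Python model (not ccze's real code) of the handler contract described
# above: each handler returns (handled, rest); the dispatcher tries the
# handlers in order and re-feeds any unhandled remainder to later ones.
def syslog_handler(line):
    # Stand-in "full" handler: claims lines with a "host " prefix and
    # leaves the message part for partial handlers via rest.
    if line.startswith("host "):
        return True, line[len("host "):]
    return False, line

def ulogd_handler(line):
    # Stand-in "partial" handler: only understands "IN=" packet logs.
    if line.startswith("IN="):
        return True, ""
    return False, line

def dispatch(line, handlers):
    for handler in handlers:
        handled, rest = handler(line)
        if handled:
            if rest:  # partially processed: hand the tail onward
                return dispatch(rest, handlers)
            return True
    return False  # no handler claimed the line
```

Here the full handler strips what it understands and passes the rest down, exactly the division of labour the syslog/ulogd example in the text describes.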
http://manpages.ubuntu.com/manpages/dapper/man7/ccze-plugin.7.html
CC-MAIN-2014-41
refinedweb
806
63.7
Answered by: Continuous Integration Unit Testing Silverlight with the Silverlight Toolkit/CruiseControl The Silverlight Toolkit contains a unit testing framework written by Jeff Wilcox with an open ended logging system. It's been great. Good job, Jeff. I want to use the logging system to produce Visual Studio TRX files and in turn consume them with CruiseControl.net. The automated build process may look something like this: - Build project normally - Copy deployed website after the build to a test website location. - Run the silverlight test app from the command line, which would produce the TRX report and upload it to a web service that handles test results - Test results are copied back to the CC artifact directory from where they came - CruiseControl xsl stylesheet delivers them with the rest of the CC log Does this look right? If so -- I don't know how to use the VisualStudioLogProvider in the toolkit. Has anyone used this with success and could possibly provide me with an example? Question Answers All replies Greetings Mr Wilcox! I would highly encourage you to please please please post this series as soon as you can. You mention here that you were going to do this, and that was back in april. :) I've created a temporary QUITE HACKY solution to this mess in the interm. It is based off this solution (the only one I've found online) and placed in an MSBuild Task. It's super happy-path and barely tested, but does what I need it to. It uses an IE Com object and MY GOD MAN it's just not how you do things. :) But it works, usually... In any case, I'd love to see a true MSBuild task built that ties directly into the framework and produces the results we need... Thank you! I haven't had a chance to blog the sample apps yet on this, but, if you can spin up a local web server using HttpListener and related .NET types... 
Listen on localhost port 8000 Accept a request for "/externalInterface/ping/", respond "<rsp stat="ok">" Fiddler for the rest of the events, you'll get a results summary via GET and then a log file POST. This is pretty easy to throw into a console app that can be used in any interactive session. I haven't tested them, but you can try one of these two solutions: Using Powershell: SLRunners (including MSBuild): I made a test project using the sample from the toolkit. My tests are using UIAutomation to invoke buttons and other controls from the Silverlight application. Everything works fine in Visual Studio. Now I would like to run those tests on the Team Foundation Server that builds the application with MSBuild. In Visual Studio, I can't add those tests to the vsmdi file (generated when adding a non-Silverlight test project). Is there any (simple?) way to do this? Hi Jeff, Your effort on the Silverlight Unit Test Framework is awesome. I am looking for the steps to implement MSBuild automation with the SL Unit Framework from a continuous integration perspective. I got some articles saying that we can do it using PowerShell, but they were not perfectly helpful. Can you please narrate the steps accordingly? - FYI there is a new Open Source Silverlight unit testing framework called SilverUnit that sits on top of Typemock Isolator. This framework simplifies unit testing Silverlight. Jeff, Well. Can you please get me one more piece of information about Silverlight unit test framework implementation for DataGrid unit test cases? I could find the DataGridItemAutomationPeer class, but if I try to declare an object of that class I get a "Namespace / type couldn't be found" error even though I referenced the following namespaces. Where am I missing things?
using System.Windows.Automation.Peers; using System.Windows.Automation.Provider; If possible, can you please provide a small article about HOW to implement the unit test framework for the Silverlight DataGrid control in a Silverlight application? - Is there any update on this - I see this is quite old. We're kicking the tires on the Silverlight technology stack for a not-small LOB application. Not being able to wire up our unit testing assets to our CC.NET automated build is going to be a big issue for our organization. This doesn't seem like it would be that difficult to solve, but I don't want to reinvent any wheel, nor the testing framework from Jeff. I was able to make it work using statlight () and some generic tests using this post. After spending too much time looking for a viable solution, this one looks like the painless one... Does anyone know if MS is planning to add a Silverlight unit test framework and better build integration in the next version of Visual Studio / TFS? What a pain for those who wish to implement TDD and CI with Silverlight...
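Jeff's interactive harness can be mimicked in any language with an HTTP server. The sketch below is a Python stand-in for the .NET HttpListener approach he describes. Only the /externalInterface/ping/ path and the <rsp stat="ok"> reply come from the post above (the closing tag is my addition); the port-8000 default and the rest of the scaffolding are assumptions:

```python
# Minimal Python stand-in for the HttpListener harness described above:
# answer the Silverlight test page's ping on localhost and accept the
# results/log POST. Only the ping path and the <rsp stat="ok"> reply
# come from the post; everything else is illustrative scaffolding.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TestHarness(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/externalInterface/ping/":
            body = b'<rsp stat="ok"></rsp>'
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # The framework POSTs the log file here; a real harness would
        # save it as the TRX/results artifact for the CI server.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet during a build

def serve(port=8000):
    # Returns a bound server; call .serve_forever() to run it.
    return HTTPServer(("localhost", port), TestHarness)
```

In a CI session you would start this listener, launch the test page in a browser, and stop the server once the results POST has arrived.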
https://social.msdn.microsoft.com/Forums/silverlight/en-US/122bdea9-8f29-4326-8443-af895c03c24e/continuous-integration-unit-testing-silverlight-with-the-silverlight-toolkitcruisecontrol?forum=silverlightcontrols
CC-MAIN-2016-22
refinedweb
810
71.14
Difference between revisions of "IFileSystem" Latest revision as of 11:59, 2 July 2016 Contents - 1 CBaseFile - 2 Paths IDs - 3 Fixing paths - 4 File - 5 Directories - 6 Mounting Steam Application Content - 7 Interface Details - 8 See Also Source replaces the standard file handling functions (fopen etc.) with its own. The functions, members of the global IFileSystem* filesystem object, provide access to the whole engine filesystem, including cascading access through all mounted SearchPaths and access within GCF files. You must #include "Filesystem.h" before you can use it. (To get the standard fopen back, just do #undef fopen.) CBaseFile Tier2 provides CBaseFile and various derivatives, which operate like the C++ fstream family. #include "tier2\fileutils.h" to access them. Paths IDs In Source, the same file can exist in multiple SearchPaths. IFileSystem defines a few different access modes for determining precisely where it should look: MOD - The first SearchPath only. GAME - All SearchPaths, including those inside GCFs. XGAME - Xbox 360 equivalent to GAME. GAMEBIN - The game binaries folder (client, server). EXECUTABLE_PATH - The engine binaries folder. DEFAULT_WRITE_PATH - Wherever the game is currently configured to write out to. Defaults to the first SearchPath. You can get the path of the gameinfo.txt folder like this:
// server
char pGameDir[MAX_PATH];
engine->GetGameDir(pGameDir, MAX_PATH);

// client
const char *pGameDir = engine->GetGameDirectory();
Fixing paths
char filename[MAX_PATH] = "materials\\metal\\metalcombine001"; // backslashes must be escaped in C string literals
V_SetExtension( filename, ".vmt", sizeof(filename) );
V_FixSlashes(filename);
These functions are self-explanatory. Always call V_FixSlashes(), as slash direction differs between Windows and Linux/Mac!
Reading

    #include "Filesystem.h"

    FileHandle_t fh = filesystem->Open( "gameinfo.txt", "r", "GAME" );
    if ( fh )
    {
        int file_len = filesystem->Size( fh );
        char* GameInfo = new char[file_len + 1];

        filesystem->Read( (void*)GameInfo, file_len, fh );
        GameInfo[file_len] = 0; // null terminator

        filesystem->Close( fh );

        // Use GameInfo here...

        delete[] GameInfo;
    }

This code opens gameinfo.txt in read-only mode, then stores its contents in GameInfo, a C string. Because the string was created with the new keyword, it is very important to call delete[] on it afterwards or the memory would be leaked. (If you know in advance the size of the file, you can do char MyString[the_length] as normal, and not risk any leaks.)

There is also a helper function that handles open/read/close for you in one swoop:

    #include "Filesystem.h"
    #include "utlbuffer.h"

    CUtlBuffer buf;
    if ( filesystem->ReadFile( "gameinfo.txt", "GAME", buf ) )
    {
        char* GameInfo = new char[buf.Size() + 1];

        buf.GetString(GameInfo);
        GameInfo[buf.Size()] = 0; // null terminator

        // Use GameInfo here...

        delete[] GameInfo;
    }

To do: Is there a better way of reading from a CUtlBuffer?

Writing

    FileHandle_t fh = filesystem->Open( "mylog.log", "a+", "MOD" );
    if ( fh )
    {
        filesystem->FPrintf( fh, "%s", "Logging a test line..." );

        char* text = "Logging another test line...";
        filesystem->Write( text, V_strlen(text), fh );

        filesystem->Close( fh );
    }

There is also filesystem->WriteFile(char* name, char* path, CUtlBuffer &buf).

Searching

    FileFindHandle_t findHandle; // note: FileFINDHandle

    const char *pFilename = filesystem->FindFirstEx( "*.*", "MOD", &findHandle );
    while ( pFilename )
    {
        Msg( "%s\n", pFilename );
        pFilename = filesystem->FindNext( findHandle );
    }
    filesystem->FindClose( findHandle );

This code lists all files and folders in the gameinfo.txt folder. The Find functions do not search subfolders, and return only the filename matched, not its entire path. To iterate over subfolders, use filesystem->FindIsDirectory() and alter the search terms (e.g. materials\\*.*).
Directories

Creating

    char path[MAX_PATH] = "test_dir/subdir";
    V_FixSlashes( path );
    filesystem->CreateDirHierarchy( path, "DEFAULT_WRITE_PATH" );

(Note the buffer is declared as a writable array, since V_FixSlashes() modifies the string in place.)

Removing

IFileSystem does not include support for deleting directories, and neither does the C++ standard. You'll need to either not delete them, or fall back on platform-specific APIs. One function you will find useful when doing this is filesystem->RelativePathToFullPath.

Mounting Steam Application Content

Interface Details

The IFileSystem interface provides routines for accessing content on both the local drives, and in packages.

Search Paths

File paths are usually formatted as a relative path to the current folder. For example:

    .\materials\Brick\brickfloor001a.vtf

Content from several sources is merged into this common root folder using search paths. This provides behaviour for accessing content that is useful, though complex to the uninitiated. While the file used in the example above originates from the source 2007 shared material.gcf, not all files found in the .\materials folder will originate from the same place. Another example:

    .\materials\myCustomMadeTexture.vtf

This file would most likely reside on the hard disk, in a mod folder, under its materials directory. The full path may be something like this:

    C:\Program Files\Steam\SteamApps\SourceMods\MyMod\Materials\myCustomMadeTexture.vtf

The IFileSystem function GetSearchPaths retrieves a list of the search paths used to achieve this behaviour. Typical paths that would be returned:

    C:\program files\steam\steamapps\sourcemods\mymod
    C:\program files\steam\steamapps\username\sourcesdk\bin\orangebox\hl2

When the calling application attempts to open .\materials\myCustomMadeTexture.vtf, the first search path allows it to be interpreted as the full path. There may be many more search paths than these two. Whichever search path first finds an existing file is the search path that is used. Hence, the order of the search paths is significant, and the GetSearchPaths function returns them in the correct order.
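To make the resolution rule concrete, here is a small model of it in Python. The roots and file set are invented for illustration; the real engine performs this lookup in C++ against the disk and mounted GCFs:

```python
# Toy model of search-path resolution: the first root (in order) that
# contains the relative file wins, exactly as described above.
def resolve(relative_path, search_paths, existing_files):
    for root in search_paths:  # order is significant
        candidate = root + "\\" + relative_path
        if candidate in existing_files:
            return candidate
    return None  # not found in any search path

search_paths = [
    "C:\\mods\\mymod",                # checked first
    "C:\\sourcesdk\\orangebox\\hl2",  # checked second
]
existing_files = {
    "C:\\mods\\mymod\\materials\\custom.vtf",
    "C:\\sourcesdk\\orangebox\\hl2\\materials\\custom.vtf",
    "C:\\sourcesdk\\orangebox\\hl2\\materials\\brick.vtf",
}

# The mod's copy shadows the SDK copy because its root comes first:
print(resolve("materials\\custom.vtf", search_paths, existing_files))
# Files present only in a later search path are still found:
print(resolve("materials\\brick.vtf", search_paths, existing_files))
```

Swapping the order of search_paths changes which copy of custom.vtf wins, which is why GetSearchPaths returning the paths in the correct order matters.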
While this makes clear how hard disk files are found, the matter of mounted content requires further explanation.

Mounted Content

The second search path example above is what allows mounted content to be found. If one examines this folder on their hard disk, they will find little there. However, the Steam file system can mount content (typically based on AppId) as paths in the local file system.

When the IFileSystem is created, the content is mounted to the current directory of the calling code. Note, however, that this directory should NOT be ...\orangebox\hl2. This is because the root of the associated packs are mounted to the current directory. GCFs such as the source 2007 shared material.gcf contain a folder called hl2 inside their root. When the GCF is mounted, this hl2 folder inside the GCF becomes ...\orangebox\hl2. Therefore, when the IFileSystem is loaded/connected, the current directory should first be changed to:

    C:\program files\steam\steamapps\username\sourcesdk\bin\orangebox

or the appropriate path for the version of the Source SDK being used.

Instantiating an IFileSystem

Though the IFileSystem interface cannot itself be instantiated, the closed-source class that implements it can be, which provides us with a working IFileSystem. Several steps must be taken to retrieve an IFileSystem that functions as expected (or functions at all). Information that must be known:

- Whether Steam is running or not
- The game directory (for example: c:\program files\steam\steamapps\sourcemods\mymod)
- The SDK base directory (C:\program files\steam\steamapps\username\sourcesdk\bin\orangebox)
- Whether or not to mount ExtraAppId

Is Steam Running?

The easiest way to do this is to check for steam.exe as a running process. However, this is not the safest way to allow the application to proceed. (For more details, see Api.cpp API::FileSystemReady in the DuctTape project.)
If Steam is not ready to provide the file system interface, the calling application will terminate immediately with a message printed to the debug output: "steam is not running".

Preparing the Runtime Environment

- The location of steam.dll must be in the path environment variable.
- If Steam is running as an NT service (e.g. under Vista), then the SteamAppUser environment variable must be set to the user's name. This can be retrieved from the AutoLoginUser key in the file: steam install path\config\SteamAppData.vdf
- The SteamAppId environment variable must be set to the AppId found in the mod's or game's gameinfo.txt.
- Set the sourcesdk environment variable to the SDK base directory.
- Change the current directory to the SDK base directory.

The full path of filesystem_steam.dll must be determined, which resides in the Source SDK's bin directory (orangebox\bin). This file must be used from this location, or mounted content will not be retrievable.

- Call the Sys_LoadInterface function. It returns true on success:

    CSysModule* module;
    IFileSystem* fileSystem;
    Sys_LoadInterface(fullPathToFileSystemDLL, "VFileSystem017", &module, (void**)&fileSystem);

- Connect the file system. It returns true on success:

    fileSystem->Connect(Sys_GetFactoryThis());

- Initialize the file system. It returns INIT_OK on success:

    fileSystem->Init();

- At this point, the current directory can be restored.

Mount the Extra Content

If SDK extra content is to be mounted, the ToolsAppId must be retrieved from gameinfo.txt. Currently it is 211.

    fileSystem->MountSteamContent(toolsAppId);

Load Search Paths

The search paths must be loaded from gameinfo.txt. For a thorough example of how this is to be done, see filesystem_init.cpp FileSystem_LoadSearchPaths in the DuctTape project. A simple explanation of how search paths are loaded can be found in the unmodified gameinfo.txt of a new mod.
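The environment-preparation checklist above can be summarised as a small routine. The sketch below models it in Python purely for clarity; all paths and the AppId value are invented placeholders, and in a real tool this setup happens in the C++ host before loading filesystem_steam.dll:

```python
import os

def prepare_environment(env, steam_dir, app_id, sdk_base):
    """Model of the runtime-environment steps above (placeholder values)."""
    # steam.dll's folder must be on the path:
    env["PATH"] = steam_dir + os.pathsep + env.get("PATH", "")
    # AppId taken from the mod's gameinfo.txt:
    env["SteamAppId"] = str(app_id)
    # SDK base directory, so the engine can find its tools:
    env["sourcesdk"] = sdk_base
    return env

env = prepare_environment(
    {},
    "C:\\program files\\steam",
    215,  # invented example AppId
    "C:\\program files\\steam\\steamapps\\user\\sourcesdk\\bin\\orangebox",
)
print(env["SteamAppId"])  # the value the loader will read back
```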
https://developer.valvesoftware.com/w/index.php?title=IFileSystem&diff=cur&oldid=33331
ASLR is a feature of the Oracle Solaris system that randomizes the starting address of key portions of the process address space such as the stack, libraries, and brk-based heap. By default, ASLR is enabled for binaries explicitly tagged to request ASLR.

The following command provides information about the status of ASLR:

    % sxadm info
    EXTENSION           STATUS                     CONFIGURATION
    aslr                enable (tagged-files)      enable (tagged-files)

The -z option to the ld(1) command is used to tag a newly created object with an ASLR requirement. The usage is as shown below:

    ld -z aslr[=mode]

where mode can be set to enable or disable. If mode is not specified, enable is assumed. The following example demonstrates the use of the -z option to create an executable with ASLR enabled:

    % cat hello.c
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void) printf("Hello World!\n");
        return (0);
    }
    % cc hello.c -z aslr

ASLR tagging is provided by an entry in the object's dynamic section, which can be inspected with elfdump(1):

    % elfdump -d a.out | grep ASLR
        [28]  SUNW_ASLR    0x2    ENABLE

The elfedit(1) command can be used to add or modify the ASLR dynamic entry in an existing object:

    % cc hello.c
    % elfedit -e 'dyn:sunw_aslr enable' a.out
    % elfdump -d a.out | grep ASLR
        [29]  SUNW_ASLR    0x2    ENABLE
    % elfedit -e 'dyn:sunw_aslr disable' a.out
    % elfdump -d a.out | grep ASLR
        [29]  SUNW_ASLR    0x1    DISABLE

The ASLR requirements for a given process are established at process startup, and cannot be modified once the process has started. For this reason, ASLR tagging is only meaningful for the primary executable object in the process.

The pmap(1) utility can be used to examine the address mappings for a process. When used to observe the mappings for an executable which has ASLR enabled, the specific addresses used for the stack, library mappings, and the brk-based heap will differ for every invocation.

The sxadm(1) command is used to control the default ASLR behavior for the system.
Binaries that are explicitly tagged to disable ASLR take precedence over the system default behavior established by sxadm.

Address space randomization may be problematic during debugging. Some debugging situations require that repeated invocations of the program use the same address mappings. You can temporarily disable ASLR in one of the following ways:

- Temporarily disable ASLR system-wide with sxadm.
- Use the ld or elfedit commands to tag the associated binary to disable ASLR.
- Establish an ASLR-disabled shell in which to carry out debugging:

    % sxadm exec -s aslr=disable /bin/bash

See the sxadm(1M) man page and Chapter 2, Configuring Oracle Solaris Security, in Oracle Solaris 11 Security Guidelines for more information.
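The precedence rules described above (per-binary tag beats system default; the tagged-files default randomizes only tagged binaries) can be captured in a small decision table. The Python sketch below is an illustrative model only, not an Oracle-provided API; the configuration strings mirror the sxadm output shown earlier:

```python
def aslr_active(system_config, binary_tag):
    """Model of effective ASLR for one process.

    system_config: "enable", "disable", or "enable (tagged-files)"
    binary_tag:    "enable", "disable", or None for an untagged binary
    """
    # An explicit per-binary tag always wins over the system default:
    if binary_tag is not None:
        return binary_tag == "enable"
    # Untagged binaries follow the system-wide setting:
    if system_config == "enable (tagged-files)":
        return False  # under the default, only tagged binaries get ASLR
    return system_config == "enable"

# Default config: only tagged binaries are randomized.
print(aslr_active("enable (tagged-files)", "enable"))  # True
print(aslr_active("enable (tagged-files)", None))      # False
# A disable tag overrides even a system-wide "enable":
print(aslr_active("enable", "disable"))                # False
```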
https://docs.oracle.com/cd/E36784_01/html/E36855/gmfcp.html
<James_Althoff at i2.com> wrote in message news:mailman.992898207.28586.python-list at python.org...
    ...
> >As Alex Martelli has demonstrated a few times, you can substitute a
> >class instance that implements __setattr__ and __getattr__ in place of
> >the module object in sys.modules.
>
> How would such an approach work with "import", "from", "__import__", etc.?

Pretty well, thanks (it DOES give problems with 'reload', though, if that is what you mean by 'etc':-). Why not give it a try -- it *IS* as simple as a crystal-clear spring, after all. Here's a toy example:

:: fake.py
class _fake:
    __all__ = []
    def __getattr__(self, name):
        if name.startswith('__'):
            raise AttributeError, name
        else:
            return 'fake_'+name

import sys
sys.modules[__name__] = _fake()
:: end of fake.py

D:\py21>python
Python 2.1 (#15, Apr 16 2001, 18:25:49) [MSC 32 bit (Intel)] on win32
Type "copyright", "credits" or "license" for more information.
Alternative ReadLine 1.4 -- Copyright 2001, Chris Gonnerman
>>> import fake
>>> fake.pop
'fake_pop'
>>> fake.plap
'fake_plap'
>>> from fake import foopi
>>> foopi
'fake_foopi'
>>> fakefake = __import__('fake')
>>> fakefake
<fake._fake instance at 007F8AFC>
>>> fakefake.ploppy
'fake_ploppy'
>>> fakefake is fake
1

> Is it really viable to try to create a class whose instances "fake" module
> objects?

Sure! It's not just viable, it's EASY -- as long as client code does no type-tests, of course, just as for any other application of Python's signature-based polymorphism. (The "Protocol Adaptation" PEP is my personal big hope for a really powerful way to wean people who write client-code off doing rigid type-tests, which break all of the niftiest polymorphic tricks:-).

> And use this as the standard idiom for modules?

I'll pass on this one. I suspect that by-far-MOST modules have no need for __getattr__ and __setattr__, so I don't really see why having a module replace its entry in sys.modules would ever become "THE standard idiom".
But when I *DO* need a "module __getattr__" etc, it seems to work pretty well today. I'll pass on metaclasses, "true class methods", etc, too -- it does not appear to me that the instance-as-a-fake-module idea gives anywhere near ALL the power that such deep concepts might yield. But I haven't done nearly enough Smalltalking to feel the need for such specific packaging-of-power deep in my bones. Specifically, my personal gut feeling is that classes are nice but vastly overrated in most OO literature, and other tools and mechanisms may be preferable for several tasks -- so, by instinct rather than by reason, I feel suspicious of attempts to load yet more power and functionality on classes' backs and sympathetic to ideas of "off-loading" tasks now entrusted to classes to other simpler entities (e.g., "interfaces"). But that's guts, not reason - mostly an impression that comes to me from comparing _previous_ attempts to "do it all with classes" (e.g. the class==interface identification in C++ or Eiffel, the class==namespace in old C++ versions, &c) with ones to separate concepts (the class/interface split in Java, the class/namespace split in modern C++). It may well be that my instincts are operating on inappropriate foundations when it comes to judging specific class-related ideas in the specific context of Python's future developments. Alex
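For readers revisiting this thread on a modern interpreter: the same trick still works in Python 3, and subclassing types.ModuleType keeps tools that expect a real module happy. Below is a minimal self-contained sketch (the module name fake_demo is invented so no separate file is needed; on Python 3.7+ a plain module-level __getattr__, per PEP 562, achieves the same effect without any sys.modules surgery):

```python
import sys
import types

class _FakeModule(types.ModuleType):
    def __getattr__(self, name):
        if name.startswith('__'):
            raise AttributeError(name)
        return 'fake_' + name

# Register the instance directly; the import system will find it here.
sys.modules['fake_demo'] = _FakeModule('fake_demo')

import fake_demo                 # ordinary import returns our instance
from fake_demo import foopi      # "from ... import" works too

print(fake_demo.pop)                         # fake_pop
print(foopi)                                 # fake_foopi
print(__import__('fake_demo') is fake_demo)  # True
```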
https://mail.python.org/pipermail/python-list/2001-June/088076.html
Count Duplicates in a Circular Linked List

Introduction

The following article aims to familiarise you with using a circular linked list (if this is a new topic for you, refer to the article here) and practice its use in real questions to build and strengthen your concepts. The following question explores how we can traverse a circular linked list and find the count of duplicate elements using a hash set.

Problem Statement

Count the number of duplicate elements in a circular linked list, i.e., elements that have already occurred before, and print the count as the final result. For example:

Input 1: [1, 2, 3, 1, 2]

In the above input [1, 2, 3, 1 <-, 2 <-], the numbers marked by arrows are duplicates.

Output 1: 2

Input 2: [5, 5, 5, 5]

In the above input [5, 5 <-, 5 <-, 5 <-], similar to input 1, arrows mark the duplicate elements.

Output 2: 3

Approach

In the below code, our purpose is to count the number of duplicates, and we achieve this by using a hash set. The use of a hash set (unordered_set in C++ STL) is generally preferred because of its O(1) retrieval and insertion time on average. The algorithm requires us to traverse through the circular linked list once and, for each node, check if its element is present in the hash set. If the element exists in the hash set, then it's a duplicate, and we increase the duplicate count; otherwise, we insert the element into the hash set as it hadn't occurred before. After having traversed through all the elements of the list, we end up with the count of the duplicate elements in the circular linked list, and we print that as the final result.
Code in C++

    #include <iostream>
    #include <unordered_set>
    #include <vector>
    using namespace std;

    class Node{
    public:
        int val;
        Node *next;
        Node(int x){
            val = x;
            next = NULL;
        }
    };

    int main(){
        // let us store all the content in the below vector into the circular linked list
        vector<int> data = {10, 1, 1, 7, 5, 5, 5};
        Node *head = new Node(data[0]);
        Node *temp = head;
        for(int i = 1; i < data.size(); i++){
            temp->next = new Node(data[i]);
            temp = temp->next;
        }
        temp->next = head; // the end element points back to the head, making it a circular linked list

        // from here on we have the logic for counting the duplicates
        int duplicates = 0;
        unordered_set<int> s;
        Node *loop_var = head;
        do{
            if(s.find(loop_var->val) != s.end()){ // if a previous occurrence is found
                duplicates += 1;
            }
            else{
                s.insert(loop_var->val);
            }
            loop_var = loop_var->next;
        } while(loop_var != head);

        // print the total duplicates
        // [10, 1, 1 <-, 7, 5, 5 <-, 5 <-]
        // arrows mark the duplicates, so we expect an output of 3 for the given input
        cout << duplicates << endl;
    }

Output: 3

Time Complexity

The time complexity is O(N), as we traverse each element of the list once, and retrieval from the unordered_set is O(1) on average for random data. (Do note, however, that unordered_set can have O(N) retrieval and insertion in the worst case under particular inputs.)

Space Complexity

The space complexity is O(N) because of the unordered_set data structure, where we store elements to check for their earlier occurrence in the circular linked list.

Frequently Asked Questions

1. What are the advantages of learning about circular linked lists?

Circular linked lists can be used to implement queues and various advanced data structures such as Fibonacci heaps. They can also be helpful in multiple other situations that repeatedly require iteration over the same data.

2. Difference between unordered_set and set in C++?

The set data structure is usually implemented using a red-black tree (read more about it here).
The unordered_set, however, is implemented using a hash table and hence offers average O(1) retrieval and insertion. In contrast, the set data structure takes O(log(n)) for retrieving and inserting an element.

3. Is unordered_set always better than set?

No. The unordered_set will usually work well with random data and give O(1) retrieval and insertion on average. Still, in case of excessive collisions (which can happen for some specific inputs, depending on the hash function), retrieval and insertion may become O(N).

Key Takeaways

The article helps you understand and implement your knowledge of circular linked lists and C++ STL. I also recommend checking the program for multiple inputs to understand the process better if you still have some doubts. In the code, you may have noticed my use of a do-while loop instead of a while loop. I encourage you to try the same problem with a while loop, observe the difference in implementation, and decide if it's better and easier to use a do-while loop in the case of circular linked lists. You can learn more about vectors and other data structures in C++ STL here. Happy learning!
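If you want to experiment with the approach without compiling anything, the same hash-set counting can be sketched in Python. Since one full trip around the ring visits every node exactly once, a plain list stands in for the circular traversal here:

```python
def count_duplicates(values):
    """Count elements that already occurred earlier, in one pass."""
    seen = set()          # plays the role of the unordered_set
    duplicates = 0
    for v in values:      # one full loop around the ring
        if v in seen:
            duplicates += 1
        else:
            seen.add(v)
    return duplicates

print(count_duplicates([10, 1, 1, 7, 5, 5, 5]))  # 3
print(count_duplicates([1, 2, 3, 1, 2]))         # 2
print(count_duplicates([5, 5, 5, 5]))            # 3
```

The outputs match the C++ program and the two examples from the problem statement.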
https://www.codingninjas.com/codestudio/library/count-duplicates-in-circular-linked-list
Build a Content Management System for an E-commerce Store with Next.js and Sanity

Having the ability to build an online store opens up a ton of possibilities, whether you're building that store for a new client to pay the bills or you're trying to start your own business. There are a growing number of solutions out there that help make this possible. You could use Stripe Checkout, which can handle purchasing and payment processing, or Snipcart, which can add an entire shopping cart to your app. But the more your product selection grows or the more people you add to your team, managing that online store becomes even more challenging.

The challenges of an online store

When managing a product selection on your store, there are a ton of factors you need to consider. The products themselves have a variety of data points that need to be maintained, from the title of the product to the SKU (Stock Keeping Unit). You also need to consider the inventory available that gives you the opportunity to sell that product in the first place.

If you're managing your products hardcoded in your app or with a static JSON file, you have to start considering who's actively making a change. Who knows how to change the code? Who even has access? Otherwise you can run into code collisions or risk having to give access to your entire codebase to someone you don't trust.

That's where a CMS comes in, which gives you the ability to fully manage your product selection, whether it's you or an entire team, just like the big stores do, avoiding costly code errors which could bring down your site. But what exactly is a CMS?

What is a CMS?

A CMS, or Content Management System, is software or an application that provides a user interface to manage content and structured data that ultimately gets stored in a database.
One common use case is for a blog, where each blog entry would be a new item inside of a database, but the author can create new blog posts inside of the CMS with a friendly interface, rather than manually storing that content in the database. While WordPress was once (and still is) the king of the CMS world, a variety of powerful CMS solutions have exploded in the development world, providing more options and capabilities than we had before, including Sanity.

What is Sanity?

Sanity defines itself as more than just a CMS: a content platform. The core of the Sanity offering is a CMS, but they provide an enhanced developer experience and content collaboration features that give new powers to the traditional CMS. For our project, we'll use Sanity to store our product data, making it easily manageable whether it's a single entrepreneur or a team of inventory managers.

What will we build?

We're going to create a new Next.js app that sources a list of products from Sanity.io. We'll take advantage of the Sanity Client library to pull in our products from our CMS any time our project is built. That will allow us to take that list of products and dynamically display them in a grid on our page for our visitors to browse.

Step 0: Set up a Next.js application

To get started, we're going to create our new project with Next.js. In your terminal, we can create our new project by running:

    yarn create next-app my-sanity-store

This will create a new Next.js application at my-sanity-store. Once that's finished installing, you can navigate into your project and start your development server:

    cd my-sanity-store
    yarn dev

That will load the Next.js app and we'll be ready to go! Follow along with the commit!

Step 1: Create a new Sanity Studio project

To get started with Sanity, unless you use one of their Starters, which bootstraps an entire project complete with deployment, you need to use the terminal and yarn to create a new project.
While using a Starter would be easier to get to a "finished" state, we're going to walk through how to do this manually so you can then add it to an existing project or customize it yourself like you might do in a real-world project.

Tip: If this is your first time using Sanity, you'll notice that there's not a traditional "Sign Up" page like you typically would expect. Instead, once you create your project, Sanity will give you an opportunity to log in and connect your project to an account.

Inside of your terminal, first install the Sanity CLI globally:

    yarn global add @sanity/cli

Then, inside of the folder where you'd like to create your new project, which should be separate from your Next.js project, run:

    sanity init

If this is your first time logging into Sanity in your terminal, you'll now be prompted to log in to an account. If you don't have an account, logging in will set up that account for you with Google, GitHub, or it will give you the opportunity to sign up with an email and password.

Once logged in, Sanity will then prompt you to answer a few questions. You can name the project anything you'd like. Sanity will use this name to set up the project for you. For the dataset configuration, we can stick with the default for now, so you can answer "Yes." Finally, you can customize your output directory or let it use the default. The default option will create the project in the folder you're in based on your project name.

Once you answer those first few questions, it will ask you to select a project template. If you're starting from scratch and want to create a custom CMS schema, you may want to use "Clean", but we're going to build an ecommerce project, so let's select the "E-commerce" option. We can even upload sample data into our project to make it easier to test this out. I'm going to select "Y" for Yes so we can easily jump into sourcing the content into our application, but feel free to do this manually if you'd like with your own product selection.
After that, Sanity will create the new project and install all of the dependencies. What you end up with is a new web app that will serve as your CMS. You now have the flexibility to customize your schemas or UI inside of that codebase, or just leave it as is.

To test this out, navigate into your new project:

    cd [output directory]

And start up Sanity:

    sanity start

This will compile your CMS application and make it available locally. If you open that up in your browser and log in with your Sanity account, you'll find your new CMS!

Finally, in order to make our CMS available on the internet, we'll need to deploy it. Here you have the option of deploying it yourself or the recommended option, deploying it to Sanity itself. If you want to deploy it to Sanity, inside of your project directory run:

    sanity deploy

This will ask you what you'd like your hostname to be (it needs to be unique), then it will deploy your project to the Sanity cloud and make it available for you to open up, log in, and get started!

It's recommended that you add this repository on GitHub or your favorite Git provider and use that as a basis for future modifications to your Sanity project.

See my Sanity Studio project on GitHub

Step 2: Install Sanity Client in a Next.js app

Now that we have our CMS, we want to install the Sanity Client into our Next.js app, which will allow us to easily interface with our data straight from Sanity.

Back inside of your Next.js project, first let's install Sanity Client:

    yarn add @sanity/client

Note: if you followed along with me earlier, you should be running that inside my-sanity-store.

With the Sanity Client installed, we now have direct access to the Sanity library that will allow us to set up requests.
When we set up our Client, we're going to need to give Sanity a few pieces of information so it knows how to find our project. We need to pass this information any time we use the Sanity Client, so to make that easier, we're going to create a reusable instance of the client so we can use it anywhere with the same configuration.

Inside of the project, create a new folder called lib, then create a new file inside called sanity.js. Inside that file, add the following:

    import sanityClient from '@sanity/client';

    export const client = sanityClient({
      projectId: '[Project ID]',
      dataset: 'production',
      useCdn: true
    });

Tip: if you want to have more flexibility with setting up your project between different environments, you could consider using environment variables for the Project ID and Dataset ID.

Here's what we're doing:

- First, we import the Sanity Client from the library
- We then export our client instance as a constant that we can use anywhere in our application
- We pass in our project ID
- We pass in our dataset ID, which if you used the default settings in Step 1 will be production
- We set useCdn to true, meaning, when we read our data from Sanity, we want to get it cached, which will make it super fast

Looking for your Project ID? You can either find it inside of your Sanity.io account by navigating to your project or inside of the sanity.json file from Step 1 under api.projectId.

And once we save our file, we're ready to get our product data from Sanity in our next step!

Follow along with the commit!

Step 3: Add products to our Next.js homepage from a Sanity.io project

We created our CMS and we installed the Sanity Client to access the product data in the CMS; now let's add that data to our project. We're going to start by listing all of our new products from inside of Sanity on our homepage.
Inside of pages/index.js, first import the client:

    import { client } from '../lib/sanity';

Then at the bottom of the file, we're going to take advantage of the getStaticProps API from Next.js to request our CMS data and make it available as a prop to our homepage. Add the following to the bottom of pages/index.js:

    export async function getStaticProps() {
      const products = await client.fetch(`*[_type == "product"]`);
      return {
        props: {
          products
        }
      }
    }

Here, we're:

- Creating our new getStaticProps function
- Using the Sanity client to request all items with a type of "product"
- Then adding our products as a prop

To test this out, we can use console.log to view our product data inside of that function. Before the return statement, add:

    console.log('products', products);

If you reload the page, you'll notice that you don't see anything inside of the web console, but if you open up your terminal, you should now see all of your product data listed out!

Tip: the reason we see this inside of our terminal and not the browser is because getStaticProps runs at compile time in Node. The browser never even knows it happens!

With our data available as a prop, we can now destructure it from our Home component argument:

    export default function Home({ products }) {

Tip: we can also test that this works by adding another console.log statement at the top of the Home component; this time it will show in the browser!

For showing our products, we can take advantage of the grid and cards that come with Next.js by default.
Inside of the grid <div>, let's replace all of the cards with:

    <div className={styles.grid}>
      {products.map((product) => {
        return (
          <a key={product._id} href="#" className={styles.card}>
            <h3>{product.title}</h3>
            <p>{product.blurb.en}</p>
          </a>
        )
      })}
    </div>

In the above, we're:

- Using the map function to create a card component for each item in our products array
- Using the product ID as the key, which React uses internally to render the components properly
- Setting the href attribute to # for now, as we don't yet have a link
- Using product.title for our header
- Using product.blurb.en, which grabs the English version of our product description

Tip: Sanity supports localization, which is why we are using en for our product blurb. We're not going to cover localization in this tutorial, but you can find more information at Sanity.io.

Once we save and the page reloads, we can now see that all of our products are listed out on the page! In the next section, we'll learn how to use the images available from Sanity to show what our products look like in our app.

Follow along with the commit!

Step 4: Adding images from the Sanity CDN to a Next.js project

If you look through the data that is available inside of the products from Sanity, you'll notice that there's an array of images inside of the defaultProductVariant object. The issue is that, by default, Sanity doesn't provide a constructed URL for us to use right inside of our application. Sanity has another package, @sanity/image-url, that we can use to take our image object and construct a URL.

Back inside of our Next.js app, let's install the new package:

    yarn add @sanity/image-url

Once it's finished installing, we can import the package at the top of our pages/index.js file:

    import imageUrlBuilder from '@sanity/image-url'

We also want to create a new instance of our image builder that we'll use in the app.
After all of the imports at the top of the file, add the following:

    const builder = imageUrlBuilder(client)

This creates a new builder instance, passing in our Sanity Client. Make sure this is below all of the imports and above the Home component definition, or else the page will break.

Next, we can use our image builder and product data to find our default image and add it to the page. Let's update the map of products to the following:

    <div className={styles.grid}>
      {products.map((product) => {
        const { defaultProductVariant = {} } = product
        const { images } = defaultProductVariant
        return (
          <a key={product._id} href="#" className={styles.card}>
            <img src={builder.image(images[0])} />
            <h3>{product.title}</h3>
            <p>{product.blurb.en}</p>
          </a>
        )
      })}
    </div>

In the above, we added:

- Destructuring defaultProductVariant from each product
- Destructuring images from the default product variant
- Passing the first image in our array to our image builder

If we reload the page, we can now see that we have images! The only issue is they're a little big for our page. To start to fix this, we can add a line of CSS to prevent images from expanding outside of their container's size. Inside of styles/global.css, add the following to the bottom of the file:

    img {
      max-width: 100%;
    }

That alone fixes our images, but we're still loading images that are way too big for our page. That means our visitors are going to be downloading bigger images than they need to, which will take longer and use more bandwidth. Because we're using Sanity's CDN and URL builder, we can easily reduce the size of the source image, as we don't need images that big inside of our project.

Inside of pages/index.js, update the image to the following:

    <img src={builder.image(images[0]).width(300)} />

And back inside of our app, our images should still load and work the same, but they'll be 300px by 300px, which is smaller than the 1000px by 1000px images we were downloading before!
Tip: Sanity’s Image Builder has a variety of methods that can change how the image is represented, including blurring the image or changing the format.

Follow along with the commit!

What can you do next?

We only scratched the surface for creating an online store. Our goal here was to add the capability to manage our product data from Sanity, a content management solution, instead of static data inside of our application. Here are a few other things you can do to continue on with this project.

Customize the Sanity product schema

When creating our Sanity Studio project, we used the default ecommerce data that Sanity provided us. But not all applications have the same data requirements. Whether you want to add a new field to store a price ID from an ecommerce platform like Stripe or store an attribute to change how the product looks on the page, you can customize your product schema right inside of your Sanity Studio project.

Deploy the Next.js project

As soon as the store is ready to go live, you can deploy your Next.js project to your favorite hosting provider like Vercel or Netlify. Because we don’t use any server-side methods in this project, we can export the project to static files and host them anywhere we want!

Trigger a deploy when content is updated with a deploy hook

Once the project is deployed, we’ll want to deploy the updated site any time content is changed in our Sanity Studio CMS. To do that, we can set up webhooks so that Sanity can notify our hosting provider when a change occurs, telling it to rebuild and deploy the site.

Learn how to build an online store and sell products from scratch with Stripe Checkout

To learn more about how to get up and running with a Next.js app using Stripe Checkout to sell your products online, check out my course on egghead.io: Create an eCommerce Store with Next.js and Stripe Checkout
https://egghead.io/blog/build-cms-for-ecommerce-store-with-nextjs-and-sanity
Open up MainPage.xaml and add in the following code:

```xml
<Grid x:Name="LayoutRoot" Background="White">
    <Button x:Name="click" Content="Beep" Click="click_Click" Height="23" HorizontalAlignment="Left" Margin="169,132,0,0" VerticalAlignment="Top" Width="75" />
</Grid>
```

This will simply put a no-frills button on the page that the user can press to call the P/Invoke code we will add shortly. Let’s go ahead and add a new class to the project. Let’s call it PlatformInvokeTest.cs and add the following code (Note: If you are having a problem getting it to work, then use my solution at the bottom of the post):

```csharp
using System;
using System.Runtime.InteropServices;

namespace SilverlightApplication26
{
    public class PlatformInvokeTest
    {
        [DllImport("kernel32.dll")]
        public static extern bool Beep(int frequency, int duration);

        public static void PlaySound()
        {
            Random random = new Random();

            for (int i = 0; i < 50; i++)
            {
                Beep(random.Next(10000), 100);
            }
        }
    }
}
```

Let’s switch back over to MainPage.xaml.cs and add the following code:

```csharp
using System.Windows;
using System.Windows.Controls;

namespace SilverlightApplication26
{
    public partial class MainPage : UserControl
    {
        public MainPage()
        {
            InitializeComponent();
        }

        private void click_Click(object sender, RoutedEventArgs e)
        {
            PlatformInvokeTest.PlaySound();
        }
    }
}
```

Now when the user fires up this project, the application will go out of browser and the computer will beep multiple times, at a different frequency each time. You can also get this same functionality in-browser by going back to the Properties page and selecting “Require elevated trust when running in-browser”. The only thing to note is that the .aspx page is no longer set to the default in your web project, so you will need to do a “View in Browser” on your .aspx page in order to test. As you can see, it is very easy to use P/Invoke in a Silverlight 5 application.
This sample was pretty simple, but imagine the possibilities, such as detecting when a USB key is inserted into a PC and copying files onto it through a Silverlight 5 application. Pretty cool stuff! If you want the source code to this application and other Silverlight 5 demos, then be sure to check out Michael’s “Mega Collection of #Silverlight 5” Demos. Other Silverlight 5 resources by me are also available.
http://www.codeproject.com/Articles/253391/How-to-use-PInvoke-in-Silverlight-5?fid=1651697&df=10000&mpp=50&noise=1&prof=True&sort=Position&view=None&spc=Relaxed
Supreme Court Judgments

29/10/1969

HIDAYATULLAH, M. (CJ), SIKRI, S.M., MITTER, G.K., RAY, A.N., REDDY, P. JAGANMOHAN

CITATION: 1971 AIR 870; 1970 SCR (3) 147

CITATOR INFO: D 1973 SC2491 (8); R 1974 SC1510 (12); RF 1975 SC1564 (17,19,21,22,23,24,25,53,54,58); RF 1975 SC1652 (15,21); F 1977 SC 247 (5,16); RF 1980 SC1468 (4,13,15); R 1985 SC1689 (5); RF 1990 SC 820 (19)

ACT: Constitution of India, Arts. 31(1), 32-Corporation not being a citizen, whether it can enforce rights under Art. 32-Circumstances under which a taxing statute can be challenged on the ground of breach of fundamental rights by petition under Art. 32. Sales Tax-Sales 'in the course of export', what are-Sale by the Coffee Board, constituted under the Coffee Act 7 of 1942, to registered exporters, whether within the protection of Constitution of India Art. 286(1)(b) and Central Sales Tax Act 74 of 1956, s. 5(1).

HEADNOTE: Under Art. 286(1)(b) of the Constitution exemption from imposition of sales tax is granted in respect of a sale or purchase of goods in the course of the import of the goods into, or export of the goods out of, the territory of India. After the 6th Amendment to the Constitution, Parliament passed the Central Sales Tax Act, 1956 and in s. 5(1) thereof laid down that a sale of goods is 'in the course of export' out of the territory of India only if the sale or purchase either occasions such export or is effected by a transfer of documents of title to the goods after the goods have crossed the customs frontiers of India. Export of coffee outside India is controlled under the Coffee Act, 1942, by the Coffee Board. Coffee specially screened and selected is sold to registered exporters at 'export auctions'. Permits are given to such registered exporters to participate at the auction. The Coffee Board has prepared a set of rules which incorporate the terms and conditions of sale of coffee in the course of export.
Under Condition 26 of the Rules a registered dealer has to give an 'export guarantee' under which export can be made only to stipulated or approved destinations. The buyer at an export auction is free to export the coffee either by himself or through a forwarding agent, without selling the goods to the forwarding agent. Immediately after the export, evidence of the shipping has to be produced before the Chief Marketing Officer; otherwise under Condition 30 the permit holder is liable to fine and under Condition 31 the un-exported coffee is liable to be seized. In respect of certain sales of coffee to registered exporters in March and April 1963 the Coffee Board aforesaid claimed that as the sales in question had been made 'in the course of export' outside the territory of India they could not be taxed under the Madras General Sales Tax Act, 1959. The taxing authorities however held that the sales took place within Tamil Nadu State and were liable to be taxed under the Tamil Nadu Act. Provisional assessments were made and the tax not already paid was demanded. The Board thereupon filed petitions under Art. 32 of the Constitution challenging the levy. The State however, relying upon this Court's decision in the State Trading Corporation v. The Commercial Tax Officer, Visakhapatnam & Ors., contended that the Board was a Corporation and not a citizen and its petition under Art. 32 could not be entertained. On behalf of the State it was also urged that the petitioners did not show any breach of fundamental right justifying a petition under Art. 32; the Board had only claimed exemptions incorporated in the Constitution and the statute dealing with the levy and collection of sales tax and their grievance could be investigated and righted by taking recourse to the remedies provided in the relevant statute.

HELD: (Per Hidayatullah, C. J., G. K. Mitter, A. N. Ray and P. Jaganmohan Reddy, JJ.) (i) The case of the State Trading Corporation considered the application of Art.
19(1)(f) & (g) in relation to Corporations and it was held therein that they could not be regarded as citizens for the purpose of that Article. The question was not considered in relation to Art. 31(1), which is not limited in its operation to citizens. It mentions persons, who may be corporations or groups of persons. [155 F; 158 G-H] State Trading Corporation of India Ltd. v. Commercial Tax Officer, Visakhapatnam and Ors., [1964] 4 S.C.R. 99, distinguished. (ii) The majority in Smt. Ujjam Bai's case considered that a breach of fundamental right guaranteed by Art. 32(1) is involved in a demand for tax which is not leviable under a valid law. Therefore a demand of tax, not backed by a valid law, is a threat to property and gives rise to a right to move this Court under Art. 32. The petitioner in such circumstances is not compelled to wait or go through the lengthy procedure of appeals, but one of the conditions laid down in Ujjam Bai's case must be made out. [158 D-E; 159 C-D] The propositions settled by the Court in Ujjam Bai's case may be simply stated thus. The ruling recognises the existence of a right to move this Court under Art. 32. It is also pointed out that the proper way to correct them is to proceed under the provisions of appeal etc. or by way of proceedings under Art. 226 before the High Court. [156G-157A] Accordingly in the present case the petitioner could be allowed to raise the question of jurisdiction. [159D-E] Smt. Ujjam Bai v. State of Uttar Pradesh, [1963] S.C.R. 778, applied and explained. Ramjilal v. I.T.O., Mohindragarh, [1951] S.C.R. 127, Laxmanappa Hanumantappa v. Union of India, [1955] S.C.R. 769, State Trading Corporation of India v. Commercial Tax Officer, [1964] 4 S.C.R. 99, State Trading Corporation of India v. State of Mysore, 14 S.T.C. 416 and Firm A. T. B. Mehtab Majid & Co. v. State of Madras, 14 S.T.C. 355, referred to. (iii) The petitioner Board was not entitled to the exemption claimed.
The phrase 'sale in the course of export' comprises in itself three essentials: (i) that there must be a sale, (ii) that goods must actually be exported and (iii) the sale must be a part and parcel of the export. Therefore either the sale must take place when the goods are already in the process of being exported, which is established by their having already crossed the customs frontiers, or the sale must occasion the export. The word 'occasion' means 'to cause' or 'to be the immediate cause of'. The introduction of an intermediary between the seller and the importing buyer breaks the link, for then there are two sales, one to the intermediary and the other to the importer. The first sale is not in the course of export, for the export begins from the intermediary and ends with the importer. [163F-164B] Therefore the tests are that there must be a single sale which itself causes the export or is in the process or progress of export. There is no room for two or more sales in the course of export. [164 B-C] Whether the export is by agreement between the parties or by force of law, there can be only one immediate cause of export: a sale of goods. No other sale can qualify for the exemption under s. 5(1) read with Art. 286(1)(b). [164 C-F] The sales by the Coffee Board were sales for export and not in the course of export. There are two independent sales involved in the export programme. The first sale is a sale between the Coffee Board and the registered exporter; it is the second sale which is in the course of export since it causes the movement of goods between an exporter and an importer. [164 H-165 B] The rules compelling export by the registered exporters make no difference. The compulsion only compels persons who buy on their own to export in their own turn by entering into another agreement for sale. Even with the compulsion the sale may not result in export, for clauses 26, 30 and 31 visualise such happenings. [165 E-F] State of Travancore-Cochin & Ors. v. The Bombay Co. Ltd., [1952] S.C.R. 1112 and State of Travancore-Cochin & Ors. v. Shanmugha Vilas Cashew Nut Factory & Ors., [1954] S.C.R. 53, applied.
State of Mysore v. Mysore Spinning and Manufacturing Co., A.I.R. 1958 S.C. 1002, Burmah Shell Oil Storage and Distributing Company, [1961] 1 S.C.R. 902, East India Tobacco Co. v. State of Andhra Pradesh, (1962) 13 S.T.C. 529, B. K. Wadeyar v. Daulatram Rameshwarlall, [1961] 1 S.C.R. 924 and K. G. Khosla & Co. v. Dy. Commissioner of Commercial Taxes, (1966) 17 S.T.C. 473, referred to. Ben Gorm Nilgiri Plantations Company, Coonoor v. Sales Tax Officer, [1964] 7 S.C.R. 706, distinguished. Indian Coffee Board v. State of Madras, (1956) 7 S.T.C. 135, approved.

Per Sikri, J. (dissenting): When a word bears two meanings, the context must determine which is the appropriate meaning to be adopted. The word 'occasion' is an ordinary dictionary word and not a technical word. The dictionary meaning is wider than the meaning sought to be given in the majority judgment, which was 'to cause or to be the immediate cause'. In the context of (a) the need to develop export trade and (b) the idea underlying Art. 286, namely, to restrict the power of the States to levy taxes on sales which might hamper export trade, it is more appropriate to give the wider meaning to the word 'occasion' in the construction of s. 5(1). It would be wrong to say that in the case of the Bombay Co. Ltd. and in Shanmugha Vilas Cashew Nut Factory's case this Court accepted the narrower meaning of the word. [166B-G; 167D] Similar expression occurring in ss. 3 and 5(2) of the Act has been interpreted by this Court on a number of occasions and it is difficult to appreciate why the same expression bears a different meaning in s. 5(1). [168B-C] The heart of the matter lies in answering the question whether two sales can occasion an export. The question must be answered in the affirmative. Two sales can take place in the course of export if they are effected by the transfer of documents of title to the goods after the goods have crossed the customs frontiers of India and they both will be protected under s.
5(1) of the Act. Therefore it cannot be assumed that it is the intention of s. 5(1) that only one sale can enjoy the protection of s. 5(1). The word 'occasion' does not necessarily mean 'immediately cause'; it also means "to bring about especially in an incidental or subsidiary manner". If the sale brings about the export in an incidental or subsidiary manner it can be said to occasion the export. [169B-D] On the facts of the present case the Coffee Board, the sellers, have concern with the actual export of goods. They have made various provisions to see that the purchasers must export. Under Condition 26 the coffee sold must be exported; un-exported coffee may be seized. Thus the Coffee Board retains control over the goods. These conditions create a bond between the sale and eventual export. The possibility that in a particular case a purchaser might commit a breach of contract or law and not export does not change the nature of the transaction. [170G-171A] Case law referred to.

ORIGINAL JURISDICTION: Writ Petitions Nos. 216 and 217 of 1969. Petition under Art. 32 of the Constitution of India for enforcement of fundamental rights. M. C. Setalvad, K. J. Chandran, B. Datta, J. B. Dadachanji and Ravinder Narain, for the petitioner. S. V. Gupte and A. V. Rangam, for the respondents. C. K. Daphtary, B. Datta, J. B. Dadachanji and Ravinder Narain, for the intervener. The Judgment of M. HIDAYATULLAH, C.J., G. K. MITTER, A. N. RAY and P. JAGANMOHAN REDDY JJ. was delivered by HIDAYATULLAH, C.J. SIKRI, J. gave a dissenting Opinion.

Hidayatullah, C.J.-These are petitions under Art. 32 of the Constitution by the Coffee Board, Bangalore directed against the Joint Commercial Tax Officer, Madras and the State of Tamil Nadu questioning the demand of Sales Tax on certain transactions of sales which the Board claims are sales in the course of export of Coffee out of India and thus not liable to Sales Tax.
A preliminary objection was taken at the hearing that the petitions do not lie since no question of a fundamental right is involved. We shall deal with the preliminary objection later as the main petition and the preliminary objection are interlinked. But before we mention the points in controversy it is necessary to state the facts more fully. The petitioner is a statutorily constituted body and functions under the Coffee Act, 1942 (VII of 1942). This Act was passed to provide for the development, under the control of the Union, of the Coffee Industry. Its main function is to constitute a Coffee Board. Previously there was an Ordinance entitled the Indian Coffee Market Expansion Ordinance, 1940 (13 of 1940). A Board called the Indian Coffee Market Expansion Board was constituted under the Ordinance. The same Board now continues under the name 'Coffee Board'. On this Board, all interests are represented and some Members of Parliament and Officers of Government also have places. Sections 4 to 10 of the Act are concerned with the setting up of the Board. As nothing turns upon the constitution of the Board, it is not necessary to give the gist of those sections here. The Act imposes duties of Customs and Excise-the former on all Coffee produced in India and exported from India and the latter on coffee released by the Board for sale in India from its surplus pool. The Act compels the registration of all owners of Coffee Estates and licensing of curers and dealers. The Act next imposes a control on the sale, export and re-import of coffee into India. In respect of sale, it fixes prices for sale of coffee either wholesale or retail by registered owners and licensed curers for the purpose of sale in the Indian Market. The Board fixes an internal sale quota for each Estate owner and the owner has to observe this quota and also the price fixed.
The registered owner may not sell coffee unless it has been cured by a licensed establishment or it is sold uncured under a special licence. The Act next prohibits the export of coffee from India otherwise than by the Board or under the authorization granted by the Board. To this restriction, there are a few minor exceptions such as: coffee in specified quantities may be exported by taking on board ships or aircraft intended for consumption of the crew and the passengers, or carried by a passenger for his own use, or exported for special purposes specified by the Central Government. The Government is authorised to specify the total quantity of coffee to be exported during any year. Coffee once exported cannot be re-imported into India except under a permit. The registered owners are required to furnish periodical returns and to furnish such information as may be prescribed. Every registered owner, after dealing with the coffee for sale in Indian markets up to the internal quota fixed for him, must hand over to the Board all surplus coffee to be included in the Board's Surplus Pool. Similarly, curing establishments are required to surrender to the Board all surplus coffee. Small producers may, however, be exempted from the operation of this condition. After the coffee is delivered to the Board, the control of the Board begins. The Board classifies the coffee and assesses its value based on its quantity, kind and quality. Once the coffee is delivered to the Board, the registered owner or the licensed curer has no rights over the coffee except to receive its price in accordance with s. 34 of the Act. We are not concerned in this petition with any internal sales. The Board has elected to make monthly returns and in these petitions taxes on sales made in March and April, 1969 are challenged.
Provisional assessments have been made and demand for taxes held due, after allowing credit for taxes already paid, has been made by the respondents under the Madras General Sales Tax Act, 1959. Of these, certain sales are claimed to be exempted from Sales Tax under the Madras Act by reason of those being in the course of export of coffee out of India. The Taxing authorities held that those sales took place within Tamil Nadu State and were thus liable to sales tax under the Tamil Nadu Act. The point of difference arises thus. The Coffee Board follows a procedure for selling coffee which is to be exported out of India. Coffee for export is specially screened and selected. It is then exposed in auctions specially held for the purpose. These auctions are known as 'Export Auctions'. To be able to bid on these occasions, exporters have to get themselves registered. The Board maintains a list of registered exporters and gives to each of them a permit which authorises him to take part in the export auction. A specimen of the permit granted with the conditions attaching to it is exhibited as Annexure 'I'. The conditions which are imposed by the permit require a security deposit and a standing deposit from the registered exporter. The security may be in cash or by a guarantee from a bank or the Life Insurance Corporation of India. It is provided that the permit is liable to be withdrawn and cancelled by the Chief Coffee Marketing Officer if it is found that the permit holder has sold or attempted to sell coffee, bought by him at the export auctions, within the internal market without the written permission of the Chief Coffee Marketing Officer. Similar cancellation is liable to take place if some of the other conditions of the permit are not followed. The Coffee Board has also prepared a set of rules which incorporate the terms and conditions of sale of coffee in the course of export.
These rules have been exhibited as Annexure II and they deal with the conduct of auctions and the procedure to be followed therein. They also provide for additional conditions. Rule 4 provides that only dealers who have registered themselves as exporters of coffee with the Coffee Board and who hold permits from the Chief Coffee Marketing Officer in that behalf will be permitted to participate in the auctions. Agents may, however, participate on behalf of exporters but only for one principal at a time. Before the auction, the registered dealer or the agent must show the permit issued to him or have it in his custody for production, if so desired. Before the auction is held, a catalogue of lots of coffee to be put up for auction is issued with the reserve price fixed by the Chief Coffee Marketing Officer in his discretion. Samples of coffee are available for prospective buyers. An auction in the usual way takes place but no one is allowed to retract a bid once made. The highest bid is ordinarily accepted but if there are reasons to believe that the highest or any particular bid is not bona fide or genuine or is the outcome of concerted action for the purpose of controlling or manipulating prices or for other improper purposes or that the bidder is not likely to fulfill his contract or is otherwise undesirable, the bid may be rejected. After the bidding comes to an end and the bids have been accepted, the payment of price takes place in a particular way. We are not concerned with other provisions dealing with failure to fulfill the obligation as to payment of price etc., objections to quality and so on. We are concerned with condition no. 26 which is headed 'Export Guarantee'. This condition is vital in the consideration of the questions involved in this case and may be quoted: "26.
It is an essential condition of this Auction that the coffee sold thereat shall be exported to the destination stipulated in the Catalogue of lots, or to any other foreign country outside India as may be approved by the Chief Coffee Marketing Officer, within three months from the date of Notice of Tender issued by the Agent and that it shall not under any circumstances be diverted to another destination, sold, or be disposed of, or otherwise released in India. The aforesaid period may, on application by the Buyer, be extended by the Chief Coffee Marketing Officer in his discretion if he is satisfied that there is good ground to do so, subject nevertheless to the condition that as consideration for such extension, the Buyer shall pay the following additional amounts to the Board." The buyer is free to export the coffee either by himself or through any Forwarding Agent but the coffee must not be sold to the Forwarding Agents. In other words, the buyer himself arranges for the export of the coffee he has purchased at the auction and condition 29 imposes an obligation on the buyer to produce immediately after shipping evidence of the export of the coffee to the Chief Marketing Officer. If such evidence is not produced within a period of 60 days after the time allowed to make the export, the registered exporter is deemed to have committed a default and the provisions of conditions 30 and 31 then apply to him. These conditions are as follows: "30. If the Buyer fails or neglects to export the coffee as aforesaid within the prescribed time or within the period of extension, if any, granted to him, he shall be liable to pay a penalty calculated at Rs. 50/- per 50 kilos which shall be deductible from out of the amount payable to him as per Clause 31." "31.
On default by the Buyer to export the coffee aforesaid within the prescribed time or such extension thereof as may be granted, it shall be lawful for the Chief Coffee Marketing Officer, without reference to the buyer, to seize the un-exported coffee and for that purpose to make entry into any building, godown or warehouse where the said coffee may be stored, and take possession of the same and deal with it as if it were part and parcel of the Board's coffee held by them in their Pool Stock." Conditions 33 and 34 provide for inspection of coffee stocks and accounts and the buyer is required to send weekly returns. Other conditions need not be noticed here because they have no bearing upon the rival cases. The case of the petitioners is that the purchases at the export auctions are really sales by the Coffee Board in the course of export of coffee out of the territory of India since the sales themselves occasion the export of coffee and coffee so sold is not intended for use in India or for sale in the Indian markets. The case of the Sales Tax Authorities is that these sales are not inextricably bound up with the export of coffee and that the sales must be treated as sales taking place within the State of Tamil Nadu which are liable to sales tax under the Madras General Sales Tax Act. The dispute is confined to this aspect of the matter on merits. The preliminary objection to which we referred earlier is only this: that the petitions do not show a breach of a fundamental right. The petitioners only claim the benefit of the exemptions incorporated in the Constitution or the statute dealing with the levy and collection of sales tax, and their grievance can be investigated and righted by taking recourse to the appellate, revisional and other remedies under the relevant statute. We shall begin by considering the preliminary objection. The preliminary objection consists of two parts.
The first part questions the standing of the petitioner to move this Court for the enforcement of its so-called fundamental rights. It is argued that the petitioner, being a Corporation, has no right to move this Court for the enforcement of the fundamental right to hold, acquire and dispose of property since this right is available only to individuals who are citizens and a Corporation is not a citizen. Reliance is placed upon The State Trading Corporation of India Ltd. and others v. The Commercial Tax Officer, Visakhapatnam and others(1). (1) [1964] 4 S.C.R. 99. The second part is that there is ample provision for remedies under the Sales Tax Act to question the assessment and a petition under Art. 32 ignoring those provisions should not be entertained. The case of the State Trading Corporation considered the application of Art. 19(1)(f) and (g) in relation to Corporations. It was held that Corporations could not be regarded as citizens for the purpose of Art. 19 since that article is concerned with citizens and corporations have not been declared citizens by the Constitution. The question was not considered in relation to Art. 31(1). Some other petitions by corporations complaining of breach of Art. 31(1) were entertained by this Court and the petitioner before us relies on those cases as precedents. The true position may therefore be stated. Property as a fundamental right is mentioned in the Constitution in Arts. 19(1)(f), 31, 31A and 31B. In Art. 19(1)(f) it is provided: "19. Protection of certain rights regarding freedom of speech, etc. (1) All citizens shall have the right- (f) to acquire, hold and dispose of property; and" To this sub-clause there is a proviso in cl.
(5) which states that nothing in clause (f) shall affect the operation of any existing law in so far as it imposes, or prevent the State from making any law imposing, reasonable restrictions on the exercise of the right conferred, either in the interests of the general public or for the protection of the interests of any Scheduled Tribe. The main clause of the article recognises the institution of private property with all the concomitants of that institution, namely, the acquisition, holding and disposal of property. The proviso recognises, in the public interest, restrictions on the right in existing law or hereafter to be imposed by law. The institution of property thus recognised leaves freedom to acquire any kind of property except the one in relation to which there is a restrictive law. Thus it is that certain kinds of properties such as narcotic drugs, explosives, property in excess of a ceiling placed by law etc. cannot be acquired or held. This restriction curtails the general right and the curtailment must justify itself as a law in the public interest. Next we have Arts. 31, 31A and 31B. They occur in a section of Part III entitled "Rights to Property". The first of these three articles deals with compulsory acquisition of property. The second and third deal with the saving of laws providing for acquisition of Estates etc. and validation of certain Acts and Regulations declared void by Courts. Two fundamental concepts in Art. 31 are (a) that no person shall be deprived of his property save by authority of law, and (b) no property shall be compulsorily acquired or requisitioned save for a public purpose and save by authority of law which itself fixes the amount of compensation or specifies the principles on which compensation is to be determined and given and the manner thereof. Other provisions either restrict or amplify the operation of these two fundamental concepts. In Smt.
Ujjam Bai's(1) case the question was whether assessment of Sales Tax under a valid Act was open to challenge under Art. 32 on the ground of misconstruction of the Act or a notification under it. It was held that the answer was in the negative. That case has given some trouble in view of the different opinions expressed in it. It is therefore necessary to state simply the propositions which are settled by this Court. The ruling recognizes the existence of a right to move this Court under Art. 32. (1) [1963] S.C.R. 778. It is also pointed out that the proper way to correct them is to proceed under the provisions for appeal etc. or by way of proceedings under Art. 226 before the High Court. In Ramjilal v. I.T.O., Mohindragarh(1) and in Laxmanappa Hanumantappa v. Union of India(2), taxation laws were unsuccessfully challenged with the aid of Art. 31(1) read with Art. 265 in petitions purporting to be under Art. 32. In the former case, it was observed as follows: "In our opinion, the protection against the imposition and collection of taxes save by authority of the law directly comes from article 265 and is not secured by Clause (1) of article 31. Article 265 not being in Chapter III of the Constitution, its protection is not a fundamental right which can be enforced by an application to this Court under article 32; hence a petition founded on Art. 31(1) to this Court is misconceived and must fail." These propositions were not accepted by the majority in Ujjam Bai's(3) case. It was observed at p. 941 as follows: "If by these observations it is meant to convey that the protection under Art. 265 cannot be sought by a petition under Art. 32, I entirely agree. But if it is meant to convey that a taxing law which is opposed to fundamental rights must be tested only under Art. 265, I find it difficult to agree. Articles 31(1) and 265 speak of the same condition. A comparison of these two articles shows this: Art. 31(1)-"No person shall be deprived of his property save by authority of law". Art.
265: "No tax shall be levied or collected except by authority of law." This Chapter on Fundamental Rights hardly stands in need of support from Art. 265. If the law is void under that Chapter, and property is seized to recover a tax which is void, I do not see why Art. 32 cannot be invoked ... It is not possible to circumscribe Art. 32 by making the remedy depend only upon Art. 265." (1) [1951] S.C.R. 127. (2) [1955] S.C.R. 769. (3) [1963] S.C.R. 778. The position was summed up thus: "From this, it is clear that laws which do not offend Part III and are not otherwise ultra vires are protected from any challenge whether under Art. 265 or under the Chapter on Fundamental Rights. Where the laws are ultra vires but do not per se offend fundamental rights (to distinguish the two kinds of defects), they are capable of a challenge under Art. 32. Where they are intra vires otherwise but void being opposed to fundamental rights, they can be challenged under Art. 265 and also Art. 32." Das, J. (Sarkar, J. concurring) put the same thing differently. He observed that "if a quasi-judicial authority acts without jurisdiction or wrongly assumes jurisdiction by committing an error as to a collateral fact and the resultant action threatens or violates a fundamental right, the question of enforcement of that right arises and a petition under Art. 32 will lie". He added that "where a statute is intra vires but the action taken is without jurisdiction, then a petition under Art. 32 would be competent". Similar observations are to be found in the opinion of Kapur, J. Therefore, the majority view considered that a breach of fundamental right guaranteed by Art. 32(1) is involved in a demand for tax which is not leviable under a valid law. The application of these principles finds ample recognition in the following cases of the Supreme Court: (1) State Trading Corporation of India v. The Commercial Tax Officer(1) (2) State Trading Corporation of India v.
The State of Mysore(2) (3) Firm A. T. B. Mehtab Majid & Co. v. State of Madras(3). It will be noticed that they are all cases of Corporations and have been considered under Art. 32. The ruling in the State Trading Corporation case referred to earlier did not render these petitions incompetent because Art. 31(1) is not limited in its operation to citizens. It mentions "persons" who may be Corporations and groups of persons. In Indo China Steam Navigation Co. v. Jasjit Singh(4) there are some observations that in petitions under Art. 32, no claim of a fundamental right can be made under Art. 31(1) if the statute under which action is taken is valid, for then Art. 19(1)(f) does not apply. (1) [1964] 4 S.C.R. 99. (2) 14 S.T.C. 416. (3) 14 S.T.C. 355. (4) [1964] 6 S.C.R. 594. These observations run counter to Ujjam Bai's(2) case which is binding on us. The first part of the preliminary objection fails. The second part need not detain us. We have already held that demand of a tax, not backed by a valid law, is a threat to property and thus gives rise to a right to move this Court under Art. 32. The petitioner in such circumstances is not compelled to wait or go through the lengthy procedure of appeals. But the conditions, as laid down in Ujjam Bai's(1) case and analysed by us, must be made out. A threat to property unbacked by a valid law or a want of jurisdiction or a breach of the principles of natural justice must be clearly made out, to entitle one to the assistance of this Court. If that is successfully done then the provisions for other remedies do not stand in the way. We accordingly allowed the petitioner to raise the point of jurisdiction before us. We are concerned in these petitions with the exemption granted by Art. 286(1)(b) of the Constitution which reads: "286. Restrictions as to imposition of tax on the sale or purchase of goods. ..." Before the 6th Amendment, the Constitution did not contain any definition of the phrase 'in the course of export'.
By that Amendment Parliament has been given the power to indicate the principles on which that phrase is to be construed. In s. 5(1) of the Central Sales Tax Act, 1956 Parliament has given a legislative meaning to the phrase 'in the course of export' of goods out of the territory of India. It runs thus: "5(1) A sale or purchase of goods shall be deemed to take place in the course of the export of the goods out of the territory of India only if the sale or purchase either occasions such export or is effected by a transfer of documents of title to the goods after the goods have crossed the customs frontiers of India." (1) [1963] S.C.R. 778. The word 'only' in the sub-section shows that there are only two transactions which can come within the exception. In the case of sales to registered exporters, the second part does not apply and the matter must, therefore, be judged under the first part. Before the enactment of the Central Sales Tax Act, two rulings of this Court had construed the expression and as the legislative definition gives effect to what was laid down in those two cases a reference to them appears necessary. In The State of Travancore-Cochin and Ors. v. The Bombay Co. Ltd.(1) four meanings were considered and sales in the course of export were equated to sales which occasioned the export. This Court said: "A sale by export thus involves a series of integrated activities commencing from the agreement of sale with a foreign buyer and ending with the delivery". Again, "We are not much impressed with the contention that no sale or purchase can be said to take place "in the course of" export or import unless the property in the goods is transferred to the buyer during their actual movement, as for instance, where the shipping documents are cleared on payment, or on acceptance, ..." (1) [1952] S.C.R. 1112.
in our opinion, too narrow a construction upon that clause, in so far as it seeks to limit its operation only to sales and purchases effected during the transit of the goods, and would, if accepted, rob the exemption of much of its usefulness". In The State of Travancore-Cochin & Ors. v. Shanmugha Vilas Cashew Nut Factory & Ors.(1) it was again emphasised that sales and purchases which themselves occasion the export of the goods came within the exemption of Art. 286(1)(b). Purchases in the State by the exporter for purposes of export were not within the exemption but sales in the State by the exporter by transfer of shipping documents while the goods were beyond the customs barrier were held exempted. It was pointed out that the word 'course' denoted movement from one point to another and the expression 'in the course of' implied not only a period of time during which the movement was in progress but postulated also a connected relation. An act preparatory to export could not be regarded as done in the course of the export of the goods. It was like a purchase for production or manufacture. Therefore a sale in the course of export out of the territory of India should be understood as meaning a sale taking place not only during the activities directed to the end of exportation of the goods out of the country but also as part of or connected with such activities. Das, J. (as he then was) wished to add one more meaning which apparently was not accepted. It was that the expression indicated the last purchase by the exporter with a view to export. The meaning given in these two cases is well established. Indeed in the State of Mysore v. Mysore Spinning and Manufacturing Co.(2) this Court said that the point could not be said to be at large. Parliament having accepted the construction placed by this Court on the expression, we are now required to find out what is meant by the phrase sale which occasions the export.
In Burmah-Shell Oil Storage and Distributing Company v. C.T.O.(3) it was pointed out that the word 'export' did not mean a mere 'taking out of the country' but that the goods must be sent to a destination at which they could be said to be imported. The same meaning must obviously be given to the phrase 'in the course of export' or in the phrase 'occasions the export'. We have thus to see whether the sale is one which is connected with the export of the goods from this country to an importer in another country. The course of export can only begin if there is movement from an exporter to an importer as the result of the sale, and then only the sale can be said to occasion the export. (1) [1954] S.C.R. 53. (2) A.I.R. 1958 S.C. 1002. (3) [1961] 1 S.C.R. 902. In East India Tobacco Co. v. State of Andhra Pradesh(1) purchases made for executing specific orders received from foreign customers were held not to fall within the exemption. It is not enough that the sale is followed by an export or is made for the purpose or with a view to export; the sale must be integrally connected with the export. On the other hand in B. K. Wadeyar v. Daulatram Rameshwarlal(2) it was held that if property in the goods passed to the buyer after the crossing of the Customs frontier for export out of India the sale was in the course of export. This is because the course of export had already begun and therefore the sale followed the commencement of the export operation. Transactions of the type of the one in Wadeyar's case do not cause difficulty. There the course of export is quite clear and it is easy to see that the sale is integrally connected with export. Difficulty is likely to be felt when the sale is not so apparently connected. In K. G. Khosla & Co. v. Dy. Commissioner of Commercial Taxes(3) the phrase 'in the course of import' was considered.
It was held that in Section 3 of the Central Sales Tax Act the phrases 'occasions the movement of goods from one State to another' and 'occasions the import' mean the same thing. The movement, it was pointed out, must be the result of an agreement or an incident of the contract of sale, although it was not necessary that the sale should precede the import. A more direct authority is in Ben Gorm Nilgiri Plantations Company, Coonoor v. Sales Tax Officer(4). In that case sales of the tea-chests at auctions held at Fort Cochin were claimed to be exempt from the levy of sales tax by virtue of Art. 286(1)(b). The Tea Act, like the Coffee Act, was passed to control the tea industry. Under it also an export allotment for each year is declared and each tea estate receives an export quota allotment. The tea estate owner can obtain an export licence. The export quota licence is transferable. A manufacturer obtains from the Tea Board allotment of export quota. The manufacturer then puts the tea in chests which are sold in public auctions. Bids are made by agents or intermediaries of foreign buyers. Agents and intermediaries then obtain licences from the Central Government for export. The question was whether the sale to the agent or the intermediary was a sale in the course of export out of India. This Court found nothing in the transaction from which a bond could be said to spring between the sale and the intended export linking them as part of the same transaction. The sellers had no concern with the export, the sale imposed or involved no obligation to export and there was a possibility that the goods might be diverted for internal consumption. The Court considered the sales as sales for export and not in the course of export. In laying this down the Court observed " . . .". (1) [1962] 13 S.T.C. 529. (2) [1961] 1 S.C.R. 924. (3) [1966] 17 S.T.C. 473. (4) [1964] 7 S.C.R. 706.
The case however did not attempt to lay down any tests, observing that each case will depend on its own facts. We agree that the facts must always play their due part. We think it is possible to state some tests which can be applied in all cases: the sale must be connected with the export, either by the goods having already crossed the Customs frontiers, or the sale must occasion the export; the movement from India to a foreign destination must be established and the sale must be a link in the same export for which the sale is held. To establish export a person exporting and a person importing are necessary elements and the course of export is between them. Introduction of a third party dealing independently with the seller on the one hand and with the importer on the other breaks the link between the two, for then there are two sales, one to the intermediary and the other to the importer. The first sale is not in the course of export for the export begins from the intermediary and ends with the importer. Therefore the tests are that there must be a single sale which itself causes the export ... the export of the goods. No other sale can qualify for the exemption under Section 5(1) read with Article 286(1)(b). The question is whether the sale to the registered exporters can be said to be exempted. In the Indian Coffee Board v. The State of Madras(1) Rajagopalan and Rajagopala Ayyangar, JJ. held that the sale to the registered exporter was a sale for export and it was only the contract of sale entered into by the registered exporter with the buyer abroad that could be brought within the scope of the exemption. The test applied by the High Court is the test we have indicated and which has found approval in the two earlier cases of this Court which have received legislative recognition. The question to ask is: does the sale to the registered exporter occasion the export which ultimately takes place?
The answer is that on the rulings it must be an integral part of the precise export before it can be said to have occasioned that particular export. (1) [1956] 7 S.T.C. 135. Here there are two independent sales involved in the export programme. The first is a sale between the Coffee Board and the registered exporter; the second is the sale by the registered exporter to the importer abroad. Therefore, the first sale has no connection with the second sale which is in the course of export, that is to say, movement of goods between an exporter and an importer. Mr. Setalvad tried to argue that the first sale by the Coffee Board included in it a compulsion to export and he relied upon the observations of Shah, J. in Ben Gorm Nilgiri Plantations case. These observations were not intended to give exemption to sales for export but to sales in the course of export. One of the indicia of a sale in the course of export is the compulsion to export because the sale which is protected must be itself inextricably bound up with the export. If this were not so a chain of sales, each making a mere condition for terminal export, will be exempted and the distinction between a sale for export and a sale in the course of export will completely disappear. In the Ben Gorm Nilgiri Plantations case even the purchases by agents of foreign importers were described as sales for export. No doubt it was said that the sale to the agents did not contain a compulsion to export to the principal but that was said so that the causal connection between the sale and the export could be established. The compulsion to export here is of a different character. It only compels persons who buy on their own to export in their own turn by entering into another agreement for sale. The first sale is, therefore, an independent sale. It is a sale for export. Even with the compulsion the export may not result, for clauses 26, 30 and 31 visualize such happenings. It follows, therefore, that unless the sale is inextricably bound up with a particular export it cannot be said to be in its course.
If no particular export is in sight the sale by the Coffee Board cannot go beyond the description of sale for export. For these reasons we are of opinion that the decision of the Madras High Court in the case cited above is correct. For the same reasons we are of opinion that this case does not fall within the ruling in Ben Gorm Nilgiri Plantations' case. The petitioner cannot claim exemption from the tax and the department was right in demanding the tax. The petitions fail and will be dismissed with costs. Sikri, J. I have had the advantage of reading the draft of the judgment prepared by the learned Chief Justice. I agree with him that the preliminary objection raised by the respondents is devoid of force, but I regret that I cannot concur with the conclusion that the sales in question were not made in the course of export. With utmost respect, in my opinion he has given an unduly limited meaning to the expression 'if the sale or purchase occasions such export'. My reasons in coming to this conclusion are, in brief, as follows: In the Shorter Oxford Dictionary (Illustrated) the word "occasion" when used as a verb means: "To give occasion to (a person); to induce; ... To be the occasion or cause of (something); to cause, bring about, especially in an incidental or subsidiary manner." It is said that in the context the word "occasion" means "to cause or to be the immediate cause." When a word bears two meanings the context must determine which is the appropriate meaning to be adopted. What then is the context with which we are dealing? The context is the export trade and its undoubted economic importance to this country. Further, each country is more and more organising the export trade and directing its flow in particular directions. The course of export is not the same as what it was before the intervention of Governments or their agencies. Moreover the idea underlying art.
286(1)(b) was to restrict the powers of the State to levy taxes on sales or purchases in the course of export so that the export trade may not be hampered. As observed by Patanjali Sastri, C.J. in State of Travancore-Cochin v. The Bombay Co. Ltd.(1), "lest similar reasoning should lead to the imposition of such cumulative burden on the export-import trade of this country, which is of great importance to the nation's economy, the Constituent Assembly may well have thought it necessary to exempt in terms sales by export and purchases by import from sales tax by inserting article 286(1)(b) in the Constitution." In my view, keeping in view the aforementioned considerations the wider meaning of the word "occasion" is the more appropriate to apply in the construction of s. 5(1). It is said that Parliament had accepted the narrower meaning of the word "occasion" because this was the meaning ascribed to it by this Court in State of Travancore-Cochin v. The Bombay Co. Ltd.(1) and State of Travancore-Cochin v. Shanmugha Vilas Cashew Nut Factory(2). I, with respect, am unable to appreciate this argument. In the former case this Court was concerned with (1) [1952] S.C.R. 1112, 1119. (2) [1954] S.C.R. 53. export sales of certain commodities to foreign buyers on C.I.F. or f.o.b. terms. After setting out the four views presented before it, Patanjali Sastri, C.J., speaking on behalf of the Court, observed: "We are clearly of opinion that the sales here in question, which occasioned the export in each case, fall within the scope of the exemption under article 286(1)(b)." Later he said: "We accordingly hold that whatever else may or may not fall within article 286(1)(b), sales and purchases which themselves occasion the export or the import of the goods, as the case may be, out of or into the territory of India come within the exemption and that is enough to dispose of these appeals." (emphasis supplied).
It seems to me that it is wrong to interpret that decision to mean that the Court held that in no other case can sales "occasion" an export. In fact the learned Chief Justice says to the contrary by saying "whatever else may or may not fall." In State of Travancore-Cochin v. Shanmugha Vilas Cashew Nut Factory(1) this Court, inter alia, held that the last purchase of goods made by the exporter for the purpose of exporting them to implement orders already received from a foreign buyer or expected to be received subsequently in the course of business was not within the protection of clause (1)(b). In the course of discussion, apart from referring to a passage from the earlier judgment in which the word "occasion" is used, the word "occasion" is not mentioned again. No mention is made in this judgment of facts similar to those which are present in the present case. What happens when there is legal certainty that the goods are headed for a foreign destination and will not be diverted to the domestic market was not considered as the question did not arise. In State of Mysore v. Mysore Spinning and Manufacturing Co.(2) the facts are somewhat closer to the present case, but it does not appear that there was legal compulsion to export and that the Mills, who sold the cloth, could compel the purchasers to export. The general observations therein must be read in the light of the facts. With respect, I think it is erroneous to assume that Parliament by using the word "occasion" must be deemed to have used it in (1) [1954] S.C.R. 53. (2) A.I.R. 1958 S.C. 1002. the same sense as Patanjali Sastri, C.J., did. It is an ordinary dictionary word and not a technical word. He was using it to describe the transactions in those cases, and the narrower meaning was apposite. Even there he guarded himself by saying "whatever else may or may not fall within art. 286(1)(b)."
It should also be noted that Patanjali Sastri, C.J., had also qualified the word "occasion" by adding the words "by themselves". These words do not exist in the Act. A similar expression occurring in ss. 3 and 5(2) of the Act has been interpreted by this Court on a number of occasions and I cannot appreciate why the same expression bears a different meaning in s. 5(1). The earlier cases are referred to in K. G. Khosla v. Deputy Commissioner of Commercial Taxes(1). Shah, J., in Tata Iron and Steel Co. v. S. R. Sarkar(2) had interpreted s. 3 of the Act as follows: " . . . " In other words it was held that a sale occasions the movement of goods when the movement "is the result of a covenant or incident of contract of sale." Applying this test this Court observed in Khosla & Co. v. Deputy Commissioner of Commercial Taxes(1) at pp. 488-489: " . . . " (1) 17 S.T.C. 473. (2) 11 S.T.C. 655, 667. It will be noticed that the sale which was sought to be taxed but was exempted was the sale to Southern Railway and the contract under which the movement resulted was with the Director General of Supplies. The heart of the matter lies in answering one question. Can two sales occasion an export? I find no difficulty in answering this question in the affirmative. Two sales can take place in the course of export if they are effected by a transfer of documents of title to the goods after the goods have crossed the customs frontier of India, and they both will be protected under s. 5(1) of the Act. Therefore, it cannot be assumed that it is the intention of s. 5(1) that only one sale can enjoy the protection of s. 5(1). Accordingly, apart from any assumption, can two sales occasion an export? As I have said, "occasion" does not necessarily mean "immediately cause"; it also means to "bring about especially in an incidental or subsidiary manner". If the sale by the appellant brings about the export in an incidental or subsidiary manner it can be said to occasion the export.
It was in view of those considerations that Shah, J., speaking for the Court, had observed in Ben Gorm Nilgiri Plantations Co. v. The Sales Tax Officer(1): " . . . not necessarily to be regarded as one in the course of export, unless the sale occasions export." In this passage Shah, J., clearly visualised that a transaction of sale which is preliminary to export may be regarded as in the course of export if the sale occasions the export. The test postulated may be that there must be an integral relation or bond between the sale and export. Why Shah, J., held that the sales were not in the course of export was, to use his words: (1) 15 S.T.C. 753, 759. "That the tea chests are sold together with export rights imputes knowledge to the seller that the goods are purchased with the intention of exporting. But there is nothing in the transaction from which springs a bond between the sale and the intended export linking them up as part of the same transaction .... There is no statutory obligation upon the purchaser to export the chests of tea purchased by him with the export rights. ... The sellers have no concern with the actual export of the goods, once the goods are sold. They have no control over the goods. There is, therefore, no direct connection between the sale and export of the goods which would make them parts of an integrated transaction of sale in the course of export." The case, with respect, points out clearly what was lacking in the transaction. It is one way of laying down tests. If these incidents had not been missing the Court would have surely held the sale to be in the course of export. It seems to me that this judgment is in effect overruling earlier decisions of this Court without saying so. The Calcutta High Court (Ray and Basu, JJ.) reviewed the Supreme Court cases exhaustively in S. K. Roy v.
Additional Member, Board of Revenue(1) and came to the conclusion that the mere fact that there is no contract between the seller and the foreign buyer does not conclusively establish that a transaction cannot be one 'in the course of export'. It may still be held to be such a transaction provided it is established that the contract between the seller and the third party 'occasions' the export. Basu, J., followed this decision in Serajuddin & Company v. Commercial Tax Officer(2). On the facts of this case, the Coffee Board, the sellers, have concern with the actual export of goods. They have made various provisions to see that the purchasers must export. Condition 26, quoted by the learned Chief Justice, provides that unexported coffee may be seized. Thus the Coffee Board retains control over the goods. (1) 18 S.T.C. 379. (2) 23 S.T.C. 259. These conditions create a bond between the sale and eventual export. The possibility that in a particular case a purchaser might commit a breach of contract or law and not export does not change the nature of the transaction. I would accordingly allow the petition and declare that the sales held by the Coffee Board at the export auctions were in the course of export and exempt under art. 286(1)(b) of the Constitution, read with s. 5 of the Central Sales Tax Act, 1956, and quash the impugned assessments in so far as they assess such sales. ORDER In accordance with the majority judgment, the petitions fail and are dismissed with costs. G.C.
http://www.advocatekhoj.com/library/judgments/index.php?go=1969/october/53.php
Entity Framework - Code First in the ADO.NET Entity Framework 4.1

By Rowan Miller | May 2011

The ADO.NET Entity Framework 4.1 was released back in April and includes a series of new features that build on top of the existing Entity Framework 4 functionality that was released in the Microsoft .NET Framework 4 and Visual Studio 2010. The Entity Framework 4.1 is available as a standalone installer (msdn.microsoft.com/data/ee712906), as the "EntityFramework" NuGet package and is also included when you install ASP.NET MVC 3.0. The Entity Framework 4.1 includes two new main features: the DbContext API and Code First. In this article, I'm going to cover how these two features can be used to develop applications. We'll take a quick look at getting started with Code First and then delve into some of the more advanced capabilities.

The DbContext API is a simplified abstraction over the existing ObjectContext type and a number of other types that were included in previous releases of the Entity Framework. The DbContext API surface is optimized for common tasks and coding patterns. Common functionality is exposed at the root level and more advanced functionality is available as you drill down through the API.

Code First is a new development pattern for the Entity Framework that provides an alternative to the existing Database First and Model First patterns. Code First lets you define your model using CLR classes; you can then map these classes to an existing database or use them to generate a database schema. Additional configuration can be supplied using Data Annotations or via a fluent API.

Getting Started

Code First has been around for a while, so I'm not going to go into detail on getting started. You can complete the Code First Walkthrough (bit.ly/evXlOc) if you aren't familiar with the basics. Figure 1 is a complete code listing to help get you up and running with a Code First application.
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System;

namespace Blogging
{
    class Program
    {
        static void Main(string[] args)
        {
            Database.SetInitializer<BlogContext>(new BlogInitializer());

            // TODO: Make this program do something!
        }
    }

    public class BlogContext : DbContext
    {
        public DbSet<Blog> Blogs { get; set; }
        public DbSet<Post> Posts { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // TODO: Perform any fluent API configuration here!
        }
    }

    public class Blog
    {
        public int BlogId { get; set; }
        public string Name { get; set; }
        public string Abstract { get; set; }
        public virtual ICollection<Post> Posts { get; set; }
    }

    public class RssEnabledBlog : Blog
    {
        public string RssFeed { get; set; }
    }

    public class Post
    {
        public int PostId { get; set; }
        public string Title { get; set; }
        public string Content { get; set; }
        public byte[] Photo { get; set; }
        public virtual Blog Blog { get; set; }
    }

    public class BlogInitializer : DropCreateDatabaseIfModelChanges<BlogContext>
    {
        protected override void Seed(BlogContext context)
        {
            context.Blogs.Add(new RssEnabledBlog
            {
                Name = "blogs.msdn.com/data",
                RssFeed = "",
                Posts = new List<Post>
                {
                    new Post { Title = "Introducing EF4.1" },
                    new Post { Title = "Code First with EF4.1" },
                }
            });

            context.Blogs.Add(new Blog { Name = "romiller.com" });
            context.SaveChanges();
        }
    }
}

For the sake of simplicity, I'm choosing to let Code First generate a database. The database will be created the first time I use BlogContext to persist and query data. The rest of this article will apply equally to cases where Code First is mapped to an existing database schema. You'll notice I'm using a database initializer to drop and recreate the database as we change the model throughout this article.

Mapping with the Fluent API

Code First begins by examining your CLR classes to infer the shape of your model. A series of conventions are used to detect things such as primary keys.
You can override or add to what was detected by convention using Data Annotations or a fluent API. There are a number of articles about achieving common tasks using the fluent API, so I'm going to look at some of the more advanced configuration that can be performed. In particular, I'm going to focus on the "mapping" sections of the API. A mapping configuration can be used to map to an existing database schema or to affect the shape of a generated schema. The fluent API is exposed via the DbModelBuilder type and is most easily accessed by overriding the OnModelCreating method on DbContext.

Entity Splitting

Entity splitting allows the properties of an entity type to be spread across multiple tables. For example, say I want to split the photo data for posts out into a separate table so that it can be stored in a different file group. Entity splitting uses multiple Map calls to map a subset of properties to a specific table. In Figure 2, I'm mapping the Photo property to the "PostPhotos" table and the remaining properties to the "Posts" table. You'll notice that I didn't include the primary key in the list of properties. The primary key is always required in each table; I could have included it, but Code First will add it in for me automatically.

Table-per-Hierarchy (TPH) Inheritance

TPH involves storing the data for an inheritance hierarchy in a single table and using a discriminator column to identify the type of each row. Code First will use TPH by default if no configuration is supplied. The discriminator column will be aptly named "Discriminator" and the CLR type name of each type will be used for the discriminator values. You may, however, want to customize how TPH mapping is performed. To do this, you use the Map method to configure the discriminator column values for the base type and then Map<TEntityType> to configure each derived type.
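The Figure 2 listing itself is not reproduced here; the following is a sketch of the entity-splitting configuration described above, written against the EF 4.1 fluent API and the Post class from Figure 1 (the table names come from the text; the exact grouping of the remaining properties is an assumption):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Entity splitting: spread Post across two tables.
    // The primary key (PostId) is added to both tables automatically.
    modelBuilder.Entity<Post>()
        .Map(m =>
        {
            m.Properties(p => new { p.Title, p.Content }); // remaining properties
            m.ToTable("Posts");
        })
        .Map(m =>
        {
            m.Properties(p => new { p.Photo }); // photo data in its own table
            m.ToTable("PostPhotos");
        });
}
```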
Here I'm using a "HasRssFeed" column to store a true/false value to distinguish between "Blog" and "RssEnabledBlog" instances:

In the preceding example, I'm still using a standalone column to distinguish between types, but I know that RssEnabledBlogs can be identified by the fact that they have an RSS feed. I can rewrite the mapping to let the Entity Framework know that it should use the column that stores "Blog.RssFeed" to distinguish between types. If the column has a non-null value, it must be an RssEnabledBlog:

Table-per-Type (TPT) Inheritance

TPT involves storing all properties from the base type in a single table. Any additional properties for derived types are then stored in separate tables with a foreign key back to the base table. TPT mapping uses a Map call to specify the base table name and then Map<TEntityType> to configure the table for each derived type. In the following example, I'm storing data that's common to all blogs in the "Blogs" table and data specific to RSS-enabled blogs in the "RssBlogs" table:

Table-per-Concrete Type (TPC) Inheritance

TPC involves storing the data for each type in a completely separate table with no foreign key constraints between them. The configuration is similar to TPT mapping, except you include a "MapInheritedProperties" call when configuring each derived type. MapInheritedProperties lets Code First know to remap all properties that were inherited from the base class to new columns in the table for the derived class:

By convention, Code First will use identity columns for integer primary keys. However, with TPC there's no longer a single table containing all blogs that can be used to generate primary keys. Because of this, Code First will switch off identity when you use TPC mapping. If you're mapping to an existing database that has been set up to generate unique values across multiple tables, you can re-enable identity via the property configuration section of the fluent API.
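The listings for these inheritance strategies are elided in this extract. Hedged sketches of what each configuration might look like with the EF 4.1 fluent API (table and column names follow the text) are:

```csharp
// TPH with a custom "HasRssFeed" discriminator column
modelBuilder.Entity<Blog>()
    .Map(m => m.Requires("HasRssFeed").HasValue(false))
    .Map<RssEnabledBlog>(m => m.Requires("HasRssFeed").HasValue(true));

// TPH reusing the RssFeed column: a non-null value means RssEnabledBlog
modelBuilder.Entity<Blog>()
    .Map<RssEnabledBlog>(m => m.Requires(b => b.RssFeed).HasValue());

// TPT: shared columns in "Blogs", RSS-specific columns in "RssBlogs"
modelBuilder.Entity<Blog>()
    .Map(m => m.ToTable("Blogs"))
    .Map<RssEnabledBlog>(m => m.ToTable("RssBlogs"));

// TPC: "RssBlogs" gets its own copy of all inherited columns
modelBuilder.Entity<Blog>()
    .Map(m => m.ToTable("Blogs"))
    .Map<RssEnabledBlog>(m =>
    {
        m.MapInheritedProperties();
        m.ToTable("RssBlogs");
    });
```

Note that these are alternatives; only one of them would be applied to a given model.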
Hybrid Mappings

Of course, the shape of your schema isn't always going to conform to one of the patterns that I've covered, especially if you're mapping to an existing database. The good news is that the mapping API is composable and you can combine multiple mapping strategies. Figure 3 includes an example that shows combining Entity Splitting with TPT Inheritance Mapping. The data for Blogs is split between "Blogs" and "BlogAbstracts" tables, and the data specific to RSS-enabled blogs is stored in a separate "RssBlogs" table.

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Blog>()
        .Map(m =>
        {
            m.Properties(b => new { b.Name });
            m.ToTable("Blogs");
        })
        .Map(m =>
        {
            m.Properties(b => new { b.Abstract });
            m.ToTable("BlogAbstracts");
        })
        .Map<RssEnabledBlog>(m =>
        {
            m.ToTable("RssBlogs");
        });
}

Change Tracker API

Now that I've looked at configuring database mappings, I want to spend some time working with data. I'm going to delve straight into some more advanced scenarios; if you aren't familiar with basic data access, take a minute to read through the Code First Walkthrough mentioned earlier.

State Information for a Single Entity

In many cases, such as logging, it's useful to get access to the state information for an entity. This can include things such as the state of the entity and which properties are modified. DbContext provides access to this information for an individual entity via the "Entry" method. The code snippet in Figure 4 loads one "Blog" from the database, modifies a property and then prints out the current and original values for each property to the console.
static void Main(string[] args)
{
    Database.SetInitializer<BlogContext>(new BlogInitializer());

    using (var db = new BlogContext())
    {
        // Change the name of one blog
        var blog = db.Blogs.First();
        blog.Name = "ADO.NET Team Blog";

        // Print out original and current value for each property
        var propertyNames = db.Entry(blog).CurrentValues.PropertyNames;
        foreach (var property in propertyNames)
        {
            System.Console.WriteLine(
                "{0}\n Original Value: {1}\n Current Value: {2}",
                property,
                db.Entry(blog).OriginalValues[property],
                db.Entry(blog).CurrentValues[property]);
        }
    }

    Console.ReadKey();
}

When the code in Figure 4 is run, the console output is as follows:

BlogId
 Original Value: 1
 Current Value: 1
Name
 Original Value: blogs.msdn.com/data
 Current Value: ADO.NET Team Blog
Abstract
 Original Value:
 Current Value:
RssFeed
 Original Value:
 Current Value:

State Information for Multiple Entities

DbContext allows you to access information about multiple entities via the "ChangeTracker.Entries" method. There's both a generic overload that gives entities of a specific type and a non-generic overload that gives all entities. The generic parameter doesn't need to be an entity type. For example, you could get entries for all loaded objects that implement a specific interface. The code in Figure 5 demonstrates loading all blogs into memory, modifying a property on one of them and then printing out the state of each tracked blog.
static void Main(string[] args)
{
    Database.SetInitializer<BlogContext>(new BlogInitializer());

    using (var db = new BlogContext())
    {
        // Load all blogs into memory
        db.Blogs.Load();

        // Change the name of one blog
        var blog = db.Blogs.First();
        blog.Name = "ADO.NET Team Blog";

        // Print out state for each blog that is in memory
        foreach (var entry in db.ChangeTracker.Entries<Blog>())
        {
            Console.WriteLine("BlogId: {0}\n State: {1}\n",
                entry.Entity.BlogId, entry.State);
        }
    }
}

When the code in Figure 5 is run, the console output is as follows:

BlogId: 1
 State: Modified

BlogId: 2
 State: Unchanged

Querying Local Instances

Whenever you run a LINQ query against a DbSet, the query is sent to the database to be processed. This guarantees that you always get complete and up-to-date results, but if you know that all the data you need is already in memory, you can avoid a round-trip to the database by querying the local data. The code in Figure 6 loads all blogs into memory and then runs two LINQ queries over those blogs that don't hit the database.

static void Main(string[] args)
{
    Database.SetInitializer<BlogContext>(new BlogInitializer());

    using (var db = new BlogContext())
    {
        // Load all blogs into memory
        db.Blogs.Load();

        // Query for blogs ordered by name
        var orderedBlogs = from b in db.Blogs.Local
                           orderby b.Name
                           select b;

        Console.WriteLine("All Blogs:");
        foreach (var blog in orderedBlogs)
        {
            Console.WriteLine(" - {0}", blog.Name);
        }

        // Query for all RSS enabled blogs
        var rssBlogs = from b in db.Blogs.Local
                       where b is RssEnabledBlog
                       select b;

        Console.WriteLine("\n Rss Blog Count: {0}", rssBlogs.Count());
    }

    Console.ReadKey();
}

When the code in Figure 6 is run, the console output is as follows:

All Blogs:
 - blogs.msdn.com/data
 - romiller.com

 Rss Blog Count: 1

Navigation Property as a Query

DbContext allows you to get a query that represents the contents of a navigation property for a given entity instance.
This allows you to shape or filter the items you want to bring into memory and can avoid bringing back unnecessary data. For example, I have an instance of blog and want to know how many posts it has. I could write the code shown in Figure 7, but it's relying on lazy loading to bring all the related posts back into memory just so that I can find the count.

static void Main(string[] args)
{
    Database.SetInitializer<BlogContext>(new BlogInitializer());

    using (var db = new BlogContext())
    {
        // Load a single blog
        var blog = db.Blogs.First();

        // Print out the number of posts
        Console.WriteLine("Blog {0} has {1} posts.",
            blog.BlogId, blog.Posts.Count());
    }

    Console.ReadKey();
}

That's a lot of data being transferred from the database and taking up memory compared to the single integer result I really need. Fortunately, I can optimize my code by using the Entry method on DbContext to get a query representing the collection of posts associated with the blog. Because LINQ is composable, I can chain on the "Count" operator and the entire query gets pushed to the database so that only the single integer result is returned (see Figure 8).

static void Main(string[] args)
{
    Database.SetInitializer<BlogContext>(new BlogInitializer());

    using (var db = new BlogContext())
    {
        // Load a single blog
        var blog = db.Blogs.First();

        // Query for count
        var postCount = db.Entry(blog)
            .Collection(b => b.Posts)
            .Query()
            .Count();

        // Print out the number of posts
        Console.WriteLine("Blog {0} has {1} posts.",
            blog.BlogId, postCount);
    }

    Console.ReadKey();
}

Deployment Considerations

So far I've looked at how to get up and running with data access. Now let's look a little further ahead at some things to consider as your app matures and you approach a production release.

Connection Strings: So far I've just been letting Code First generate a database on localhost\SQLEXPRESS. When it comes time to deploy my application, I probably want to change the database that Code First is pointed at.
The recommended approach for this is to add a connection string entry to the App.config file (or Web.config for Web applications). This is also the recommended approach for using Code First to map to an existing database. If the connection string name matches the fully qualified type name of the context, Code First will automatically pick it up at run time. However, the recommended approach is to use the DbContext constructor that accepts a connection name using the name=<connection string name> syntax. This ensures that Code First will always use the config file. An exception will be thrown if the connection string entry can't be found.

The following example shows the connection string section that could be used to affect the database that our sample application targets:

Here's the updated context code:

Note that enabling "Multiple Active Result Sets" is recommended. This allows two queries to be active at the same time. For example, this would be required to query for posts associated with a blog while enumerating all blogs.

Database Initializers

By default, Code First will create a database automatically if the database it targets doesn't exist. For some folks, this will be the desired functionality even when deploying, and the production database will just be created the first time the application launches. If you have a DBA taking care of your production environment, it's far more likely that the DBA will create the production database for you, and once your application is deployed it should fail if the database it targets doesn't exist. In this article, I've also overridden the default initializer logic and have configured the database to be dropped and recreated whenever my schema changes. This is definitely not something you want to leave in place once you deploy to production. The recommended approach for changing or disabling initializer behavior when deploying is to use the App.config file (or Web.config for Web applications).
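The connection string section and updated context code referred to under "Connection Strings" above aren't reproduced in this extract. A sketch of what the config entry might look like (the connection string name, server and database names here are placeholders of my own) is:

```xml
<connectionStrings>
  <add name="Blogging"
       providerName="System.Data.SqlClient"
       connectionString="Server=.\SQLEXPRESS;Database=Blogging;Integrated Security=True;MultipleActiveResultSets=True" />
</connectionStrings>
```

The context would then pass the name to the base constructor, for example public BlogContext() : base("name=Blogging") { }, so that an exception is thrown if the entry can't be found.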
In the appSettings section, add an entry whose key is DatabaseInitializerForType, followed by the context type name and the assembly in which it's defined. The value can either be "Disabled" or the initializer type name followed by the assembly in which it's defined. The following example disables any initializer logic for the context I've been using in this article:

The following example will change the initializer back to the default functionality that will create the database only if it doesn't exist:

User Accounts

If you decide to let your production application create the database, the application will need to initially execute using an account that has permissions to create the database and modify schema. If these permissions are left in place, the potential impact of a security compromise of your application is greatly increased. I highly recommend that an application be run with the minimal set of permissions required to query and persist data.

More to Learn

Summing up, in this article I took a quick look at getting started with Code First development and the new DbContext API, both of which are included in the ADO.NET Entity Framework 4.1. You saw how the fluent API can be used to map to an existing database or to affect the shape of a database schema that's generated by Code First. I then looked at the change tracker API and how it can be used to query local entity instances and additional information about those instances. Finally, I covered some considerations for deploying an application that uses Code First for data access.

If you'd like to know more about any of the features included in the Entity Framework 4.1, visit msdn.com/data/ef. You can also use the Data Developer Center forum to get help with using the Entity Framework 4.1: bit.ly/166o1Z.

Rowan Miller is a program manager on the Entity Framework team at Microsoft. You can learn more about the Entity Framework on his blog at romiller.com.
Thanks to the following technical expert for reviewing this article: Arthur Vickers
https://msdn.microsoft.com/en-us/magazine/hh126815.aspx
Did I use the code tag correctly? Let me know. I'm trying to figure out how to error trap this program so that if the user doesn't put in a valid response, a message will come up and they will go back to the beginning until they do it correctly. I also want to subtract the money they spend and tell them their remaining amount, and I want to keep track of the number of items they purchase. When I do it, all that comes up is a whole bunch of numbers. When you compile this program I want this format, but the numbers aren't right. Can anyone help?

/* SIUE's bookstore is having a special sale on items embossed with the cougar logo.
   For a limited time, three items, mugs, teeshirts, and pens are offered at a reduced
   rate with tax included to simplify the sales. Mugs are going for $2.50, teeshirts
   for $9.50 and pens for 75 cents. Coincidentally, your parents (or spouse, friend,
   coworker, or other person you know off-campus) just gave you $30.00 to buy SIUE
   stuff for them. */

#include <iostream>
using namespace std;

int main()
{
    char symb;
    int item_purch, numb_item_purch, quit;
    double mug, teeshirt, pen, tot_mon, curr_cash, mon_spent;

    cout << "What do you want to buy today?\n";
    cout << "You have 30 dollars to spend.\n";
    cout << " A) Mugs $2.50 " << endl;
    cout << " B) teeshirt $9.50 " << endl;
    cout << " C) Pens .75 cents " << endl;
    cout << " D) Quit" << endl;
    cout << " Enter your letter and press Return when finished" << endl;
    cin >> symb;

    switch (symb)
    {
    case 'A':
    case 'a':
        cout << " You have choosen A) mugs for $2.50 \n";
        mug = 2.50;
        cout << " You have " << curr_cash << " remaining \n ";
        tot_mon = 30;
        curr_cash = tot_mon - mug;
        cout << " You have purchased " << numb_item_purch << " today \n ";
        numb_item_purch = item_purch + 1;
        break;
    case 'B':
    case 'b':
        cout << " You have choosen B) teeshirt for $9.50 ";
        teeshirt = 9.50;
        break;
    case 'C':
    case 'c':
        cout << " You have choosen C) Pens for .75 cents ";
        pen = .75;
        break;
    case 'D':
    case 'd':
        cout << " You have choosen D) which means you want to Quit" << endl;
        quit = 0;
        break;
    }

    cin >> tot_mon;
    curr_cash = tot_mon - mon_spent;
    cin >> curr_cash;
    cin >> mon_spent;

    return 0;
}
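Not a drop-in fix for the program above, but a sketch of the three pieces being asked about (validating the menu letter, subtracting each purchase from the balance, and counting items); the function names here are my own:

```cpp
#include <cassert>
#include <cctype>

// Price lookup: returns the cost of a menu item, or -1.0 for an invalid letter.
double priceOf(char symb) {
    switch (std::tolower(static_cast<unsigned char>(symb))) {
        case 'a': return 2.50;  // mug
        case 'b': return 9.50;  // teeshirt
        case 'c': return 0.75;  // pen
        default:  return -1.0;  // error trap: not a valid menu letter
    }
}

// One purchase attempt: updates the balance and item count only when the
// letter is valid and there is enough money left; returns whether it worked.
bool tryPurchase(char symb, double& cash, int& items) {
    double price = priceOf(symb);
    if (price < 0.0 || price > cash) {
        return false;  // caller shows a message and re-prompts the user
    }
    cash -= price;
    ++items;
    return true;
}
```

The main loop would then read letters with cin >> symb and keep calling tryPurchase until 'D' is entered, printing the remaining cash after each successful purchase and an error message whenever tryPurchase returns false.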
https://www.daniweb.com/programming/software-development/threads/258873/error-trap-and-statement-problems
Find out how Python 3's function annotations can help you write clearer, more human-friendly Python functions.

Straight as an arrow

If you've been developing Python applications for any length of time, you'll occasionally find corners of the language that you're not immediately familiar with. Python 3 makes this a little more common for newer developers, because Python 3 introduces a whole body of new enhancements and features that improve the language and extend its flexibility. If you're working through a Python 3 codebase, you might wander upon a function that looks something like this:

def useful_function(x) -> int:
    # Useful code, using x, here
    return x

At first, this is weird. Python doesn't use arrow notation, like JavaScript, so what is this arrow doing? This, friend, is a return value annotation, which is part of function annotation, and it's a feature of Python 3 that has been here since 3.0!

Cool, but what does that mean?

Simply put, it's documentation! Return value annotations are a way to document your code elegantly in-line, by allowing you to simply describe the data type of the "thing" the function returns. This is effectively a direct equivalent to Python 2's docstrings, but it's sleeker and helps other developers to understand what your functions are kicking out at a quick, cursory glance. Nice!

Takeaways

So, some key takeaways, so you can get started with making your functions clearer and more readable than ever:

- Python doesn't really interact with them; it pretty much populates and ignores the annotation, but it's available in your function's metadata, which can be useful if you're doing unusual or tricky stuff that relies on knowing what a function's return type is.
- The information is available as an __annotations__ attribute on the function, which is a dictionary. So, in our code above, that looks like: useful_function.__annotations__['return'], which should give you int.
- Additional context: Because Python doesn't really do anything with the data, when first introduced in 3.0 it was a pretty loose idea, and you'd see functions with annotations that were not datatypes, like int, dict or list. You could put whatever you liked in there. After Python 3.5, however, this approach is being strongly discouraged in favour of ensuring that an annotation describes a datatype, and not an arbitrary measure in a programmer's mind. Stick to data types to keep your code clean and the community happy. 🙂

And now for some homework: hop on over to Parameter Annotations, and find out how they can help you too. Happy coding! 🙂
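As a quick sketch of both points (the function and names here are my own, not from the article):

```python
def double(x: int) -> int:
    """Annotations document the intent; Python does not enforce them."""
    return x * 2

# The annotations live in the function's metadata as a dictionary:
print(double.__annotations__["return"])  # <class 'int'>
print(double.__annotations__["x"])       # <class 'int'>

# No type checking actually happens -- a str still "works":
print(double("ab"))                      # abab
```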
https://911weknow.com/whats-this-weird-arrow-notation-in-python
The typical preprocessing pipeline for text data has four steps: load text as strings into memory; split strings into tokens, where a token could be a word or a character; build a vocabulary for these tokens to map them into numerical indices; and map all the tokens in data into indices for ease of feeding into models.

8.2.1. Reading the Dataset

The reading function loads the dataset into a list of sentences; each sentence is a string. Here we ignore punctuation and capitalization.

8.2.2. Tokenization

The tokenization function splits each sentence into tokens and returns a list of split strings.

8.2.3. Vocabulary

The string type of the token is inconvenient to be used by models, which take numerical inputs. Now let's build a dictionary, often called a vocabulary as well, to map string tokens into numerical indices starting from 0. To do so, we first count the unique tokens in all documents, called the corpus, and then assign a numerical index to each unique token according to its frequency. Rarely appearing tokens are often removed to reduce the complexity. A token that does not exist in the corpus or has been removed is mapped into a special unknown ("<unk>") token. We optionally add another three special tokens: "<pad>" for padding, "<bos>" to present the beginning of a sentence, and "<eos>" for the ending of a sentence.

8.2.4. Putting All Things Together

Using the above functions, we package everything into the load_corpus_time_machine function, which returns corpus, a list of token indices, and vocab, the vocabulary of the time machine corpus. The modification we did here is that corpus is a single list, not a list of token lists, since we do not keep the sequence information in the following models. Besides, we use character tokens to simplify the training in later sections.

8.2.5. Summary

We preprocessed the documents by tokenizing them into words or characters and then mapping them into indices.
Exercises

Tokenization is a key preprocessing step. It varies for different languages. Try to find another three commonly used methods to tokenize sentences.
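The code listings in this section are truncated in this extract; a minimal self-contained sketch of the same pipeline (tokenize, build a frequency-ordered vocabulary with an "<unk>" token at index 0, then map tokens to a flat list of indices) might look like this:

```python
import collections

def tokenize(lines, token='word'):
    """Split each sentence into word or character tokens."""
    if token == 'word':
        return [line.split() for line in lines]
    return [list(line) for line in lines]

class Vocab:
    """Map string tokens to indices, ordered by frequency; index 0 is '<unk>'."""
    def __init__(self, tokens, min_freq=0, reserved_tokens=None):
        counter = collections.Counter(t for line in tokens for t in line)
        self.idx_to_token = ['<unk>'] + (reserved_tokens or [])
        for token, freq in counter.most_common():
            if freq >= min_freq and token not in self.idx_to_token:
                self.idx_to_token.append(token)
        self.token_to_idx = {t: i for i, t in enumerate(self.idx_to_token)}

    def __getitem__(self, token):
        # Unknown or removed tokens map to the '<unk>' index, 0
        return self.token_to_idx.get(token, 0)

    def __len__(self):
        return len(self.idx_to_token)

lines = ['the time machine', 'the time traveller']
tokens = tokenize(lines)
vocab = Vocab(tokens)
# A single flat list of indices, like the corpus load_corpus_time_machine returns
corpus = [vocab[token] for line in tokens for token in line]
print(corpus)  # [1, 2, 3, 1, 2, 4]
```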
https://d2l.ai/chapter_recurrent-neural-networks/text-preprocessing.html
In looking at your code, I have a related question, kind of. In using the SharePoint ReportViewerWebPart it is easy to set the ReportPath in the SharePoint user interface, but I haven't been able to figure out how to set the ReportPath in code. I have a requirement to do this in a site definition solution I am developing. Can you point me in the right direction?

I am not sure what your starting point is in code, but you can determine the report path by using SPContext, which gives you the context of the current site or site collection that you are in. From there you can iterate the lists or items in the list and use the SPListItem.RelativeUrl property (I think) to get the url of the item you are interested in.

Hi Cliff, I probably misstated my question. In my site definition, the provisioning code knows the url for the new site, so I know the url for the report rdl files. The ReportViewerWebPart has a ReportPath property that I must update in the site provisioning code to reflect the path for the site being created, but I haven't found any way to get at that property and change it, so I was looking for suggestions on how I might do that. If I try to use the SPLimitedWebPartManager, it only sees the ReportViewerWebPart as an ErrorWebPart. My hack right now is to update the onet.xml for the site definition before my provisioning code calls it.

Are you using a feature receiver to do this work in site provisioning? It may be a timing issue as far as when the feature receiver is called. Does SPLimitedWebPartManager allow you to see the properties once the site has been provisioned? If so, you may want to look at a solution like the one linked below where your master page or default.aspx has a control on it that runs the first time someone comes to the site and sets the properties then.

I'm using the SPWebProvisioningProvider suggested by Connell so that I can get the site constructed before doing any custom code.
The basic code outline is:

public override void Provision(SPWebProvisioningProperties props)
{
    Microsoft.Office.Server.Diagnostics.PortalLog.LogString("Beginning CPDCMSiteDef provision code.");
    SPWeb elevWeb = null;
    try
    {
        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            // Apply the actual Web template for the CPD CM site
            // get elevated web - sometimes sharepoint doesn't like to do stuff inside the elevpriv block
            elevWeb = props.Web;
            elevWeb.ApplyWebTemplate("CPDCMSiteDef#0");
        });
        elevWeb.AllowUnsafeUpdates = true;
        elevWeb.Site.AllowUnsafeUpdates = true;
        ...

All of the site pieces are in the onet.xml of the CPDCMSiteDef#0 template. After the site has been created above, I run the custom code, which is where I have tried to use the SPLimitedWebPartManager to get at the ReportViewerWebPart. ListViewWebParts on the same page are accessible but the ReportViewerWebPart doesn't seem to be. Right now I'm experimenting with an awkward hack. The page in question is provisioned in a module of the onet.xml. Before I call the code to create the site, I update the ReportPath property in the onet.xml file. I'm probably breaking a million rules but it seems to work.

Dan, I am not sure why the web part is reacting that way. If I get a chance to investigate I'll report back.

I am now trying to follow your example to create a custom assembly that I can use within an RDL. In my case I have modified the code to return the ServerRelativeUrl, which I am using as the default value for a parameter in a RDL. When the RDL is loaded, it returns the error "Failed to load expression host assembly. Details: That assembly does not allow partially trusted callers". The assembly is in the GAC but I haven't done anything with the report server config file (it wasn't clear to me if I needed to when the assembly is in the GAC, or what to add to the config). Do you have any examples of how you've deployed and called a custom assembly in a RDL?
Thanks.

After I posted this I realized that you can also use the Globals!ReportFolder expression to provide the location of the report in SharePoint.

For using a custom assembly in RDL follow this guidance: in your case you may just need to open your AssemblyInfo.cs file in your project (located in the Properties folder). Add a reference to using System.Security; and at the bottom of the AssemblyInfo file add [assembly: AllowPartiallyTrustedCallers()]

Thanks for the link – it was very helpful. The change to the AssemblyInfo.cs cleared up my security issue but it raised another. My test report attempts to use the custom assembly to get the default value for a parameter. When I deployed it to my SharePoint site and tried to test the report, it would never render. In debugging, the queryString is "" rather than the string with the embedded rdl path in your example. I'm guessing that this is because my reporting services is running in "integrated" mode rather than "native" mode? I poked around in the HttpContext while debugging but didn't see anything with the actual report path. Any ideas?

What is your assembly doing? What parameter value default is it trying to provide? Where did you get the HttpContext? The example shown in the blog post is running in integrated mode. I used HttpContext embedded in the report, if I remember, to inspect the url by adding a reference to System.Web.HttpContext in the report properties code section and having it output the current request information. The report path can be gathered by using the Globals!ReportFolder expression in the report whether in integrated or native mode.

I had just mimicked your code in its own dll, added it to the report references, then called it in an expression. Sounds like I don't have it quite right. I've just started looking at the Globals, which seems to make it even easier.
Thanks.

using System;
using System.Collections.Generic;
using System.Text;
using System.Web;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Security;
using System.Security;

namespace CPDCMSiteDef
{
    public class ReportFunctions
    {
        public static string GetRelativeSiteUrl()
        {
            string relativeSiteUrl = "";
            try
            {
                string siteCollectionUrl = "";
                string webUrl = "";
                string queryString = HttpContext.Current.Request.Url.ToString();
                int indexStart = (queryString.IndexOf("?") + 1);
                int indexEnd = queryString.IndexOf("&");
                string reportUrl = queryString.Substring(indexStart, (indexEnd - indexStart));
                siteCollectionUrl = reportUrl.Substring(0, reportUrl.LastIndexOf("/"));

                SharePointPermission sharePointPerm =
                    new SharePointPermission(System.Security.Permissions.PermissionState.Unrestricted);
                sharePointPerm.Assert();

                using (SPSite siteCollection = new SPSite(siteCollectionUrl))
                {
                    int i = siteCollection.Url.Length;
                    int j = siteCollectionUrl.Length - i;
                    webUrl = siteCollectionUrl.Substring(i + 1, j - 1);
                    int x = webUrl.IndexOf("Project Reports");
                    webUrl = webUrl.Substring(0, x - 1);
                    using (SPWeb myWeb = siteCollection.OpenWeb(webUrl))
                    {
                        relativeSiteUrl = myWeb.ServerRelativeUrl;
                    }
                }
                sharePointPerm.Deny();
            }
            catch (Exception ex)
            {
                relativeSiteUrl = ex.Message + ex.StackTrace;
            }
            return relativeSiteUrl;
        }
    }
}

I've been banging my head against trying to use the Globals!ReportFolder in a call to a function in an external library. In my simple test report I am using the expression

=MyFunctions.GetRelativeUrl(Globals!ReportFolder)

I get an error that implies the Globals!ReportFolder is empty. Another text box with the expression =Globals!ReportFolder does show the correct value for the ReportFolder. If I hard code a value in the function call, the correct value is returned. Is there a trick to using the Globals!ReportFolder in a function call? Thanks.

When I put this example together I called an embedded expression and called my custom assembly from there.
=Code.GetPath(Globals!ReportFolder)

Function GetPath(path As String) As String
    Return Myassembly.GetPath(path)
End Function

Rather than doing the string operations on the querystring parameter, we can pass the report folder URL as a parameter to the SharePoint method, and then we can use this url to form the SPWeb object as follows:

SPSite Site = new SPSite("ReportFolderURL");
SPWeb Web = Site.OpenWeb();

The report folder Url can be obtained using global parameters in the report.

You are correct, report folder will give you this information.
https://blogs.msdn.microsoft.com/cliffgreen/2008/12/12/discover-sharepoint-context-within-an-integrated-ssrs-report/
slow SBT warning: Getting the hostname was slow

A Scala method to run any block of code slowly

The book, Advanced Scala with Cats, has a nice little function you can use to run a block of code "slowly":

def slowly[A](body: => A) =
  try body finally Thread.sleep(100)

I'd never seen a try/finally block written like that (without a catch clause), so it was something new for the brain. In the book they run a factorial method slowly, like this:

slowly(factorial(n - 1).map(_ * n))

FWIW, you can modify slowly to pass in the length of time to sleep, like this:

def slowly[A](body: => A, sleepTime: Long) =
  try body finally Thread.sleep(sleepTime)

iPhone/iOS: How to quit using cellular data when using WiFi

I live in Colorado, where cellular reception can be very hit or miss because of the mountains and rolling hills.

My GoDaddy 4GH hosting review

Here's my GoDaddy 4GH hosting review: It sucks. A friend on Twitter warned me about it, but sadly, I didn't listen. As a backup to that "review", here's the downtime on just one of my websites for the last several days:

Does Yahoo Mail have a memory leak?

My first problem with Windows

How to slowly minimize a Mac OS X window to the Dock using the Genie effect

Apple TimeCapsule backups: The initial backup with Apple Time Capsule runs very slow over a Wireless-G network in my home network to a 500 GB Time Capsule that I just purchased, and it has been crawling along.
https://alvinalexander.com/taxonomy/term/2526
Rendezvous: Concurrency Method in JR

In this post, we'll see a new feature of JR: the rendezvous. Like asynchronous message passing, this synchronization method involves two processes: a caller and a receiver. But this time, the invocation is synchronous; the caller delays until the operation completes. The rendezvous does not create a new thread for the receiver. The receiver must invoke an input statement (the implementation of rendezvous) and wait for the message. Like asynchronous message passing, this is achieved using operations as message queues. The rendezvous can simplify this kind of operation:

int x;
int y;
send op_command(2, 3);
receive(x, y);

To make a rendezvous, we use the input statement. I think it's the hardest (but also the most complete) statement of the JR programming language. Here is the general form:

inni op_command {
    // Code block
} [] op_command {
    // Code block
}
...

An op_command specifies an operation to wait for. An op_command is of this form:

return_type op_exp(args) st synch by sched

Explanations:

- return_type: the return type of the operation we are waiting for
- op_exp: the name of the operation or the capability
- args: the arguments of the operation
- st synch: adds a condition to the operation, indicating which messages are acceptable
- by sched: dictates the order of servicing the messages; must be numerical. The message with the lowest value of the scheduling expression will be serviced first.

If there is no synchronization expression and no scheduling expression, the first serviced invocation is the oldest. If there is a synchronization expression, the first serviced invocation is the oldest selectable one, and if there is a scheduling expression, the first serviced is the first selectable invocation that minimizes the scheduling expression.
If there is no selectable message, the input statement delays until there is one. You can also add an else statement to the input statement:

inni op_command {
    // Code block
} ...
[] else {
    // Code block
}

The else block is executed if there is no selectable message, so an input statement with an else statement will never delay.

Let's imagine a simple example. The server returns the sum of two numbers after receiving a message. If we write this simple program using asynchronous messages, it gives us something like:

public class Calculator {
    private static op void request(int x, int y);
    private static op void response(int sum, int sub);

    public static void main(String... args) {}

    private static process Client {
        send request(33, 22);
        int sum;
        int sub;
        receive response(sum, sub);
        System.out.printf("Sum %d Sub %d", sum, sub);
    }

    private static process Server {
        int x;
        int y;
        receive request(x, y);
        send response(x + y, x - y);
    }
}

It is a little bit complicated for something this simple, isn't it? Let's rewrite it with an input statement:

public class CalculatorInni {
    private static op int compute(int x, int y);

    public static void main(String... args) {}

    private static process Client {
        System.out.printf("Sum %d", compute(33, 22));
    }

    private static process Server {
        inni int compute(int x, int y) {
            return x + y;
        }
    }
}

Easier, shorter and clearer, isn't it? The Client invokes the compute operation and gets the return value directly because the invocation is synchronous. If the operation has a return value, you don't have to use the call statement; it's implicit. If you have a void operation, you can use the call statement before the operation (but if you don't, it's the same by default):

call op_command(args);

And the Server only has to use the input statement to return the sum of the numbers. As you've perhaps seen, receive is only an abbreviation for the simplest form of input statement.
So if you write:

int x;
int y;
receive op_command(x, y);

it's the same as if you write:

inni void op_command(int a, int b) {
    x = a;
    y = b;
}

But in this case, it's easier to use the receive statement. The input statement can also be used to service a group of operations in an array:

cap void (int) operations = new cap void (int)[12];
// Fill the array
inni ((int i = 0; i < 12; i++)) operations[i](int x) {
    // Code block
}

Besides return, we can also use two new statements in an input statement:

- reply: returns a value to the caller but doesn't break out of the input statement, so you can still perform operations in the input statement; however, you cannot return a value anymore.
- forward: delegates the answer to another operation, so it is that other operation that answers the caller; the input statement continues its execution but cannot return a value anymore.

Now that we know how to use the input statement, we can simplify the ResourceAllocator from the previous post. We can do it much more simply, with two operations and input statements:

import java.util.*;

public class ResourceAllocator {
    private static final int N = 25; // Number of clients
    private static final int I = 25; // Number of iterations

    public static void main(String... args) {}

    private static op Resource request();
    private static op void release(Resource);

    private static process Client((int i = 0; i < N; i++)) {
        for (int a = 0; a < I; a++) {
            Resource resource = request();
            System.out.printf("Client %d use resource %d \n", i, resource.getId());
            call release(resource);
        }
    }

    private static process Server {
        Queue resources = new LinkedList();
        for (int i = 1; i <= 5; i++) {
            resources.add(new Resource(i));
        }
        while (true) {
            inni Resource request() st !resources.isEmpty() {
                return resources.poll();
            } [] void release(Resource resource) {
                resources.add(resource);
            }
        }
    }

    private static final class Resource {
        private final int id;

        private Resource(int id) {
            super();
            this.id = id;
        }

        private int getId() {
            return id;
        }
    }
}

Clearer?
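The synchronous request/reply at the heart of the rendezvous can also be sketched outside JR. Here is a rough Python analogy using threads and queues; the names compute_op, server, and compute are invented for illustration, and JR's input statement does considerably more (st/by clauses, multiple arms), so treat this only as a model of the blocking caller:

```python
import queue
import threading

# The "operation": a queue of pending invocations (args plus a reply channel).
compute_op = queue.Queue()

def server():
    # Plays the role of the input statement: take one invocation,
    # compute the result, and reply to the blocked caller.
    args, reply = compute_op.get()
    x, y = args
    reply.put(x + y)

def compute(x, y):
    # Synchronous call: enqueue the invocation, then block until the reply.
    reply = queue.Queue()
    compute_op.put(((x, y), reply))
    return reply.get()

t = threading.Thread(target=server)
t.start()
result = compute(33, 22)
t.join()
print("Sum", result)  # Sum 55
```

The per-call reply queue is what makes the invocation synchronous: the caller cannot proceed until the receiver has serviced its message.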
The last improvement we can make is to use a send instead of a call in the Client: we don't need to wait for release in the client, and the input statement can service send invocations as well as call invocations (though send invocations cannot return anything).

So we've now covered the rendezvous synchronization mechanism in JR. In the next, and last, post about the JR programming language, we'll see how to distribute our processes on several virtual machines.
https://dzone.com/articles/rendezvous-concurrency-method
How to Use Rails Active Job

A queuing system lets you move slow work out of the request/response cycle and run it in the background. You can also use queues to help normalize traffic spikes or load on the server, allowing work to be done when the server is less busy.

Active Job was first included in Rails 4.2 as a way to standardize the interface to a number of queueing options which already existed. The most common queues used within Rails applications are Sidekiq, Resque, and Delayed Job. Active Job allows your Rails app to work with any one of them (as well as with other queues) through a single standard interface. For the full list of backends you can use with Rails Active Job, refer to this page. It's also important to see which features are supported by which queueing system; some don't support delayed jobs, for example.

Even if you aren't ready to use a queue in your application, you can still use Active Job with the default Active Job Inline backend. Jobs enqueued with the Inline adapter get executed immediately.

Using Rails Active Job

Active Job has a fairly simple interface and set of configuration settings. Here's how to make use of its various features:

Generating a job

Active Job comes with a generator which will create not only your job class but also a test stub for it.

rails g job TweetNotifier
      invoke  test_unit
      create    test/jobs/tweet_notifier_job_test.rb
      create  app/jobs/tweet_notifier_job.rb

Adding an item to the queue

If you want to process the job as soon as possible, you can use the perform_later method. As soon as a worker is available it will process the job.

UpdateUserStatsJob.perform_later user

Queueing for later

If you would rather have the job performed a week from now, some queue backends allow you to pass additional time parameters when adding a job.

UserReminderJob.set(wait: 1.week).perform_later user
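The enqueue-now, run-later contract behind perform_later can be sketched in a few lines of Python. The class and function names here are invented for illustration; this is a conceptual model, not how Active Job or any real backend is implemented:

```python
# A toy job queue: perform_later only records the work to be done; a
# separate worker loop executes it, so the "request" returns immediately.
pending_jobs = []

class UpdateUserStatsJob:
    @staticmethod
    def perform_later(user):
        pending_jobs.append((UpdateUserStatsJob.perform, user))

    @staticmethod
    def perform(user):
        user["stats_updated"] = True

def run_worker():
    # In a real setup this loop runs in a separate process (a Sidekiq or
    # Resque worker); here we simply drain the queue synchronously.
    while pending_jobs:
        job, arg = pending_jobs.pop(0)
        job(arg)

user = {"name": "alice"}
UpdateUserStatsJob.perform_later(user)   # returns immediately
assert "stats_updated" not in user       # nothing has run yet
run_worker()
print(user)  # {'name': 'alice', 'stats_updated': True}
```

The key property is the separation: enqueueing is cheap and fast, and the expensive perform step happens on a worker's schedule, not the user's.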
A job class itself is simple: specify a queue and implement perform.

class UpdateUserStatsJob < ActiveJob::Base
  queue_as :default

  def perform(user)
    user.update_stats
  end
end

Using Rails Active Job with Sidekiq and Resque

Both Sidekiq and Resque rely on having Redis installed, which is where they store the items in the queue. To use either of these, I recommend following the instructions found on the Resque GitHub page or the Sidekiq wiki. We will need to tell Active Job which queue adapter we are using, which can be done in the application config file. In this example, we'll be working with Sidekiq.

module MyApp
  class Application < Rails::Application
    config.active_job.queue_adapter = :sidekiq
  end
end

Sidekiq and Resque both come with web interfaces to view information about the workers and which jobs are in the queue. Sidekiq is more efficient and quicker than Resque but requires that your Ruby code is thread-safe. Also, even though this is somewhat down to my own personal preference, Sidekiq has a nicer web interface than Resque does.

Common Patterns for Queueing

There are a number of common patterns, or types of jobs, that you will want to process in the queue. The basic rule I follow is to ask whether it needs to happen right now and/or whether it might take a long time to process. If it has to happen right now (for example, checking whether someone's credit card information is correct), you'll more than likely have to bite the bullet and process it before the response can go back to the user. Even so, you should think about the user experience by displaying a message letting them know that you're processing their information and it may take a little while.

Sending email

Sending email is the most common task that can and should be done in a background job. There is no reason to send emails immediately (before the response is rendered), and I always move all emails to the queue. Even if the email server responds in 100ms, that's still 100ms that you're making your user wait when they don't need to.
Sending emails via a background job is super simple with Active Job, mainly because support comes built in to ActionMailer. By changing the method deliver_now to deliver_later, Active Job will automatically send the email asynchronously in the queue.

UserMailer.welcome(@user).deliver_later

Processing images

Images can take a while to be processed. This is especially true if you have a few (or more) different styles and sizes that need to be created. Luckily, both Paperclip and CarrierWave have additional gems which can help them process these images in the queue rather than at the time of uploading. Paperclip uses a gem called Delayed Paperclip, which supports Active Job, and CarrierWave uses a gem called CarrierWave Backgrounder. That doesn't yet support Active Job at the time of this article, but there is an open pull request looking to add this functionality.

For Delayed Paperclip, you simply call an additional method letting it know what you would like to process in the background, and the gem will handle the rest. You can even have it process some styles right away, while other styles get processed in the queue.

class User < ActiveRecord::Base
  has_attached_file :avatar,
    styles: { small: "25x25#", medium: "50x50#", large: "200x200#" },
    only_process: [:small]

  process_in_background :avatar, only_process: [:medium, :large]
end

This would allow us to show the :small image right away, while the :medium and :large images are generated in the background.

User uploaded content

Often, user uploaded content needs to be processed. This may be a CSV file that needs to be imported into the system, an image which needs to have thumbnails generated, or a video that needs to be processed. A large CSV file may take a few minutes to process, in which time the browser's connection may time out. I've taken to processing most data uploads asynchronously in the queue.
The process I use is as follows:

- Accept the file and upload it to S3 (or wherever you are storing user generated content).
- Add a job to the queue to process this file.
- The user immediately sees a success page letting them know that their file has been submitted for processing.
- The worker downloads the file, processes it, and marks it as having been processed.

Another thing to keep in mind is that you will want to store a report of the import in the database. This may include any records that couldn't be processed due to invalid data. What I do is create a second error file for each import that the user can download.

Generating reports

Large reports can often take longer to generate than you want your user to wait. You also might not want to put this sort of load on your app servers. You can generate a report in the queue and then email a link to the user so they can download it when it is ready. I've seen this be incredibly useful when producing reports for the accounting department, which often needs to download reports with millions of records in them.

The flow for generating this type of report is as follows:

- Allow the user to specify which report they wish to generate along with all of its filters.
- Add a job to the queue to produce this report.
- The user immediately sees a page or notification letting them know that their report has been submitted for processing and how they can expect to receive it.
- The user is either notified within the user interface of the website/app that the file is ready to download, and/or they receive an email with a link to download the finished report.

Talking with external APIs

External APIs can be flaky and slow, and your users' experience should not depend on them whenever possible. Take the example below, where we use the visitor's IP address to find out some geo information about them using the Telize API.
It generally responds in 200ms to 500ms, which, added to your current response time, can make a large difference. This is something that can wait to be done, especially when used for reporting purposes. Even though this example uses IP geo information, all external APIs should be treated the same way: talk to them in the background if at all possible.

First we'll schedule a job to be done, passing in the IP address of the current request.

LogIpAddressJob.perform_later(request.remote_ip)

Our job class accepts an IP address, changes it to a default if it is "::1" (localhost) for testing purposes, and then calls the LogIpAddress class to actually do the work.

class LogIpAddressJob < ActiveJob::Base
  queue_as :default

  def perform(ip)
    ip = "66.207.202.15" if ip == "::1"
    LogIpAddress.log(ip)
  end
end

Here we perform the actual work to be done. This code doesn't implement actually logging the geo info to a log or database, but it makes a real remote call to the API to show how long requests like this can take.

class LogIpAddress
  def self.log(ip)
    self.new(ip).log
  end

  def initialize(ip)
    @ip = ip
  end

  def get_geo_info
    HTTParty.get("#{@ip}").parsed_response
  end

  def log
    geo_info = get_geo_info
    Rails.logger.debug(geo_info)
    # log response to database
  end
end

In our Rails logs we can see what's happening. It enqueues the job with the argument "::1", performs the job right away (because we are using the Inline queue), outputs some debug info from our class, and then lets us know when the job is finished. It also shows that it took 572.39ms.
[ActiveJob] Enqueued LogIpAddressJob (Job ID: 839db962-28a0-4e9d-9168-b08674ba192f) to Inline(default) with arguments: "::1"
[ActiveJob] [LogIpAddressJob] [839db962-28a0-4e9d-9168-b08674ba192f] Performing LogIpAddressJob from Inline(default) with arguments: "::1"
[ActiveJob] [LogIpAddressJob] [839db962-28a0-4e9d-9168-b08674ba192f] {"longitude"=>-79.4167, "latitude"=>43.6667, "asn"=>"AS21949", "offset"=>"-4", "ip"=>"66.207.202.15", "area_code"=>"0", "continent_code"=>"NA", "dma_code"=>"0", "city"=>"Toronto", "timezone"=>"America/Toronto", "region"=>"Ontario", "country_code"=>"CA", "isp"=>"Beanfield Technologies Inc.", "postal_code"=>"M6G", "country"=>"Canada", "country_code3"=>"CAN", "region_code"=>"ON"}
[ActiveJob] [LogIpAddressJob] [839db962-28a0-4e9d-9168-b08674ba192f] Performed LogIpAddressJob from Inline(default) in 572.39ms

Notifying others of changes

When a user creates new content (for example, they tweet something), you often have to let others know of that change. Determining who to notify can be a difficult (slow) process, and there is no reason to slow down the experience of the user who is creating this content. If the tweet is created successfully, you can add a job in the controller to notify users who were mentioned or who follow this user.

def create
  @tweet = Tweet.new(tweet_params)

  respond_to do |format|
    if @tweet.save
      TweetNotifierJob.perform_later(@tweet)
      format.html { redirect_to @tweet, notice: 'Tweet was successfully created.' }
      format.json { render :show, status: :created, location: @tweet }
    else
      format.html { render :new }
      format.json { render json: @tweet.errors, status: :unprocessable_entity }
    end
  end
end

In the Job class, we can simply pass the work off to a specialized class for notifying users about this tweet.

class TweetNotifierJob < ActiveJob::Base
  queue_as :default

  def perform(tweet)
    TweetNotifier.new(tweet).notify
  end
end

Our TweetNotifier class does the bulk of the work.
It parses the Tweet looking for who was @ mentioned and also adds the Tweet to the timeline of any User who follows this User.

class TweetNotifier
  def initialize(tweet)
    @tweet = tweet
  end

  def notify
    notify_mentions
    notify_followers
  end

  private

  def notify_mentions
    # search for @ mentions and notify users
  end

  def notify_followers
    # add tweet to timelines of user's followers
  end
end

GlobalID for Object Serialization

You'll notice in the last example that I actually just passed the entire tweet object to the worker. It used to be quite common to have to pass the tweet ID and then query for that tweet once inside the worker, but GlobalID allows us to pass the entire object and handles the serialization and deserialization for us.

In Conclusion

Active Job is a great addition to Rails. It won't get you out of having to learn how to best use the queue backend that you end up going with, but it will provide a clean, single interface for adding jobs and processing them, no matter the backend. If you're starting a new Rails project or adding a queueing system to an existing one, definitely think about using Active Job rather than talking directly to the queue.

Using queues can increase your website's usability (by lowering response times), provide more consistent response times and server loads (by spreading the heavy lifting over various servers and workers), and open new doors to what your website can do (by allowing more complex processing outside the user request/response flow).
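As a footnote to the GlobalID section above: the idea of serializing a record to a locator string and resolving it back inside the worker can be sketched in a few lines of Python. The gid:// format mirrors GlobalID's, but the registry dictionary here is a stand-in for a real database lookup, and all names are invented for illustration:

```python
# Toy GlobalID: objects are found again via a "gid://app/Class/id" string.
registry = {}  # stand-in for the database: (class_name, id) -> object

class Tweet:
    def __init__(self, id, body):
        self.id, self.body = id, body
        registry[("Tweet", id)] = self

def to_global_id(obj):
    # Serialize: produce a locator string instead of the object itself.
    return f"gid://myapp/{type(obj).__name__}/{obj.id}"

def locate(gid):
    # Deserialize: parse "gid://myapp/Tweet/1" back into a lookup.
    _, _, _, class_name, id_str = gid.split("/")
    return registry[(class_name, int(id_str))]

tweet = Tweet(1, "hello")
gid = to_global_id(tweet)
print(gid)                   # gid://myapp/Tweet/1
print(locate(gid) is tweet)  # True
```

The advantage over passing a raw ID is that the locator carries the class as well, so one mechanism works for any model the job might receive.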
https://www.cloudbees.com/blog/how-to-use-rails-active-job/
Trying to get my first CLR external function call working, compiled with VS.NET 2005 C++/CLI, @@version = 12.0.1.3406.

Useful commands:

grant connect to zookeeper;
alter external environment clr location 'bin32\\dbextclr11';
alter procedure zookeeper.dummy() external name 'zookeeper.dll::dead.dummy( )' language clr;

A "call zookeeper.dummy()" succeeds instantly once the environment is loaded, with no obvious error messages. Neither the class nor the function exists within the dll file. The dll file has been dropped into the BIN32 folder, and ProcessMonitor confirms that it is being loaded. Debugging the dbextclr12.exe process (confirmed as 32-bit on a 64-bit server) shows no zookeeper.dll module being loaded; I don't know whether this is expected or not. The only error I get is from a method call that returns a result set; the error is "Table 'ExtEnvResultSet23' not found" (the number increases by one on each attempt).

DumpBin does appear to show that this is a valid CLR executable:

Dump of file zookeeper.dll
PE signature found
File Type: DLL

  clr Header:
    48      cb
    2.05    runtime version
    12838 [ 7B34] RVA [size] of MetaData Directory
    10      flags
    EF20    entry point (1000EF20)
    0 [ 0] RVA [size] of Resources Directory
    0 [ 0] RVA [size] of StrongNameSignature Directory
    0 [ 0] RVA [size] of CodeManagerTable Directory
    1A36C [ 10] RVA [size] of VTableFixups Directory
    0 [ 0] RVA [size] of ExportAddressTableJumps Directory
    0 [ 0] RVA [size] of ManagedNativeHeader Directory

  Section contains the following imports:
    KERNEL32.dll
    MSVCR80.dll
    WS2_32.dll
    msvcm80.dll
    MSVCP80.dll
    mscoree.dll

I've got plenty of debugging to do (i.e., write a different client), but this feels like I'm not even getting my foot in the door. I had the same symptoms with the 64-bit version of dbextclr11 (which probably shouldn't have successfully loaded the DLL).

asked 09 Sep '11, 13:43 by Erik Anderson
edited 12 Sep '11, 12:54 by Volker Barth

... so what exactly is your question?
I guess the question is how I get ASA to actually call into the CLR code rather than just returning "Procedure completed" without even checking to see that the procedure even exists inside the DLL. At the minimum I'd appreciate some kind of error message at least saying "nothing's working, can't load the DLL, can't find the function, function threw an exception, external environment crashed", etc. The utter silence means I have to stare at ProcessMonitor traces and see if I can divine what it's doing (or not doing).

I agree that the errors you are getting seem not helpful. Unfortunately, the calling details are way beyond my knowledge. Nevertheless, in my book it would be much more reasonable to try to call an existing function/method from a DLL and check whether that does its work than to check what happens when a nonexistent function is called. Wouldn't it be more helpful to write a small DLL with a function/method that does something simple but "noticeable" (showing a message box or writing to a file) and call that from SQL? At least that's how I did my first steps with external functions, as far as I remember.

Yah, already gone through that, ended up with the call to the nonexistent function. There's no difference between a function that exists and one that doesn't, no difference between a 64-bit host (which shouldn't have successfully loaded the DLL) and a 32-bit version. ProcessMonitor shows it opening the DLL on every request; VC.NET in CLR debugging mode shows no activity. I'm thinking there's just something fundamentally wrong or unexpected with the DLL that I'm throwing at it in a way that ASA doesn't expect or know how to handle. I have not had time to write a different client to call the functions and make sure there's nothing Bad happening, so it's possible this thing may be throwing an exception on load or something.
If I run out of ideas I may just give up on this external environment stuff and go back to writing C++ extnapi plugins; I've at least gotten those to work. Running inside of the database process is a bit scary though, you can never make mistakes.

FWIW, starting with v11, you can also let your external C/C++ procs run in external environments - cf. this FAQ. Basically, you might just have to change the procedure definition.

Okay, first of my answers that might actually be an answer. If this seems... umm... normal then this might be the answer for this question.

(1) external environment clr is having problems loading anything other than compiled bare files (directly compiled using the documented csc.exe calls). Solution: create a single-file C# wrapper that calls the library function. Compile the C# wrapper using "csc.exe /reference:..\library\Debug\library.dll" so it can find the destination. Distribute both DLLs together.

(2) external environment clr is having problems returning error messages. Solution: every call the wrapper implements will have a "catch(Exception e)", translate as much of the exception as we can (into xml), then return to a supporting SP to decode and rethrow the error message (using OPENXML). As long as the wrapper doesn't throw any errors the response looks good.

answered 15 Sep '11, 20:56

Great analysis, maybe someone from Sybase can confirm or identify a bug?

If it really matters anymore, a looking-back addendum to this issue. When I completed testing and deployed our code to the production server I did start getting error messages (this time about DLLs not being in the right place), and it started to become clear to me that whatever issues I was having here were specific to my development machine. It wouldn't be the first time I ran into "me only" errors, but it's still frustrating to run into them. I'm still using the workarounds I developed to get around this issue, mostly because I don't really have any means of developing or testing without them.
answered 04 Jan '12, 15:44

In your script, you are referring to:

alter external environment clr location 'bin32\\dbextclr11';

Shouldn't that be dbextclr12 when using v12? I'm no .NET expert at all, but I guess you might call a v11 external environment, and that might not work with v12...

answered 12 Sep '11, 13:01 by Volker Barth

Yah, I noticed that after I posted it, lol. I'm guessing it was set by the first version of ASA that supported external environments? I'm assuming this location setting is preserved through unloads. I did verify that it was loading dbextclr12.exe; changing it to bin32\dbextclr12 had no effect on the issue. I'm guessing there's some special-case logic inside the engine for detecting "olde" locations. Thank you for noticing though :-) I'm still half expecting this to be my fault somewhere; this feature has likely been working for many people for many years and two major versions.

Not sure what the protocol is here in giving answers that aren't answers to your own problem. But I did get around to writing a one-line C# client and doing test calls to the same functions I'm trying to call from ASA. If I compile the C# application as "AnyCPU" then I get a BadImageFormatException (as expected, the DLL is explicitly 32-bit). If I compile it as "x86" then everything seems to work as intended. I did make extra sure that I'm running the 32-bit version of dbextclr, both with the ALTER described above as well as with ProcessExplorer, making sure it's running the version in the BIN32 folder. Next step is probably to apply EBFs, maybe strip out all code so there's nothing but stubs left in my code, etc. So no real progress yet, other than eliminating potential causes.

answered 13 Sep '11, 18:02

I can't claim whether the compilation as "x86" is considered a solution or a workaround... Some more info on 32/64-bit may be found in this FAQ. And clearly Karim or others will have much more insight than me.
So with a C# wrapper around your managed C++ DLL you say it works now?

The "Any CPU" setting determines the "launch bitness" of the CLR - dbextclr really just "launches" the CLR and passes everything back and forth from the database server. On 64-bit machines, the .NET Common Language Runtime (CLR) has the option to run as a 32-bit environment or as a 64-bit environment. The way this is determined is through the use of the "/platform" Visual Studio project switch, which was added to Visual Studio as part of its 64-bit support. This option is usually configured through the project's "Build Target" menu in Visual Studio and is set to "Any CPU" by default. (See the article here for a longer explanation.) This means that if you're running the 32-bit database server, with a 32-bit dbextclr, on a 64-bit OS, you'll need to set the switch to "/platform:x86" on csc.

That C# wrapper is a wonderful idea for isolating this issue here, especially if dbextclr has a nonstandard loader in it. I'll try to put together a sample proxy and see if I can trap whatever error is being thrown (if there is an error and it's not just the DLL loader being prejudiced against loading a C++ dll). I am running a 64-bit database server with a 32-bit dbextclr (which is why I used the ALTER to try to force 32 bits). The AnyCPU test I ran was a limitation on what the client needed to be compiled with. I'm guessing that dbextclr running as 32-bit means it was compiled with /platform:x86, but things are getting nuanced enough that I'm starting to not know my left from my right.

From Karim's above cited answer: Note that this problem is resolved in SQL Anywhere 12 since the dbextclr12.exe is now built specifically with /platform:x86 (for the bin32 one) and /platform:x64 (for the bin64 one).
Yah, think I saw that before. Prob means I should apply the latest EBF before going much further with this (may not have been changed in the first 12 release). Also means things can get even weirder, trying to second-guess what's going on inside of dbextclr. I'll know that I've officially gone crazy once I start randomly changing the case of my DLL filenames.

"I'll know that I've officially gone crazy..." - can't confirm that:) FWIW, 12.0.1.3406 is a quite fresh EBF, and the mentioned "dll" case problem should be fixed there, cf. this.

More answers that aren't answers to my own question... This is now an environmental issue. I have compiled the following .cs file using the "don't compile it as part of a project" instructions in the help file and am getting the exact same symptoms. So this is no longer an "I'm doing something weird" issue; it's more of an "it's broken for me (and works for everyone else)" issue. Unless the 32-bit dbextclr12.exe instance is still an issue...

using System;

namespace zootest {
    class Program {
        static String echoTest() {
            return "hello";
        }

        static string exceptionTest() {
            throw new OperationCanceledException("uh-oh");
        }

        /* static void zooTest() {
            AsaZooClient.Delete(""); // should throw a "function sequence error" exception
        } */
    }
}

alter external environment clr location 'bin32\\dbextclr12';
grant connect to zookeeper;
create function zookeeper.test1() returns long varchar
    external name 'zootest.dll::Program.echoTest( ) string' language clr;
create procedure zookeeper.test2()
    external name 'zootest.dll::Program.exceptionTest( )' language clr;
//create procedure zookeeper.test3()
//    external name 'zootest.dll::Program.zooTest( )' language clr;
create procedure zookeeper.test4()
    external name 'zootest.dll::Program.nothingHere( )' language clr;

E:\modules\zootest>csc /target:library /out:zootest.dll Program.cs
Microsoft (R) Visual C# 2005 Compiler version 8.00.50727.4927
for Microsoft (R) Windows (R) 2005 Framework version 2.0.50727

Time to
check for EBFs I think...

answered 14 Sep '11, 19:15

From another example it seems to me that you need to add the namespace to your function declaration, like:

external name 'zootest.dll::zootest.Program.echoTest( ) string' language clr;

Good idea, makes sense, no effect:

select zookeeper.test1() => NULL
call zookeeper.test2() => Procedure Completed
call zookeeper.test4() => Procedure Completed

EDIT: Ooh, much better, after comparing to the samples. Still not there yet though. Dropped the namespace declaration (didn't know you could do that) and added a bunch of missing PUBLIC keywords. Then when compiling the single file I get the following:

select zookeeper.test1() => 'hello'
call zookeeper.test2() => Procedure Completed
call zookeeper.test4() => Procedure Completed

If I compile it as a module then nothing works (again), and the original library I was trying to use did not include any namespace declarations or missing PUBLIC keywords. As this could technically be considered "working" (the instructions seem to stress use of csc.exe quite heavily, which does not permit the compilation of entire programs), I'm not sure how much further I can push. I'm also not sure how much I can leverage this without trying to use proxy C# and reflection to try to forge a reference and force-load the real library I want to run. All in all there's progress here, and a path to a solution, but not one that I like. If things go really badly I might discover that you aren't allowed to return result sets that weren't originally created by the database engine...
http://sqlanywhere-forum.sap.com/questions/7385/external-environment-clr-call-succeeds-without-effort
Python is a high-level computer language mainly known for being easy to understand. It allows the programmer to express ideas in fewer lines of code without decreasing readability. It may be used for software development, image processing, machine learning, and much more. OpenCV is a Python library used to solve computer vision problems in a program or piece of software.

OpenCV

OpenCV, a video- and image-processing library with bindings in Python, was built by Intel and first released in 2000. It was famously used on Stanley, the car that won the DARPA Grand Challenge in 2005. OpenCV may be used in every aspect of video and image analysis, like face detection and facial recognition, reading a car's license plate, photo editing, robotic vision, path planning, and a whole lot more. OpenCV also supports many algorithms related to computer vision and machine learning, and it is expanding day by day. Being an open-source library, developers actively contribute to the documentation, library, and tutorials to improve it.

How it Works

OpenCV works with NumPy, a highly optimized library for numerical operations with a MATLAB-style syntax. All the OpenCV array structures may be converted to and from NumPy arrays. This also makes it easier to interoperate with other libraries that use NumPy, such as Matplotlib and SciPy.

Installation in Windows

The best friend of a Python developer is pip. To download and install the libraries, open cmd (Command Prompt) and write:

pip install opencv-python
pip install numpy
pip install matplotlib

(The PyPI package for the OpenCV bindings is named opencv-python, even though the module you import is cv2.)

After installation, make sure to import the libraries in your script by using these commands:

import cv2
import numpy
import matplotlib

Note: There are multiple versions of OpenCV. OpenCV 2 is used for this tutorial.
Load an Image with OpenCV

import cv2
# colored image
img = cv2.imread("goat.jpg", 1)
# black & white (grayscale)
img_1 = cv2.imread("goat.jpg", 0)

The first thing is to make sure to import cv2, as seen in the above code.
- The name of the picture is 'goat'.
- The image is read using the cv2.imread() function.
- As commented in the code, the first read, with parameter '1', loads a color picture. The picture can be imported in color or grayscale mode depending on the parameter set by the programmer; parameter '0' indicates grayscale mode.

Display an Image with OpenCV

Displaying an image with OpenCV is quite simple. The function used is cv2.imshow().

cv2.imshow("goat", img)

The cv2.imshow() function displays the image by opening a window. It takes two parameters: the first is the name of the window ("goat"), and the second is the image object (img) to display. There are some conditions to this function:
- An 8-bit unsigned image is displayed as-is.
- If the image is 16-bit unsigned or 32-bit integer, the pixel values are divided by 256; that is, the value range [0, 255*256] is mapped down to [0, 255].
- To display an image larger than the screen resolution, cv2.namedWindow() can be called before cv2.imshow().

Some more essential functions are:

print(img.shape)

The shape of the image is the shape of the underlying NumPy array: a tuple of rows and columns (plus channels for color images).

cv2.resize()

A function used to resize an image to the desired shape, which the user passes as a parameter. Note: the image object used earlier is replaced by resized_image from this point onward, so make sure to use resized_image when getting output.

resized_image = cv2.resize(img, (650, 500))
cv2.imshow("goat", resized_image)

cv2.waitKey()

After drawing, waitKey() keeps the window open, waiting until the user performs an operation or a timeout elapses.
The parameter passed to it is the timeout in milliseconds; '0', the most common choice, means wait indefinitely for a key press.

cv2.destroyAllWindows()

destroyAllWindows() destroys all open windows. To destroy a specific window, use destroyWindow(winName).

A program using the discussed functions is below:

import cv2

img = cv2.imread("goat.jpg", 0)
resized_image = cv2.resize(img, (1024, 768))
cv2.imshow("goat", resized_image)
# print(img.shape)
cv2.waitKey(0)
cv2.destroyAllWindows()

Facial Recognition Using OpenCV

The face-recognition algorithms used in cell phones and laptops are built with this same library. The key steps are:

- First, create a CascadeClassifier from a features file; it provides the features of a face.
- OpenCV reads the image and the features file. At this point, both exist as NumPy arrays in the significant data arguments.
- Search the image array for face regions, producing an array of face rectangle coordinates.
- Finally, display the image with a rectangular face container drawn on it.

CascadeClassifier

The CascadeClassifier is loaded from an XML file that contains the features of the face. It extracts face regions, as discussed in the key points above.

Rectangular Face Container

To draw a rectangle, another predefined function helps: cv2.rectangle(). A rectangle is created by passing parameters such as the image object, the corner coordinates, the outline color in BGR, and the line width.

Sample code for this example is below.
import cv2

# Create a CascadeClassifier object
face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# Read the image as it is
img = cv2.imread("goat.jpg")

# Read the image as a grayscale image
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Search for the coordinates of the faces
faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.05, minNeighbors=5)

for x, y, w, h in faces:
    img = cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 3)

resized = cv2.resize(img, (int(img.shape[1]/7), int(img.shape[0]/7)))

cv2.imshow("Gray", resized)
cv2.waitKey(0)
cv2.destroyAllWindows()

Basic Applications of OpenCV

Image and video analysis boils down to streamlining the source as much as possible. That usually begins with a conversion to grayscale, but it can also be a color filter, a gradient, or a combination of these. From there, all sorts of analysis and transformations can be applied to the source. Some examples of what can be done at a basic level:

- Background subtraction
- Edge detection
- Color filtering
- Feature matching for an object
- Face recognition
- Motion detection
- Plotting of motion detection graphs

OpenCV is not only bound to image processing; it is also widely used for performance measurement, mathematical tools, and arithmetic operations over images. There are machine learning areas where OpenCV contributes heavily, and enhancements by the open-source community keep taking OpenCV to new dimensions.
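Of the applications listed above, background subtraction is the simplest to sketch: at its core it is an absolute difference between the current frame and a background image, followed by a threshold. Because OpenCV images are NumPy arrays, the idea can be illustrated with NumPy alone; on real frames you would use cv2.absdiff and cv2.threshold instead.

```python
import numpy as np

def simple_motion_mask(background, frame, threshold=30):
    """Return a binary mask marking pixels that changed by more than
    `threshold` between two grayscale images of the same shape."""
    # Work in a wider signed type so uint8 subtraction cannot wrap around.
    diff = np.abs(background.astype(np.int16) - frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8) * 255

background = np.zeros((4, 4), dtype=np.uint8)   # static, all-black scene
frame = background.copy()
frame[1:3, 1:3] = 200                           # an "object" appears

mask = simple_motion_mask(background, frame)
print(mask)
```

The 2x2 block of changed pixels comes back as 255 in the mask and everything else stays 0, which is exactly the kind of mask a motion detector would then feed into contour finding.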
https://pdf.co/blog/opencv-with-python
hiiiiii! I'm going slightly insane here! This is the declaration for SDL thread creation:

SDL_Thread *SDL_CreateThread( int (*fn)(void *), void *data );

Based on this and the example I found, I take it this means the first parameter is a pointer to an integer-returning function! Great, I thought! So I've been trying to pass this function a pointer to an object's function. This object is declared within a namespace! This is my latest attempt to make this happen (not exact). I've tried a few variations but this just looks right to me :(

Code:
void a::myclass::init()
{
    mythread = SDL_CreateThread( &this->thefunc, NULL );
}

int a::myclass::thefunc(void *whatever)
{
    return 1;
}

The error I'm now getting is this:

cannot convert `int (a::myclass::*)(void*)' to `int (*)(void*)' for argument `1' to `SDL_Thread* SDL_CreateThread(int (*)(void*), void*)'

Does this mean that a function declared thus will not allow anything within a namespace or an object to be passed to it? Is there anything I can do to get around this? I tried casting it with a (*) and it didn't work (I wasn't surprised lol). Thanks in advance people!
http://cboard.cprogramming.com/cplusplus-programming/94096-pointer-objects-function-parameter-printable-thread.html
We list below the websites of a few used car dealers in Murfreesboro for your information. The list is neither exhaustive nor representative of used car dealers in Murfreesboro. We present this information only for reference purposes and assume no responsibility for the professional ability or integrity of the persons or firms whose names appear on the following list.

Alexander Chevrolet, Oldsmobile, Cadillac - Part of the Alexander Automotive Family since 1996. Build your own new vehicle or choose from pre-owned vehicles on our lot.
Alexander Ford, Lincoln-Mercury, Inc. - Builds a vehicle to specifications, arranges for credit, and provides dealership services.
European Automotives - Repairs on Volkswagen, Porsche, Mercedes-Benz, BMW, Audi, and all import autos.
Future Transmission Parts - Carries a complete line of automotive parts, equipment, and tools.
International Cars Company - Used car dealer. Includes inventory of vehicles, driving directions, and other information.
Tennessee Truck Center - The only Nissan Diesel authorized dealer in Middle Tennessee. Sells new and used trucks and maintains 24-hour parts and service departments. Includes online vehicle inventories.
Precision Tune Auto Care - Complete line of automotive services including oil changes, tune-ups, brakes, and emission control repair. Offers monthly coupon specials.
Reddell Honda - Search for new and pre-owned automobiles by make or price range. Also features internet specials on service and parts.
http://habibintl.com/used-cars-murfreesboro.htm
We understand how phone numbers work, don't we? Someone calls your number and your phone rings. But wouldn't it be nice to have more control - to screen incoming calls and treat them differently depending on who is calling?

Once you've set this up you can hand out your Twilio phone number freely, knowing that you can easily block unwanted callers, or redirect people to any other number depending on who they are. To set this up you will need:

- A Google account
- A Twilio account
- A Java development environment

We'll set this project up in four stages:

- Fetch the code and see how it works
- Create a Google Sheet
- Configure the code to connect to your Google account
- Configure Twilio to use your code.

💼 Fetching the Java project

Start by cloning the complete project from GitHub, or downloading it. Once you have fetched the code, open it in your IDE. This post will talk through the code and show you how to configure it to work with your Twilio and Google accounts.

📱⬌🖥️ How Twilio handles incoming calls

When a call is received to your Twilio phone number, you decide what happens next. The mechanism for this is called a Webhook: Twilio makes an HTTP request to a URL you provide and the HTTP response should contain instructions that Twilio can understand, in a type of XML that we like to call TwiML. The webhook request will have several parameters, including one called From which contains the caller's phone number. This project is a Spring Boot web application that can handle Twilio's webhooks, look up the caller's number in a Google Sheet and return the right kind of TwiML.
Let's look into the code to see how that works:

🎣 The Webhook handler

Open up the WebhookHandler class and you will see this code:

@RestController
public class WebhookHandler {

    @Autowired
    private ActionLookup actionLookup;

    @RequestMapping(value = "/call", produces = "application/xml")
    @ResponseBody
    public String handleIncomingCall(@RequestParam("From") String from) {
        Action action = actionLookup.getActionForNumber(from);
        return action.generateTwiml();
    }
}

The class is annotated with @RestController and the handleIncomingCall method has @RequestMapping and @ResponseBody annotations. Together these annotations tell Spring to configure an instance of this class to handle incoming HTTP requests, and to pass back whatever the method returns, with the correct content-type header. The @RequestParam annotation on the from argument tells Spring to extract that parameter from the request and pass it as an argument to the method.

✍️ Defining Actions to generate TwiML

In the WebhookHandler, the caller's number is passed to another method called getActionForNumber. We will look at getActionForNumber in a moment, but first have a look at what it returns: an Action. This is an interface with two implementations, so it's either a BlockAction or a ForwardAction. These are all defined in the project.

Using BlockAction as an example, the code is:

public class BlockAction implements Action {

    private final String message;

    public BlockAction(String message) {
        this.message = message;
    }

    @Override
    public String generateTwiml() {
        return new VoiceResponse.Builder().say(
            new Say.Builder(message).build()
        ).build().toXml();
    }
}

generateTwiml uses the Twilio helper library to generate TwiML containing a <Say> verb that reads out a message and then hangs up. The content of the message is passed into the constructor. The ForwardAction is similar but has a <Dial> verb instead of a <Say>. The Twilio helper library is a dependency in pom.xml, the Maven build file.
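The post never shows the raw XML those builders emit, so here is a rough, hand-rolled sketch of the <Say> response a BlockAction produces. It uses only plain string handling rather than the Twilio helper library, and the escaping is deliberately minimal, so treat it as an illustration of the TwiML shape rather than production code.

```java
public class TwimlSketch {

    // Build XML shaped like what a BlockAction's generateTwiml() returns.
    static String say(String message) {
        return "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
                + "<Response><Say>" + escape(message) + "</Say></Response>";
    }

    // Minimal escaping of the characters XML treats specially.
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void main(String[] args) {
        System.out.println(say("Sorry, this number is blocked."));
    }
}
```

A ForwardAction would wrap the target number in <Dial> instead of wrapping a message in <Say>, but the surrounding <Response> document is the same.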
🤔 Creating the right Action for your caller

Now we know what an Action is and what it does, let's see how they are created. WebhookHandler has an ActionLookup instance set up to be autowired by Spring Dependency Injection:

@Autowired
private ActionLookup actionLookup;

Let's have a look inside: open the ActionLookup class in your IDE. It seems like a lot of code, but much of it is error handling to determine what to do if we can't connect to Google Sheets or if the caller's number isn't in the sheet. The most interesting methods are parseGoogleSheetsData and createActionMap, which build up a Map from phone numbers to Actions:

private Map<String, Action> parseGoogleSheetsData(List<List<Object>> rawData) {
    return rawData.stream()
        .skip(1) // skip the row with column headings
        .collect(Collectors.toMap(
            row -> Objects.toString(row.get(0), null),
            row -> Action.create(
                Objects.toString(row.get(1), null),
                Objects.toString(row.get(2), null))));
}

private Map<String, Action> createActionMap() throws IOException {
    String range = "CallFilters!A:C";
    ValueRange response = sheetsService.spreadsheets().values()
        .get(SHEET_ID, range)
        .execute();
    List<List<Object>> values = response.getValues();
    return parseGoogleSheetsData(values);
}

In these methods, createActionMap fetches the contents of your Google Sheet, specifying CallFilters as the name of the worksheet and asking for columns A:C. We pass the result to parseGoogleSheetsData to create an Action for each phone number in your sheet. I found that the Streams API made this code easy to write, especially after writing Actions#create, a factory method for creating Actions. The Sheets instance @Autowired into this class is created in GoogleSheetServiceBuilder. This is all boilerplate for authenticating with Google and not that interesting to dive into.

🎓 Code Recap

- Twilio calls your webapp passing the caller's number as the From parameter.
- Spring routes this request to WebhookHandler#handleIncomingCall
- That method calls into ActionLookup which fetches all the data from your Google Sheet and returns an Action.
- The Action is either a BlockAction or a ForwardAction. You could extend this to have other kinds of behaviours too, like RecordVoicemailAction or PlayRickAstleyAction.
- The Action generates TwiML which is returned to Twilio, which does the magic to the call.

Note that we fetch data from the Google Sheet on every single incoming call, which means if you update the sheet, your new behaviours will take effect immediately.

📝 Creating and linking your Google Sheet

Create a Google Sheet that looks like this: Fill in your own details for numbers and messages. You may need to prefix each phone number with a ' to prevent Sheets thinking it's a formula. Make sure to name the sheet CallFilters in the tab at the bottom, to match the code in ActionLookup. Next, you need to grab the Sheet ID from the URL of your Google Sheet. This is between the last 2 / characters in the URL. Here's mine, yours will have a different ID but it's in the same place: The Sheet ID needs to go in the code. Copy it into src/main/resources/application.properties. You also need to enable Google Sheets API access. Start at the Google Sheets Java Quickstart. On that page click "Enable the Google Sheets API" then on the dialog that appears choose "Desktop App" and "Create". Download the client configuration, a file called credentials.json, and place it in your project in src/main/resources. The very first time you use these credentials, you will be prompted to visit a URL in your browser to verify them. We also have a detailed video showing how to set up Java and Google Sheets.

🔬 Testing your setup

Before you use your app from Twilio, it's a good idea to test that it's working.
In a terminal at the root of the project, start the app with:

./mvnw clean spring-boot:run

You can see what TwiML gets created for different numbers by calling URLs like:. Note that you need to escape a + as %2b in a URL. Test it out with a couple of different numbers, and see what happens if you pass in a number that isn't in the sheet. (Hint: There's a DEFAULT_ACTION specified in the Actions interface).

🏠 Using a local app for Twilio webhooks

Now that the code is returning different valid TwiML depending on the From parameter, it's time to hook it up to a real phone number. There are two parts to this:

- Making your application callable from Twilio
- Buying and configuring a Twilio phone number
🎁 Wrapping up This project has shown how you can work with Google Sheets from Java, and how to use that power to take control of incoming calls. Whatever you’re building with Java and Twilio I’d love to hear about it, get in touch with me at:
https://www.twilio.com/blog/take-control-incoming-calls-twilio-java-google-sheets
NameOrder

Since: BlackBerry 10.3.0

#include <bb/pim/contacts/ContactConsts>

The NameOrder class represents the orders that can be used for the contacts display name. You can use the NameOrder::Type enumeration to specify the order that should be used for a contact's display name. For example, you can use a NameOrder::Type enumeration value in ContactListFilters::setDisplayNameOrder() to change the display name returned for a contact with the first name "John" and last name "Doe" to "Doe John" or "John Doe".

Public Types

Type - An enumeration of possible name orders that can be set. Since: BlackBerry 10.3.0

- FirstLast = 0: Indicates that the name order should be "FirstName LastName", e.g. "John Doe". Since: BlackBerry 10.3.0
- LastFirst = 1: Indicates that the name order should be "LastName FirstName", e.g. "Doe John". Since: BlackBerry 10.3.0
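For illustration, here is a small standalone C++ sketch of what choosing between these two orders amounts to. The NameOrder enum below is a local stand-in mirroring the documented values, not the actual bb::pim::contacts type from <bb/pim/contacts/ContactConsts>.

```cpp
#include <string>

// Local stand-in for bb::pim::contacts::NameOrder::Type (illustration only;
// the real enumeration lives in <bb/pim/contacts/ContactConsts>).
enum class NameOrder { FirstLast = 0, LastFirst = 1 };

// Compose a display name the way a contact list configured with
// ContactListFilters::setDisplayNameOrder() would render it.
std::string displayName(const std::string &first, const std::string &last,
                        NameOrder order) {
    if (order == NameOrder::FirstLast) {
        return first + " " + last;
    }
    return last + " " + first;
}
```

With this, displayName("John", "Doe", NameOrder::LastFirst) yields "Doe John", matching the LastFirst example above.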
https://developer.blackberry.com/native/reference/cascades/bb__pim__contacts__nameorder.html
I am working on a project composed of several lamps that switch on and off according to signals they get from the environment through several sensors. Here I am going to present the circuit for one of the lamps. This lamp receives signals from a sensor placed in another room (a sound sensor) and sends a signal from a piezo sensor to another lamp. The lamp has one 220V bulb connected to the Arduino through a relay, which I am going to explain in this instructable, and a 12V bulb connected to the micro-controller through a L7805 voltage regulator, which I explained here.

Step 1: All the Components

- 1x Arduino Leonardo or Uno
- 1x prototyping shield
- 1x breadboard
- 1x sound sensor (controlled by an Arduino Micro)
- 1x piezo (connected to the circuit of the lamp)
- 1x tilt sensor (in case the lamp falls down)
- Jumper wires: male to male, female to male
- 1x L7805 voltage regulator
- 1x 1m wire
- 1x switch
- 1x relay
- 1x 220V bulb
- 1x socket
- 1x 12V LED
- 1x 5V LED
- 2x RF communication 433 MHz modules (transmitter/receiver) with antennas
- 1x 220 ohm resistor
- 1x 10K ohm resistor

Tools:

- Screwdriver
- Wire scissors
- Solder
- Soldering iron

Step 2: Clap Sensor Circuit With Arduino Pro Micro

The sound sensor has three pins that should be connected to the micro-controller: from left to right, to the ground, to a digital pin (in my case 15), and to the 5V pin. The three pins of the RF transmitter should be connected to the micro-controller, from left to right, to a digital pin (in my case 14), to the 5V pin, and to the ground. Wire a LED to pin 2 and ground to check when the signal is transmitted. Upload the code to the micro-controller and clap or knock on the table. The LED should turn on when sound is detected and a message is transmitted.
Notes: The sound sensor has a potentiometer through which you can adjust its sensitivity. I used an Arduino Pro Micro, but theoretically you can use an ATtiny, since the VirtualWire library supports it.

Step 3: Clap Sensor Code

#include <VirtualWire.h>

const char *message = "";
int sensor = 15;
int ledc = 2;
int val = 0;

void setup() {
  pinMode(sensor, INPUT);
  pinMode(ledc, OUTPUT);
  vw_set_ptt_inverted(true);
  vw_set_tx_pin(14);
  vw_setup(4000);
}

void loop() {
  int val = digitalRead(sensor);
  if (val == 1) {
    digitalWrite(ledc, HIGH);
    message = "C";
    vw_send((uint8_t *)message, strlen(message));
    vw_wait_tx();
    delay(2000);
  } else {
    digitalWrite(ledc, LOW);
  }
}

Step 4: How to Connect a 220V Bulb to Arduino With a Relay

You can see how to connect the wire to the switch, sockets and plug in this instructable, from the photos above, or in this cool class by Paige Russell. Typically, a relay has pins on one side (right) and a screw terminal on the other side (left). Connect the bulb to the screw terminal through the source wire (the ground wire is continuous from plug to socket):

- the source wire coming from the socket goes in the first position of the terminal block
- the source wire from the plug/switch goes into the second position of the block

On the pin side of the relay, connect the micro-controller. The order of the pins may vary from relay to relay. It should be connected to the ground, the 5V pin, and a digital pin (in my case pin 5) on the micro-controller.
I use a protoshield to have several pins for ground and 5V and because I want all the elements of the circuit on top of the micro-controller. The lamp has three led bulbs: - one is on by default (pin 5) - one is on when the lamp receives a signal (pin 7) - one is on when the lamp sends a signal (pin 3) The RF transmitter connect to the ground, 5V pin and pin 2. The tilt sensor connects to ground, 5V pin and pin 4. The RF receiver has two ground pins: - one next to the antenna - the rightmost one Connect whichever one to the ground of the micro-controller. The Data pin is second from the right and should be connected to a digital pin (in my case, pin 12) The 5V pin is the fourth one from the right. Connect the piezo to the ground and an analog pin (ex: A0). The piezo sensor sends certain values to the micro-controller. When it is pressed, the values are lower. Use Serial.begin to read the values and decide the threshold for sending the signal. I chose 100. How it works: - The 220V bulb (on pin 5) is on when nothing happens. - When the lamp receives a message, the led on pin 7 lights up and the pin 5 led turns off. - When the piezo is pressed, the led on pin 3 turns on and the lamp sends a message. - The tilt sensor is positioned vertically, which corresponds to a low reading, so when is on the horizontal it gives a high reading. Basically, as long as the sensor gives a zero value, the lamp can function normally. However, if the lamp is very stable and in a safe place, there is no need for it. I will have people walking around it, so no matter how stable it is, it can be knocked down. If you don't want (need) a tilt sensor, just delete de first if clause after void loop and the last } in the code below. 
Step 6: Transmitter/Receiver Code

#include <VirtualWire.h>

int ledP = 5;
int ledC = 3;
int ledA = 7;
int tiltPin = 4;
int sensor = A0;
int val = 0;
const char *message = "";

void setup() {
  pinMode(ledP, OUTPUT);
  pinMode(ledA, OUTPUT);
  pinMode(ledC, OUTPUT);
  pinMode(sensor, INPUT);
  pinMode(tiltPin, INPUT);
  vw_set_ptt_inverted(true);
  vw_set_rx_pin(12);
  vw_set_tx_pin(11);
  vw_setup(4000);
  vw_rx_start();
}

void loop() {
  if (digitalRead(tiltPin) == 0) {
    digitalWrite(ledP, HIGH);
    digitalWrite(ledA, LOW);
    digitalWrite(ledC, LOW);
    val = analogRead(sensor);
    if (val < 100) {
      message = "Z";
      vw_send((uint8_t *)message, strlen(message));
      vw_wait_tx();
      delay(2000);
      digitalWrite(ledP, LOW);
      for (int i = 0; i < 10; i++) {
        digitalWrite(ledC, HIGH);
        delay(2000);
      }
    } else {
      digitalWrite(ledP, HIGH);
      digitalWrite(ledA, LOW);
      digitalWrite(ledC, LOW);
    }
    uint8_t buf[VW_MAX_MESSAGE_LEN];
    uint8_t buflen = VW_MAX_MESSAGE_LEN;
    if (vw_get_message(buf, &buflen)) {
      if (buf[0] == 'C') {
        digitalWrite(ledP, LOW);
        for (int i = 0; i < 10; i++) {
          digitalWrite(ledA, HIGH);
          delay(2000);
        }
      } else {
        digitalWrite(ledP, HIGH);
        digitalWrite(ledA, LOW);
        digitalWrite(ledC, LOW);
      }
    }
  } else {
    digitalWrite(ledP, LOW);
    digitalWrite(ledC, LOW);
    digitalWrite(ledA, LOW);
  }
}

Step 7: Lamp in Action

I hope you enjoyed! :)
https://www.instructables.com/id/Lamp-With-Sensors-SoundPiezo-RF-Communication/
On 9/24/2010 8:13 PM, Roumen Petrov wrote:
> About pre-processor flags - better is C code to start with #define
> BUIILD_FOO instead -DBUIILD_FOO in makefile.

No, actually, it is not better. The reason is, any given C file *might* be used in a library, or it *might* be used in an application -- or both, depending on compile flags. For instance, suppose you have a utility library, where each function has a built-in self test:

----
int some_util_function()
{
  ....
}

#if defined(PACKAGE_FOO_TESTING)
int main(int argc, char *argv[])
{
  ....
}
#endif
----

You wouldn't want to unconditionally define BUILDING_LIBUTIL in this case. Now, certainly, you could do some magic like

#if !defined(PACKAGE_FOO_TESTING)
# define BUILDING_LIBUTIL
#endif

but...(a) this is a deliberately simple example, and (b) there's a better way.

There is *one place* in the package where you KNOW which files are being compiled for inclusion in a library, and which are not: and that's the Makefile (or Makefile.am, or cmakefiles.list, or whatever) -- NOT the C code itself. Why should you duplicate that knowledge in the source code itself?

What happens when you refactor a "big" library into multiple, smaller libraries? With the Makefile approach, you simply reassign which .c's go with which libfoo_SOURCES, and each libfoo_la_CFLAGS has a different -DBUILDING_* -- and you don't have to modify any of the .c's at all (you'd have to modify some .h's, but you'd need to do THAT regardless). Your way, this refactoring requires coupled changes in each and every .c file -- because you put "knowledge" (about which library each .c file belongs to) inside each .c file itself, and that's the wrong place for that knowledge. It *belongs* in the buildsystem (e.g. the Makefile).

--
Chuck
http://lists.gnu.org/archive/html/libtool-patches/2010-09/msg00345.html
We have all been there. You wake up and it is a new day, and you just need to know the answer to the most burning question. Is it the weekend yet? Your week has been so busy with professional and personal responsibilities. All you want to do is take a couple of days to sit back and relax. To answer the question you could, of course, open up your calendar app on your phone or ask your favorite personal digital home assistant. But, why do those things when you could build yourself an app that sends you a text message instead?

We are going to create a Ruby on Rails application that does the following:

- Allows for both new subscribers and for the ability to unsubscribe from the list
- Scrapes the answer to our question from isittheweekend.com
- Sends the answer from our scraped data daily to all the subscribed recipients

To wrap it all together, we will also be creating a Rake task that will run all these tasks at once, and designating its execution once every 24 hours. If you prefer, you can also find a fully working version of this application on GitHub. Let's get started!

Generate the Rails Application

The first thing we need to do is to create our new Rails application. From your command line execute the following:

$ rails new weekend-checker-app --database=postgresql

This creates the necessary file structure for our Rails application and sets the default database to PostgreSQL. Once that is done, cd into the directory that was created. Before we install our dependencies, we will add the additional gems our application will use. Open up the code in your preferred code editor and navigate to the Gemfile.
Inside the Gemfile add the following gems:

gem 'nexmo'
gem 'watir'
gem 'webdrivers', '~> 4.0'
gem 'whenever', require: false
gem 'dotenv-rails'

We are using the nexmo gem to send the SMS updates, the watir and webdrivers gems to make the HTTP request to a site with dynamic JavaScript content, the whenever gem to schedule the Rake task, and the dotenv-rails gem to manage the environment variables. After you have saved the Gemfile, you are ready to run bundle install from the command line. The next step is creating our database schema and models.

Create the Database Schema and Models

Now that our Rails application is created and has its dependencies installed, the next task is to create the correct database schema to house the data we will need to operate our application. We need to store the following types of information:

- Recipients: The list of subscribers with their phone numbers
- DiffStorage: Copies of the website data to compare against to determine if there was a change

We will use the Rails generator tool to create the migration files and then subsequently edit each one.

$ rails generate model Recipient number:string subscribed:boolean
$ rails generate model DiffStorage website_data:text

Those commands will create both model files in app/models and migration files in db/migrate. Once the generator actions are done, inspect the files created in both directories to make sure they are correct before committing those changes to your application. Specifically, in the migration files, you want to ensure that each migration includes t.timestamps, which adds a created_at and updated_at column to the table. You should also see the number and subscribed columns in the Recipient migration file, with types set to string and boolean, respectively. Similarly, you should see a column in the DiffStorage migration file for website_data with the type set to text.
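For reference, the generated Recipient migration should look roughly like the sketch below. The timestamped file name and the bracketed Rails version are placeholders and will differ in your project, and the fragment only runs inside a Rails app, so treat it as illustrative:

```ruby
# db/migrate/20200101000000_create_recipients.rb (timestamp will differ)
class CreateRecipients < ActiveRecord::Migration[6.0]
  def change
    create_table :recipients do |t|
      t.string :number         # the subscriber's phone number
      t.boolean :subscribed    # whether they should still receive messages

      t.timestamps             # adds the created_at and updated_at columns
    end
  end
end
```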
The model files inside app/models should be empty besides the class declarations and their inheritance from ApplicationRecord. When it looks satisfactory, it is time to run rake db:migrate from the command line. The command will output the results to your console, and if you inspect the db/schema.rb file you will be able to see the schema you created initialized inside the application. Lastly, we also need to create Messenger and Scraper models, but we do not need a migration for them. To do so, we run the Rails generator again and append a --migration=false flag to it: $ rails generate model Messenger --migration=false $ rails generate model Scraper --migration=false Now has come the time to define the logic inside the models. Defining the Models As mentioned above, we have four models that are responsible for unique areas of the application: DiffStorage: Checks for any differences in the website data Recipient: Manages adding and removing subscribers Messenger: Manages the sending of SMS messages Scraper: Responsible for scraping the website for data Defining the DiffStorage Model The DiffStorage model will contain two class models. One will contain the URL that we are scraping. The second will check for any changes since the last time the website was scraped and invoke the next steps in the application when the conditions are met. First, let's define the URL in its own method so that we create a single place where it exists and can be easily modified if we choose to do so later: def self.url '' end Next, the bulk of this model will live inside the #check_last_record class method: def self.check_last_record today_answer = Scraper.call(self.url) if DiffStorage.any? 
    yesterday_answer = DiffStorage.last
  else
    yesterday_answer = ''
  end
  Messenger.send_update_message(Recipient.all, yesterday_answer, today_answer)
end

The above method first calls the method in the Scraper class that will begin the website scraping to obtain the most recent snapshot and assigns that data to today_answer. It then wraps the next step inside an if statement asking if there are any records in DiffStorage. If there are previous records stored there, then the method grabs the most recent one and assigns it to yesterday_answer. If there were no previous records, then an empty string is assigned to yesterday_answer. Lastly, it sends the recipients and the two variables to the Messenger model to process for sending the message.

Defining the Scraper Model

The Scraper model will be responsible for doing the work of gathering the data from isittheweekend.com to determine if it is indeed the weekend or not. The model will have four class methods and we will define each one here:

require 'nokogiri'
require 'webdrivers/chromedriver'
require 'watir'

class Scraper < ApplicationRecord
  def self.call(url)
    self.get_url(url)
  end

  def self.get_url(url)
    browser = Watir::Browser.new :chrome, headless: true
    browser.goto(url)
    parsed_page ||= Nokogiri::HTML.parse(browser.html)
    answer = parsed_page.css('h1#isit').text
    self.check_text(answer)
  end

  def self.check_text(data)
    if data == '' || data == nil
      puts "There was no text received from the web scrape."
      exit
    else
      puts "There was data in the text received from the web scrape."
      self.store_text(data)
    end
  end

  def self.store_text(text)
    record = DiffStorage.new
    record.website_data = text
    if record.save
      puts "Record Updated Successfully"
    end
    return record
  end
end

Each action within the act of scraping is defined in its own small method so as to keep our concerns separated. The #call method is the point of entry for the class. This is what gets invoked by methods outside of the class itself.
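The control flow of the Scraper pipeline above (call, get_url, check_text, store_text) is language-agnostic. Here is a minimal Python sketch of the same flow, with the headless-browser fetch and the database faked out; every name in it is illustrative rather than part of the application:

```python
# Illustrative sketch of the scrape pipeline: call -> get_url -> check_text
# -> store_text. The fetch step and the record store are faked so the control
# flow can be exercised without Watir, Nokogiri, or a database.
store = []  # stands in for the DiffStorage table

def call(url, fetch):
    return get_url(url, fetch)

def get_url(url, fetch):
    # 'fetch' stands in for the headless-browser request + CSS extraction
    answer = fetch(url)
    return check_text(answer)

def check_text(data):
    if not data:
        return None  # the real Rake task would simply stop here
    return store_text(data)

def store_text(text):
    store.append(text)  # the real method saves a DiffStorage record
    return text

result = call("https://example.com", lambda url: "Yes")
print(result, store)  # -> Yes ['Yes']
```

The point of splitting each step into its own function, as the Ruby model does, is that any stage (the fetch, the validation, the store) can be swapped or tested in isolation.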
The #get_url method makes the HTTP request by simulating a Chrome browser request using the Watir library and parses it with Nokogiri. The #check_text method checks if any data was obtained. The #store_text method saves that data to the database.

Defining the Messenger Model

Within the Messenger model will be all the code responsible for sending the daily SMS update to the subscribers. We will create a method that will send the message, a method that composes the weekend response text, a method that puts the whole message together, and a method that manages a confirmation message if a subscriber sends a removal request. First, the method to send the update message:

def self.send_update_message(recipients, yesterday, today)
  @client = Nexmo::Client.new(
    api_key: ENV['NEXMO_API_KEY'],
    api_secret: ENV['NEXMO_API_SECRET']
  )
  puts "Sending Message to Each Recipient"
  recipients.each do |recipient|
    if recipient.subscribed == true
      @client.sms.send(
        from: ENV['FROM_NUMBER'],
        to: recipient.number,
        text: self.weekend_message(yesterday, today)
      )
      puts "Sent message to #{recipient.number}"
    end
  end
end

The value for the text parameter above refers to a class method called #weekend_message. This method will compose the string for the weekend update by checking if today is the same as yesterday or not:

def self.weekend_message(yesterday, today)
  if today == yesterday
    response = "Today is the same as yesterday, and the answer is #{today}."
  elsif today != yesterday
    response = "Today is not the same as yesterday, the answer for today is #{today}."
  else
    response = 'Today and yesterday are both neither affirmative nor positive. Are we in an alternative dimension of time and space?'
  end
  self.compose_message(response)
end

Next, the method containing the HEREDOC string with the body of the message:

def self.compose_message(response)
  <<~HEREDOC
    Hello! It is a new day, but is it a weekend day?

    #{response}

    To be removed from the list please respond with "1".
  HEREDOC
end

Finally, the method to send a removal confirmation message:

def self.send_removal_message(to)
  @client ||= Nexmo::Client.new(
    api_key: ENV['NEXMO_API_KEY'],
    api_secret: ENV['NEXMO_API_SECRET']
  )
  @client.sms.send(
    from: ENV['FROM_NUMBER'],
    to: to,
    text: 'You have been successfully removed.'
  )
end

The last model we need to define before we continue to the next step is the Recipient model.

Defining the Recipient Model

This model does not contain any class methods of its own. The only addition we will make to this model is adding two validations to the data for recipients. These validations will act as a safeguard when adding new phone numbers to the database. We will check that a) a number is indeed being provided in the data and b) the number is not a duplicate of an already existing record. To do these validations we add two lines under the class definition:

class Recipient < ApplicationRecord
  validates :number, presence: true
  validates :number, uniqueness: true
end

Create the Controller and Routes

We are getting close to finishing the construction of our app! The next step is defining the controller actions that will dictate the flow of the application. First, let's generate the controller using the Rails generator from the command line:

$ rails generate controller WeekendChecker

This will create a new empty controller file in app/controllers called weekend_checker_controller.rb and complementary view files in app/views/weekend_checker. We will add an index view shortly. At this point, we'll focus on the controller. The controller needs three actions to correspond to three routes: #index, #create and #event. The #index route will be the default and only view for our website. That will be the place where individuals can subscribe to the list. The #create route will be where new numbers get processed. Finally, the #event route will be where the application receives webhook data from the SMS API, including removal requests, and processes them.
class WeekendCheckerController < ApplicationController
  def index
  end

  def create
    @recipient = Recipient.new(recipient_params)
    if @recipient.save
      flash[:notice] = "Phone number saved successfully."
    else
      flash[:alert] = "Form did not save. Please fix and try again."
    end
    redirect_to '/'
  end

  def event
    if params[:text] == '1'
      recipient = Recipient.find_by(number: params[:msisdn])
      if recipient
        if recipient.update(subscribed: false)
          Messenger.send_removal_message(params[:msisdn])
        end
      end
    end
    puts params
    head :no_content
  end

  private

  def recipient_params
    params.permit(:number, :subscribed)
  end
end

These three controller actions need three corresponding routes defined in config/routes.rb:

Rails.application.routes.draw do
  get '/', to: 'weekend_checker#index'
  get '/webhooks/event', to: 'weekend_checker#event'
  post '/recipient/new', to: 'weekend_checker#create'
end

The penultimate item for our application code setup is creating a basic view for the / route.

Defining the View

In order to subscribe to the SMS list, we will create a view accessible at the top level of the URL that will contain a sign-up form. Inside the app/views/weekend_checker folder add an index.html.erb file. It will contain the following code:

<h2>Is It The Weekend? Get a Daily Text to Find Out!</h2>
<p>
  This is a free service that will analyze <a href="">isittheweekend.com</a> and check for any updates once a day. If there is an update it will send you a text message at the number you provide.
</p>
<p>
  To remove yourself from the SMS list, reply to the text message you receive with the number "1".
</p>
<p>
  SMS messages are sent using the <a href="">Nexmo SMS API</a>.
</p>

<% flash.each do |type, msg| %>
  <div>
    <%= msg %>
  </div>
<% end %>

<%= form_with model: @recipient, url: "/recipient/new" do |f| %>
  <%= f.telephone_field :number, :placeholder => '12122222222' %>
  <%= f.hidden_field :subscribed, value: true %>
  <%= f.submit "Add Number" %>
<% end %>

The final coding task we have to do is to set up our new Rake task that will run all this code and configure the whenever gem to execute the Rake task once a day.

Create the Rake Task and Schedule It

Once again, we will use a Rails generator from the command line to create the file for our Rake task. From the command line run the following:

$ rails generate task scraper check_site_update

The above command will create a file in lib/tasks called scraper.rake. When we open it inside our code editor it will look like this:

namespace :scraper do
  desc "TODO"
  task :check_site_update => :environment do
  end
end

Let's redefine the desc with a short string describing what this task will do: desc "Check Website for Any Updates". Next, inside the task block add the DiffStorage#check_last_record class method, which is the entry point for all the work we created previously:

namespace :scraper do
  desc "Check Website for Any Updates"
  task :check_site_update => :environment do
    DiffStorage.check_last_record
  end
end

Now that our Rake task is defined, we lastly need to initialize the whenever gem and let it know that we want this task run once a day. To do that, first, we run the initializer command for the gem from the command line:

$ bundle exec wheneverize .

The above command creates a schedule.rb file inside the config/ folder. Add the following code to that file to run the scraper:check_site_update task daily:

every 1.day do
  rake "scraper:check_site_update"
end

Now that the schedule is created, we need to update the crontab file on our machine to know about this new job. We do that by running bundle exec whenever --update-crontab from the command line.
Once that is done, the task is fully initialized and configured to run once a day on our machine. The code for our application is all set. The only thing that is missing now is creating our Nexmo account, obtaining our Nexmo API credentials and provisioning a virtual phone number to send the daily text messages with. Once we have this information we will add it to our application as environment variables.

Nexmo API Credentials and Phone Number

To create an account, navigate to the Nexmo Dashboard and complete the registration steps. Once you have finished registering, you will enter your Dashboard. If you have not done so previously, create a .env file in the top-level directory of your application and add your NEXMO_API_KEY and NEXMO_API_SECRET to it. The values for those are found at the top of the dashboard page under the Your API credentials header.

NEXMO_API_KEY=
NEXMO_API_SECRET=

The next task we need to do inside the dashboard is to provision a phone number. After you click on the Numbers link in the sidebar navigation, a drop-down menu will appear. Once you select the Buy numbers option and then click the Search button you will see a list of possible numbers to acquire. When searching for numbers by feature, country, and type, it is recommended to select the country that your users will be based in, SMS for features and Mobile for type. After clicking the orange Buy button for the number you wish to purchase, you can add that number to your .env file as a new variable called FROM_NUMBER:

NEXMO_API_KEY=
NEXMO_API_SECRET=
FROM_NUMBER=

The last item we need to do inside our dashboard is to provide an externally accessible URL as the event webhook for the phone number. For development purposes, ngrok is a good tool to use, and you can follow this guide on how to get up and running with it. From the dashboard Numbers sidebar navigation drop-down, once you select Your numbers you will see your newly provisioned phone number in a list presentation.
After clicking on the gear icon to manage its properties, a settings dialog menu will be shown to you. In that dialog, replace the Inbound Webhook URL text field with your own URL ending with /webhooks/event. That's it! Our code is all finalized and our Nexmo credentials are all set. At this point, you face a choice for running your application. You can either run it locally or you could deploy it to an external hosting provider, like Heroku. In the final step, we will discuss how to run it locally. If you are interested in deploying it for a more long-term solution, you can visit the GitHub repository and click on the Deploy to Heroku button at the top of the README to start that process.

Running the Application

We are now ready to run our brand new application! In order to run it locally, the Rails event webhook needs to be accessible from outside your local environment. For example, if you are using ngrok after following this guide, then both the Rails application and ngrok need to be running simultaneously. To start the Rails application execute the following from your command line:

$ bundle exec rails server

Then you can navigate in your browser of choice to localhost:3000. You will see the sign-up form you created. Go ahead and fill it out with your phone number and submit it. Now, once the Rake task is run, you should expect to receive an SMS letting you know if it is the weekend and whether today is different from yesterday!

Next Steps

The application we built, while whimsical, demonstrates the potential for leveraging web scraping and SMS to create an application that delivers updates to subscribers. There are countless potential use cases for an application like this. Whether you are interested in replicating this exact scenario or in porting the code for your own use case, there is even more to explore on this topic. For further exploration of other SMS possibilities check out the following resources:
Introduction

In this article, we will create an optical character recognition (OCR) application using Blazor and the Azure Computer Vision Cognitive Service. Computer Vision is an AI service that analyzes content in images. We will use the OCR feature of Computer Vision to detect the printed text in an image. The application will extract the text from the image and detect the language of the text. Currently, the OCR API supports 25 languages. A demo of the application is shown below.

Source Code

You can get the source code from GitHub.

Create the Azure Computer Vision Cognitive Service resource

Log in to the Azure portal, search for cognitive services in the search bar and click on the result. Refer to the image shown below. On the next screen, click on the Add button. It will open the cognitive services marketplace page. Search for Computer Vision in the search bar and click on the search result. It will open the Computer Vision API page. Click on the Create button to create a new Computer Vision resource. Refer to the image shown below. On the Create page, fill in the details as indicated below.

- Name: Give a unique name for your resource.
- Subscription: Select the subscription type from the dropdown.
- Pricing tier: Select the pricing tier as per your choice.
- Resource group: Select an existing resource group or create a new one.

Click on the Create button. Refer to the image shown below.

Installing the Computer Vision API library

We will install the Azure Computer Vision API library, which provides us with models out of the box to handle the Computer Vision REST API response. To install the package, navigate to Tools >> NuGet Package Manager >> Package Manager Console. It will open the Package Manager Console. Run the command shown below.

Install-Package Microsoft.Azure.CognitiveServices.Vision.ComputerVision -Version 5.0.0

You can learn more about this package at the NuGet gallery.
Create the Models

Right-click on the BlazorComputerVision project and select Add >> New Folder. Name the folder as Models. Again, right-click on the Models folder and select Add >> Class to add a new class file. Put the name of your class as LanguageDetails.cs and click Add. Open LanguageDetails.cs and put the following code inside it. This file also defines the AvailableLanguage class, which is used later to deserialize the list of supported languages:

using System.Collections.Generic;

namespace BlazorComputerVision.Models
{
    public class LanguageDetails
    {
        public string Name { get; set; }
        public string NativeName { get; set; }
        public string Dir { get; set; }
    }

    public class AvailableLanguage
    {
        public Dictionary<string, LanguageDetails> Translation { get; set; }
    }
}

Similarly, add a new class file OcrResultDTO.cs with the following code:

namespace BlazorComputerVision.Models
{
    public class OcrResultDTO
    {
        public string Language { get; set; }
        public string DetectedText { get; set; }
    }
}

Next, add a service class that will invoke the Computer Vision OCR API and shape its response into an OcrResultDTO:

public class ComputerVisionService
{
    static string subscriptionKey;
    static string uriBase;

    public ComputerVisionService()
    {
        subscriptionKey = "b993f3afb4e04119bd8ed37171d4ec71";
        uriBase = "";
    }

    public async Task<OcrResultDTO> GetTextFromImage(byte[] imageFileBytes)
    {
        StringBuilder sb = new StringBuilder();
        OcrResultDTO ocrResultDTO = new OcrResultDTO();
        try
        {
            string JSONResult = await ReadTextFromStream(imageFileBytes);
            OcrResult ocrResult = JsonConvert.DeserializeObject<OcrResult>(JSONResult);

            if (!ocrResult.Language.Equals("unk"))
            {
                foreach (OcrLine ocrLine in ocrResult.Regions[0].Lines)
                {
                    foreach (OcrWord ocrWord in ocrLine.Words)
                    {
                        sb.Append(ocrWord.Text);
                        sb.Append(' ');
                    }
                    sb.AppendLine();
                }
            }
            else
            {
                sb.Append("This language is not supported.");
            }
            ocrResultDTO.DetectedText = sb.ToString();
            ocrResultDTO.Language = ocrResult.Language;
            return ocrResultDTO;
        }
        catch
        {
            ocrResultDTO.DetectedText = "Error occurred. Try again";
            ocrResultDTO.Language = "unk";
            return ocrResultDTO;
        }
    }

    static async Task<string> ReadTextFromStream(byte[] byteData)
    {
        try
        {
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
            string requestParameters = "language=unk&detectOrientation=true";
            string uri = uriBase + "?"
                + requestParameters;
            HttpResponseMessage response;
            using (ByteArrayContent content = new ByteArrayContent(byteData))
            {
                content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
                response = await client.PostAsync(uri, content);
            }
            string contentString = await response.Content.ReadAsStringAsync();
            string result = JToken.Parse(contentString).ToString();
            return result;
        }
        catch (Exception e)
        {
            return e.Message;
        }
    }

    public async Task<AvailableLanguage> GetAvailableLanguages()
    {
        string endpoint = "";
        var client = new HttpClient();
        using (var request = new HttpRequestMessage())
        {
            request.Method = HttpMethod.Get;
            request.RequestUri = new Uri(endpoint);
            var response = await client.SendAsync(request).ConfigureAwait(false);
            string result = await response.Content.ReadAsStringAsync();
            AvailableLanguage deserializedOutput = JsonConvert.DeserializeObject<AvailableLanguage>(result);
            return deserializedOutput;
        }
    }
}

In the constructor of the class, we have initialized the key and the endpoint URL for the OCR API. Inside the ReadTextFromStream method, we create an HTTP POST request, pass the subscription key in the header of the request, and send the image bytes in the body. The OCR API will return a JSON object having each word from the image as well as the detected language of the text. This JSON response is deserialized into an OcrResult object, and the detected words are concatenated to build the result text. The GetAvailableLanguages method will return the list of all the languages supported by the Translate Text API. We will set the request URI and create an HttpRequestMessage, which will be a Get request. This request URL will return a JSON object which will be deserialized to an object of type AvailableLanguage.

Why do we need to fetch the list of supported languages? The OCR API returns the language code (e.g. en for English, de for German, etc.) of the detected language. But we cannot display the language code on the UI as it is not user-friendly. Therefore, we need a dictionary to look up the language name corresponding to the language code.
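The lookup idea itself is simple enough to sketch outside C#. The following Python sketch assumes a response shaped like the Translator languages endpoint's translation scope; the sample data and function names are illustrative only:

```python
# Build a language-code -> language-name lookup from the kind of JSON the
# Translate Text languages endpoint returns, then resolve the code reported
# by the OCR call. Sample data and names are illustrative.
sample_response = {
    "translation": {
        "en": {"name": "English", "nativeName": "English", "dir": "ltr"},
        "de": {"name": "German", "nativeName": "Deutsch", "dir": "ltr"},
    }
}

def build_lookup(response):
    # one entry per language code, keeping only the display name
    return {code: details["name"] for code, details in response["translation"].items()}

def language_name(lookup, code):
    # "unk" is what the OCR API reports for unsupported languages
    return lookup.get(code, "Unknown")

lookup = build_lookup(sample_response)
print(language_name(lookup, "de"))   # -> German
print(language_name(lookup, "unk"))  # -> Unknown
```

The C# service does the equivalent with the deserialized AvailableLanguage object: the dictionary is keyed by language code and the UI displays the friendly name.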
The Azure Computer Vision OCR API supports 25 languages. To know all the languages supported by the OCR API, see the list of supported languages. These languages are a subset of the languages supported by the Azure Translate Text API. Since there is no dedicated API endpoint to fetch the list of languages supported by the OCR API, we are using the Translate Text API endpoint to fetch the list of languages. We will create the language lookup dictionary using the JSON response from this API call and filter the result based on the language code returned by the OCR API.

To upload the image file, we will use the BlazorInputFile component, which is referenced in the project file as follows:

<PackageReference Include="BlazorInputFile" Version="0.1.0-preview-00002" />

If we try to upload an image with an unsupported language, we will get an error. Refer to the image shown below, where an image with text written in Hindi is uploaded.

Summary

We have created an optical character recognition (OCR) application using Blazor and the Computer Vision Azure Cognitive Service. We have added the feature of uploading an image file using the BlazorInputFile component. The application is able to extract the printed text from the uploaded image and recognize the language of the text. The OCR API of Computer Vision is used, which can recognize text in 25 languages. Get the source code from GitHub and play around with it to get a better understanding.

See Also

- Multi-Language Translator Using Blazor And Azure Cognitive Services
- Facebook Authentication And Authorization In Server-Side Blazor App
- Google Authentication And Authorization In Server-Side Blazor App
- Policy-Based Authorization In Angular Using JWT
- Continuous Deployment For Angular App Using Heroku And GitHub
- Hosting A Blazor Application on Firebase
- Deploying A Blazor Application On Azure
Filtering alters the statements in a Rib file, or files, before they are rendered. Filters can be written in C++ using Pixar's Rif API, or they can be written in Python using Pixar's "PRMan for Python". This tutorial deals with the use of Ri Filters (Rifs) written in Python. The goal of the tutorial is to encourage readers to experiment with Python Rifs. The filtering technique shown here unfortunately cannot be performed on Windows. It is assumed the reader has downloaded and installed the scripts presented in the tutorial RfM: Customizing.

Rifs enable outputs and effects to be achieved that might either be very tedious or impossible to do directly via the Maya/RfM interface. Rifs also enable shape and shading experimentation to be done efficiently, because ribgen need not repeatedly occur prior to rendering. Depending on the type of filtering to be performed, a sequence of ribs can be filtered/rendered and re-filtered/re-rendered several times before it is necessary to generate a fresh set of ribs.

Figure 1 Maya+Python+PRman workflow

The tutorial RfM: Batch Rendering demonstrates how a sequence of frames can be rendered with the stand-alone version of PRMan. The Rif given in listing 1 can be used to check that the user's Maya/RfM/Python environment has been set up correctly. The Rif implemented in the module rif_it edits the Display statement of a beauty pass rib so that it produces a rendered image that can be seen immediately with Pixar's Image Tool ("it"). Copy the code and save it as rif_it.py in the RfM_python directory.

Listing 1 (rif_it.py)

import prman, os

class Rif(prman.Rif):
    def __init__(self, ri):
        prman.Rif.__init__(self, ri)

    def Display(self, name, driver, channels, params):
        if driver != 'shadow' and driver != 'deepshad' and driver != 'null':
            driver = 'it'
        self.m_ri.Display(name, driver, channels, params)

Open the script editor in Maya and enter the following command.
batchRenderRI("rif_it", 1,1);

or,

batchRenderRI("rif_it.Rif()", 1,1);

A rendered image should appear in an "it" window. The batchRenderRI procedure does the following.

1. Queries the values of "defaultRenderGlobals".
2. If the second arg equals "1" it generates a fresh set of ribs.
3. Calls a python module named batchrender.
4. batchrender applies one or more python rifs.
5. If the third arg equals "1" batchrender renders the rib(s).

In a later section we will see how multiple Rifs can be specified by batchRenderRI().

Check the following if the scripts fail to work. Most importantly, have the scripts presented in the tutorial RfM: Customizing been downloaded and installed?

1. Has the project directory been set in Maya?
2. Has the scene been saved?
3. Is batchRenderRI.mel in your maya/projects/RfM_mel directory?
4. Are the python scripts in your maya/projects/RfM_python directory?

When developing a Rif it is best to apply it directly to a rib file using a small python script, such as the one shown in listing 2. This can be conveniently done with Cutter. The script makes direct use of Pixar's prman module. Save the script as rif_tester.py in your maya/projects/RfM_python directory.

Listing 2 (rif_tester.py)

import prman, rif_it

ribin = 'PATH_TO_A_BEAUTY_PASS_RIB'
ribout = 'PATH_TO_A_TEMPORARY_RIB'

# Access prman's RiXXX procs and definitions
ri = prman.Ri()
# Format the output for easier reading
ri.Option("rib", {"string asciistyle": "indented"})
rif = rif_it.Rif(ri)    # Get an instance of our Rif
prman.RifInit([rif])    # Tell prman about our Rif
ri.Begin('-')           # Echo the rib as it is processed
prman.ParseFile(ribin)  # Process the input rib
ri.End()                # Tell prman we're done!

Open rif_tester.py in Cutter and execute it using control+e, alt+e or Apple+e. You will get error messages if rif_tester.py is not in the same directory as rif_it.py. If the script runs successfully, Cutter's Process Monitor will echo the contents of the input (beauty pass) rib but with its Display statement edited. For example,

Display "renderman/test/images/test.iff" "it" "rgba"

If the reader wishes to apply the Rif and render an image, they should change

ri.Begin('-')        # See the contents of the processed rib

to

ri.Begin(ri.RENDER)  # See the rendered image

Alternatively, if the reader wishes to save the filtered rib statements in another file, they should change

ri.Begin('-')        # See the contents of the processed rib

to

ri.Begin(ribout)     # Create an output rib file

If the Rif fails to work it is most probably because the PYTHONPATH environment variable has not been set in Cutter's run script (run.bat on Windows). For example, for Linux and MacOSX:

export RMANTREE=PATH_TO_YOUR_RPS_INSTALLATION
export PYTHONPATH=$PYTHONPATH:$RMANTREE/bin
export RMS_SCRIPT_PATHS=./:FULL_PATH_TO/RfM_ini
java -Xms256m -Xmx256M -classpath .:cutter.jar Cutter

For Windows the run.bat file should contain the following.

set RMANTREE=PATH_TO_YOUR_RPS_INSTALLATION
set PYTHONPATH=%PYTHONPATH%;%RMANTREE%\bin
set RMS_SCRIPT_PATHS=.\;FULL_PATH_TO\RfM_ini
java -Xms256m -Xmx256M -classpath .;cutter.jar Cutter
If the script runs successfully Cutter's Process Monitor will echo the contents of the input (beauty pass) rib but with its Display statement edited. For example, rif_tester.py Display "renderman/test/images/test.iff" "it" "rgba" If the reader wishes to apply the Rif and render an image they should change, ri.Begin('-') # See the contents of the processed rib to ri.Begin(ri.RENDER) # See the rendered image Alternatively, if the reader wishes to save the filtered rib statements in another file they should change, ri.Begin('-') # See the contents of the processed rib to ri.Begin(ribout) # Create an output rib file If the Rif fails to work it is most probably because the PYTHONPATH environment variable has not been set Cutter's run script (run.bat on Windows). For example, for Linux and MacOSX. export RMANTREE=PATH_TO_YOUR_RPS_INSTALLATION export PYTHONPATH=$PYTHONPATH:$RMANTREE/bin export RMS_SCRIPT_PATHS=./:FULL_PATH_TO/RfM_ini java -Xms256m -Xmx256M -classpath .:cutter.jar Cutter For Windows the run.bat file should contain the following. set RMANTREE=PATH_TO_YOUR_RPS_INSTALLATION set PYTHONPATH=$PYTHONPATH:$RMANTREE\bin set RMS_SCRIPT_PATHS=.\:FULL_PATH_TO\RfM_ini java -Xms256m -Xmx256M -classpath .:cutter.jar Cutter The next script demonstrates how a sequence of ribs can be filtered. Listing 3 (batch_tester.py) import os.path, prman, rif_it ribs = os.listdir(PATH_TO_RIB_DIRECTORY) ri = prman.Ri() ri.Option("rib", {"string asciistyle": "indented"}) rif = rif_it.Rif(ri) prman.RifInit([rif]) for rib in ribs: if os.path.dirname(rib): continue if rib.endswith('.rib') == False: continue parent = os.path.dirname(rib) name = os.path.basename(rib) tmpRib = os.path.join(parent, 'tmp_' + name) ri.Begin(tmpRib) prman.ParseFile(rib) ri.End() os.remove(rib) os.rename(tmpRib,rib) Note this script over-writes the original input rib files. You may wish to edit the code so that the edited ribs do not replace the original rib files
I wrote this little hook-engine for a much bigger article. Sometimes it seems such a waste to write valuable code for large articles whose topic isn't directly related to the code. This often leads to the problem that the code won't be found by the people who are looking for it. Personally, I would've used Microsoft's Detour hook engine, but the free license only applies to x86 applications, and that seemed a little bit too restrictive to me. So, I decided to write my own engine in order to support x64 as well. I've never downloaded Detour, nor have I ever seen its APIs, but from the general overview given by Microsoft, it's easy to guess how it works.

As I said, this is only a part of something bigger. It's not perfect, but it can easily become such. Since this is not a beginner's guide about hooking, I assume that the reader already possesses the necessary knowledge to understand the material. If you have never heard about this subject, you'd better start with another article. There are plenty of guides out there, no need to repeat the same things here.

As everybody knows, there's only one easy and secure way to hook a Win32 API: to put an unconditional jump at the beginning of the code to redirect it to the hooked function. And by secure I just mean that our hook can't be bypassed. Of course, there are some other ways, but they're either complicated or insane or both. A proxy DLL, for instance, might work in some cases, but it's rather insane for system DLLs. Overwriting the IAT is unsecure, among other reasons, because the application could use GetProcAddress to retrieve the address of an API (and in that case we should handle this API as well). Ok, I guess you're convinced. Let's just say that there's a reason why Microsoft uses the method presented in this article.

A common technique used in combination with the unconditional jump is to temporarily restore the original bytes whenever the original API has to be called, call it, and then re-apply the hook. This approach may seem unsafe in a multi-threading environment, and it is. It might work, but our technique is much more powerful.
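One common technique pairs the jump with temporarily restoring the original bytes whenever the original API must be called. Here is a toy Python simulation of that unhook/call/re-hook cycle on a fake code buffer; a real engine patches executable process memory, so everything below is purely illustrative:

```python
# Toy simulation of the unhook/call/re-hook technique on a fake "code"
# buffer. In a real engine these writes would target executable memory;
# here a bytearray stands in, purely for illustration.
JMP_STUB = b"\xE9\x00\x00\x00\x00"           # pretend 5-byte relative jump

def install_hook(code):
    saved = bytes(code[:len(JMP_STUB)])      # save the bytes we overwrite
    code[:len(JMP_STUB)] = JMP_STUB          # patch in the jump
    return saved

def call_original(code, saved, run):
    code[:len(saved)] = saved                # unhook: restore original bytes
    result = run(bytes(code))                # window where another thread
                                             # could race past the hook!
    code[:len(JMP_STUB)] = JMP_STUB          # re-hook
    return result

original = bytearray(b"\x8B\xFF\x55\x8B\xEC\x33\xC9")  # mov edi,edi; push ebp; ...
saved = install_hook(original)
assert original.startswith(JMP_STUB)
restored_seen = call_original(original, saved, lambda buf: buf[:2])
print(restored_seen)  # -> b'\x8b\xff'
```

The commented window inside call_original is exactly the multi-threading hazard: while the bytes are restored, any other thread entering the function bypasses the hook entirely, which is why the bridge technique described next is preferable.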
Well, nothing new, we just put our unconditional jump at the beginning of the code we want to hook, and we put the original instructions of the API elsewhere in memory. When the hooked function jumps to our code, we can call the bridge we created, which, after the first instructions, will jump to the API code which follows our unconditional jump.

Let's make a real world example. If the first instructions of the function/API we want to hook are:

mov edi, edi
push ebp
mov ebp, esp
xor ecx, ecx

They will be replaced by our:

00400000 jmp our_code
00400005 xor ecx, ecx

Our bridge will look like this:

mov edi, edi
push ebp
mov ebp, esp
jmp 00400005

Of course, to know the size of the instructions we're going to replace, we need a disassembler both for x86 and x64. I searched on Google for an x64 disassembler and found the diStorm64 disassembler. I quote from its homepage:

diStorm64 is a professional quality open source disassembler library for AMD64, licensed under the BSD license.

This sounded pretty good to me. Now that we have our disassembler, we can start! The first thing I wanted to know was if it was possible to create bridges without having to relocate jumps. As the reader knows, jumps, most of the time, have a relative address as operand and not an absolute one. This leads to the problem that I can't relocate a jump without having to recalculate its relative address. Also, I wanted to test if this disassembler really worked fine. So, I wrote a little program which creates a log file of all the instructions of all exported functions in a DLL which are going to be overwritten by an unconditional jump.
Here's the code:

#include "stdafx.h"
#include "distorm.h"
#include <stdio.h>
#include <stdlib.h>
#include <Windows.h>

DWORD RvaToOffset(IMAGE_NT_HEADERS *NT, DWORD Rva);
VOID AddFunctionToLog(FILE *Log, BYTE *FileBuf, DWORD FuncRVA);
VOID GetInstructionString(char *Str, _DecodedInst *Instr);

int _tmain(int argc, _TCHAR* argv[])
{
    if (argc < 3) return 0;

    //
    // Open log file
    //
    FILE *Log = NULL;
    if (_tfopen_s(&Log, argv[2], _T("w")) != 0)
        return 0;

    //
    // Open PE file
    //
    HANDLE hFile = CreateFile(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL,
        OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
    {
        fclose(Log);
        return 0;
    }

    DWORD FileSize = GetFileSize(hFile, NULL);
    BYTE *FileBuf = new BYTE [FileSize];
    DWORD BRW;
    if (FileBuf)
        ReadFile(hFile, FileBuf, FileSize, &BRW, NULL);
    CloseHandle(hFile);

    IMAGE_DOS_HEADER *pDosHeader = (IMAGE_DOS_HEADER *) FileBuf;
    IMAGE_NT_HEADERS *pNtHeaders = (IMAGE_NT_HEADERS *)
        ((FileBuf != NULL ? pDosHeader->e_lfanew : 0) + (ULONG_PTR) FileBuf);

    if (!FileBuf || pDosHeader->e_magic != IMAGE_DOS_SIGNATURE ||
        pNtHeaders->Signature != IMAGE_NT_SIGNATURE ||
        pNtHeaders->OptionalHeader.DataDirectory
            [IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress == 0)
    {
        fclose(Log);
        if (FileBuf) delete [] FileBuf;
        return 0;
    }

    //
    // Walk through export dir's functions
    //
    DWORD ET_RVA = pNtHeaders->OptionalHeader.DataDirectory
        [IMAGE_DIRECTORY_ENTRY_EXPORT].VirtualAddress;
    IMAGE_EXPORT_DIRECTORY *pExportDir = (IMAGE_EXPORT_DIRECTORY *)
        (RvaToOffset(pNtHeaders, ET_RVA) + (ULONG_PTR) FileBuf);
    DWORD *pFunctions = (DWORD *)
        (RvaToOffset(pNtHeaders, pExportDir->AddressOfFunctions) + (ULONG_PTR) FileBuf);

    for (DWORD x = 0; x < pExportDir->NumberOfFunctions; x++)
    {
        if (pFunctions[x] == 0) continue;
        AddFunctionToLog(Log, FileBuf, pFunctions[x]);
    }

    fclose(Log);
    delete [] FileBuf;
    return 0;
}

//
// This function adds to the log the instructions
// at the beginning of each function which are going
// to be overwritten by the hook jump
//
VOID AddFunctionToLog(FILE *Log, BYTE *FileBuf, DWORD FuncRVA)
{
#define MAX_INSTRUCTIONS 100

    IMAGE_NT_HEADERS *pNtHeaders = (IMAGE_NT_HEADERS *)
        ((*(IMAGE_DOS_HEADER *) FileBuf).e_lfanew + (ULONG_PTR) FileBuf);

    _DecodeResult res;
    _DecodedInst decodedInstructions[MAX_INSTRUCTIONS];
    unsigned int decodedInstructionsCount = 0;

#ifdef _M_IX86
    _DecodeType dt = Decode32Bits;
#define JUMP_SIZE 10 // worst case scenario
#elif defined(_M_AMD64)
    _DecodeType dt = Decode64Bits;
#define JUMP_SIZE 14 // worst case scenario
#endif

    _OffsetType offset = 0;

    res = distorm_decode(offset,  // offset for buffer, e.g. 0x00400000
        (const BYTE *) &FileBuf[RvaToOffset(pNtHeaders, FuncRVA)],
        50,                         // function size (code size to disasm)
        dt,                         // x86 or x64?
        decodedInstructions,        // decoded instr
        MAX_INSTRUCTIONS,           // array size
        &decodedInstructionsCount   // how many instr were disassembled?
        );

    if (res == DECRES_INPUTERR)
        return;

    DWORD InstrSize = 0;

    for (UINT x = 0; x < decodedInstructionsCount; x++)
    {
        if (InstrSize >= JUMP_SIZE)
            break;
        InstrSize += decodedInstructions[x].size;

        char Instr[100];
        GetInstructionString(Instr, &decodedInstructions[x]);
        fprintf(Log, "%s\n", Instr);
    }

    fprintf(Log, "\n\n\n");
}

VOID GetInstructionString(char *Str, _DecodedInst *Instr)
{
    wsprintfA(Str, "%s %s", Instr->mnemonic.p, Instr->operands.p);
    _strlwr_s(Str, 100);
}

DWORD RvaToOffset(IMAGE_NT_HEADERS *NT, DWORD Rva)
{
    DWORD Offset = Rva, Limit;
    IMAGE_SECTION_HEADER *Img;
    WORD i;

    Img = IMAGE_FIRST_SECTION(NT);

    if (Rva < Img->PointerToRawData)
        return Rva;

    for (i = 0; i < NT->FileHeader.NumberOfSections; i++)
    {
        if (Img[i].SizeOfRawData)
            Limit = Img[i].SizeOfRawData;
        else
            Limit = Img[i].Misc.VirtualSize;

        if (Rva >= Img[i].VirtualAddress && Rva < (Img[i].VirtualAddress + Limit))
        {
            if (Img[i].PointerToRawData != 0)
            {
                Offset -= Img[i].VirtualAddress;
                Offset += Img[i].PointerToRawData;
            }
            return Offset;
        }
    }
    return NULL;
}

The command line syntax is: pefile logfile (e.g. disasmtest ntdll.dll ntdll.log). As you can see, I took 10 bytes for x86 hooks.
It's possible to use 5-byte jumps on x86/x64, but then it's necessary to check that there are less than 2 GB between the original function and our code, and between the bridge and the original function. We'd have to check that on x86 as well, though there it is very likely that everything is within range. The worst case scenario for both x86 and x64 is this absolute jump:

    jmp [xxxxx]

    xxxxx: absolute address (DWORD on x86 and QWORD on x64)

This means we have a worst case of 10 bytes on x86 and of 14 bytes on x64. In this hook engine, I'm using only worst case scenarios (no 5-byte relative jumps), simply because if the space between the original function and the hooked one is > 2 GB, or the space between the original function and the bridge is > 2 GB, then I would have to recreate the bridge from scratch every time I hook/unhook the function. A professional engine should do this (and it's not much work), but I'll keep it simple (for me) and use only absolute jumps.

As for the results of the little program above, I created logs for ntdll.dll and advapi32.dll, both for x86 and x64. Here, for instance, is a small part of the ntdll.dll x86 log:

    mov eax, 0x44
    mov edx, 0x7ffe0300
    mov eax, 0x45
    mov edx, 0x7ffe0300
    mov eax, 0x46
    mov edx, 0x7ffe0300
    mov eax, 0x47
    mov edx, 0x7ffe0300
    mov eax, 0x48
    mov edx, 0x7ffe0300
    mov eax, 0x49
    mov edx, 0x7ffe0300
    mov eax, 0x4a
    mov edx, 0x7ffe0300
    mov eax, 0x4b
    mov edx, 0x7ffe0300
    mov eax, 0x4c
    mov edx, 0x7ffe0300

This is of course pretty encouraging, but let's see the results for the x64 platform.
    sub rsp, 0x48
    mov rax, [rsp+0x78]
    mov byte [rsp+0x30], 0x0
    mov [rsp+0x10], rbx
    mov [rsp+0x18], rbp
    mov [rsp+0x20], rsi
    push rsi
    push r14
    push r15
    sub rsp, 0x480
    mov rax, rsp
    mov [rax+0x8], rbx
    mov [rax+0x10], rsi
    mov [rax+0x18], r12
    sub rsp, 0x38
    mov [rsp+0x20], r8
    mov r9d, edx
    mov r8, rcx
    mov rax, rsp
    mov [rax+0x8], rsi
    mov [rax+0x10], rdi
    mov [rax+0x18], r12
    mov [rsp+0x10], rbx
    mov [rsp+0x18], rsi
    push rdi
    push r12
    sub rsp, 0x68
    mov rax, r9
    mov r9d, [rsp+0xb0]

But what about the functions which just call a syscall after moving a number into a register, like NtCreateProcess, NtOpenKey etc.? These functions have very few instructions, and our 14-byte jump will overwrite more code than the function itself contains. But that doesn't seem to be a problem: as we can see from the disassembly, these functions have a 16-byte alignment, so we won't overwrite other functions' code anyway.

Here's the main code of the hook engine (all in all it's about 300 lines of code):

    //
    // This function creates a bridge to the original function
    //
    VOID *CreateBridge(ULONG_PTR Function, const UINT JumpSize)
    {
        if (pBridgeBuffer == NULL)
            return NULL;

    #define MAX_INSTRUCTIONS 100

        _DecodeResult res;
        _DecodedInst decodedInstructions[MAX_INSTRUCTIONS];
        unsigned int decodedInstructionsCount = 0;

    #ifdef _M_IX86
        _DecodeType dt = Decode32Bits;
    #elif defined(_M_AMD64)
        _DecodeType dt = Decode64Bits;
    #endif

        _OffsetType offset = 0;

        res = distorm_decode(offset, // offset for buffer
            (const BYTE *) Function, // buffer to disassemble
            50, // code size to disasm
                // 50 should be _quite_ enough
            dt, // x86 or x64?
            decodedInstructions, // decoded instr
            MAX_INSTRUCTIONS, // array size
            &decodedInstructionsCount // how many instr were disassembled?
            );

        if (res == DECRES_INPUTERR)
            return NULL;

        DWORD InstrSize = 0;

        VOID *pBridge = (VOID *) &pBridgeBuffer[CurrentBridgeBufferSize];

        for (UINT x = 0; x < decodedInstructionsCount; x++)
        {
            if (InstrSize >= JumpSize) break;

            BYTE *pCurInstr = (BYTE *) (InstrSize + (ULONG_PTR) Function);

            //
            // This is a sample attempt at handling a jump.
            // It works, but it converts the jz to jmp,
            // since I didn't write the code for writing
            // conditional jumps
            //
            /*
            if (*pCurInstr == 0x74) // jz near
            {
                ULONG_PTR Dest = (InstrSize + (ULONG_PTR) Function) +
                    (char) pCurInstr[1];

                WriteJump(&pBridgeBuffer[CurrentBridgeBufferSize], Dest);
                CurrentBridgeBufferSize += JumpSize;
            }
            else
            {
            */
                memcpy(&pBridgeBuffer[CurrentBridgeBufferSize],
                    (VOID *) pCurInstr, decodedInstructions[x].size);
                CurrentBridgeBufferSize += decodedInstructions[x].size;
            //}

            InstrSize += decodedInstructions[x].size;
        }

        WriteJump(&pBridgeBuffer[CurrentBridgeBufferSize], Function + InstrSize);

        CurrentBridgeBufferSize +=
            GetJumpSize((ULONG_PTR) &pBridgeBuffer[CurrentBridgeBufferSize],
                Function + InstrSize);

        return pBridge;
    }

    //
    // Hooks a function
    //
    extern "C" __declspec(dllexport)
    BOOL __cdecl HookFunction(ULONG_PTR OriginalFunction, ULONG_PTR NewFunction)
    {
        //
        // Check if the function has already been hooked.
        // If so, no disassembling is necessary since we already
        // have our bridge
        //
        HOOK_INFO *hinfo = GetHookInfoFromFunction(OriginalFunction);

        if (hinfo)
        {
            WriteJump((VOID *) OriginalFunction, NewFunction);
        }
        else
        {
            if (NumberOfHooks == (MAX_HOOKS - 1))
                return FALSE;

            VOID *pBridge = CreateBridge(OriginalFunction,
                GetJumpSize(OriginalFunction, NewFunction));

            if (pBridge == NULL)
                return FALSE;

            HookInfo[NumberOfHooks].Function = OriginalFunction;
            HookInfo[NumberOfHooks].Bridge = (ULONG_PTR) pBridge;
            HookInfo[NumberOfHooks].Hook = NewFunction;

            NumberOfHooks++;

            WriteJump((VOID *) OriginalFunction, NewFunction);
        }

        return TRUE;
    }

    //
    // Unhooks a function
    //
    extern "C" __declspec(dllexport)
    VOID __cdecl UnhookFunction(ULONG_PTR Function)
    {
        //
        // Check if the function has already been hooked.
        // If not, I can't unhook it
        //
        HOOK_INFO *hinfo = GetHookInfoFromFunction(Function);

        if (hinfo)
        {
            //
            // Replaces the hook jump with a jump to the bridge.
            // I'm not completely unhooking since I'm not
            // restoring the original bytes
            //
            WriteJump((VOID *) hinfo->Function, hinfo->Bridge);
        }
    }

    //
    // Get the bridge to call instead of the original function from hook
    //
    extern "C" __declspec(dllexport)
    ULONG_PTR __cdecl GetOriginalFunction(ULONG_PTR Hook)
    {
        if (NumberOfHooks == 0)
            return NULL;

        for (UINT x = 0; x < NumberOfHooks; x++)
        {
            if (HookInfo[x].Hook == Hook)
                return HookInfo[x].Bridge;
        }

        return NULL;
    }

I implemented it as a DLL (but you can include it in your code as well). Using the code is very simple. The DLL exports just three functions: one to hook, one to unhook, and one to get the address of the bridge of a hooked function. Of course, we need to retrieve the address of the bridge, otherwise we can't call the original code of the hooked function.
Let's see a little code sample which works both on x86 and x64:

    #include "stdafx.h"
    #include "NtHookEngine_Test.h"

    BOOL (__cdecl *HookFunction)(ULONG_PTR OriginalFunction, ULONG_PTR NewFunction);
    VOID (__cdecl *UnhookFunction)(ULONG_PTR Function);
    ULONG_PTR (__cdecl *GetOriginalFunction)(ULONG_PTR Hook);

    int WINAPI MyMessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption,
        UINT uType, WORD wLanguageId, DWORD dwMilliseconds);

    int APIENTRY _tWinMain(HINSTANCE hInstance,
                           HINSTANCE hPrevInstance,
                           LPTSTR    lpCmdLine,
                           int       nCmdShow)
    {
        //
        // Retrieve hook functions
        //
        HMODULE hHookEngineDll = LoadLibrary(_T("NtHookEngine.dll"));

        HookFunction = (BOOL (__cdecl *)(ULONG_PTR, ULONG_PTR))
            GetProcAddress(hHookEngineDll, "HookFunction");
        UnhookFunction = (VOID (__cdecl *)(ULONG_PTR))
            GetProcAddress(hHookEngineDll, "UnhookFunction");
        GetOriginalFunction = (ULONG_PTR (__cdecl *)(ULONG_PTR))
            GetProcAddress(hHookEngineDll, "GetOriginalFunction");

        if (HookFunction == NULL || UnhookFunction == NULL ||
            GetOriginalFunction == NULL)
            return 0;

        //
        // Hook MessageBoxTimeoutW
        //
        HookFunction((ULONG_PTR) GetProcAddress(LoadLibrary(_T("User32.dll")),
            "MessageBoxTimeoutW"), (ULONG_PTR) &MyMessageBoxW);

        MessageBox(0, _T("Hi, this is a message box!"),
            _T("This is the title."), MB_ICONINFORMATION);

        //
        // Unhook MessageBoxTimeoutW
        //
        UnhookFunction((ULONG_PTR) GetProcAddress(LoadLibrary(_T("User32.dll")),
            "MessageBoxTimeoutW"));

        MessageBox(0, _T("Hi, this is a message box!"),
            _T("This is the title."), MB_ICONINFORMATION);

        return 0;
    }

    int WINAPI MyMessageBoxW(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption,
        UINT uType, WORD wLanguageId, DWORD dwMilliseconds)
    {
        int (WINAPI *pMessageBoxW)(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption,
            UINT uType, WORD wLanguageId, DWORD dwMilliseconds);

        pMessageBoxW = (int (WINAPI *)(HWND, LPCWSTR, LPCWSTR, UINT, WORD, DWORD))
            GetOriginalFunction((ULONG_PTR) MyMessageBoxW);

        return pMessageBoxW(hWnd, lpText, L"Hooked MessageBox", uType,
            wLanguageId, dwMilliseconds);
    }

In this sample I'm hooking the API MessageBoxTimeoutW. I first tried to hook MessageBoxW, and that worked fine on x86; on x64, however, the code generated an exception. So, I disassembled the MessageBoxW function on x64. Unfortunately, the first instructions of this API include a jz which would be overwritten by our unconditional jump, and since we don't relocate jumps in our bridge, we can't hook this function. So, I had to hook MessageBoxTimeoutW instead, which is called inside MessageBoxW and has no jumps at its beginning.

In the code example I first hook the function and call it, then I unhook it and call it again, so the output shows the hooked message box (with the replaced caption) followed by the original one.

That's all. Of course, this code works only if MessageBoxTimeoutW is available. I'm not completely sure when it was first introduced, since it's an undocumented API. I guess it was introduced with XP, so chances are that this particular hook won't work on Windows 2000.

As you can see from the previous example, the hook engine isn't perfect, but it can easily be improved. I won't develop it further because I don't need a more powerful one (right now, I mean). I just needed an x86/x64 hook engine with no license restrictions. I wrote this engine and the article in just one day; it really wasn't much work. Most of the work in such a hook engine is writing the disassembler, which I didn't do. So, in my opinion, it doesn't make much sense to pay for a hook engine.

The only thing I really can't provide in this engine is support for Itanium, because I don't have a disassembler for that platform. But I would rather write one myself than buy a hook engine. I might actually add an Itanium disassembler in the future, who knows...

I hope you find this code useful.
http://www.codeproject.com/KB/system/mini_hook_engine.aspx
A Flutter package that provides a base-level set of classes to implement authentication controls.

Note: This package is still in development.

On its own this package does not do anything, but it is required as a dependency for other authentication packages. Currently the packages supported are:

In addition to the dependency that package has on this one, flutter_auth_starter also requires it. Please create an issue to provide feedback or report a problem.

Add this to your package's pubspec.yaml file:

    dependencies:
      flutter_auth_base: "^0.1.3"

You can install packages from the command line with Flutter:

    $ flutter packages get

Alternatively, your editor might support flutter packages get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

    import 'package:flutter_auth_base/flutter_auth_base.dart';
https://pub.dartlang.org/packages/flutter_auth_base
Created on 2009-12-10 22:27 by flox, last changed 2014-03-14 00:55 by python-dev. This issue is now closed.

AFAIK these codecs were not ported to Python 3.
1. I found no hint in the documentation on this matter.
2. Is it possible to contribute some of them, or is there a good reason to look elsewhere?

These are not encodings, in that they don't convert characters to bytes. It was a mistake that they were integrated into the codecs interfaces in Python 2.x; this mistake is corrected in 3.x.

Martin v. Löwis wrote:
> Martin v. Löwis <martin@v.loewis.de> added the comment:
> These are not encodings, in that they don't convert characters to bytes.
> It was a mistake that they were integrated into the codecs interfaces in
> Python 2.x; this mistake is corrected in 3.x.

Martin, I beg your pardon, but these codecs indeed implement valid encodings, and the fact that these codecs were removed was a mistake. They should be re-added to Python 3.x. Note that just because a codec doesn't convert between bytes and characters only doesn't make it wrong in any way. The codec architecture in Python is designed to support same-type encodings just as well as ones between bytes and characters. Reopening the ticket.

I agree with Martin. gzip and bz2 convert bytes to bytes. Encodings deal strictly with unicode -> bytes. «Everything you thought you knew about binary data and Unicode has changed.»

Reopening for the documentation part. This "mistake" deserves some words in the documentation:
docs.python.org/dev/py3k/whatsnew/3.0.html#text-vs-data-instead-of-unicode-vs-8-bit
And the conversion may be automated with 2to3, maybe.

Is it possible to add a DeprecationWarning for these codecs when using "python -3"?

    >>> {}.has_key('a')
    __main__:1: DeprecationWarning: dict.has_key() not supported in 3.x; use the in operator
    False
    >>> print `123`
    <stdin>:1: SyntaxWarning: backquote not supported in 3.x; use repr()
    123
    >>> 'abc'.encode('base64')
    'YWJj\n'

Martin v. Löwis <martin@v.loewis.de> added the comment:
>.

Of course it does support these kinds of codecs. The codec architecture hasn't changed between 2.x and 3.x, just the way a few methods work. All we agreed to is that unicode.encode() will only return bytes, while bytes.decode() will only return unicode. So the methods won't support same-type conversions, because Guido didn't want to have methods that return different types based on the chosen parameter (the codec name in this case).

However, you can still use codecs.encode() and codecs.decode() to work with codecs that return different combinations of types. I explicitly added that support back to 3.0. You can't argue that just because two methods don't support a certain type combination, the whole architecture doesn't support this anymore. Also note that codecs allow a much more far-reaching use than just through the unicode and bytes methods: you can use them as seamless wrappers for streams, subclass from them, use their methods directly, etc. So your argument that just because the two methods don't support these codecs anymore is just not good enough to warrant their removal.

Ben.

Thinking about it, I am +1 on reimplementing the codecs. We could implement new methods to replace the old ones (similar to base64.encodebytes and base64.decodebytes):

    >>> b'abc'.encodebytes('base64')
    b'YWJj\n'
    >>> b'abc'.encodebytes('zlib').encodebytes('base64')
    b'eJxLTEoGAAJNASc=\n'
    >>> b'UHl0aG9u'.decodebytes('base64').decode('utf-8')
    'Python'

Benjamin Peterson wrote:
> Benjamin Peterson <benjamin@python.org> added the comment:
>?

Yes. At the time it was postponed, since I brought it up late in the 3.0 release process. Perhaps I should bring it up again. Note that those methods are just convenient helpers to access the codecs and as such only provide limited functionality. The full machinery itself is accessible via the codecs module and the code in the encodings package.
Any decision to include a codec or not needs to be based on whether it fits the framework in those modules/packages, not on the functionality we expose on unicode and bytes objects.

I've ported the codecs from Py2: base64, bytes_escape, bz2, hex, quopri, rot13, uu and zlib. It's not a big deal. Basically:
- StringIO.StringIO --> io.BytesIO
- 'string_escape' --> 'bytes_escape'
Will add documentation if we agree on the feature.

> codecs.encode()/.decode() provide access to all codecs, regardless
> of their supported type combinations and of course, you can use
> them directly via the codec registry, subclass from them, etc.

I presume that the OP didn't talk about codecs.encode, but about the methods on string objects. flox, can you clarify what precisely it is that you miss?

Martin, actually, I was trying to convert some piece of code from Python 2 to Python 3, and this statement was not converted by 2to3: "x.decode('base64').decode('zlib')". So, I read the official documentation and found no hint about the removal of these codecs. For my specific use case, I can use "zlib.decompress" and "base64.decodebytes", but I find that the ".encode()" and ".decode()" helpers were useful in Python 2. I don't know all the background of the removal of these codecs, but I try to contribute to Python, and help Python 3 become at least as featureful, and useful, as Python 2.
So, after reading the above comments, I think we may end up with the following changes:
* restore the "bytes-to-bytes" codecs in the "encodings" package
* then create new helpers on bytes objects (either ".transform()/.untransform()" or ".encodebytes()/.decodebytes()")

> And this statement was not converted

s/this statement/this method call/

> So, after reading the above comments, I think we may end up with
> following changes:
> * restore the "bytes-to-bytes" codecs in the "encodings" package
> * then create new helpers on bytes objects (either
> ".transform()/.untransform()" or ".encodebytes()/.decodebytes")

I would still be opposed to such a change, and I think it needs a PEP.
> If the codecs are restored, one half of them becomes available to
> .encode/.decode methods, since the codec registry cannot tell which
> ones implement real character encodings, and which ones are other
> conversion methods. So adding them would be really confusing.

Not at all. The helper methods check the return types and raise an exception if the types don't match the expected types. The codec registry itself doesn't need to know about the possible input/output types of codecs, since this information is not required to match a name to an implementation.

What we could do is add that information to the CodecInfo object used for registering the codec. codecs.lookup() would then return the information to the application. E.g.:

    .encode_input_types = (str,)
    .encode_output_types = (bytes,)
    .decode_input_types = (bytes,)
    .decode_output_types = (str,)

Codecs not supporting these CodecInfo attributes would simply return None.

> I also wonder why you are opposed to the import statement. My
> recommendation is indeed that you use the official API for these
> libraries (and indeed, there is an official API for each of them,
> unlike real codecs, which don't have any other documented API).

That's not the point. The codec API provides a standardized API for all these encodings. The hex, zlib, bz2, etc. codecs are just adapters of the different pre-existing APIs to the codec API. I also seem to recall that adding .transform()/.untransform() was already accepted at some point.

I agree with Martin: codecs chose the wrong direction in Python 2, and it's fixed in Python 3. The codecs module is related to charsets (encodings): it should encode str to bytes, and should decode bytes (or any read buffer) to str. E.g. rot13 "encodes" str to str. For "base64 bz2 hex zlib ...": use the base64, bz2, binascii and zlib modules for that. The documentation should be fixed (explain how to port code from Python 2 to Python 3).
It's maybe possible to write some 2to3 fixers for the following examples:

    "...".encode("base64") => base64.b64encode("...")
    "...".encode("rot13")  => do nothing (but display a warning?)
    "...".encode("zlib")   => zlib.compress("...")
    "...".encode("hex")    => base64.b16encode("...")
    "...".encode("bz2")    => bz2.compress("...")
    "...".decode("base64") => base64.b64decode("...")
    "...".decode("rot13")  => do nothing (but display a warning?)
    "...".decode("zlib")   => zlib.decompress("...")
    "...".decode("hex")    => base64.b16decode("...")
    "...".decode("bz2")    => bz2.decompress("...")

Explanation of the change in Python 3 by Guido. -- See also issue #8838.

STINNER Victor wrote:
> STINNER Victor <victor.stinner@haypocalc.com> added the comment:
> I agree with Martin: codecs chose the wrong direction in Python 2, and
> it's fixed in Python 3. The codecs module is related to charsets
> (encodings), should encode str to bytes, and should decode bytes (or
> any read buffer) to str.

No, that's just not right: the codec system in Python does not mandate the types used or accepted by the codecs. The only change that was applied in Python 3 was to make sure that the str.encode() and bytes.decode() methods always return the same type, to assure type-safety. Python 2 does not apply that check, but instead provides a direct interface to codecs.encode() and codecs.decode().

Please don't mix the helper methods on those objects up with what the codec system was designed for. The helper methods apply a strategy that's more constrained than the codec system. The addition of .transform() and .untransform() for same-type conversions was discussed in 2008, but didn't make it into 3.0 since I hadn't had time to add the methods.

The removed codecs don't rely on the helper methods in any way. They are easily usable via codecs.encode() and codecs.decode() even without .transform() and .untransform(). Esp. the hex codec is very handy, and at least in our eGenix code base it is in wide-spread use.
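For what it's worth, the replacements in the fixer table above can be sanity-checked directly against the stdlib modules they point to, and codecs.encode()/codecs.decode() do still reach the same-type codecs on current CPython. A quick sketch (the short aliases 'rot_13', 'hex' and 'base64' work on recent 3.x; older versions may need the *_codec spellings):

```python
import base64
import binascii
import bz2
import codecs
import zlib

data = b"Python"

# module-level replacements from the fixer table
assert base64.b64decode(base64.b64encode(data)) == data
assert binascii.unhexlify(binascii.hexlify(data)) == data
assert zlib.decompress(zlib.compress(data)) == data
assert bz2.decompress(bz2.compress(data)) == data

# codecs.encode()/codecs.decode() still accept same-type codecs,
# even though str.encode()/bytes.decode() refuse them
assert codecs.encode("rot13", "rot_13") == "ebg13"   # str <-> str
assert codecs.encode(b"abc", "base64") == b"YWJj\n"  # bytes <-> bytes
assert codecs.decode(b"61626f", "hex") == b"abo"
```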
Using a single well-defined interface to such encodings is just much more user friendly than having to research the different APIs for each of them. Related: bytes vs. str for base64 encoding in email, #8896 I would like to know what happened with hex_codec and what is the new py3 for this. Also, it would be really helpful to see DeprecationWarnings for all these codecs in py2x and include a note in py3 changelist. The official python documentation from lists them as valid without any signs of them as being dropped or replaced. >). Martin v. Löwis wrote: > > Martin v. Löwis <martin@v.loewis.de> added the comment: > >>). ... or wait for Python 3.2 which will readd them :-) ... but don't wait to long to add them! Georg Brandl wrote: > > Georg Brandl <georg@python.org> added the comment: > > ... but don't wait to long to add them! I plan to work on that after EuroPython. Florent already provided the patch for the codecs, so what's left is adding the .transform()/ .untransform() methods, and perhaps tweak the codec input/output types in a couple of cases.? Also, can someone not unsure about the status of this report edit the type, stage, component and resolution? It would be helpful. >? It is correct. So use base64.b16encode/b16decode then. It's just that I personally prefer hexlify/unhexlify, because I can memorize the function name better. Codecs brought back and (un)transform implemented in r86934. I am probably a bit late to this discussion, but why these things should be called "codecs" and why should they share the registry with the encodings? It looks like the proper term would be "transformations" or "transforms". Alex. As per I think this checkin should be reverted, as it's breaking the language moratorium. I leave this to MAL, on whose behalf I finished this to be in time for beta. Martin v. Löwis wrote: > > Martin v. 
Löwis <martin@v.loewis.de> added the comment: > > As per > > > > I think this checkin should be reverted, as it's breaking the language moratorium. I've asked Guido. We may have to revert the addition of the new methods and then readd them for 3.3, but I don't really see them as difficult to implement for the other Python implementations, since they are just interfaces to the codec sub-system. The readdition of the codecs and changes to support them in the codec system do not fall under the moratorium, since they are stdlib changes. With] See issue #10807: 'base64' can be used with bytes.decode() (and str.encode()), but it raises a confusing exception (TypeError: expected bytes, not memoryview). So. This was reverted before 3.2 was out, right? What is the status for 3.3? What is the status of this issue? rot13 codecs & friends were added back to Python 3.2 with {bytes,str}.(un)transform() methods: commit 7e4833764c88. Codecs were disabled because of surprising error messages before the release of Python 3.2 final: issue #10807, commit ff1261a14573. transform() and untransform() methods were also removed, I don't remember why/how exactly, maybe because new codecs were disabled. So we have rot13 & friends in Python 3.2 and 3.3, but they cannot be used with the regular str.encode('rot13'), you have to write (for example): >>> codecs.getdecoder('rot_13')('rot13') ('ebg13', 5) >>> codecs.getencoder('rot_13')('ebg13') ('rot13', 5) The major issue with {bytes,str}.(un)transform() is that we have only one registry for all codecs, and the registry was changed in Python 3 to ensure: * encode: str->bytes * decode: bytes->str To implement str.transform(), we need another register. Marc-Andre suggested (msg96374) to add tags to codecs: """ .encode_input_types = (str,) .encode_output_types = (bytes,) .decode_input_types = (bytes,) .decode_output_types = (str,) """ I'm still opposed to str->str (rot13) and bytes->bytes (hex, gzip, ...) operations using the codecs API. 
Developers have to use the right module. If the API of these modules is too complex, we should add helpers to these modules, but not to builtin types. Builtin types have to be and stay simple and well defined.

> transform() and untransform() methods were also removed, I don't remember why/how exactly,

I don't remember either; maybe it was too late in the release process, or we lacked enough consensus.

> So we have rot13 & friends in Python 3.2 and 3.3, but they cannot be used with the regular
> str.encode('rot13'), you have to write (for example): codecs.getdecoder('rot_13')

Ah, great, I thought they were not available at all!

> The major issue with {bytes,str}.(un)transform() is that we have only one registry for all
> codecs, and the registry was changed in Python 3 [...] To implement str.transform(), we need
> another registry. Marc-Andre suggested (msg96374) to add tags to codecs

I'm confused: does the tags idea replace the idea of adding another registry?

> I'm still opposed to str->str (rot13) and bytes->bytes (hex, gzip, ...) operations using the
> codecs API. Developers have to use the right module.

Well, here I disagree with you and agree with MAL: str.encode and bytes.decode are strict, but the codec API in general is not restricted to str→bytes and bytes→str directions. Using the zlib or base64 modules vs. the codecs is a matter of style; sometimes you think it looks hacky, sometimes you think it's very handy. And rot13 only exists as a codec!

They were removed because adding new methods to builtin types violated the language moratorium. Now that the language moratorium is over, the transform/untransform convenience APIs should be added again for 3.3. It's an approved change; the original timing was just wrong.

Some further comments after getting back up to speed with the actual status of this problem (i.e. that we had issues with the error checking and reporting in the original 3.2 commit).

1. I agree with the position that the codecs module itself is intended to be a type-neutral codec registry. It encodes and decodes things, but shouldn't actually care about the types involved. If that is currently not the case in 3.x, it needs to be fixed. This type neutrality was blurred in 2.x by the fact that it only implemented str->str translations, and even further obscured by the coupling to the .encode() and .decode() convenience APIs. The fact that the type neutrality of the registry itself is currently broken in 3.x is a *regression*, not an improvement. (The convenience APIs, on the other hand, are definitely *not* type neutral, and aren't intended to be.)

2. To assist in producing nice error messages, and to allow restrictions to be enforced on type-specific convenience APIs, the CodecInfo objects should grow additional state as MAL suggests. To avoid redundancy (and inaccurate overspecification), my suggested colour for that particular bikeshed is:

    Character encoding codec:
        .decoded_format = 'text'
        .encoded_format = 'binary'

    Binary transform codec:
        .decoded_format = 'binary'
        .encoded_format = 'binary'

    Text transform codec:
        .decoded_format = 'text'
        .encoded_format = 'text'

I suggest using the fuzzy format labels mainly due to the existence of the buffer API - most codec operations that consume binary data will accept anything that implements the buffer API, so referring specifically to 'bytes' in error messages would be inaccurate.
The convenience APIs can then emit errors like:

    'a'.encode('rot_13')    ==> CodecLookupError: text <-> binary codec expected ('rot_13' is text <-> text)
    'a'.decode('rot_13')    ==> CodecLookupError: text <-> binary codec expected ('rot_13' is text <-> text)
    'a'.transform('bz2')    ==> CodecLookupError: text <-> text codec expected ('bz2' is binary <-> binary)
    'a'.transform('ascii')  ==> CodecLookupError: text <-> text codec expected ('ascii' is text <-> binary)
    b'a'.transform('ascii') ==> CodecLookupError: binary <-> binary codec expected ('ascii' is text <-> binary)

For backwards compatibility with 3.2, codecs that do not specify their formats should be treated as character encoding codecs (i.e. decoded format is 'text', encoded format is 'binary').

Oops, typo in my second error example. The command should be: b'a'.decode('rot_13') (since str objects don't offer a decode() method any more).

> *.encode('rot_13') ==> CodecLookupError

I like the idea of raising a lookup error on .encode/.decode if the codec is not a classic text codec (like ASCII or UTF-8).

> *.transform('ascii') ==> CodecLookupError

Same comment.

> str.transform('bz2') ==> CodecLookupError

A lookup error is surprising here. It may be a TypeError instead. The bz2 codec can be used with .transform, but not on str. So:
- Lookup error if the codec cannot be used with encode/decode or transform/untransform
- Type error if the value type is invalid

(CodecLookupError doesn't exist; do you propose to define a new exception which inherits from LookupError?)

On Thu, Oct 20, 2011 at 8:34 AM, STINNER Victor <report@bugs.python.org> wrote:
>> str.transform('bz2') ==> CodecLookupError
>
> A lookup error is surprising here. It may be a TypeError instead. The bz2
> codec can be used with .transform, but not on str.

No, it's the same concept as the other cases - we found a codec with the requested name, but it's not the kind of codec we wanted in the current context (i.e. str.transform).
It may be that the problem is the user has a str when they expected to have a bytearray or a bytes object, but there's no way for the codec lookup process to know that. > - Lookup error if the codec cannot be used with encode/decode or transform/untransform > - Type error if the value type is invalid There's no way for str.transform to tell the difference between "I asked for the wrong codec" and "I expected to have a bytes object here, not a str object". That's why I think we need to think in terms of format checks rather than type checks. > (CodecLookupError doesn't exist, you propose to define a new exception who inherits from LookupError?) Yeah, and I'd get that to handle the process of creating the nice error messages. Then the various encode, decode and transform methods can just pass the appropriate arguments to 'codecs.lookup' without all having to reimplement the format checking logic. > lookup('rot13') should fail with a lookup error to keep backward compatibility. You can just change the default values to: def lookup(encoding, decoded_format='text', encoded_format='binary'): ... If you patch lookup, what about the following functions? - getencoder() - getdecoder() - getincrementalencoder() - getincrementaldecoder() - getreader() - getwriter() - iterencode() I'm fine with people needing to drop down to the lower level lookup() API if they want the filtering functionality in Python code. For most purposes, constraining the expected codec input and output formats really isn't a major issue - we just need it in the core in order to emit sane error messages when people misuse the convenience APIs based on things that used to work in 2.x (like 'a'.encode('base64')).
At the C level, I'd adjust _PyCodec_Lookup to accept the two extra arguments and add _PyCodec_EncodeText, _PyCodec_DecodeBinary, _PyCodec_TransformText and _PyCodec_TransformBinary to support the convenience APIs (rather than needing the individual objects to know about the details of the codec tagging mechanism). Making new codecs available isn't a backwards compatibility problem - anyone relying on a particular key being absent from an extensible registry is clearly doing the wrong thing. Regarding the particular formats, I'd suggest that hex, base64, quopri, uu, bz2 and zlib all be flagged as binary transforms, but rot13 be implemented as a text transform (Florent's patch has rot13 as another binary transform, but it makes more sense in the text domain - this should just be a matter of adjusting some of the data types in the implementation from bytes to str) Issue 13600 has been marked as a duplicate of this issue. FTR, +1 to the idea of adding encoded_format and decoded_format attributes to CodecInfo, and also to adding {str,bytes}.{transform,untransform} back. What is the status of this issue? Is there still a fan of this issue motivated to write a PEP, a patch or something like that? It's still on my radar to come back and have a look at it. Feedback from the web folks doing Python 3 migrations is that it would have helped them in quite a few cases. I want to get a couple of other open PEPs out of the way first, though (mainly 394 and 409) My current opinion is that this should be a PEP for 3.4, to make sure we flush out all the corner cases and other details correctly. For that matter, with the relevant codecs restored in 3.2, a transform() helper could probably be added to six (or a new project on PyPI) to prototype the approach. Setting as a release blocker for 3.4 - this is important. FWIW, I've been thinking further about this recently and I think implementing this feature as builtin methods is the wrong way to approach it.
Instead, I propose the addition of codecs.encode and codecs.decode methods that are type neutral (leaving any type checks entirely up to the codecs themselves), while the str.encode and bytes.decode methods retain their current strict text model related type restrictions. Also, I now think my previous proposal for nice error messages was massively over-engineered. A much simpler approach is to just replace the status quo: >>> "".encode("bz2_codec") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ncoghlan/devel/py3k/Lib/encodings/bz2_codec.py", line 17, in bz2_encode return (bz2.compress(input), len(input)) File "/home/ncoghlan/devel/py3k/Lib/bz2.py", line 443, in compress return comp.compress(data) + comp.flush() TypeError: 'str' does not support the buffer interface with a better error with more context like: UnicodeEncodeError: encoding='bz2_codec', errors='strict', codec_error="TypeError: 'str' does not support the buffer interface" A similar change would be straightforward on the decoding side. This would be a good use case for __cause__, but the codec error should still be included in the string representation. Many have chimed in on this topic but I thought I would lend my stance--for whatever it is worth. I also believe most of these do not fit the concept of a character codec and some sort of transforms would likely be useful, however most are sort of specialized (e.g., there should probably be a generalized compression library interface a la hashlib): rot13: a (albeit simplistic) text cipher (str to str; though bytes to bytes could be argued since many crypto functions do that) zlib, bz2, etc.
(lzma/xz should also be here): all bytes to bytes compression transforms hex(adecimal), uu, base64, etc.: these more or less fit the description of a character codec as they map between bytes and str, however, I am not sure they are really the same thing as these are basically doing a radix transformation to character symbols and the mapping is not strictly from bytes to a single character and back as a true character codec seems to imply. As evidenced by int(), format() and bytes.fromhex(), float.hex(), float.fromhex(), etc., these are more generalized conversions for serializing strings of bits into a textual representation (possibly for human consumption). I personally feel any <type/class>.hex(), etc. method would be better off as a format() style formatter if they are to exist in such a space at all (i.e., not some more generalized conversion library--which we have, but which since 3.x could probably use being updated and cleaned up). Another rant, because it matters to many of us: IMHO, the solution to restore str.decode and bytes.encode and return TypeError for improper use is probably the most obvious for the average user. -1 I see encoding as the process to go from text to bytes, and decoding the process to go from bytes to text, so (ab)using these terms for other kinds of conversions is not an option IMHO. Anyway I think someone should write a PEP and list the possible options and their pros and cons, and then a decision can be taken on python-dev. FTR in Python 2 you can use decode for bytes->text, text->text, bytes->bytes, and even text->bytes: u'DEADBEEF'.decode('hex') '\xde\xad\xbe\xef' transform/untransform has approval-in-principle, adding encode/decode to the type that doesn't have them has been explicitly (and repeatedly :) rejected.
(I don't know about anybody else, but at this point I have written code that assumes that if an object has an 'encode' method, calling it will get me a bytes, and vice versa with 'decode'...an assumption I know is not "safe", but that I feel is useful duck typing in the contexts in which I used it.) Nick wants a PEP, other people have said a PEP isn't necessary. What is certainly necessary is for someone to pick up the ball and run with it. I am not a native english speaker, but it seems that the common usage of encode/decode is wider than the restricted definition applied for Python 3.3: Some examples: * RFC 4648 specifies "Base16, Base32, and Base64 Data Encodings" * About rot13: "the same code can be used for encoding and decoding" * The Huffman coding is "an entropy encoding algorithm" (used for DEFLATE) * RFC 2616 lists (zlib's) deflate or gzip as "encoding transformations" However, I acknowledge that there are valid reasons to choose a different verb too. While not strictly necessary, a PEP would be certainly useful and will help reaching a consensus. The PEP should provide a summary of the available options (transform/untransforms, reintroducing encode/decode for bytes/str, maybe others), their intended behavior (e.g. is type(x.transform()) == type(x) always true?), and possible issues (e.g. Should some transformations be limited to str or bytes? Should rot13 work with both transform and untransform?). Even if we all agreed on a solution, such document would still be useful IMHO. +1 for someone stepping up to write a PEP on this if they would like to see the situation improved in 3.4. transform/untransform has at least one core developer with an explicit -1 on the proposal at the moment (me). We *definitely* need a generic object->object convenience API in the codecs module (codecs.decode, codecs.encode). I even accept that those two functions could be worthy of elevation to be new builtin functions. 
I'm *far* from convinced that awkwardly named methods that only handle str->object, bytes->object and bytearray->object are a good idea. Should memoryview gain transform/untransform methods as well? transform/untransform as proposed aren't even inverse operations, since they don't swap the valid input and output types (that is, transform is str/bytes/bytearray to arbitrary objects, while untransform is *also* str/bytes/bytearray to arbitrary objects. Inverses can't have a domain/range mismatch like that). Those names are also ambiguous about which one corresponds to "encoding" and which to "decoding". encode() and decode(), whether as functions in the codecs module or as builtins, have no such issue. Personally, the more I think about it, the more I'm in favour of adding encode and decode as builtin functions for 3.4. If you want arbitrary object->object conversions, use the builtins, if you want strict str->bytes or bytes/bytearray->str use the methods. Python 3 has been around long enough now, and Python 3.2 and 3.3 are sufficiently well known that I think we can add the full power builtins without people getting confused. I was visualizing transform/untransform as being restricted to buffertype->bytes and stringtype->string, which at least for binascii-type transforms is all the modules support. After all, you don't get to choose what type of object you get back from encode or decode. A more generalized transformation (encode/decode) utility is also interesting, but how many non-string non-bytes transformations do we actually support? If transform is a method, how do you plan to accept arbitrary buffer supporting types as input? This is why I mentioned memoryview: it doesn't provide decode(), but there's no good reason you should have to copy the data from the view before decoding it. Similarly, you shouldn't have to make an unaltered copy before creating a compressed (or decompressed) copy. 
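The memoryview point can be checked directly against the function-based API. The sketch below is mine, not part of the tracker thread, and deliberately uses codec names that work even without the restored short aliases; it shows that codecs.decode happily consumes any buffer-exporting object, while the method-based API has no equivalent:

```python
import codecs

# codecs.decode accepts any buffer-exporting object, not just bytes,
# so no copy is needed before decoding a memoryview slice.
view = memoryview(b"hello world")[:5]
assert codecs.decode(view, "ascii") == "hello"

# The same holds for a binary <-> binary transform such as zlib:
compressed = codecs.encode(b"hello", "zlib_codec")
assert codecs.decode(memoryview(compressed), "zlib_codec") == b"hello"

# By contrast, memoryview has no decode() method at all, so the
# method-based API cannot express this without an extra copy.
assert not hasattr(view, "decode")
```

The "zlib_codec" spelling is used because it exists in every 3.x release, predating the alias restoration discussed later in this thread.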
With codecs.encode and codecs.decode as functions, supporting memoryview as an input for bytes->str decoding, binary->bytes encoding (e.g. gzip compression) and binary->bytes decoding (e.g. gzip decompression) is trivial. Ditto for array.array and anything else that supports the buffer protocol. With transform/untransform as methods? No such luck. And once you're using functions rather than methods, it's best to define the API as object -> object, and leave any type constraints up to the individual codecs (with the error handling improved to provide more context and a more meaningful exception type, as I described earlier in the thread) I agree with you. transform/untransform are parallel to encode/decode, and I wouldn't expect them to exist on any type that didn't support either encode or decode. They are convenience methods, just as encode/decode are. I am also probably not invested enough in it to write the PEP :) str.decode() and bytes.encode() are not coming back. Any proposal had better take into account the API design rule that the *type* of a method's return value should not depend on the *value* of one of the arguments. (The Python 2 design failed this test, and that's why we changed it.) It is however fine to let the return type depend on one of the argument *types*. So e.g. bytes.transform(enc) -> bytes and str.transform(enc) -> str are fine. And so are e.g. transform(bytes, enc) -> bytes and transform(str, enc) -> str. But a transform() taking bytes that can return either str or bytes depending on the encoding name would be a problem. Personally I don't think transformations are so important or ubiquitous so as to deserve being made new bytes/str methods. I'd be happy with a convenience function, for example transform(input, codecname), that would have to be imported from somewhere (maybe the codecs module). My guess is that in almost all cases where people are demanding to say e.g. 
x = y.transform('rot13') the codec name is a fixed literal, and they are really after minimizing the number of imports. Personally, disregarding the extra import line, I think x = rot13.transform(y) looks better though. Such custom APIs also give the API designer (of the transformation) more freedom to take additional optional parameters affecting the transformation, offer a set of variants, or a richer API. FWIW, I'm not interested in seeing this added anymore. consensus here appears to be "bad idea... don't do this." No, transform/untransform as methods are a bad idea, but these *codecs* should definitely come back. The minimal change needed for that to be feasible is to give errors raised during encoding and decoding more context information (at least the codec name and error mode, and switching to the right kind of error). MAL also stated on python-dev that codecs.encode and codecs.decode already exist, so it should just be a matter of documenting them properly. okay, but i don't personally find any of these to be good ideas as "codecs" given they don't have anything to do with translating between bytes<->unicode. The codecs module is generic, text encodings are just the most common use case (hence the associated method API). I don't see any point in merely bringing the codecs back, without any convenience API to use them. If I need to do import codecs result = codecs.getencoder("base64").encode(data) I don't think people would actually prefer this over import base64 result = base64.encodebytes(data) It's (IMO) only the convenience method (.encode) that made people love these codecs. IMHO it's also a documentation problem. Once people figure out that they can't use encode/decode anymore, it's not immediately clear what they should do instead.
By reading the codecs docs[0] it's not obvious that it can be done with codecs.getencoder("...").encode/decode, so people waste time finding a solution, get annoyed, and blame Python 3 because it removed a simple way to use these codecs without making clear what should be used instead. FWIW I don't care about having to do an extra import, but indeed something simpler than codecs.getencoder("...").encode/decode would be nice. [0]: It turns out MAL added the convenience API I'm looking for back in 2004, it just didn't get documented, and is hidden behind the "from _codecs import *" call in the codecs.py source code: So, all the way from 2.4 to 2.7 you can write: from codecs import encode result = encode(data, "base64") It works in 3.x as well, you just need to add the "_codec" to the end to account for the missing aliases: >>> encode(b"example", "base64_codec") b'ZXhhbXBsZQ==\n' >>> decode(b"ZXhhbXBsZQ==\n", "base64_codec") b'example' Note that the convenience functions omit the extra checks that are part of the methods (although I admit the specific error here is rather quirky): >>> I'm going to create some additional issues, so this one can return to just being about restoring the missing aliases. Just. > It works in 3.x as well, you just need to add the "_codec" to the end > to account for the missing aliases: FTR this is because of ff1261a14573 (see #10807). Issue 17827 covers adding documentation for codecs.encode and codecs.decode Issue 17828 covers adding exception handling improvements for all encoding and decoding operations. Also adding 17839 as a dependency, since part of the reason the base64 errors in particular are so cryptic is because the base64 module doesn't accept arbitrary PEP 3118 compliant objects as input. I also created issue 17841 to cover the fact that the 3.3 documentation incorrectly states that these aliases still exist, even though they were removed before 3.2 was released.
With issue 17839 fixed, the error from invoking the base64 codec through the method API is now substantially more sensible: >>> b"ZXhhbXBsZQ==\n".decode("base64_codec") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: decoder did not return a str object (type=bytes) I just wanted to note something I realised in chatting to Armin Ronacher recently: in both Python 2.x and 3.x, the encode/decode method APIs are constrained by the text model, it's just that in 2.x that model was effectively basestring<->basestring, and thus still covered every codec in the standard library. This greatly limited the use cases for the codecs.encode/decode convenience functions, which is why the fact they were undocumented went unnoticed. In 3.x, the changed text model meant the method API become limited to the Unicode codecs, making the function based API more important. For anyone interested, I have a patch up on issue 17828 that produces the following output for various codec usage errors: >>> import codecs >>> codecs.encode(b"hello", "bz2_codec").decode("bz2_codec") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'bz2_codec' decoder returned 'bytes' instead of 'str'; use codecs.decode to decode to arbitrary types >>> "hello".encode("bz2_codec") TypeError: 'str' does not support the buffer interface The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: invalid input type for 'bz2_codec' codec (TypeError: 'str' does not support the buffer interface) >>> "hello".encode("rot_13") Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'rot_13' encoder returned 'str' instead of 'bytes'; use codecs.encode to encode to arbitrary types Providing the 2to3 fixers in issue 17823 now depends on this issue rather than the other way around (since not having to translate the names simplifies the fixer a bit). 
Issue 17823 is now closed, but not because it has been implemented. It turns out that the data driven nature of the incompatibility means it isn't really amenable to being detected and fixed automatically via 2to3. Issue 19543 is a replacement proposal for the introduction of some additional codec related Py3k warnings in Python 2.7.7. Attached patch restores the aliases for the binary and text transforms, adds a test to ensure they exist and restores the "Aliases" column to the relevant tables in the documentation. It also updates the relevant section in the What's New document. I also tweaked the wording in the docs to use the phrases "binary transform" and "text transform" for the affected tables and version added/changed notices. Given the discussions on python-dev, the main condition that needs to be met before I commit this is for Victor to change his current -1 to a -0 or higher. Victor is still -1, so to Python 3.5 it goes. The 3.4 portion of issue 19619 has been addressed, so removing it as a dependency again. With issue 19619 resolved for Python 3.4 (the issue itself remains open awaiting a backport to 3.3), Victor has softened his stance on this topic and given the go ahead to restore the codec aliases: I'll be committing this shortly, after adjusting the patch to account for the issue 19619 changes to the tests and What's New. New changeset 5e960d2c2156 by Nick Coghlan in branch 'default': Close #7475: Restore binary & text transform codecs Note that I still plan to do a documentation-only PEP for 3.4, proposing some adjustments to the way the codecs module is documented, making binary and text transform defined terms in the glossary, etc. I'll probably aim for beta 2 for that. Docstrings for new codecs mention bytes.transform() and bytes.untransform() which are nonexistent. New changeset d7950e916f20 by R David Murray in branch '3.3': #7475: Remove references to '.transform' from transform codec docstrings.
New changeset 83d54ab5c696 by R David Murray in branch 'default': Merge #7475: Remove references to '.transform' from transform codec docstrings.
https://bugs.python.org/issue7475
The goal is to set up a trigger box: when the player enters it, a game-end interface is displayed and the game exits.

1. New canvas
Create a new Canvas in the Hierarchy and name it canvas. Double-click the canvas you just created; its properties can be adjusted if necessary. By default, this canvas will cover the entire screen. When editing the UI, you should turn off scene effects and switch to the 2D view. (as shown in the figure below)

2. New background
Select the canvas you just created and create a new UI Image under it, named background. This component is used to set the background of the UI. Select the background you just created and adjust its size so that it spreads over the whole canvas. You can also change its background color through the Color property of its Image component. (as shown in the figure below)

3. Add a picture to the background
Right-click the background you just created, then create a new UI Image as its child, named image. Select the new component to add a picture to it; its position can also be adjusted. (as shown in the figure below) The effect is as follows.

4. Add a Canvas Group component to the background
The UI must not be visible at the beginning, so it should be made transparent. Select the background and add a Canvas Group component to it, then set the Alpha property of this component to 0. This way, the UI is transparent at the start; you can change its alpha later to make it appear when needed.

5. End trigger and display of the UI
Create an empty GameObject, add a Box Collider component to it, and enable its Is Trigger property. Then add a script to this empty GameObject, named GameEnding.
The code is as follows:

using UnityEngine;

public class GameEnding : MonoBehaviour
{
    bool PlayerAtExit = false;
    public GameObject player;

    // UI to fade in
    public CanvasGroup backgroundImageCanvasGroup;
    // Time to display the UI before quitting
    public float disableImageDuration = 4.1f;
    // Elapsed time since the trigger fired
    float timer;
    // Time taken to change transparency
    public float fadeDuration = 1.0f;

    // Trigger event; the collider that entered is passed in
    private void OnTriggerEnter(Collider other)
    {
        // If the player enters the trigger
        if (other.gameObject == player)
        {
            PlayerAtExit = true;
        }
    }

    // Update is called once per frame
    void Update()
    {
        if (PlayerAtExit)
        {
            EndLevel();
        }
    }

    // End the level
    void EndLevel()
    {
        timer += Time.deltaTime;
        // Linear fade from 0 to 1 over fadeDuration seconds
        backgroundImageCanvasGroup.alpha = timer / fadeDuration;
        if (timer > fadeDuration + disableImageDuration)
        {
#if UNITY_EDITOR
            // Stop play mode when running in the editor
            UnityEditor.EditorApplication.isPlaying = false;
#else
            // Quit the application (effective after packaging)
            Application.Quit();
#endif
        }
    }
}

The UnityEditor call is wrapped in #if UNITY_EDITOR because the UnityEditor assembly does not exist in a built player; without the guard, the script will not compile in a build. Pass the role and background into the script. Running the game, you can see that the UI is triggered successfully after the character walks into the trigger box.
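The heart of the EndLevel routine is a linear fade: alpha rises from 0 to 1 over fadeDuration seconds and then saturates (Unity clamps CanvasGroup alpha to the 0..1 range, so the overshoot after fadeDuration is harmless, but clamping explicitly makes the intent clear). Sketched outside Unity, in plain Java for illustration:

```java
public class FadeRamp {
    // Linear fade ramp: alpha climbs from 0 to 1 over fadeDuration
    // seconds, then holds at 1. Clamping makes the saturation explicit.
    static float fadeAlpha(float timer, float fadeDuration) {
        return Math.min(1f, Math.max(0f, timer / fadeDuration));
    }

    public static void main(String[] args) {
        System.out.println(fadeAlpha(0.5f, 1.0f)); // halfway through: 0.5
        System.out.println(fadeAlpha(2.0f, 1.0f)); // past the end: 1.0
    }
}
```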
https://developpaper.com/unity-create-a-simple-ui-interface-in-unity/
I saw on boingboing a few days ago that there's now a country code reserved for Internet phones. I had a little difficulty understanding what that meant, but think I've got it now. Essentially, this is a way to bring VoIP phones into the standard phone number namespace. It is in this sense a dual of ENUM, which is a gateway to access the phone number namespace through DNS. From what I can see, this new country code is being run by FWD (Free World Dialup). You register for a free account using a simple, straightforward Web form, and you get a number. Mine is 18408. Then, you point your SIP phone's config to the FWD server, register, and then when people query the FWD server for your number, they find your phone. For example, to reach my phone, dial sip:18408@fwd.pulver.com (try it; I'll try to keep a phone app running). This number also now exists in the POTS number namespace, but your phone company won't route to it yet because they're evil. As soon as public pressure overcomes their evilness, you'll be able to reach my VoIP phone simply by dialing 011 +87810 18408 from your US phone. I think this is a huge step. To the extent that people can call your phone, it makes it practical to go VoIP only. Of course, you can do that today with a service such as Vonage, but that costs $40/month, and this is free. From what I can gather, FWD is going to make a little money off "long distance charges" from phone companies that peer with them. I like this idea - it would seem to provide a revenue stream that would actively promote the use of VoIP phones. You can bet that the telcos are going to drag their heels as much as possible. I think there's one more piece to this, which is phone cards. Even if your scumbag incumbent telco won't peer with FWD, you'll probably be able to shell out $20 for a phone card with a company that will. There's no reason why these companies can't provide service for a penny or two a minute. 
The standard phonecard service, after all, is basically two telco to Internet gateways joined back-to-back. Here, the caller just buys one of them. So this basically solves the problem of being callable by my Mom. All she has to dial is 1-800-call-crd, then a (typically 10 digit) pin, then 011 87810 18404. Only 34 digits, but at least she'll be able to reach me.

Phones

PCs running phone software don't make good phones. A dedicated piece of hardware is better. Even aside from the general flakiness of sound cards and drivers, phones are a lot better at ringing and being always on. You can buy a phone like a Cisco ATA 186 for about $150 from eBay, but I think the price is going to come down to $50 or so once D-Link or Linksys gets into the game. Basically, it's the same gear as a phone with a built-in digital answering machine (AT&T brand $30 at Best Buy), plus a 10/100 Ethernet interface. In any case, I tried out kphone and gnome-meeting again, and was successfully able to complete calls with both. I had trouble compiling GM 0.96, so no doubt I'll give it another go when I upgrade to RH 8.1. I'm less impressed with kphone. I could receive audio ok but not transmit, so I took a look at the code to see what was wrong. The actual audio interface code is buggy and unsophisticated. One of the most basic problems is their use of usleep(0) to wait for the next timer tick for basic scheduling. This, of course, is hideously dependent on the details of the underlying kernel scheduler, and in any case, gives you very poor temporal resolution on PC hardware. Even worse, if 5 ticks go by without an audio packet being ready, the code reads a packet and drops it on the floor, for what reason I don't know. There's also a problem with the kernel audio drivers I'm using (alsa 0.90beta12 with Linux 2.4.19).
Even though kphone does a SNDCTL_DSP_SETFRAGMENT ioctl to set the fragment size to 128 bytes, the actual value, as returned from SNDCTL_DSP_GETISPACE, is 2048 bytes, which is way too big (it's 125ms). Combined with the packet-dropping logic above, the net result was no audio. People should not have to worry about this. I think it makes sense to wait until you can get a Chinese-made phone with Speex in it at commodity prices. Hopefully, this will happen soon.

A good homepage

I came across Miles Nordin's web site last night after following a link from the Java discussion on our front page. I found myself immediately absorbed. Miles writes well, is well read, and has a fabulously critical attitude. Many of the other pages, especially those having to do with wireless networking, are worth reading.

Word

cinamod: I basically agree with everything you say. If Abiword or OO are good enough, and the code is clean enough to be split out as a batch renderer, then there's no need for a separate codebase. I've had a look at the Word document format, and it's not quite so bad as I was expecting. The documentation is atrocious, but the format itself seems fairly reasonable. Of course, I'm sure that if I got into the details I'd find lots of corner cases and bad hacks. The main thing not to like is the obvious lack of design for forwards and backwards compatibility. No doubt, this is economically motivated - gotta keep that upgrade treadmill going. On the plus side, the format was clearly designed with an implementation in mind (as opposed to the W3C process, for which implementation is a distasteful afterthought). It's fairly easy to see how to process a Word file very efficiently, in both CPU time and memory usage. For example, resolving stylesheets is a straightforward linear chain, as opposed to all the nutjob nondeterministic stack automaton stuff in CSS, or the mini-Lisp in DSSSL/XSLT.
I'm tempted to write here about Word's plex/fkp/character run architecture as opposed to the more generic tree approach we tend to see these days, but probably most people would be bored with that level of detail. The top-level point is that algorithms for manipulating Word's structures on-disk are straightforward, while manipulating trees efficiently on-disk seems to require a lot of cleverness. Of course, with RAM so cheap these days, it's reasonable to ask whether memory-constrained processing of files is important at all. The Word format is too tightly bound to a specific implementation, and it certainly shows in what documentation Microsoft has produced. They often seem to confuse the interface, which in this case is the on-disk representation of the document, with the implementation details. In any case, I'm glad I've learned more about the file format. Its popularity means we have to deal with it somehow. Further, as PDF continues to become document-like and less of a pure graphical representation, it's important to understand the influence that the Word design has on its evolution. I've commented before on the need for a good, open, editable document format. The lack of adequate documentation and Microsoft's proprietary lock on change control make the Word format unappealing. I've certainly thought about designing my own document format, but it's not easy to make a word-processing format much better than Word, or a graphics-oriented format much better than PDF. So that's probably a windmill I'd be happiest not tilting at.
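Returning to the kphone fragment-size numbers quoted earlier: buffer latency is just the fragment size divided by the byte rate. Assuming the capture format was 8 kHz, 16-bit mono (a typical telephony setting; the diary doesn't say), a 2048-byte fragment is 128 ms, which matches the roughly 125 ms figure, while the requested 128-byte fragment would be only 8 ms. A quick check in Java:

```java
public class FragmentLatency {
    // Buffer latency in milliseconds for linear PCM audio:
    // bytes divided by the byte rate (sampleRate * bytesPerSample * channels).
    static double latencyMs(int fragmentBytes, int sampleRate,
                            int bytesPerSample, int channels) {
        double bytesPerSecond = (double) sampleRate * bytesPerSample * channels;
        return fragmentBytes / bytesPerSecond * 1000.0;
    }

    public static void main(String[] args) {
        // 2048-byte fragment at 8 kHz, 16-bit mono: 128 ms
        System.out.println(latencyMs(2048, 8000, 2, 1));
        // The 128-byte fragment kphone asked for: 8 ms
        System.out.println(latencyMs(128, 8000, 2, 1));
    }
}
```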
http://www.advogato.org/person/raph/diary.html?start=317
Visual Studio 2008 and C# Express 2008

This chapter is excerpted from Learning C# 3.0: Master the fundamentals of C# 3.0 by Jesse Liberty, Brian MacDonald, published by O'Reilly Media.

In Chapter 1, C# and .NET Programming, you learned that you can create your C# applications using Notepad. In this chapter, you'll learn why you never will. Microsoft developed Visual Studio 2008 to facilitate the creation of Windows and web applications. You will find that this integrated development environment (IDE) is a very powerful tool that will greatly simplify your work.

Visual Studio 2008 offers many advantages to the .NET developer, among them:

- A modern interface, using a tabbed document metaphor for code and layout screens, and dockable toolbars and information windows.
- Convenient access to multiple design and code windows (this will make more sense when you are creating web applications, as shown in Chapter 20, ADO.NET and Relational Databases).
- WYSIWYG (What You See Is What You Get) visual design of Windows and Web Forms.
- Code completion, which allows you to enter code with fewer errors and less typing.
- IntelliSense, which displays tips for every method, providing the return type and the types of all the parameters.
- Color-coded keywords.
- An HTML editor, which provides both Design and HTML views that update each other in real time.
- A Solution Explorer, which displays all the files that make up your solution in outline form.
- An integrated debugger, which allows you to step through code, observe program runtime behavior, and set breakpoints, even across multiple languages and multiple processes.
- Customization capability, which allows you to set user preferences for IDE appearance and behavior.
- Integrated support for source control software.
- A built-in task list.
- The ability to modify your controls' properties, either declaratively or through the Properties window.
- The ability to integrate custom controls that you create or purchase from a third party.
- Rapid and easy deployment, including the ability to copy an entire website development project from one machine to another.
- The ability to integrate third-party tools into Visual Studio.
- The ability to program extensions to Visual Studio.
- The ability to rename methods, properties, and so forth and have them renamed automatically throughout the program.
- A Server Explorer, which allows you to log on to servers that you have network access to, access the data and services on those servers, drag-and-drop data sources onto controls, and perform a variety of other chores.
- Integrated build and compile support.
- The ability to drag-and-drop controls onto your web page, either in Design mode or in HTML mode.

Visual Studio 2008 and Visual C# 2008 Express are highly useful tools that can save you hours of repetitive tasks. They are also large and complex programs, so it is impossible for us to explore every nook and cranny in this chapter. Instead, we'll take you on a quick tour of the interface and lay the foundation for understanding and using C# Express, which is our IDE of choice for this book, as well as point out some of the nastier traps you might run into along the way.

Before You Read Further

This chapter has a lot of information in it, and you won't need all of it all at once. In fact, much of the information will not even apply to console applications, but will be valuable when you are ready to create Windows or web applications. Many readers like to skim this chapter the first time through, and then come back for the details later. But it is your book, you paid for it (you did pay for it, didn't you?), and so you are free to read the entire chapter, take notes as you go, skip it entirely, or otherwise use it to your best advantage. Whether or not you read this chapter, we do strongly recommend that you spend time (lots and lots of time) exploring C# Express in detail.
You will forever be surprised at how much is in there and how much you can set it up to behave as you want; it is your principal development tool. Ignoring C# Express would be like a race car driver never looking under the hood. In time, you not only want to know how to change the oil, but also want to understand how the valves work and why the linkage sticks.

The Start Page is the first thing you see when you open C# Express (unless you configure it otherwise). From here, you can create new projects or open a project you worked on in a previous session. You can also find out what is new in .NET, access .NET newsgroups and websites, search for help online, download useful code, or adjust C# Express to your personal requirements. Figure 2.1, "The C# Express Start Page is the first thing you'll see when you start C# Express. From here, there are many different links to get you started." shows a typical Start Page, which you already saw briefly in Chapter 1, C# and .NET Programming. The Start Page has a window on the left that includes a list of your recent projects; you can click on any one to open it. Below those links, you'll find the Open link, which lets you open any existing project on your computer. Under that is the Create link, which lets you create a new project. The Getting Started box on the lower left provides links to features and helpful sites. Most of the real estate on the Start Page is taken up by the large box in the middle, which contains useful articles from MSDN online, if you have an active Internet connection.

A C# program is built from source files, which are text files containing the code you write. Source code files are named with the .cs extension. The Program.cs file you created in Chapter 1, C# and .NET Programming is an example. A typical C# Express 2008 application can have a number of other files (such as assembly information files, references, icons, data connections, and more).
C# Express 2008 organizes these files into a container called a project.

Figure 2.1. The C# Express Start Page is the first thing you'll see when you start C# Express. From here, there are many different links to get you started.

C# Express 2008 provides two types of containers for your source code files, folders, and related material: the project and the solution. A project is a set of files that work together to create an executable program (.exe) or a dynamic link library (.dll). Large, complex projects may contain multiple .dll files. A solution is a set of related projects, although it may also have just one project, which is what you'll do most often in this book. Each time you create a new project, C# Express 2008 either adds it to an existing solution or creates a new solution. Solutions are defined within a file named for the solution, and they have the extension .sln. The .sln file contains metadata, which is basically information about the data. The metadata describes the projects that compose the solution and information about building the solution. You won't have to worry about these for the most part. There are a number of ways to open an existing solution. The simplest way is to select Open Project from the Start menu (which opens a project and its enclosing solution). Alternatively, you can open a solution in C# Express 2008 just by double-clicking the .sln file in Windows Explorer. Typically, the build process results in the contents of a project being compiled into an executable (.exe) file or a dynamic link library (.dll) file. This book focuses on creating executable files.
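To make the .sln metadata concrete, here is a minimal sketch of roughly what a Visual Studio 2008-era solution file containing a single C# project looks like (the project name, path, and the second GUID are invented for illustration; the first GUID is the well-known C# project-type identifier):

```text
Microsoft Visual Studio Solution File, Format Version 10.00
# Visual C# Express 2008
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "HelloWorld", "HelloWorld\HelloWorld.csproj", "{11111111-2222-3333-4444-555555555555}"
EndProject
Global
	GlobalSection(SolutionConfigurationPlatforms) = preSolution
		Debug|Any CPU = Debug|Any CPU
		Release|Any CPU = Release|Any CPU
	EndGlobalSection
EndGlobal
```

As the text says, this is just bookkeeping: which projects belong to the solution and how to build them, which is why you rarely need to edit it by hand.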
Project Types

You can create many types of projects in the full version of Visual Studio 2008, including:

- Console Application projects
- Windows Application projects
- Windows Service projects
- WPF Application projects
- WPF Browser Application projects
- Windows Control Library projects
- Web Control Library projects
- Class Library projects
- Smart device templates
- Crystal Reports Windows Application projects
- SQL Server projects
- Word and Excel Document and Template projects

Note that web applications are missing from this list. Web applications do not use projects, just solutions. Visual C# Express, being a "light" version of the Visual Studio product, can't produce nearly as many types of projects. C# Express is limited to console applications, Windows Forms applications, WPF applications, WPF browser applications, and class libraries. A typical .NET application comprises many items: source files (such as .cs files), assemblies (such as .exe and .dll files) and assembly information files, data sources (such as .mdb files), references, and icons, as well as miscellaneous other files and folders. Visual Studio 2008 makes all of this easier for you by organizing these items into a folder that represents the project. The project folder is housed in a solution. When you create a new project, Visual Studio 2008 automatically creates the solution.

Templates

When you create a new project with C# Express, you'll see the New Project dialog box, shown in Figure 2.2, "The New Project dialog is where every new C# application starts."

Figure 2.2. The New Project dialog is where every new C# application starts.

In the New Project dialog, if you're using Visual C# Express, you'll see only the templates you can choose from for your project. If you're using Visual Studio 2008, this dialog box will look different, with two panes. You select the project type (in the lefthand pane) and the template (in the right). There are a variety of templates for each project type.
A template is a file that C# Express 2008 uses to set up the initial state of your project. If you're using Visual Studio 2008, for the examples in this book you'll always choose Visual C# for the project type, and in most cases you'll choose Console Application as the template. Specify the name of the directory in which your project will be stored in the Location box and name your project in the Name box. C# Express doesn't give you the option of choosing the file location; the files are stored in your local My Documents folder, in a subfolder called Visual Studio 2008.

The C# Express IDE is centered on its editor. An editor is much like a word processor, except that it produces simple text (that is, text with no formatting, such as bold and italics). All source code files are simple text files. The color that you saw applied to some of the text in the Hello World project in Chapter 1, C# and .NET Programming isn't just formatting; it's a form of highlighting that Visual Studio applies to help you differentiate between keywords, comments, and other kinds of code elements. The C# Express IDE also provides support for building graphical user interfaces (GUIs), which are integral to Windows and web projects. The following pages introduce some of the key features of the IDE.

The IDE is a Multiple Document Interface (MDI) application, much like other Windows applications you may be used to, such as Word and Excel. There is a main window, and within the main window are a number of smaller windows. The central window is the text editing window. Figure 2.3, "The IDE is where you'll be spending most of your time as a C# developer. Notice that the interface contains multiple windows." shows the basic layout.

Figure 2.3. The IDE is where you'll be spending most of your time as a C# developer. Notice that the interface contains multiple windows.

Many of the IDE's windows can be set to Auto Hide. Such a window will disappear, indicated only by a tab, when the cursor is not over the window. It will reappear when the cursor is over the tab.
A pushpin in the upper-right corner of the window will be pointing down when Auto Hide is turned off and pointing sideways when it is turned on. You can also drag windows to new positions, as shown in Figure 2.4, "You can reposition all of the windows in the IDE wherever you like. The arrow icons are a help when you're positioning windows." As you move a window over each positioning indicator, a shadow appears to show you where the window would go if you release the mouse. Notice in the center of the editing window that there is a cluster of five indicators. If you choose the center square, the window will be tabbed with the current window. To put the Properties window back where it belongs, hover over the Solution Explorer window; a five-part indicator will appear, and you can select the lower indicator to place the Properties window below the tabbed set of the Solution Explorer. You can also double-click on either the title bar or the tab to dock and undock the window. Double-clicking on the title while docked undocks the entire group. Double-clicking on the tab undocks just the one window, leaving the rest of the group docked.

Figure 2.4. You can reposition all of the windows in the IDE wherever you like. The arrow icons are a help when you're positioning windows.

You can run your application at any time by selecting either Start or Start Without Debugging from the Debug menu, or you can accomplish the same results by pressing either F5 or Ctrl-F5, respectively. You can also start the program by clicking the Start icon on the Standard toolbar. For console applications, as we mentioned in Chapter 1, C# and .NET Programming, the advantage of running the program with Ctrl-F5 is that C# Express 2008 will open your application in a console window, display its results, and then add a line prompting you to press a key when you are ready. This keeps the window open until you've seen the results and pressed a key, at which point the window will close.
If you choose Start (with debugging) for a console application that doesn't require any user input (as Hello World doesn't), the console window may appear and disappear too quickly for you to see what it did. You can build the program (that is, generate the .exe and .dll files) by selecting a command under the Build menu. You have the option of building the entire solution or only the currently selected project.

The menus provide access to many of the commands and capabilities of C# Express 2008. The more commonly used menu commands are duplicated with toolbar buttons for ease of use. The menus and toolbars are context-sensitive, meaning that the available selection depends on what part of the IDE is currently selected, and what activities are expected or allowed. For example, if the current active window is a code-editing window for a console application such as Hello World, the top-level menu commands are File, Edit, View, Refactor, Project, Build, Debug, Data, Tools, Test (only in the full Visual Studio), Window, and Help. Many of the menu items have keyboard shortcuts, listed adjacent to the menu item itself. These are composed of one or more keys (referred to as a chord), pressed simultaneously. Shortcut keys can be a huge productivity boost because you can use them to perform common tasks quickly, without removing your hands from the keyboard, but it's really a matter of personal preference. The following sections describe some of the more important menu items and their submenus, focusing on those aspects that are interesting and different from common Windows commands.

The File Menu

The File menu provides access to a number of file-, project-, and solution-related commands. Many of these commands are context-sensitive. As in most Windows applications, the New menu item creates new items to work on, the Open item opens existing items, and the Save item saves your work.
One item you may not have seen before is Save All, which will save all the open files in an open solution. This can be very useful when you're working with a large solution.

The Edit Menu

The Edit menu contains the text editing and searching commands that one would expect, but also includes commands useful in editing code. The most useful are discussed next.

The Clipboard Ring

The Clipboard Ring is like copy-and-paste on steroids. You can copy a number of different selections to the Windows clipboard, using the Edit → Cut (Ctrl-X) or Edit → Copy (Ctrl-C) command. Then use Ctrl-Shift-V to cycle through all the selections, and paste the correct one when it comes around. This submenu item is context-sensitive and is visible only when editing a code window.

Find and Replace

C# Express 2008 includes a number of advanced Find and Replace options that you'll use frequently. The most common ones are discussed in this section.

Quick Find and Quick Replace. These are just slightly jazzed names for slightly jazzed versions of the typical Find and Replace. You can access Quick Find with Ctrl-F and Quick Replace with Ctrl-H. Both commands bring up essentially the same dialog boxes, switchable by a tab at the top of the dialog box, as shown in Figure 2.5, "The Find and Replace features work mostly like they do in any Windows application, although in C# Express, you have the option of searching single files or the whole solution, and other advanced features such as regular expressions." The search string defaults to the text currently selected in the code window, or, if nothing is selected, to the text immediately after the current cursor location. The "Look in" drop-down offers a choice of Current Document, All Open Documents, Current Project, Entire Solution, or Current Method. You can expand or collapse the search options by clicking on the plus/minus button next to the "Find options" item.
By default, "Search hidden text" is checked, which allows the search to include code sections currently collapsed in the code window. The Use checkbox allows the use of either regular expressions or wildcards.

Figure 2.5. The Find and Replace features work mostly like they do in any Windows application, although in C# Express, you have the option of searching single files or the whole solution, and other advanced features such as regular expressions.

If the Use checkbox is checked, the Expression Builder button to the right of the "Find what" text box becomes enabled, providing a very handy way to insert valid regular expression or wildcard characters. Once you've entered a search string in the "Find what" text box, the Find Next button becomes enabled. In Quick Find mode, there is also a Bookmark All button, which finds all occurrences of the search string and places a bookmark (described shortly) next to the code. In Quick Replace mode, there is also a "Replace with" text box, and buttons for replacing either a single occurrence or all occurrences of the search string.

Find in Files. Find in Files (Ctrl-Shift-F) is a very powerful search utility that finds text strings anywhere in a directory or in subdirectories (subfolders). It presents the dialog box shown in Figure 2.6, "The Find and Replace in Files feature lets you search in files other than the one you're working with right now." Checkboxes present several self-explanatory options, including the ability to search using either wildcards or regular expressions. Depending on how many files you have in your solution, you may want to use this kind of search as your default first choice.

Find Symbol. Clicking the Find Symbol command (Alt-F12) will bring up the Find Symbol dialog box, which allows you to search for symbols (such as namespaces, classes, and interfaces) and their members (such as properties, methods, events, and variables).
It also allows you to search in external components for which the source code is not available.

Figure 2.6. The Find and Replace in Files feature lets you search in files other than the one you're working with right now.

The search results will be displayed in a window labeled Find Symbol Results. From there, you can move to each location in the code by double-clicking on each result.

Go To

The Go To command brings up the Go To Line dialog box, which allows you to enter a line number and immediately go to that line. It is context-sensitive and is visible only when editing a text window.

Insert File As Text

The Insert File As Text command allows you to insert the contents of any file into your source code, as though you had typed it in. It is context-sensitive and is visible only when editing a text window. You'll see a standard file-browsing dialog box to search for the file you want to insert. The default file extension will correspond to the project language, but you can search for any file with any extension.

Advanced

The Advanced command is context-sensitive and is visible only when editing a code window. It has many submenu items. These include commands for:

- Viewing whitespace (making tabs and space characters visible on the screen)
- Toggling word wrap
- Commenting and uncommenting blocks of text
- Increasing and decreasing line indenting
- Incremental searching (see "Incremental search")

The following three options are available only in Visual Studio, not C# Express:

- Creating or removing tabs in a selection (converting spaces to tabs and vice versa)
- Forcing selected text to uppercase or lowercase
- Deleting horizontal whitespace

Incremental search

Incremental search allows you to search an editing window by entering the search string character by character. As you enter each character the cursor moves to the first occurrence of matching text. To use incremental search in a window, select the command on the Advanced submenu, or press Ctrl-I.
The cursor icon will change to a pair of binoculars with an arrow indicating the direction of the search. Begin typing the text string to search for. The case sensitivity of an incremental search will come from the previous Find, Replace, Find in Files, or Replace in Files search (described earlier). The search will proceed downward and from left to right from the current location. To search backward, use Ctrl-Shift-I. The key combinations listed in Table 2.1, "Incremental searching" apply to incremental searching.

Table 2.1. Incremental searching

Bookmarks

Bookmarks are useful for marking spots in your code and easily navigating from marked spot to marked spot. There are several context-sensitive commands on the Bookmarks submenu (listed in Table 2.2, "Bookmark commands"). Note that, unless you add the item to the task list, bookmarks are lost when you close the file, although they are saved when you close the solution (as long as the file was still open).

Table 2.2. Bookmark commands

This menu item appears only when the current window is a code window.

Outlining

C# Express 2008 supports outlining, which lets you collapse and expand sections of code. Several commands are available to facilitate outlining (shown in Table 2.3, "Outlining commands").

Table 2.3. Outlining commands

You can set the default behavior of outlining using the Tools → Options menu item. Go to Text Editor, and then the specific language for which you want to set the options.

IntelliSense

Microsoft IntelliSense technology makes your life much easier. It has real-time, context-sensitive help available, which appears right under your cursor. Code completion automatically completes your thoughts for you, drastically reducing your typing (and therefore, your typing errors). Drop-down lists provide all methods and properties possible in the current context, available at a keystroke or mouse click. You can configure the default IntelliSense features by going to Tools → Options and then the language-specific pages under Text Editor.
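Incremental search, described above, is easy to model: after each keystroke the editor simply searches for the text typed so far and jumps to the first match. A toy version in Java (searching forward from the start of the text):

```java
public class IncrementalSearch {
    // Returns the index of the first occurrence of the query typed so far,
    // or -1 once no match exists; this mirrors how the editor's cursor
    // jumps (or the search fails) as each character is entered.
    static int find(String text, String typedSoFar) {
        return text.indexOf(typedSoFar);
    }

    public static void main(String[] args) {
        String text = "public static void Main()";
        System.out.println(find(text, "s"));   // 7
        System.out.println(find(text, "st"));  // 7
        System.out.println(find(text, "sta")); // 7
        System.out.println(find(text, "stx")); // -1: no match
    }
}
```

A real editor also remembers the last match position so it can search backward (Ctrl-Shift-I) or continue from the cursor, but the narrowing behavior is the same.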
Most of the IntelliSense features appear as you type inside a code window or allow the mouse to hover over a portion of the code. In addition, the Edit → IntelliSense menu item offers numerous commands, the most important of which are shown in Table 2.4, "IntelliSense commands".

Table 2.4. IntelliSense commands

The member list presents itself when you type a dot operator after a type or object name; each entry's icon indicates the kind of member and whether the member is public. Two of the subcommands under the IntelliSense menu item, Insert Snippet and Surround With, tap into a great feature to reduce typing and minimize errors: code snippets. A code snippet is a chunk of code that replaces an alias. A short alias is replaced with a much longer code snippet. For example, the alias switch would be replaced with a switch statement skeleton (switch (switch_on) { default: }), with the expression switch_on highlighted in yellow and the cursor in place, ready to type in your own expression. In fact, all the editable fields will be highlighted, and you can use the Tab key to navigate through them, or Shift-Tab to go backward. Any changes made to the editable field are immediately propagated to all the instances of that field in the code snippet. Press Enter or Esc to end the field editing and return to normal editing. To do a straight alias replacement, either select Insert Snippet from the menu, or more easily, press Ctrl-K, Ctrl-X. Or, just type an alias in the code window and an IntelliSense menu will pop up with a list of aliases, with the current one highlighted. Press Tab to insert the snippet. Alternatively, a code snippet can surround highlighted lines of code, say, with a for construct. To surround lines of code with a code snippet construct, highlight the code and then either select Surround With from the menu or press Ctrl-K, Ctrl-S.

The View Menu

The View menu is a context-sensitive menu that provides access to the myriad windows available in the C# Express 2008 IDE. You will probably keep many of these windows open all the time; others you will use rarely, if at all. The View menu is context-sensitive.
For example, with an ASP.NET content file on the work surface, the first three menu items will be Code, Designer, and Markup; the Code and Designer menu items will be omitted if you're looking at a code-behind file. You don't need to worry about what these terms mean for now; you'll see them in the closing chapters of the book.

When the application is running, a number of other windows, primarily used for debugging, become visible or available. You access these windows via the Debug → Windows menu item, not from the View menu item. C# Express 2008 can store several different window layouts. In particular, it remembers a completely different set of open windows during debug sessions than it does during normal editing. These layouts are stored per-user, not per-project or per-solution.

Class View

The Class View window (Ctrl-Shift-C) shows all the classes in the solution in a hierarchical manner. A typical Class View window, somewhat expanded, is shown in Figure 2.7. As with the Solution Explorer, you can right-click any item in the Class View window, which exposes a pop-up menu with a number of context-sensitive menu items. This can provide a convenient way to sort the display of classes in a project or solution, or to add a method, property, or field to a class.

Code Definition

The Code Definition window (Ctrl-W, D) is used in developing web pages, but is available only in the full version of Visual Studio.

Error List

The Error List window (Ctrl-W, Ctrl-E), which is available in all editor views, displays errors, warnings, and messages generated as you edit and compile your project. Syntax errors flagged by IntelliSense are displayed here, as well as deployment errors. Double-clicking on an error in this list will open the offending file and move the cursor to the error location.
Output

The Output window (Ctrl-Alt-O) displays status messages from the IDE, such as build progress. You can set the Output window to display by default when a build starts by going to Tools → Options → Projects and Solutions → General and checking "Show Output window when build starts". This window is available in all editor views.

Figure 2.7. The Class View window, obviously enough, shows the classes in your solution. You won't have many of these at first, but Windows applications will have plenty.

Properties

The Properties window (F4) displays all the properties for the currently selected item. Some of the properties (such as Font) may have subproperties, indicated by a plus sign next to their entries in the window. The property values on the right side of the window are editable. One thing that can be confusing is that certain items have more than one set of properties. For example, a Form content file can show two different sets of properties, depending on whether you select the source file in the Solution Explorer or the form as shown in the Design view. A typical Properties window is shown in Figure 2.8.

Figure 2.8. You won't use the Properties window much with console applications, but when you design Windows Forms, you'll use it a lot.

The name and type of the current object are displayed in the field at the top of the window. In Figure 2.8, it is an object named Form1, of type Form, contained in the System.Windows.Forms namespace. You can edit most properties in place in the Properties window. The Font property has subproperties that you can set directly in the window by clicking on the plus sign to expand them, and then editing the subproperties in place.
The Properties window has several buttons just below the name and type of the object. The first two buttons on the left toggle the list by category or alphabetically. The next two buttons toggle between displaying properties for the selected item and displaying events for the selected item. The rightmost button displays property pages for the object, if there are any. The box below the list of properties displays a brief description of the selected property.

Task List

In large applications, keeping a to-do list can be quite helpful. C# Express 2008 provides this functionality with the Task List window.

Toolbox

The Toolbox command (Ctrl-Alt-X) displays the Toolbox if it is not currently displayed. If it is currently displayed, nothing happens; it does not toggle the display. To hide the Toolbox, click on the X in the Toolbox title bar.

Other Windows

Several other windows have been relegated to a submenu called Other Windows. These include:

- The Command window (Ctrl-Alt-A): You use this window to enter commands directly.
- The Object Test Bench window: This window lets you conduct tests on your classes as you write them, but only in Visual Studio.
- The Property Manager window: You use this window only for C++ projects; it isn't available in C# Express.
- The Resource View window (Ctrl-Shift-E): This window displays the resource files included in the project. Resources are nonexecutable data deployed with an application, such as icons and graphics, culture-specific text messages, and persisted data objects.
- The Macro Explorer window (Alt-F8): This window is available only in Visual Studio 2008.
- The Start Page: This item simply reopens the Start Page, if you closed it.
- The Web Browser: This item opens a web browser within the Visual Studio window.

The Refactor Menu

Refactoring is the process of taking code duplicated in various parts of your program and extracting it out to a callable method. This is an advanced procedure, so you won't see any refactoring in this book.
The Refactor menu item is available when you're looking at a code window for a web page, user control, or language source code file. It is also available from context menus when you right-click on an identifier in a Class View, Object Browser, or Solution Explorer window. The refactoring menu items will modify your code, for example, extracting common code to a method and then calling that method in the place from which it was extracted.

The Project Menu

The Project menu provides functionality related to project management. It is visible only when the solution is selected in the Solution Explorer. All of the functionality exposed by the Project menu is also available in the Solution Explorer, by right-clicking on the solution.

The Build Menu

The Build menu offers menu items for building the current project (highlighted in the Solution Explorer) or the solution. It also exposes the Configuration Manager for configuring the build process.

The Debug Menu

The Debug menu allows you to start an application with or without debugging, set breakpoints in the code, and control the debugging session.

The Data Menu

The context-sensitive Data menu is visible only in Design mode when creating, for example, web applications.

The Format Menu

The Format menu is visible only in Design mode when creating, for example, web applications; further, the commands under it are context-sensitive to the control(s) currently selected.

The Tools Menu

The Tools menu presents commands accessing a wide variety of functionality, ranging from connecting to databases to accessing external tools to setting IDE options. Some of the more useful commands are described in the following sections.

Connect to Device

The Connect to Device command (available only in Visual Studio) brings up a dialog box that allows you to connect to either a physical mobile device or an emulator.
Device Emulator Manager

The Device Emulator Manager command (also available only in Visual Studio) helps you keep track of the various settings for devices and their emulators for which you may be developing.

Connect to Database

The Connect to Database command brings up the dialog box that allows you to select a server, log in to that server, and connect to the database on the server. Microsoft SQL Server is the default database (surprise!), but the Change button allows you to connect to any number of other databases, including any for which there are Oracle or ODBC providers.

Connect to Server

The Connect to Server command (available only in Visual Studio) brings up a dialog box that lets you enter a remote server to connect to, either by name or by IP address.

Code Snippets Manager

The Code Snippets Manager command (Ctrl-K, Ctrl-B) brings up the Code Snippets Manager dialog box, which allows you to maintain the code snippets (described in "IntelliSense" earlier in this chapter). This dialog box allows you to add or remove code snippets for any of the supported languages. You can also import code snippets and search online for code snippets.

Choose Toolbox Items

The Choose Toolbox Items command brings up the Choose Toolbox dialog box, allowing you to add COM components and custom controls. The details of doing so are beyond the scope of this book, but they are covered in full in Programming ASP.NET 3.5 by Jesse Liberty et al. (O'Reilly).

External Tools

Depending on the options selected at the time C# Express 2008 was installed on your machine, you may have one or more external tools available on the Tools menu. These might include tools such as Create GUID and Dotfuscator Community Edition. (Use of these tools is beyond the scope of this book.) The Tools → External Tools command allows you to add additional external tools to the Tools menu. When you select this command, you are presented with the External Tools dialog box.
This dialog box has fields for the tool title, the command to execute the tool, any arguments, and the initial directory, as well as several checkboxes for different behaviors.

Import and Export Settings

The Import and Export Settings command brings up the Import and Export Settings dialog box, which is a wizard for importing and exporting IDE settings. With this wizard, you can transfer your carefully wrought IDE settings from one machine to the next.

Options

The Options command brings up the Options dialog box, which allows you to set a wide variety of options, ranging from the number of items to display in lists of recently used items to HTML Designer options.

The Window Menu

The Window menu is the same as the Window menu you'll find in most standard Windows applications. It displays a list of all the currently open windows, allowing you to bring any window to the foreground by clicking on it. Note that all the file windows currently displayed in the IDE also have tabs along the top edge of the work surface, below the toolbars (unless you have selected MDI mode in Tools → Options → Environment → General), and you can select windows by clicking on a tab.

The Help Menu

The Help menu provides access to a number of submenus. If you are developing on a machine with enough horsepower, Dynamic Help is a wonderful thing. Otherwise, it can diminish the responsiveness of the IDE.

Visual Studio 2008 is a powerful tool with many features to make writing programs easier. The Start Page provides an overview of your programming environment and a list of recent projects. A solution is a set of related projects, and a project is a set of related code files and associated resources, such as images and so on. Visual Studio 2008 has a number of templates that allow you to create particular types of projects, such as Windows or web applications. Among other things, C# Express 2008 provides WYSIWYG support for building, testing, and debugging graphical user interfaces (GUIs).
Every window in C# Express 2008 can be resized and moved. To run your application, select Start or Start Without Debugging, or press F5 or Ctrl-F5. The Clipboard Ring can hold a number of different selections that you can cycle through. The Find and Replace feature lets you locate text strings in the current file or other files, using normal text or regular expressions. Bookmarks enable you to mark spots in your code so that you can easily find them later. IntelliSense saves you keystrokes and can help you discover methods and required arguments by (for example) listing possible completions to what you're typing. The Properties window displays properties for the currently selected item.

There's your whirlwind tour of the C# Express interface. If you're new to programming, the IDE probably looks quite intimidating; it has a lot more features and windows than your average Windows application. As with any Windows application, though, you'll quickly find that you use some of the features quite often, and those will become second nature, allowing you to ignore the rest until you need them. We don't expect you to be an expert on the IDE after just reading this chapter, but we do hope you're a bit more comfortable with it. Now, enough poking about in the Toolbox; let's hammer some nails! It's time to start learning the basics of the C# language, starting with types, variables, and constants, and that's what's ahead in Chapter 3, C# Language Fundamentals.

Question 2-1. What is the difference between a project and a solution?
Question 2-2. How do you move windows in the IDE?
Question 2-3. What does the pushpin do on a window?
Question 2-4. What is the difference between pressing F5 and pressing Ctrl-F5 from within C# Express 2008?
Question 2-5. What is the Clipboard Ring?
Question 2-6. How do you retrieve items from the Clipboard Ring?
Question 2-7. What is Find Symbol for?
Question 2-8. What are bookmarks?
Question 2-9. What is IntelliSense?
Question 2-10. What is a code snippet?

Exercise 2-1. Insert a bookmark before the Console.WriteLine( ) statement in Hello World. Navigate away from it and then use the Bookmarks menu item to return to it.
Exercise 2-2. Undock the Solution Explorer window from the right side of the IDE and move it to the left. Leave it there if you like, or move it back.
Exercise 2-3. Insert a code snippet for a for loop from the Edit → IntelliSense menu into your Hello World program after the WriteLine( ) statement. (It won't do anything for now; you'll learn about for loops in Chapter 5, Branching.)
https://msdn.microsoft.com/en-us/library/orm-9780596521066-01-02.aspx
Communications for Windows Phone

November 04, 2013

Applies to: Windows Phone 8 | Windows Phone OS 7.1

This topic introduces the ways your Windows Phone app can communicate with other apps and with remote data stores. Learn about sockets, Bluetooth, the Proximity API for Near Field Communications (NFC), voice over IP (VoIP), the Open Data Protocol (OData) client, and web services. This topic also introduces the network information and Data Sense APIs.

For bidirectional communication across the web, such as with a chat app, Windows Phone OS 7.1 supports sockets-based apps. With sockets, the client or the server can initiate communication, and either endpoint can send messages to the other independently. Sockets apps use the System.Net.Sockets API. For more info, see Sockets for Windows Phone.

Bluetooth is a wireless communication technology through which devices within a 10-meter proximity can communicate with each other. Using this technology, devices can communicate without a physical connection. Wireless headsets, remote control toys, and multiplayer games are examples of devices and apps that use Bluetooth technology. By using APIs introduced in Windows Phone 8, your app can communicate with another app or device over Bluetooth. For more info, see Bluetooth for Windows Phone 8.

Windows Phone 8 supports Proximity communication using Near Field Communication (NFC). To learn more about the Proximity API, see Proximity for Windows Phone 8.

Starting in Windows Phone 8, you can develop a voice over IP (VoIP) app for Windows Phone. Using a subset of the Windows Audio Session API (WASAPI), your app can capture and render audio streams. Windows Phone VoIP apps can also stream video-based VoIP calls. For more info, see VoIP apps for Windows Phone 8.

Through the Data Sense feature in Windows Phone 8, a user can specify the limits of their data plans. Data Sense monitors data usage in relation to the user-specified limits.
With this information, your app can help users save money by reducing data usage when the user is close to their data limit, or by discontinuing data usage when the user is over their limit. For more info, see How to adjust data usage using the Data Sense API for Windows Phone 8.

The user experience of any web-based app is highly dependent on the quality and availability of the device's network connection. The Microsoft.Phone.Net.NetworkInformation namespace provides several classes through which your app can learn more about the network status of the device it is running on. For example, your app can check whether a cellular data or Wi-Fi connection is enabled. You can also use the API to set cellular or non-cellular network preferences. For more information, see Network and network interface information for Windows Phone.

Web services enable programmatic access to a wide variety of data over the Internet. A data service is an HTTP-based web service that implements the Open Data Protocol (OData) to expose data as resources that are defined by a data model and addressable by Uniform Resource Identifiers (URIs). Web and data services each use an open XML-based language to describe their web-based API. The Web Services Description Language (WSDL) is used to describe the services that a web service offers. The Conceptual Schema Definition Language (CSDL) describes the data model that a data service offers. For more information, see Web Services Description Language (WSDL) and Conceptual Schema Definition File Format.

Web services

Because the vast majority of web services published on the Internet are based on HTTP, you can use the HttpWebRequest and WebClient classes to access web services from Windows Phone apps. To help ease the task of generating the additional code that web services often require, you can use the Service Model Proxy Generation Tool (SLsvcUtil.exe) or the Visual Studio Add Service Reference feature to generate a proxy class.
For an example of how to use the WebClient class to access an RSS feed, see How to create a basic RSS reader for Windows Phone. A web service proxy class implements the serialization, request, and response code for a web service, based on the web service WSDL file. You can use the generated proxy class in your Windows Phone app for communicating with the corresponding web service.

Data services (OData)

A data service is an HTTP-based web service that implements the Open Data Protocol (OData) to expose data as resources that are defined by a data model and addressable by URIs. This enables you to access and change data using the semantics of representational state transfer (REST), specifically the standard HTTP verbs of GET, PUT, POST, and DELETE. Because data services are based on HTTP, you can use the HttpWebRequest and WebClient classes to access data services from Windows Phone apps. To help ease the task of generating the additional code that a data service requires, you can use the WCF Data Service Client Utility, DataSvcUtil.exe, or the Visual Studio Add Service Reference feature to generate a proxy class based on the data service CSDL file. You can use the generated proxy class in your Windows Phone app for communicating with the corresponding data service.

Classes and utilities

The following list contains the classes that you can use directly to make web requests, as well as the utilities available to you to generate other classes optimized to make particular kinds of web requests from your Windows Phone apps:

- WebClient Class: Provides common methods for sending data to and receiving data from a URI-based resource.
- HttpWebRequest Class: Provides an HTTP-specific implementation of the abstract WebRequest class.
- Silverlight Service Model Proxy Generation Tool (SLsvcUtil.exe): Generates proxy classes based on a web service WSDL file.
- Visual Studio Add Service Reference Feature: Generates proxy classes based on either a web service WSDL file or a data service CSDL file.
- WCF Data Service Client Utility (DataSvcUtil.exe): Generates proxy classes based on a data service CSDL file.

The following table shows which classes can be used for the various types of HTTP-based programming: The WebClient and HttpWebRequest classes can be used for a wide range of HTTP-based programming, from general HTTP requests to programming web and data services. Depending on how your app uses a web or data service, using the WebClient or HttpWebRequest classes exclusively may require you to write a significant amount of code. When developing a web or data services client app, an alternative to programming at the HTTP level is to use a proxy class. A proxy class is a class that represents the web or data service and is based on the corresponding WSDL or CSDL file, respectively. See the following sections of this topic for more information.

Security considerations

When connecting to a web service that requires an app key, don't store the app key with an app that will run on a device. Instead, you can create a proxy web service to authenticate a user, and then call an external cloud service using the app key. For more information about security recommendations, see Web service security for Windows Phone.

Web service limitations

Each Windows Phone app is limited to a maximum of 6 simultaneous outgoing connections. When porting web service client code for use in a Windows Phone app, check the .NET APIs to ensure that methods used in the code are supported. For more information about supported APIs for Windows Phone, see .NET API for Windows Phone.
http://msdn.microsoft.com/en-us/library/windowsphone/develop/ff637518(v=vs.105).aspx
curs_getch, getch, wgetch, mvgetch, mvwgetch, ungetch - Get (or push back) characters from a Curses terminal keyboard

#include <curses.h>

int getch( void );
int wgetch( WINDOW *win );
int mvgetch( int y, int x );
int mvwgetch( WINDOW *win, int y, int x );
int ungetch( int ch );

Curses Library (libcurses)

Interfaces documented on this reference page conform to industry standards as follows:

getch, wgetch, mvgetch, mvwgetch: XPG4, XPG4-UNIX
ungetch: XPG4-UNIX

Refer to the standards(5) reference page for more information about industry standards and associated tags.

The getch, wgetch, mvgetch, and mvwgetch routines read a character from the terminal associated with the Curses window. In no-delay mode, if no input is waiting, the value ERR is returned.

The following function keys, defined in <curses.h>, might be returned by the *getch functions if keypad has been enabled. Not all of these keys are necessarily supported on a particular terminal. In other words, the routines do not return a function key if the terminal does not transmit a unique code when the key is pressed or if the definition for the key is not present in the terminfo database.

The header file <curses.h> automatically includes the header file <stdio.h>.

Programmers should not use the escape key for a single-character function.

[Compaq] If the ESCDELAY environment variable is set, these functions wait for the specified time period between the escape character and the following character.

All routines return the integer ERR upon failure and OK upon successful completion.

Functions: curses(3), curs_inopts(3), curs_move(3), curs_refresh(3)

Others: standards(5)
http://backdrift.org/man/tru64/man3/mvgetch.3.html
This is the first in a series of posts where I will explore a simple application that allows you to analyze and present data in interesting ways. I will be writing a nice AngularJS client, with a WebSocket connection to the Haskell back-end. If you are impatient, grab the source code and run cabal run to start the WebSocket server. To see the (at the moment primitive) user interface, open web/numbers.html in a modern browser. You will see a (funky & moving) HTML view.

$ cabal run
Preprocessing library hwssexp-0.1.0.0...
In-place registering hwssexp-0.1.0.0...
Preprocessing executable 'ws' for hwssexp-0.1.0.0...
Server is running

The Haskell code

Our Haskell server code listens on all local addresses on port 9160 for WebSocket connections. We would also like to maintain a state that is a list of the connected sessions. During the server's lifetime, we will be modifying this state, which is shared between the threads.

-- |The main entry point for the WS application
main :: IO ()
main = do
  putStrLn "Server is running"
  state <- newMVar newServerState
  WS.runServer "0.0.0.0" 9160 $ application state

Leaving the obvious putStrLn aside, we create the MVar ServerState, which is the state at the server startup. We then use that value when we (recall the desugaring Haskell does!) start the WebSocket server.

The state our server keeps is a list of pairs of the query that the user sent and the WebSocket connection the server will push the results to. This gives us a good place to define these types, together with the newServerState function.
-- |Client is a combination of the statement that we're running and the
-- WS connection that we can send results to
type Client = (Text, WS.Connection)

-- |Server state is simply an array of active @Client@s
type ServerState = [Client]

-- |Named function that returns an empty @ServerState@
newServerState :: ServerState
newServerState = []

Great; to complete the picture, let's add functions that allow us to add and remove clients.

-- |Adds a new client to the server state
addClient :: Client      -- ^ The client to be added
          -> ServerState -- ^ The current state
          -> ServerState -- ^ The state with the client added
addClient client clients = client : clients

-- |Removes an existing client from the server state
removeClient :: Client      -- ^ The client being removed
             -> ServerState -- ^ The current state
             -> ServerState -- ^ The state with the client removed
removeClient client = filter ((/= fst client) . fst)

We now have all the auxiliary code ready; all that we need to do is to provide the implementation of the application function; this function represents what we'd call a controller in the old world. It receives requests, and, as a side effect, may modify the server state and produce responses.

-- |The handler for the application's work
application :: MVar ServerState -- ^ The server state
            -> WS.ServerApp     -- ^ The server app that will handle the work
application state pending = do
  conn <- WS.acceptRequest pending
  query <- WS.receiveData conn
  clients <- liftIO $ readMVar state
  let client = (query, conn)
  modifyMVar_ state $ return . addClient client
  perform state client

We first accept the request (we accept any WS requests), giving us a Connection value; we then receive the data that the client has sent, giving us a Text value. Finally, we pull out the ServerState value from the shared MVar ServerState. We construct the Client value: a tuple containing the query and the WebSocket connection for that query. The rather complex line is modifyMVar_ state $ return . addClient client.
We are modifying the shared ServerState. If we expand the expression, eliminating the point-free style, we will have

modifyMVar_ state (\s' -> return (addClient client s'))

We can eliminate s' in the (\s' -> return ... s') equation: return . addClient client is the same thing, a function that takes ServerState and returns IO ServerState. Great. Finally, we can eliminate the final brackets using the ($) function. This gives us the final

modifyMVar_ state $ return . addClient client

That leaves us with just the last expression; one that actually does the work. (We will keep this portion quite light at this stage, but improve it over the next few posts; and believe me, there is a lot to improve!)

-- |Performs the query on behalf of the client,
-- cleaning up after itself when the client disconnects
perform :: MVar ServerState -- ^ The server state
        -> Client           -- ^ The query to perform and the conn for results
        -> IO ()            -- ^ The output
perform state client@(query, conn) = handle catchDisconnect $ forever $ do
    numbers <- replicateM 100 ((`mod` 100) <$> randomIO :: IO Int)
    WS.sendTextData conn (T.pack $ show numbers)
    threadDelay 1000000
  where
    catchDisconnect :: SomeException -> IO ()
    catchDisconnect _ = liftIO $ modifyMVar_ state $ return . removeClient client

Dissecting the code, we wrap the forever-repeating computation in an exception handler. This gives us the basic shape of the code. In the forever block, we generate 100 random numbers, each in the range 0 to 99, which we send to the listening WebSocket. Then (Oh the humanity! More on that in the future.) we sleep for 1 second and repeat. The catch block handles any exception by removing the client from the server's shared ServerState.
The web application

Moving on, let's hack together a nice AngularJS web application. We want to connect to our Haskell server, and then display the numbers we receive in a text field, and also, using D3.js, in a pretty chart.

<!doctype html>
<html>
<head>
  <title>Number cruncher</title>
  ...
</head>
<body>
  <div ng-controller="NumbersCtrl">
    <tabs>
      <pane title="Raw">
        <h3>Raw data</h3>
        <pre>{{numbers}}</pre>
      </pane>
      <pane title="Canvas">
        <h3>Visual representation</h3>
        <barchart2d data="{{numbers}}"/>
      </pane>
    </tabs>
  </div>
</body>
</html>

And that's all there is to it, well, if you decide to ignore the AngularJS magic, specifically the NumbersCtrl controller and the barchart2d component. It is worth exploring those in slightly more detail, starting with the NumbersCtrl.

angular.module('numbers.app', ['d3.directives', 'numbers.directives'])
  .controller('NumbersCtrl', ['$scope', function($scope) {

    function createWebSocket(path) {
      var host = window.location.hostname;
      if (host == '') host = 'localhost';
      var uri = 'ws://' + host + ':9160' + path;
      var Socket = "MozWebSocket" in window ? MozWebSocket : WebSocket;
      return new Socket(uri);
    }

    $scope.numbers = {};

    var socket = createWebSocket('/');
    socket.onopen = function() {
      // we'll have that in the next session
      socket.send("even 0-100 every 1s");
    };
    socket.onmessage = function(e) {
      $scope.$apply(function() {
        $scope.numbers = e.data;
      });
    };
  }]);

This is the code of our AngularJS application. It depends on the d3.directives and numbers.directives modules; these modules contain directives (think components) for the D3 charts and our tab control. The tab directives are the raw AngularJS example, so let's explore the D3 components. We've split them into two modules: one that provides the d3 service (by pulling in the D3 JavaScript), and then the module that exposes the components.

// creates the d3.core module, which contains the d3Service
angular.module('d3.core', [])
  // creates d3Service by injecting the D3JS JavaScript to the document
  .factory('d3Service', ['$document', '$q', '$rootScope',
    function($document, $q, $rootScope) {
      ...
      return {
        d3: ... // the d3 namespace
      };
    }]);

Grab the code for the full gory details.
The d3.directives module provides the barchart2d for us to use:

// creates the d3.directives module, which contains the various D3 charts
angular.module('d3.directives', ['d3.core'])
  .directive('barchart2d', ['d3Service', function(d3Service) {
    return {
      restrict: 'E',
      transclude: true,
      scope: { data: '@' },
      template: '<div class="barchart2d" ng-transclude></div>',
      replace: true,
      link: function(scope, element, attrs) {
        d3Service.d3().then(function(d3) {

          function fmt(element, x) {
            element.style("width", function(d) { return x(d) + "px"; })
                   .text(function(d) { return d; });
          }

          attrs.$observe('data', function(rawValue) {
            var data = JSON.parse(rawValue);
            var x = d3.scale.linear()
                      .domain([0, d3.max(data)])
                      .range([0, 420]);
            var p = d3.select(element[0]).selectAll("div").data(data);
            fmt(p.enter().append("div"), x);
            fmt(p.transition(), x);
            p.exit().remove();
          });
        });
      }
    };
  }]);

Again, this is the basic D3 chart; the slight trickery involves using attrs.$observe to connect to the changes of the given model.

Summary

And there you have it. You can run a Haskell-based WebSocket server and then have a modern web application that displays the output that the Haskell server sends. Now that we have the basic building blocks, we'll be adding more features, especially to parse the query that the users type in and then execute it.
http://java.dzone.com/articles/haskell-websockets-and-d3js
ec_convert_string

Name

ec_convert_string — Convert a string from one encoding to another

Synopsis

#include "misc/converter.h"

int ec_convert_string(const char *fromcode, string *fromstring,
                      const char *tocode, string *deststring);

Convert a string from one encoding to another. This is a convenience wrapper around opening a converter, converting each portion of the source string through it, and storing the result in the destination string.

- fromcode: the encoding used for the input data
- fromstring: the source of the input data
- tocode: the encoding to be used for the output data
- deststring: the destination for the converted data

Data is read from the start of the buffer in fromstring. If it is disk backed, the next chunk will be requested until no more chunks are available.

Returns ECCONV_OK on success, or some other value on error.

Note

This function may induce IO or otherwise block the caller. Blocking in the scheduler thread will lead to degraded performance and should be avoided at all costs. If your code is running in the IO subsystem, the core will have already taken steps to ensure that blocking is acceptable. Otherwise, you should look at using the thread pool API to run a job in the IO pool.
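The semantics described above — open a converter, feed each chunk of the source through it, accumulate into the destination — can be sketched in Python with the standard codecs module. The function name, the ECCONV_OK constant, and the chunk-list input below are illustrative stand-ins, not the Momentum C API:

```python
import codecs

ECCONV_OK = 0

def ec_convert_string_sketch(fromcode, from_chunks, tocode):
    # Incremental codecs mirror "request the next chunk until none are left":
    # a multibyte sequence split across chunks is decoded correctly.
    decoder = codecs.getincrementaldecoder(fromcode)()
    encoder = codecs.getincrementalencoder(tocode)()
    out = bytearray()
    for chunk in from_chunks:
        out += encoder.encode(decoder.decode(chunk))
    out += encoder.encode(decoder.decode(b"", True), True)  # flush both ends
    return ECCONV_OK, bytes(out)

status, dest = ec_convert_string_sketch("utf-8", [b"caf", b"\xc3\xa9"], "latin-1")
print(status, dest)  # -> 0 b'caf\xe9'
```

The real function additionally has to handle disk-backed strings and report conversion errors through its return code.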
https://support.sparkpost.com/momentum/3/3-api/apis-ec-convert-string
May 9, 2006 01:33 PM

Hi Steve, I might be missing something but above you mentioned the 2nd arg now being Strings, but should it not be Function refs?

Posted by: Richard Leggett at May 9, 2006 02:13 PM

*sorry Class refs (not Function refs)

Posted by: Richard Leggett at May 9, 2006 02:14 PM

Richard - my mistake, yes. The signature for addCommand now accepts a reference to a Class as follows:

public function addCommand( commandName : String, commandRef : Class ) : void

Posted by: Steven Webster at May 9, 2006 02:21 PM

Nice one -- this is how Commands are implemented in Arp too :)

Posted by: Aral Balkan at May 9, 2006 02:44 PM

I did this in v1 because I couldn't wait. Now that it is in Flex 2, thank you so much!

Posted by: JesterXL at May 9, 2006 02:45 PM

Hey Aral ... good to hear from you ! It's a relief to be finally aligned with the stateless/reflection-based J2EE implementation we had pre-Flash. Anyway, you can give us the stateless Commands; Cairngorm gave you the Front Controller, Command, Business Delegate, Service Locator, Value Object and most recently the ModelLocator after all ;-) Go on, add a data attribute to ARP Commands' execute() method, you know you want to... ;) Disappointed as well to see that aralbalkan.com doesn't have your hands on it anymore, that was cool... :)

Posted by: Steven Webster at May 9, 2006 02:51 PM

ask and ye shall receive Jesse ;) We've been wanting to do this since Cairngorm 0.99, but were concerned that some people might be inadvertently relying upon the side-effect of "stateful" commands. We decided we'd keep the migration until a "significant release" of Cairngorm. It's what *we* have been using for a long time now as well !

Posted by: Steven Webster at May 9, 2006 02:53 PM

Hey Steven, I wasn't aware that those design patterns had been developed in Cairngorm -- thanks for enlightening me.
Looks like we've sure got a lot of re-educating to do for the Smalltalk and Java people! ;) Seriously, though, it's a good move and I hope to have more than just my hands on the site in the near future (tsk, tsk, what *are* you thinking?) :)

Posted by: Aral Balkan at May 9, 2006 03:07 PM

Though Aral's comment about Java/Smalltalk is tongue in cheek, it's important that I address this for those that don't know the history behind Cairngorm. We've always been open and clear in every publication we've made about Cairngorm - whether in Reality J2EE, ActionScript 2.0 Dictionary, Developing Rich Clients for Macromedia Flex, or the numerous conference talks we've given on Cairngorm - that Cairngorm was an innovation that we made upon the work that was published by Alur, Crupi and Malks (Core J2EE Patterns). Our innovation was the selection of a subset of those patterns that we felt were applicable to RIA (and more recently, we've added new patterns to the catalogue, most notably the ModelLocator), and in moving these patterns away from the request/response paradigm of J2EE web-application development, to the event-driven command and controller approach we adopted in Cairngorm.

I am a huge believer in crediting the work and ideas that we have built upon, and I don't want to be anything but open and honest about where the inspiration for Cairngorm lies. The team at Sun who presented the Core J2EE Patterns at JavaOne wholeheartedly deserve that credit, for their contribution to the J2EE community that provided the foundation research for the Cairngorm RIA microarchitecture. If we can help that community, through Cairngorm, by lowering the barrier to entry to Flex development, and if we can offer leadership and best-practice to others facing the challenge of building large RIA with Flex, then we have achieved some of our significant aims of Cairngorm.
Posted by: Steven Webster at May 9, 2006 04:02 PM

What else are you guys using today that you're not telling us about yet? ;-)

Posted by: Hans at May 9, 2006 05:57 PM

Aral has done a lot for the community, packaging the J2EE design patterns with a nice red bow to encourage Flash developers to use best practices in rich application development. However my loyalty will always remain with Steven Webster and Alistair McLeod. They are the innovators of applying the J2EE (Alur, Crupi, Malks) and their own design patterns to the brand new world of Rich Internet Applications using Flash. They are the pioneers. If Aral Balkan had not had the exposure of working with our pioneers a few years back he would probably never have had the vision to package ARP. ARP and Cairngorm are so similar it's ridiculous. If Steven is a huge believer in crediting the work and ideas that they have built upon, then I am a huge believer in crediting the work and ideas that Steven and Alistair have worked on. Go Cairngorm!

Posted by: Nathan Vale at June 5, 2006 01:58 AM

The above code correction fixes one of the problems I was having, thanks :) However I'm still getting this:

code: public class LoginCommand implements Command, Responder

error: Interface method onFault in namespace org.nevis.cairngorm.business:Responder is implemented with an incompatible signature in class org.nevis.cairngorm.samples.login.commands:LoginCommand.

Any thoughts as to why?

Posted by: Russell Munro at June 8, 2006 06:42 AM
http://weblogs.macromedia.com/swebster/archives/2006/05/cairngorm_2_for_1.cfm
4.3. Profiling your code line-by-line with line_profiler

Python's native cProfile module and the corresponding %prun magic break down the execution time of code function by function. Sometimes, we may need an even more fine-grained analysis of code performance with a line-by-line report. Such reports can be easier to read than the reports of cProfile.

To profile code line-by-line, we need an external Python module named line_profiler. In this recipe, we will demonstrate how to use this module within IPython.

Getting ready

To install line_profiler, type conda install line_profiler in a terminal.

How to do it...

We will profile the same simulation code as in the previous recipe, line-by-line.

1. First, let's import NumPy and the line_profiler IPython extension module that comes with the package:

import numpy as np
%load_ext line_profiler

2. This IPython extension module provides an %lprun magic command to profile a Python function line-by-line. It works best when the function is defined in a file and not in the interactive namespace or in the Notebook. Therefore, here we write our code in a Python script using the %%writefile cell magic:

%

3. Now, let's import this script into the interactive namespace so that we can execute and profile our code:

from simulation import simulate

4. We execute the function under the control of the line profiler. The functions to be profiled need to be explicitly specified in the %lprun magic command. We also save the report in a file named lprof0:

%lprun -T lprof0 -f simulate simulate(50)
*** Profile printout saved to text file 'lprof0'.

5. Let's display the report:

print(open('lprof0', 'r').read())

How it works...

The %lprun command accepts a Python statement as its main argument. The functions to profile need to be explicitly specified with -f. Other optional arguments include -D, -T, and -r, and they work in a similar way to their %prun magic command counterparts.
The line_profiler module displays the time spent on each line of the profiled functions, either in timer units or as a fraction of the total execution time. These details are essential when we are looking for hotspots in our code.

There's more...

Tracing is a related method. Python's trace module allows us to trace program execution of Python code. That's particularly useful during in-depth debugging and profiling sessions. We can follow the entire sequence of instructions executed by the Python interpreter. More information on the trace module is available at.

In addition, the Online Python Tutor is an online interactive educational tool that can help us understand what the Python interpreter is doing step-by-step as it executes a program's source code. The Online Python Tutor is available at.

Here are a few references:

- line_profiler repository at

See also

- Profiling your code easily with cProfile and IPython
- Profiling the memory usage of your code with memory_profiler
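The trace module mentioned in the "There's more..." section can be driven programmatically as well as from the command line. A minimal sketch (the traced function is an arbitrary example, not part of the recipe):

```python
import trace

def collatz_steps(n):
    # count the steps of the Collatz iteration from n down to 1
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

tracer = trace.Trace(count=1, trace=0)   # count executions per line, no live printout
result = tracer.runfunc(collatz_steps, 6)
counts = tracer.results().counts          # maps (filename, lineno) -> hit count
print(result)  # -> 8
```

Like %lprun, the per-line counts point at the hot lines, though trace reports hit counts rather than time.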
https://ipython-books.github.io/43-profiling-your-code-line-by-line-with-line_profiler/
Hi Hoss,

> : I think the initial geosearch feature can start off with
> : <str>10,20</str> for a point.
>
> +1. Fundamentally, how is a string a point?
>
> The current XML format Solr uses was designed to be extremely simple, very
> JSON-esque, and easily parsable by *anyone* in any language, without
> needing special knowledge of types.

Whoah. I'm totally confused now. Why have FieldTypes then? Why not just use Lucene? The use case for FieldTypes is _not_ just for indexing, or querying. It's also for representation?

> It has been heavily advertised as
> only containing a very small handful of tags, representing primitive types
> (int, long, float, date, double, str) and basic collections (arr, lst,
> doc) ... even if it'd never had a formal schema/DTD.

Which is leading to this confusion. Your argument is kind of weird too -- just because you never had or advertised a feature like this (which SOLR allowed for a while I think), why prevent it? Allowing namespaces does _not_ break anything.

> adding new tags to that
> -- namespaced or otherwise -- is a very VERY bad idea for clients who
> have come to expect that they can use very simple parsing code to access
> all the data.

I disagree. I've got a number of projects here that could potentially use this across multiple domains (planetary science, cancer research, earth science, space science, etc.) and they all need this capability. Also, what's "simple" have to do with anything? Even "simple" parsers will parse what SOLR-1586 outputs.

> introducing a new "point" concept, whether as <point> or as
> <georss:point/>, is going to break things for people.

Show me an example; I fundamentally disagree with this.

> As discussed with Mattmann in another thread -- some public methods in
> XMLWriter have inadvertently made it possible for plugin writers to add
> their own XML tags -- but that doesn't mean we should do it in the core
> Solr distribution.

And why is that?
Isn't the point of SOLR to expand to use cases brought up by users of the system? As long as those use cases can be principally supported, without breaking backwards compatibility (or, in the case that they do, with large blinking red text that says so), then you're shutting people out for zero benefit. It's aesthetics we're talking about here.

> If you write your own custom XMLWriter you aren't
> allowed to be surprised when it contains new tags, but our "out of the box"
> users shouldn't have to deal with such surprises.

What surprise -- their code won't break?

> As also discussed in that same thread: it makes a lot of sense
> in the long run to start having Response Writers that can generate more
> "rich" XML based responses and if there are already well defined standards
> for some of these concepts (like georss) then by all means we should
> support them -- but the existing XmlResponseWriter should NOT start
> generating new tags.

I agree with this, but rather than waiting for that to come 2-3 months down the road, why not buy into the need for this now, with what exists?

> The contract for SolrQueryResponse has always said:
>
>>>>>> A SolrQueryResponse may contain the following types of Objects
>>>>>> generated by the SolrRequestHandler that processed the request.
>>>>>> ...
>>>>>> Other data types may be added to the SolrQueryResponse, but there is
>>>>>> no guarantee that QueryResponseWriters will be able to deal with
>>>>>> unexpected types.
>
> ...unless things have changed since the last time I looked, all of the
> "out of the box" response writers call "toString()" on any object they
> don't understand.

Actually, most of them call some variation of #toExternal regardless, which returns a String. Also, #toInternal returns the same type, a String.
> So the best way to move forward in a flexible manner
> seems like it would be to add a new "GeoPoint" object to Solr, which
> toStrings to a simple "-34.56,67.89" for use by existing response writers
> as a string, but some newer smarter response writer could output it in
> some more sophisticated manner.

I'm not convinced of that.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
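For readers following the proposal being quoted: the idea is a point type whose string form is the plain "lat,lon" that existing response writers can emit unchanged, while a richer writer could inspect the object itself. Sketched in Python purely for illustration (Solr is Java, and no such class existed at the time of this thread):

```python
class GeoPoint:
    """A point whose str() is the simple comma-separated form."""
    def __init__(self, lat, lon):
        self.lat = lat
        self.lon = lon

    def __str__(self):
        # what a plain XML/JSON writer would fall back to
        return f"{self.lat},{self.lon}"

p = GeoPoint(-34.56, 67.89)
print(str(p))  # -> -34.56,67.89
```

A "smarter" writer would check for the type and emit, say, a namespaced <georss:point/> element instead of calling the string fallback.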
http://mail-archives.apache.org/mod_mbox/lucene-solr-dev/200912.mbox/%3CC74538D0.75E5%25Chris.A.Mattmann@jpl.nasa.gov%3E
Dart language

1. In Dart, everything is an object, and all objects are instances of a class. Even numeric types, functions and even null are objects, and all objects inherit from Object.
2. Although Dart is a strongly typed language, type annotations are optional because Dart can automatically infer variable types.
3. Dart supports generics. List<int> represents a list of integers, and List<dynamic> is a list of objects, which can hold objects of any type.
4. Dart supports top-level functions (such as the main function), class methods and object methods. You can also create functions inside functions.
5. Dart supports top-level variables, class variables and object variables.
6. Unlike Java, Dart has no keywords such as public, protected or private. If an identifier starts with an underscore (_), it is private to its library. See here for details.
7. Identifiers in Dart can start with a letter or underscore, followed by any combination of characters or numbers.

variable

Variable definition

The following code shows how to define variables in Dart:

main() {
  var a = 1;
  int b = 10;
  String s = "hello";
  dynamic c = 0.5;
}

You can explicitly specify the type of a variable, such as int, bool or String, or declare a variable with var or dynamic and let Dart automatically infer its data type.

Default value of variable

Note: variables without initial values have a default value of null.

final and const

If you never want to change a variable, use final or const instead of var or a type name. A variable modified by final can only be assigned once. A variable modified by const is a compile-time constant (a const constant is, of course, also a final constant). It can be understood as follows: a variable modified by final is immutable, while a variable modified by const represents a constant.
Note: instance variables can be final but not const.

The following code illustrates final and const:

var count = 10;
final Num = count;  // final can only be assigned once
const Num1 = 10;    // a const assignment must be a compile-time constant

Difference between final and const:

Difference 1: final only requires that the variable be initialized once; it does not require the assigned value to be a compile-time constant, which may be constant or not. const requires initialization at declaration time, and the assigned value must be a compile-time constant.

Difference 2: final is lazily initialized, that is, it is initialized before its first use at runtime. const is determined at compile time.

Built-in data types

Dart has the following built-in data types:

- numbers
- strings
- booleans
- lists (or arrays)
- maps
- runes (characters of the UTF-32 character set)
- symbols

The following code demonstrates the above data types:

main() {
  // numbers
  var a = 0;
  int b = 1;
  double c = 0.1;

  // strings
  var s1 = 'hello';
  String s2 = "world";

  // booleans
  var real = true;
  bool isReal = false;

  // lists
  var arr = [1, 2, 3, 4, 5];
  List<String> arr2 = ['hello', 'world', "123", "456"];
  List<dynamic> arr3 = [1, true, 'haha', 1.0];

  // maps
  var map = new Map();
  map['name'] = 'zhangsan';
  map['age'] = 10;
  Map m = new Map();
  m['a'] = 'a';

  // runes: used in Dart to work with UTF-32 code points. The codeUnitAt
  // and codeUnits members of String work with UTF-16 code units instead.
  var clapping = '\u{1f44f}';
  print(clapping); // prints the clapping emoji

  // symbols
  print(#s == new Symbol("s")); // true
}

function

Return value of a function

Dart is an object-oriented programming language, so even a function is an object, with the type Function. This means a function can be assigned to a variable or passed as a parameter to another function.
Although Dart recommends declaring a return type for a function, a function without a declared return type also works normally. In addition, you can use => instead of a return statement, as in the following code:

// Declared return type
int add(int a, int b) {
  return a + b;
}

// No declared return type
add2(int a, int b) {
  return a + b;
}

// => is shorthand for a return statement
add3(a, b) => a + b;

main() {
  print(add(1, 2));  // 3
  print(add2(2, 3)); // 5
  print(add3(1, 2)); // 3
}

Named parameters, positional parameters, parameter default values

Named parameters

sayHello({String name}) {
  print("hello, my name is $name");
}

sayHello2({name: String}) {
  print("hello, my name is $name");
}

main() {
  // Prints hello, my name is zhangsan
  sayHello(name: 'zhangsan');
  // Prints hello, my name is wangwu
  sayHello2(name: 'wangwu');
}

As you can see, named parameters can be declared as {type paramName} or {paramName: type}, and they are called in the form funcName(paramName: paramValue). Named parameters are optional, so in the above code calling sayHello() without any arguments also works, but the printed result is: hello, my name is null.

In Flutter development, you can use the @required annotation to mark a named parameter as mandatory. If you do not pass it, an error is reported, as in the following code:

const Scrollbar({Key key, @required Widget child})

Positional parameters

Parameters enclosed in square brackets [] are positional parameters of the function, which means they may be passed or omitted.
Positional parameters can only be placed at the end of the function's parameter list, as shown in the following code:

sayHello(String name, int age, [String hobby]) {
  // There can be multiple positional parameters, such as [String a, int b]
  StringBuffer sb = new StringBuffer();
  sb.write("hello, this is $name and I am $age years old");
  if (hobby != null) {
    sb.write(", my hobby is $hobby");
  }
  print(sb.toString());
}

main() {
  // hello, this is zhangsan and I am 20 years old
  sayHello("zhangsan", 20);
  // hello, this is zhangsan and I am 20 years old, my hobby is play football
  sayHello("zhangsan", 20, "play football");
}

Parameter defaults

You can set default values for named or positional parameters, as shown in the following code:

// Default value for a named parameter
int add({int a, int b = 3}) {
  // Cannot be written as: int add({a: int, b: int = 3})
  return a + b;
}

// Default value for a positional parameter
int sum(int a, int b, [int c = 3]) {
  return a + b + c;
}

The main() function

Whether in Dart or Flutter, a top-level main() function is required. It is the entry function of the whole application. The return value of main() is void, and it has an optional parameter of type List<String>.

Functions as first-class objects

You can pass a function as a parameter to another function, as in the following code:

printNum(int a) {
  print("$a");
}

main() {
  // Prints in sequence:
  // 1
  // 2
  // 3
  var arr = [1, 2, 3];
  arr.forEach(printNum);
}

You can also assign a function to a variable, as in the following code:

printNum(int a) {
  print("$a");
}

main() {
  var f1 = printNum;
  Function f2 = printNum;
  var f3 = (int a) => print("a = $a");
  f1(1);
  f2(2);
  f3(6);
}

Anonymous functions

Most functions have names, such as main() or printName(), but you can also write anonymous functions.
If you are familiar with Java, the following Dart code will look familiar:

test(Function callback) {
  callback("hello");
}

main() {
  test((param) {
    // Prints hello
    print(param);
  });
}

Anonymous functions are similar to anonymous implementations of interfaces in Java. They are often used when a function's parameter is itself a function.

Function return values

All functions have return values. If no return statement is specified, the function returns null.

Operators

The operators in Dart are largely the same as in Java, but there are also some operators that differ. The following code illustrates both groups:

main() {
  // Operators that behave the same as in Java
  int a = 1;
  ++a;
  a++;
  var b = 1;
  print(a == b); // false
  print(a * b);  // 3
  bool real = false;
  real ? print('real') : print('not real'); // not real
  print(real && a == b); // false
  print(real || a == 3); // true
  print(a != 2); // true
  print(a <= b); // false
  var c = 9;
  c += 10;
  print("c = $c"); // c = 19
  print(1 << 2);   // 4

  // Operators that differ from Java

  // The is operator tests whether a variable is of a certain type;
  // is! tests that it is not of that type
  var s = "hello";
  print(s is String);    // true
  var num = 6;
  print(num is! String); // true

  // ~/ is the truncating division operator; / divides without truncating
  int k = 1;
  int j = 2;
  print(k / j);  // 0.5
  print(k ~/ j); // 0

  // The as operator is similar to a cast in Java: it casts an object to a type
  (emp as Person).teach();

  // The ??= operator assigns only if the variable is currently null;
  // otherwise the assignment is skipped
  var param1 = "hello", param2 = null;
  param1 ??= "world";
  param2 ??= "world";
  print("param1 = $param1"); // param1 = hello
  print("param2 = $param2"); // param2 = world

  // The ?. operator accesses a member only if the receiver is not null
  var str1 = "hello world";
  var str2 = null;
  print(str1?.length); // 11
  print(str2?.length); // null
  print(str2.length);  // throws an error
}

The .. operator (cascade)

If you are familiar with the builder pattern in Java, the .. operator in Dart is easy to understand. First look at the following code:

class Person {
  eat() {
    print("I am eating...");
  }

  sleep() {
    print("I am sleeping...");
  }

  study() {
    print("I am studying...");
  }
}

main() {
  // Prints in sequence:
  // I am eating...
  // I am sleeping...
  // I am studying...
  new Person()..eat()
    ..sleep()
    ..study();
}

As you can see, when you call an object's method (or member variable) with .., the result is the object itself, so you can continue to call the object's other methods with .. . This is similar to the builder pattern in Java, where each builder call returns this.

Control flow

The if/else, switch, for, while and try/catch statements are similar to those in Java; try/catch may be slightly different.
The following code illustrates:

main() {
  // if else statement
  int score = 80;
  if (score < 60) {
    print("so bad!");
  } else if (score >= 60 && score < 80) {
    print("just so so!");
  } else if (score >= 80) {
    print("good job!");
  }

  // switch statement
  String a = "hello";
  // The data type in the case clauses must match that of the switch expression
  switch (a) {
    case "hello":
      print("haha");
      break;
    case "world":
      print("heihei");
      break;
    default:
      print("WTF");
  }

  // for statement
  List<String> list = ["a", "b", "c"];
  for (int i = 0; i < list.length; i++) {
    print(list[i]);
  }
  for (var i in list) {
    print(i);
  }
  // The arrow function's argument here must be enclosed in parentheses
  list.forEach((item) => print(item));

  // while statement
  int start = 1;
  int sum = 0;
  while (start <= 100) {
    sum += start;
    start++;
  }
  print(sum);

  // try catch statement
  try {
    print(1 ~/ 0);
  } catch (e) {
    // IntegerDivisionByZeroException
    print(e);
  }

  try {
    1 ~/ 0;
  } on IntegerDivisionByZeroException {
    // Catch an exception of the specified type
    print("error"); // prints error
  } finally {
    print("over"); // prints over
  }
}

Class

Class definition and constructors

Classes in Dart have no access control, so you don't need to modify member variables or member functions with private, protected, public, etc. A simple class is shown in the following code:

class Person {
  String name;
  int age;
  String gender;

  Person(this.name, this.age, this.gender);

  sayHello() {
    print("hello, this is $name, I am $age years old, I am a $gender");
  }
}

The above Person class has three member variables, a constructor and a member method. What may seem strange is the Person constructor: its three parameters are written as this.xxx, and there is no method body wrapped in curly braces {}. This concise constructor syntax is unique to Dart and is equivalent to the following code:
This syntax is Dart's unique and concise method declaration method, which is equivalent to the following code: Person(String name, int age, String gender) { this.name = name; this.age = age; this.gender = gender; } To call the member variable or member method of the Person class, you can use the following code: var p = new Person("zhangsan", 20, "male"); p.sayHello(); // hello, this is zhangsan, I am 20 years old, I am a male p.age = 50; p.gender = "female"; p.sayHello(); // hello, this is zhangsan, I am 50 years old, I am a female In addition to having the same constructor as the class name, you can also add a named constructor, as shown in the following code: class Point { num x, y; Point(this.x, this.y); // Class naming and construction method Point.origin() { x = 0; y = 0; } } main() { // Call the named constructor origin() of the Point class var p = new Point.origin(); var p2 = new Point(1, 2); } Dart uses the extends keyword to inherit classes. If a class has only named construction methods, you should pay attention to the following code when inheriting: class Human { String name; Human.fromjson(Map data) { print("Human's fromjson constructor"); } } class Man extends Human { Man.fromJson(Map data) : super.fromJson(data) { print("Man's fromJson constructor"); } } Since the Human class has no default constructor and only one named constructor fromjason, when the Man class inherits the Human class, it is necessary to call the fromjason method of the parent class for initialization, and Man.com must be used fromJson(Map data) : super. From Jason (data) instead of writing super in curly braces like Java. Sometimes you just call another constructor of a class in its constructor. You can write this as follows: class Point { num x, y; Point(this.x, this.y); // The named constructor calls the default constructor Point.alongXAxis(num x) : this(x, 0); } Member method of class A member method of a class is a function that provides some behavior for the class. 
The code above already contains some member method definitions, which are very similar to Java. You can provide getter/setter methods for member variables of a class, as shown in the following code:

class Rectangle {
  num left, top, width, height;

  // The constructor takes left, top, width and height
  Rectangle(this.left, this.top, this.width, this.height);

  // getter/setter methods for the two computed members right and bottom
  num get right => left + width;
  set right(num value) => left = value - width;
  num get bottom => top + height;
  set bottom(num value) => top = value - height;
}

Abstract classes and abstract methods

If you modify a class with abstract, it is an abstract class. An abstract class can contain both abstract and non-abstract methods. An abstract method has no method body and must be implemented by subclasses, as shown in the following code:

abstract class Doer {
  // Abstract method: no body, subclasses must implement it
  void doSomething();

  // Ordinary method
  void greet() {
    print("hello world!");
  }
}

class EffectiveDoer extends Doer {
  // Implements the abstract method of the parent class
  void doSomething() {
    print("I'm doing something...");
  }
}

Operator overloading

Dart has operator overloading syntax similar to that in C++. For example, the following code defines a vector class that overloads the + and - operations on vectors:

class Vector {
  num x, y;

  Vector(this.x, this.y);

  Vector operator +(Vector v) => new Vector(x + v.x, y + v.y);
  Vector operator -(Vector v) => new Vector(x - v.x, y - v.y);

  printVec() {
    print("x: $x, y: $y");
  }
}

main() {
  Vector v1 = new Vector(1, 2);
  Vector v2 = new Vector(3, 4);
  (v1 - v2).printVec(); // -2, -2
  (v1 + v2).printVec(); // 4, 6
}

Enumeration class

Use the enum keyword to define an enumeration class. The syntax is similar to that of Java.
The code is as follows: enum Color { red, green, blue } mixins mixins is a way to reuse the code in the class, such as the following code: class A { a() { print("A's a()"); } } class B { b() { print("B's b()"); } } // Use the with keyword to indicate that class C is a mixture of class A and class B class C = A with B; main() { C c = new C(); c.a(); // A's a() c.b(); // B's b() } Static member variables and static member methods // Static member variables and static member methods of class class Cons { static const name = "zhangsan"; static sayHello() { print("hello, this is ${Cons.name}"); } } main() { Cons.sayHello(); // hello, this is zhangsan print(Cons.name); // zhangsan } Generics Both Java and C + + languages have generics, and Dart language is no exception. Using generics has many advantages, such as: Specifying generic types correctly produces better generated code. Generics can reduce the complexity of code Dart's built-in data type List is a generic data type. You can insert any data type you want into the List, such as integer, string, Boolean, etc Dart Library (Libraries) Dart currently has many libraries for developers. Many functions do not need to be implemented by developers themselves. They only need to import the corresponding package. Use the import statement to import a package, such as the following code: import 'dart:html'; If you want to import a code file written by yourself, use the relative path. For example, there is a demo Dart file, and there is a util Dart file, the file code is as follows: // util.dart file content int add(int a, int b) { return a + b; } In demo In the dart file, if you want to reference util Dart file, imported in the following way: // demo.dart import './util.dart'; main() { print(add(1, 2)); } You can use the as keyword to set a prefix or alias for an imported package, such as the following code: import 'package:lib1/lib1.dart'; import 'package:lib2/lib2.dart' as lib2; // Uses Element from lib1. 
    Element element1 = Element();

    // Uses Element from lib2.
    lib2.Element element2 = lib2.Element();

You can also use the show and hide keywords to import only part of a package, as in the following code:

    // Import only foo
    import 'package:lib1/lib1.dart' show foo;

    // Import everything except foo
    import 'package:lib2/lib2.dart' hide foo;

Using deferred as when importing a package allows the package to be loaded lazily: the package is loaded only when it is first used, not at program start. For example:

    import 'package:greetings/hello.dart' deferred as hello;

Asynchrony

Dart provides async/await asynchronous operations like those in ES7, and they come up constantly in Flutter development. For example, network and other IO operations, file selection and so on all require asynchronous code. async and await often appear in pairs: if a method contains a time-consuming operation, mark the method as async and put the await keyword in front of the time-consuming call. If the method has a return value, wrap the return value in a Future, as in the following code:

    Future checkVersion() async {
      var version = await lookUpVersion();
      // Do something with version
    }

The following code uses Dart to fetch data from the network and print it out. (The package import and the URL were truncated in the original; the import is assumed to be the standard http package from pub, and the URL below is a placeholder.)

    import 'dart:async';
    // Assumed: the original import was cut off after "package:"
    import 'package:http/http.dart' as http;

    Future<String> getNetData() async {
      // Placeholder URL; the address in the original article was lost
      var res = await http.get("https://example.com");
      return res.body;
    }

    main() {
      getNetData().then((str) {
        print(str);
      });
    }
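To complement the Generics section above, here is a small sketch of a user-defined generic class. This is my own illustrative example, not code from the original article, written in the same pre-null-safety style as the rest of the post:

```dart
// A user-defined generic container; the type parameters A and B
// are checked at compile time wherever Pair is instantiated
class Pair<A, B> {
  A first;
  B second;

  Pair(this.first, this.second);
}

main() {
  // Type arguments fix the element types of this Pair
  Pair<String, int> p = new Pair("age", 18);
  print("${p.first}: ${p.second}"); // age: 18

  // List is itself generic: only int elements are allowed here
  List<int> nums = [1, 2, 3];
  print(nums); // [1, 2, 3]

  // nums.add("four"); // would be a compile-time type error
}
```

Trying to put a String into nums, or constructing Pair<String, int> with swapped arguments, is rejected by the analyzer before the program ever runs — this is the "better generated code" advantage the article mentions.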
https://algorithm.zone/blogs/learn-dart-language-quickly.html
Luckily, I couldn't quite stop playing around with this... My first thought was to rewrite everything, writing custom stuff to integrate the linear equation solving stuff and my lazy points. I guess that is still an option, but I decided to do a very simple version instead.

So, here is a version which uses André's linear equation stuff (although I've switched to numarray -- it's what I use; and numarray 1.0 is out now, so... Hooray for that :).

A point is simply a list of variables. By using ordinary list notation, such as p[0], one gets at these variables (their values are available through p[0].value and so forth) but by using the syntactic sugar p.x (or p.y or p.z) one gets (or sets) their values. For general dimensions beyond the third, use p.get(dim) or p.set(dim, val) (or, equivalently, p[dim].value and p[dim].value = val).

The lsys class is a simple wrapper that handles left and right hand sides with more than one element (for multidimensional equations).

There is precious little one can do directly with points at the moment (such as addition/multiplication) but adding that should be easy. I've added a single transformation as an example:

"""
def rotated(point, a):
    assert len(point) == 2  # To keep it simple :)
    return (point[0]*cos(a) - point[1]*sin(a),
            point[1]*cos(a) + point[0]*sin(a))
"""

Note that this works with the *variables*, not their *values*. Thus, one can use this transform either way in an equation:

"""
from geom import *
from math import pi

a = pt()
b = pt()
c = pt()

eqs = lsys()

a.x = 0
a.y = 10

eqs.eq(a, rotated(b, pi/2))
eqs.eq(c, rotated(a, pi/2))

eqs.solve()

print b.x, b.y  # Prints out 10.0 0.0
print c.x, c.y  # Prints out -10.0 0.0
"""

So, presto, we've got a bidirectional thingy. Note that the angle in the transform is still a constant, though. As André pointed out, if the angle is to be a variable, we'd end up getting equations with sin() and cos(), and that's not exactly pleasant.
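The bidirectional example above can also be reproduced without the geom/lsys module, by writing both constraints as one linear system in the unknown coordinates and handing it to a modern solver. This is a hypothetical standalone sketch using numpy (the successor to numarray), not the actual geom code from the thread:

```python
import numpy as np
from math import pi, cos, sin

# Known point a and the fixed rotation angle
ax, ay = 0.0, 10.0
t = pi / 2

# Unknowns: (b.x, b.y, c.x, c.y).  The two constraints from the post:
#   a == rotated(b, t)  ->  cos(t)*bx - sin(t)*by = ax
#                           sin(t)*bx + cos(t)*by = ay
#   c == rotated(a, t)  ->  cx = ax*cos(t) - ay*sin(t)
#                           cy = ay*cos(t) + ax*sin(t)
A = np.array([
    [cos(t), -sin(t), 0.0, 0.0],
    [sin(t),  cos(t), 0.0, 0.0],
    [0.0,     0.0,    1.0, 0.0],
    [0.0,     0.0,    0.0, 1.0],
])
rhs = np.array([ax,
                ay,
                ax * cos(t) - ay * sin(t),
                ay * cos(t) + ax * sin(t)])

bx, by, cx, cy = np.linalg.solve(A, rhs)
print(bx, by)  # ~10, ~0   (b, solved "backwards" through the rotation)
print(cx, cy)  # ~-10, ~0  (c, rotated forwards from a)
```

Because the rotation with a constant angle is linear in the coordinates, the same matrix serves both directions -- which is exactly why the angle itself cannot become an unknown without leaving linear territory.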
So, this code is still sort of hackish, but it's a starting point that actually works... And that's always good, isn't it? :)

--
Magnus Lie Hetland
"Canned Bread: The greatest thing since sliced bread!"
[from a can in Spongebob Squarepants]
http://sourceforge.net/p/pyx/mailman/message/1701464/