Arrays: Java Array Interview Questions, Array Algorithm Questions and Java Array Programs to help you ace your next job interview.

Table of Contents:
CHAPTER 1: Why Arrays Are Important
CHAPTER 2: Array Interview Questions
CHAPTER 3: Array Comparison with Data Structures
CHAPTER 4: Bubble Sort Interview Questions
CHAPTER 5: Selection Sort Interview Questions
CHAPTER 6: Heap Sort Interview Questions
CHAPTER 7: Insertion Sort Interview Questions
CHAPTER 8: Merge Sort Interview Questions
CHAPTER 9: Array Algorithm Interview Questions
CHAPTER 10: Array Questions From Around the Web
CHAPTER 11: Keys to Interview Success
CHAPTER 12: Array Interview Questions PDF

Importance of Arrays in Programming Languages
You can read, write, study, and memorize everything about programming languages, but to succeed in interviews you need a solid grasp of three things:
- How arrays work internally (Java arrays)
- How to manipulate arrays using searching and sorting algorithms
- The performance of arrays compared with other data structures (linked lists, array lists, and hash maps)

Top 5 Array Interview Questions

Java Array Interview Questions

Array Internal Structure:
In programming interviews (onsite and phone), you will be asked a variety of questions about arrays. The most important ones concern the basic structure of the array. The purpose of these questions is to gauge the candidate's understanding of arrays: how an array works internally, what the basic operations on arrays are, and how to use this knowledge to solve real-world problems. Here are the most important Java array interview questions aimed at the internal workings of arrays.
Java Array Facts:
- Arrays are objects that store a collection of elements of the same type
- An array has a fixed number of elements in a fixed order
- Accessing an invalid array index throws an exception
- Arrays are objects and are created on the heap, not the stack
- Big-O complexity of operations: access Θ(1), search Θ(n), insertion Θ(n), deletion Θ(n)

Question: What is an array in Java? How does an array work internally?
- Arrays are of fixed (static) length
- An array holds elements of a single type
- Arrays can also hold reference variables of other objects
- Arrays are objects and are created on the heap, not the stack

Question: Can you change the size of an array in Java once it is created?
Answer: No. Arrays are static, which means we cannot change the size of an array once it is created. If you need a dynamic array, consider the ArrayList class, which can resize itself.

Question: Can you use generics with arrays in Java?
Answer: No, generics cannot be used with arrays. That is why a List is sometimes a better choice than an array in Java.

Question: Why is the last index of an array referred to as length - 1?
Answer: Array values are accessed by index, and indexing starts at zero. If the array length is 10, the first value sits at index 0 and the last value at index 9. We therefore subtract 1 from the array length to point to the last index location.

Question: What is the difference between ArrayIndexOutOfBoundsException and ArrayStoreException?
Answer: ArrayIndexOutOfBoundsException is thrown when code tries to access an invalid index for a given array, e.g. a negative index or an index greater than length - 1. ArrayStoreException is thrown when you try to store an element of a different type than the array's type, e.g. if the array is of type int and we try to add an element of type String.

Question: Can you pass a negative number as an array size?
Answer: No. You cannot pass a negative integer as an array size.
If you do, there is no compile-time error, but you will get a NegativeArraySizeException at run time.

public class MainClass {
    public static void main(String[] args) {
        int[] array = new int[-5]; // no compile-time error,
        // but java.lang.NegativeArraySizeException at run time
    }
}

Question: What is an anonymous array in Java? Give an example.
Answer: An anonymous array is an array without a reference. Here is an example:

public static void main(String[] args) {
    // anonymous array creation
    System.out.println(new int[]{6, 7, 3, 1, 9}.length);
    System.out.println(new int[]{91, 34, 55, 24, 31}[1]);
}

Question: Can you assign an array of 100 elements to an array of 10 elements?
Answer: Yes. In Java, an array of 100 elements can be assigned to an array reference declared for 10 elements. The only condition is that they are of the same type, because when assigning, the compiler checks only the type of the array, not the size. Here is a code example written in Java:

public class ArrayCopyClass {
    public static void main(String[] args) {
        int[] arrayWithTen = new int[10];
        int[] arrayWith100 = new int[100];
        arrayWithTen = arrayWith100;
    }
}

Question: What are the different ways of copying an array into another array in Java?
Answer: There are four common ways to copy an array in Java:
- Using a for loop
- Using the Arrays.copyOf() method
- Using the System.arraycopy() method
- Using the clone() method

Question: What are jagged arrays in Java? Give an example.
Answer: Jagged arrays in Java are multidimensional arrays whose rows have different lengths.
Here is an example of jagged arrays:

public class JaggedArraysExampleInJava {
    public static void main(String[] args) {
        // one-dimensional array with length 3
        int[] oneDimensionalArray3 = {1, 2, 3};
        // one-dimensional array with length 4
        int[] oneDimensionalArray4 = {4, 5, 6, 7};
        // one-dimensional array with length 5
        int[] oneDimensionalArray5 = {8, 9, 10, 11, 12};
        // jagged two-dimensional array
        int[][] twoDimensionalArray = {oneDimensionalArray3, oneDimensionalArray4, oneDimensionalArray5};
        // printing elements of the two-dimensional array
        for (int i = 0; i < twoDimensionalArray.length; i++) {
            for (int j = 0; j < twoDimensionalArray[i].length; j++) {
                System.out.print(twoDimensionalArray[i][j] + " ");
            }
            System.out.println();
        }
    }
}

Question: How do you check the equality of two arrays in Java?
Answer: You can use the Arrays.equals() method to compare one-dimensional arrays; to compare multidimensional arrays, use the Arrays.deepEquals() method.

Question: Where is a Java array stored in memory?
Answer: An array is created in the heap space of JVM memory, since an array is an object in Java. Even if you create an array locally inside a method or block, the object is always allocated memory from the heap.

Question: Which access modifiers can be used to declare arrays in Java?
Answer: In Java an array can be declared as private, public, protected, or without any modifier. The following table gives an overview of the accessibility of an array for the different access modifiers:

Modifier    | Class | Package | Subclass (same pkg) | Subclass (diff pkg) | World
public      |   +   |    +    |          +          |          +          |   +
protected   |   +   |    +    |          +          |          +          |   o
no modifier |   +   |    +    |          +          |          o          |   o
private     |   +   |    o    |          o          |          o          |   o

+ : accessible   o : not accessible

Question: Are arrays thread-safe in Java?
Answer: In general, reading from an array is a thread-safe operation, but modifying an array is not.

Question: What is the time complexity of different array operations in terms of Big-O notation?

How To Compare Arrays With Other Data Structures?

Bubble Sort Java Interview Questions

What is the Bubble Sort algorithm?
In bubble sort, as elements are sorted they gradually "bubble" (or rise) to their proper location in the array.

How does Bubble Sort work?

for i = 1:n,
    swapped = false
    for j = n:i+1,
        if a[j] < a[j-1],
            swap a[j,j-1]
            swapped = true
    → invariant: a[1..i] in final position
    break if not swapped
end

Implementation in Java:

package com.codespaghetti.com;

public class MyBubbleSort {

    public static void bubbleSort(int array[]) {
        int n = array.length;
        int k;
        for (int m = n; m >= 0; m--) {
            for (int i = 0; i < n - 1; i++) {
                k = i + 1;
                if (array[i] > array[k]) {
                    swapNumbers(i, k, array);
                }
            }
            printNumbers(array);
        }
    }

    private static void swapNumbers(int i, int j, int[] array) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }

    private static void printNumbers(int[] input) {
        for (int i = 0; i < input.length; i++) {
            System.out.print(input[i] + ", ");
        }
        System.out.println();
    }

    public static void main(String[] args) {
        int[] input = { 4, 2, 9, 6, 23, 12, 34, 0, 1 };
        bubbleSort(input);
    }
}

What are the properties of Bubble Sort?
- Stable
- O(1) extra space
- O(n²) comparisons and swaps
- Adaptive: O(n) when nearly sorted

What is the performance of Bubble Sort?

Selection Sort Java Interview Questions

What is selection sort?
The selection sort algorithm is a combination of searching and sorting. It sorts an array by repeatedly finding the minimum (or maximum) element in the unsorted part and putting it at the beginning. In selection sort, the inner loop finds the next smallest (or largest) value and the outer loop places that value into its proper location.

How does it work?
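To make the description concrete, here is a minimal, self-contained selection sort in Java (the class name is illustrative, not from the source):

```java
import java.util.Arrays;

public class MySelectionSort {

    public static void selectionSort(int[] array) {
        for (int i = 0; i < array.length - 1; i++) {
            // inner loop: find the index of the smallest remaining element
            int min = i;
            for (int j = i + 1; j < array.length; j++) {
                if (array[j] < array[min]) {
                    min = j;
                }
            }
            // outer loop: place it into its final position (at most n-1 swaps)
            int temp = array[i];
            array[i] = array[min];
            array[min] = temp;
        }
    }

    public static void main(String[] args) {
        int[] input = {29, 64, 73, 34, 20};
        selectionSort(input);
        System.out.println(Arrays.toString(input)); // [20, 29, 34, 64, 73]
    }
}
```

This sketch sorts in ascending order; the descending variant simply flips the comparison in the inner loop.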
for i = 1:n,
    k = i
    for j = i+1:n,
        if a[j] < a[k],
            k = j
    → invariant: a[k] smallest of a[i..n]
    swap a[i,k]
    → invariant: a[1..i] in final position
end

Let's look at the following table of elements using a selection sort in descending order. Remember, a "pass" is defined as one full trip through the array, comparing and, if necessary, swapping elements.

Properties of the selection sort algorithm:
- Not stable
- O(1) extra space
- Θ(n²) comparisons
- Θ(n) swaps
- Not adaptive

When to use selection sort?
In general, selection sort should rarely be used: it does not adapt to the data in any way, so its runtime is always quadratic. However, selection sort minimizes the number of swaps. In applications where the cost of swapping items is high, selection sort may well be the algorithm of choice.

What is the performance of selection sort in Big-O?

Heap Sort Java Interview Questions

What is the Heap sort algorithm?
Heap sort is a comparison-based sorting algorithm.

How does it work?
ALGORITHM:

Implementation in Java:

/*
 * Java program to implement heap sort
 */
import java.util.Scanner;

/* Class HeapSort */
public class HeapSort {

    private static int N;

    /* Sort function */
    public static void sort(int arr[]) {
        heapify(arr);
        for (int i = N; i > 0; i--) {
            swap(arr, 0, i);
            N = N - 1;
            maxheap(arr, 0);
        }
    }

    /* Function to build a heap */
    public static void heapify(int arr[]) {
        N = arr.length - 1;
        for (int i = N / 2; i >= 0; i--)
            maxheap(arr, i);
    }

    /* Function to sift the largest element down the heap */
    public static void maxheap(int arr[], int i) {
        int left = 2 * i;
        int right = 2 * i + 1;
        int max = i;
        if (left <= N && arr[left] > arr[i])
            max = left;
        if (right <= N && arr[right] > arr[max])
            max = right;
        if (max != i) {
            swap(arr, i, max);
            maxheap(arr, max);
        }
    }

    /* Function to swap two numbers in an array */
    public static void swap(int arr[], int i, int j) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }

    /* Main method */
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        System.out.println("Heap Sort Test\n");
        int n, i;
        /* Accept number of elements */
        System.out.println("Enter number of integer elements");
        n = scan.nextInt();
        /* Make array of n elements */
        int arr[] = new int[n];
        /* Accept elements */
        System.out.println("\nEnter " + n + " integer elements");
        for (i = 0; i < n; i++)
            arr[i] = scan.nextInt();
        /* Call method sort */
        sort(arr);
        /* Print sorted array */
        System.out.println("\nElements after sorting ");
        for (i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
        System.out.println();
    }
}

What are the properties of Heap sort?
- Not stable
- O(1) extra space
- O(n·lg(n)) time
- Not really adaptive

When to use Heap sort?

What is the performance of Heap sort?

Insertion Sort Java Interview Questions

What is the Insertion Sort algorithm?
Insertion sort is a simple sorting algorithm that builds the final sorted array one item at a time. It is much less efficient on large lists than more advanced sorting algorithms.

How does it work?
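A minimal, self-contained insertion sort in Java (the class name is illustrative, not from the source):

```java
import java.util.Arrays;

public class MyInsertionSort {

    public static void insertionSort(int[] array) {
        for (int i = 1; i < array.length; i++) {
            int current = array[i];
            int j = i - 1;
            // shift larger elements one slot to the right to make room for current
            while (j >= 0 && array[j] > current) {
                array[j + 1] = array[j];
                j--;
            }
            array[j + 1] = current;
        }
    }

    public static void main(String[] args) {
        int[] input = {29, 20, 73, 34, 64};
        insertionSort(input);
        System.out.println(Arrays.toString(input)); // [20, 29, 34, 64, 73]
    }
}
```

Because the inner loop stops as soon as a smaller element is found, a nearly sorted input needs almost no shifting, which is where the adaptive O(n) behavior comes from.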
ALGORITHM:

for i = 2:n,
    for (k = i; k > 1 and a[k] < a[k-1]; k--)
        swap a[k,k-1]
    → invariant: a[1..i] is sorted
end

Let's look at the following example using insertion sort in descending order.

Properties of Insertion sort:
- Stable
- O(1) extra space
- O(n²) comparisons and swaps
- Adaptive: O(n) time when nearly sorted
- Very low overhead

Advantages of Insertion sort:
- It is very simple.
- It is very efficient for small data sets.
- It is stable; i.e., it does not change the relative order of elements with equal keys.
- It is in-place; i.e., it only requires a constant amount O(1) of additional memory space.

What is the performance of Insertion sort?

Merge Sort Java Interview Questions

What is merge sort?
Merge sort is a sorting technique based on the divide-and-conquer rule. With a worst-case time complexity of O(n log n), it is one of the most respected algorithms. Merge sort first divides the array into equal halves and then combines them in a sorted manner.

How does it work?
Merge sort implementation in Java:

package codespaghetti.com;

public class MyMergeSort {

    private int[] array;
    private int[] tempMergArr;
    private int length;

    public static void main(String a[]) {
        int[] inputArr = {45, 23, 11, 89, 77, 98, 4, 28, 65, 43};
        MyMergeSort mms = new MyMergeSort();
        mms.sort(inputArr);
        for (int i : inputArr) {
            System.out.print(i);
            System.out.print(" ");
        }
    }

    public void sort(int inputArr[]) {
        this.array = inputArr;
        this.length = inputArr.length;
        this.tempMergArr = new int[length];
        doMergeSort(0, length - 1);
    }

    private void doMergeSort(int lowerIndex, int higherIndex) {
        if (lowerIndex < higherIndex) {
            int middle = lowerIndex + (higherIndex - lowerIndex) / 2;
            // sort the left side of the array
            doMergeSort(lowerIndex, middle);
            // sort the right side of the array
            doMergeSort(middle + 1, higherIndex);
            // merge both sides
            mergeParts(lowerIndex, middle, higherIndex);
        }
    }

    private void mergeParts(int lowerIndex, int middle, int higherIndex) {
        for (int i = lowerIndex; i <= higherIndex; i++) {
            tempMergArr[i] = array[i];
        }
        int i = lowerIndex;
        int j = middle + 1;
        int k = lowerIndex;
        while (i <= middle && j <= higherIndex) {
            if (tempMergArr[i] <= tempMergArr[j]) {
                array[k] = tempMergArr[i];
                i++;
            } else {
                array[k] = tempMergArr[j];
                j++;
            }
            k++;
        }
        while (i <= middle) {
            array[k] = tempMergArr[i];
            k++;
            i++;
        }
        // any remaining elements of the right half are already in place
    }
}

What are the properties of merge sort?
- Stable
- Θ(n) extra space for arrays (as shown)
- Θ(lg(n)) extra space for linked lists
- Θ(n·lg(n)) time
- Not adaptive
- Does not require random access to data

When to use Merge sort?

What is the performance of merge sort?

Array Algorithm Java Interview Questions

Question: Find Duplicate Numbers In An Integer Array in Java [Google, phone]
Answer: There are various ways in Java to find duplicate numbers in a given array; each solution has its own positive and negative points.
Keep in mind that the interviewer will be checking your ability to present a solution whose pros and cons, such as performance, you understand. Following are the two most common approaches.

Solution 1: Loop and Compare
Loop over the array and compare each element to every other element, using an inner loop and an outer loop. By starting the inner loop at i + 1, an element is never compared with itself and each pair is checked only once.

for (int i = 0; i < numbers.length; i++) {
    for (int j = i + 1; j < numbers.length; j++) {
        if (numbers[i] == numbers[j]) {
            System.out.println("This is a duplicate element: " + numbers[i]);
        }
    }
}

Performance
Since we are comparing every element to every other element, this solution has quadratic time complexity, O(n²).

Solution 2: Add to a Set
The Set interface doesn't allow duplicates in Java: if you have added an element into a Set and try to insert it again, the duplicate is rejected. In Java, you can use the HashSet class to solve this problem. Loop over the array elements, insert them into a HashSet using the add() method, and check the return value. If add() returns false, the element was already in the Set, and that is your duplicate. Here is the code sample to do this:

Set<String> set = new HashSet<>();
for (String name : names) {
    if (set.add(name) == false) {
        // 'name' is a duplicate element
    }
}

Performance
The time complexity of this solution is O(n), because you go through the array only once, but it also has a space complexity of O(n) because of the HashSet that holds your unique elements. So if an array contains 1 million elements, in the worst case you would need a HashSet storing those 1 million elements.
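Putting the Set-based approach together, a minimal self-contained version for an int array might look like this (class and method names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateFinder {

    // returns the set of elements that appear more than once, in O(n) time
    public static Set<Integer> findDuplicates(int[] numbers) {
        Set<Integer> seen = new HashSet<>();
        Set<Integer> duplicates = new HashSet<>();
        for (int number : numbers) {
            // add() returns false when the element is already in the set
            if (!seen.add(number)) {
                duplicates.add(number);
            }
        }
        return duplicates;
    }

    public static void main(String[] args) {
        System.out.println(findDuplicates(new int[]{1, 2, 3, 2, 5, 3}));
    }
}
```

Note the trade-off mentioned above: this uses O(n) extra space for the two sets in exchange for the single pass.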
Question: Find the Intersection of Two Sorted Arrays in Java [Microsoft]
Example:
int[] a = { 1, 2, 3, 6, 8, 10 };
int[] b = { 4, 5, 6, 11, 15, 20 };
Output: Intersection point is : 6

There are two approaches to finding the intersection of two arrays.

Solution 1: Use two for loops, compare each element in both arrays, and as soon as you find an intersection point, return it. Time complexity: O(n²).

Solution 2: Let the arrays be arrA[] and arrB[], with navigation indexes x and y respectively. Since the arrays are sorted:
- Compare the current elements of both arrays (starting at x = 0, y = 0).
- If both elements are the same, we have our intersection point; return it.
- Else if arrA[x] > arrB[y], advance the arrB[] index: y++.
- Else if arrA[x] < arrB[y], advance the arrA[] index: x++.
- If either array is exhausted, no intersection point was found; return -1.
Time complexity: O(n).

Here is the complete code:

package codespaghetti.com;

public class IntersecionPoint2Arrays {

    int intersectionPoint = -1;
    int x;
    int y;

    public int intersection(int[] arrA, int[] arrB) {
        while (x < arrA.length && y < arrB.length) {
            if (arrA[x] > arrB[y])
                y++;
            else if (arrA[x] < arrB[y])
                x++;
            else {
                intersectionPoint = arrA[x];
                return intersectionPoint;
            }
        }
        return intersectionPoint;
    }

    public static void main(String[] args) throws java.lang.Exception {
        int[] a = { 1, 2, 3, 6, 8, 10 };
        int[] b = { 4, 5, 6, 11, 15, 20 };
        IntersecionPoint2Arrays i = new IntersecionPoint2Arrays();
        System.out.println("Intersection point is : " + i.intersection(a, b));
    }
}

Output:
Intersection point is : 6

Alternative Questions
Based on your answer, the interviewer will normally ask some related questions to further test your knowledge. Following is a list of possible questions they can ask.
Make sure you know these as well.

Question: Find the Largest and Smallest Numbers in an Unsorted Array in Java [Amazon phone]
Answer: To find the largest and smallest numbers, iterate over the array once, comparing each element against the running largest and smallest values and updating them as needed. Here is pseudocode for the solution:

for (int i : numbers) {
    if (i > largest) {
        largest = i;
    } // end finding largest number
    if (i < smallest) {
        smallest = i;
    } // end finding smallest number
} // end finding largest and smallest values

Here is the complete solution:

package codespaghetti.com;

import java.util.Arrays;

public class FindLargestAndSmallestNumbers {

    public static void main(String args[]) {
        largestAndSmallest(new int[] { -20, 34, 21, -87, 92, Integer.MAX_VALUE });
        largestAndSmallest(new int[] { 10, Integer.MIN_VALUE, -2 });
        largestAndSmallest(new int[] { Integer.MAX_VALUE, 40, Integer.MAX_VALUE });
        largestAndSmallest(new int[] { 1, -1, 0 });
    }

    public static void largestAndSmallest(int[] numbers) {
        int largest = Integer.MIN_VALUE;
        int smallest = Integer.MAX_VALUE;
        for (int number : numbers) {
            if (number > largest) {
                largest = number;
            }
            if (number < smallest) {
                smallest = number;
            }
        }
        System.out.println("Given integer array : " + Arrays.toString(numbers));
        System.out.println("Largest number in array is : " + largest);
        System.out.println("Smallest number in array is : " + smallest);
    }
}

Output:
Given integer array : [-20, 34, 21, -87, 92, 2147483647]
Largest number in array is : 2147483647
Smallest number in array is : -87
Given integer array : [10, -2147483648, -2]
Largest number in array is : 10
Smallest number in array is : -2147483648
Given integer array : [2147483647, 40, 2147483647]
Largest number in array is : 2147483647
Smallest number in array is : 40
Given integer array : [1, -1, 0]
Largest number in array is : 1
Smallest number in array is : -1

Alternative Questions
Based on your answer, the interviewer will normally ask some other related questions to the
original question, which can be used to further test your knowledge. Following is a list of possible questions they can ask, so make sure you know these as well.

Question: Find Missing Numbers In An Array [Facebook]
Note: You can download the fully functional example at the end.
Answer: An array can have one or more numbers missing, and we can find them with two approaches:
1) Sum of the series, using the formula n(n+1)/2 (works only when exactly one number is missing)
2) Using a BitSet, which works even when the array has more than one missing number

Below is an implementation of both solutions:

package codespaghetti.com;

import java.util.Arrays;
import java.util.BitSet;

public class MissingNumber {

    public static void main(String args[]) {
        // one missing number
        printMissingNumber(new int[] { 1, 2, 3, 4, 6 }, 6);
        // two missing numbers
        printMissingNumber(new int[] { 1, 2, 3, 4, 6, 7, 9, 8, 10 }, 10);
        // three missing numbers
        printMissingNumber(new int[] { 1, 2, 3, 4, 6, 9, 8 }, 10);
        // four missing numbers
        printMissingNumber(new int[] { 1, 2, 3, 4, 9, 8 }, 10);
        // only one missing number in the array
        int[] iArray = new int[] { 1, 2, 3, 5 };
        int missing = getMissingNumber(iArray, 5);
        System.out.printf("Missing number in array %s is %d %n", Arrays.toString(iArray), missing);
    }

    /**
     * A general method to find missing values from an integer array in Java.
     * This method works even if the array has more than one missing element.
     */
    private static void printMissingNumber(int[] numbers, int count) {
        int missingCount = count - numbers.length;
        BitSet bitSet = new BitSet(count);
        for (int number : numbers) {
            bitSet.set(number - 1);
        }
        System.out.printf("Missing numbers in integer array %s, with total number %d is %n", Arrays.toString(numbers), count);
        int lastMissingIndex = 0;
        for (int i = 0; i < missingCount; i++) {
            lastMissingIndex = bitSet.nextClearBit(lastMissingIndex);
            System.out.println(++lastMissingIndex);
        }
    }

    private static int getMissingNumber(int[] numbers, int totalCount) {
        // multiply before dividing so n(n+1)/2 is exact for even totalCount too
        int expectedSum = totalCount * (totalCount + 1) / 2;
        int actualSum = 0;
        for (int i : numbers) {
            actualSum += i;
        }
        return expectedSum - actualSum;
    }
}

Output:
Missing numbers in integer array [1, 2, 3, 4, 6], with total number 6 is
5
Missing numbers in integer array [1, 2, 3, 4, 6, 7, 9, 8, 10], with total number 10 is
5
Missing numbers in integer array [1, 2, 3, 4, 6, 9, 8], with total number 10 is
5
7
10
Missing numbers in integer array [1, 2, 3, 4, 9, 8], with total number 10 is
5
6
7
10
Missing number in array [1, 2, 3, 5] is 4

Question: There is an array with every element repeated twice except one. Find that element.
As an example, consider [1, 1, 2, 3, 3, 5, 5]; the element we are looking for is 2. Can we achieve it in O(log n) and in place?
Let's consider what we already know about the problem. All the elements appear twice, except one: this means the size of the array is an odd number, and the non-repeating number is located in the odd-size part of the array. Binary search always divides the search space into two pieces. Dividing an odd-size space gives two subspaces, one of even size and one of odd size. Unfortunately, dividing the array in half doesn't by itself tell us which side is the odd-size one. But we can divide the array so that the first half always has even size.
Then comparing the last element L of the left subarray to the first element R of the right subarray is sufficient to establish in which half the extra element is located: if L != R, the single element is in the right half (starting at R); otherwise it is in the left half. Based on this, we can develop the algorithm described in the C-style pseudocode below (using pointer arithmetic for the subarrays):

int findOneElement(int array[], int size) {
    if (size == 1)
        return array[0];
    int medium = size / 2;
    // make the left half an even size
    medium = medium % 2 == 0 ? medium : medium + 1;
    if (array[medium - 1] == array[medium]) {
        // L == R: look in the left subarray, excluding L
        return findOneElement(array, medium - 1);
    } else {
        // L != R: look in the right subarray, starting at R
        return findOneElement(array + medium, size - medium);
    }
}

The complexity is obviously O(log n), as we are using binary search. Moreover, we don't use any extra memory, so space is O(1). All the requirements are met.
Here is the full O(log n) implementation in Java:

package array;

public class ElementThatAppearsOnceInSortedArray {

    public static void main(String[] args) {
        new ElementThatAppearsOnceInSortedArray();
    }

    public ElementThatAppearsOnceInSortedArray() {
        int arr[] = {1, 1, 2, 2, 3, 3, 4};
        int value = getElementAppearedOnce(arr, 0, arr.length - 1);
        if (value == -1) {
            System.out.println("There is no element that appeared once in given sorted array.");
        } else {
            System.out.println("Element that appeared once in given sorted array is :" + value);
        }
    }

    private int getElementAppearedOnce(int arr[], int start, int end) {
        if (start > end) {
            return -1;
        }
        // this case appears for inputs like {1, 1, 2}
        if (start == end) {
            return arr[start];
        }
        int mid = (start + end) / 2;
        if (mid % 2 == 0) { // EVEN
            if (arr[mid] == arr[mid + 1]) {
                return getElementAppearedOnce(arr, mid + 2, end);
            } else {
                return getElementAppearedOnce(arr, start, mid);
            }
        } else { // ODD
            if (arr[mid] == arr[mid - 1]) {
                return getElementAppearedOnce(arr, mid + 1, end);
            } else {
                return getElementAppearedOnce(arr, start, mid);
            }
        }
    }
}

Question: Given a max-heap represented as an array, return the kth largest element without modifying the heap.
Answer: Full implementation in Java:

public static void findKthElementFromHeap(int[] heap, int k) {
    PriorityQueue<Integer> q = new PriorityQueue<>(new HeapComparator(heap));
    int val = -1;
    q.add(0);
    int i = 0;
    while (!q.isEmpty()) {
        i++;
        int temp = q.poll();
        if (i == k) {
            val = heap[temp];
            break;
        }
        int n = (2 * temp) + 1;
        if (n < heap.length) {
            q.add(n);
        }
        n = (2 * temp) + 2;
        if (n < heap.length) {
            q.add(n);
        }
    }
    System.out.println(val);
}

public static void main(String[] args) {
    int[] heap = {15, 12, 10, 8, 9, 5, 6, 6, 7, 8, 5};
    findKthElementFromHeap(heap, 7);
}

static class HeapComparator implements Comparator<Integer> {
    int[] heap;

    public HeapComparator(int[] heap) {
        this.heap = heap;
    }

    @Override
    public int compare(Integer o1, Integer o2) {
        return Integer.compare(heap[o2], heap[o1]);
    }
}

Question: Given a sorted array, find all the numbers that occur more than n/4 times.
Full implementation in Java:

public Set<Integer> findNumbers(int[] arr, double k) {
    Set<Integer> result = new HashSet<>();
    double size = arr.length / k;
    if (size <= 1) {
        return result;
    }
    int step = (int) size / 2;
    step = step < 1 ? 1 : step;
    for (int i = 0; i < arr.length - step; i += step) {
        if (arr[i] == arr[i + step]) {
            int start = binarySearch(i - step, i, arr);
            int end = start + (int) size;
            if (end < arr.length && arr[end] == arr[i]) {
                result.add(arr[i]);
            }
        }
    }
    return result;
}

private int binarySearch(int start, int end, int[] arr) {
    if (start < 0) {
        return 0;
    }
    int target = arr[end];
    while (start < end) {
        int mid = (start + end) / 2;
        if (arr[mid] == target) {
            end = mid - 1;
        } else {
            start = mid + 1;
        }
    }
    return start;
}

Question: How to randomly select a number in an array in Java?
Example array: [15, 2, 4, 5, 1, -2, 0], with a parallel freq[] array giving each element's selection weight.
Java implementation (each element is picked with probability proportional to its entry in freq):

static public int pickRandom(int[] array, int[] freq) {
    int total = 0;
    for (int f : freq) {
        total += f;
    }
    // pick a uniform value in [0, total) and find the bucket it falls into
    int randValue = new Random().nextInt(total);
    int randIndex = 0;
    while (randValue >= freq[randIndex]) {
        randValue -= freq[randIndex];
        randIndex++;
    }
    return array[randIndex];
}

Question: Remove duplicates from an array in Java [Google phone]
Answer: Java program to remove duplicates from a sorted array:

package codespaghetti.com;

public class DuplicateElements {

    public static int[] removeDuplicates(int[] input) {
        int j = 0;
        int i = 1;
        // return if the array length is less than 2
        if (input.length < 2) {
            return input;
        }
        while (i < input.length) {
            if (input[i] == input[j]) {
                i++;
            } else {
                input[++j] = input[i++];
            }
        }
        int[] output = new int[j + 1];
        for (int k = 0; k < output.length; k++) {
            output[k] = input[k];
        }
        return output;
    }

    public static void main(String a[]) {
        int[] input1 = {2, 3, 6, 6, 8, 9, 10, 10, 10, 12, 12};
        int[] output = removeDuplicates(input1);
        for (int i : output) {
            System.out.print(i + " ");
        }
    }
}

Question: You are given arrayA and arrayB; write a function to shuffle arrayA so that you get countA > countB [Google]
Analysis: There are two integer arrays, arrayA and arrayB, of the same size, and two counters, countA and countB. If arrayA[i] > arrayB[i], we increase countA by 1; if arrayB[i] > arrayA[i], we increase countB by 1; otherwise we do nothing. Given arrayA and arrayB, write a function to shuffle arrayA so that countA > countB. Assume the input arrays are always valid and not empty, and the input is guaranteed to have an answer.
Example:
arrayA = [12, 24, 8, 32]
arrayB = [13, 25, 32, 11]
After shuffle:
arrayA = [24, 32, 8, 12]
arrayB = [13, 25, 32, 11]
Full implementation in Java:

public void shuffle(int[] a, int[] b) {
    int k = a.length % 2 == 0 ? a.length / 2 - 1 : a.length / 2;
    partitionAsc(a, k);
    int i = 0;
    int j = b.length - 1;
    while (i < j) {
        if (b[i] < a[k]) {
            swap(b, i, j);
            j--;
        }
        i++;
    }
}

private void partitionAsc(int[] a, int k) {
    int i = 0;
    int j = a.length - 1;
    Random rnd = new Random();
    while (i < j) {
        int piv = i + rnd.nextInt(j - i + 1);
        piv = partitionHelp(a, piv, i, j);
        if (piv == k) {
            break;
        }
        if (piv < k) {
            i = piv + 1;
        } else {
            j = piv - 1;
        }
    }
}

private int partitionHelp(int[] a, int piv, int i, int j) {
    swap(a, piv, j);
    piv = j--;
    while (i <= j) {
        if (a[i] > a[piv]) {
            swap(a, i, j);
            j--;
        } else {
            i++;
        }
    }
    swap(a, piv, i);
    return i;
}

Question: Write two functions to serialize and deserialize an array of strings.
Java implementation:

public class ArraySerializerDeserializer {

    public static String serialize(String[] a) {
        StringBuilder output = new StringBuilder();
        int maxLenght = 0;
        for (String s : a)
            if (s.length() > maxLenght)
                maxLenght = s.length();
        maxLenght++;
        output.append(maxLenght).append(":");
        String delimiter = generateRandString(maxLenght);
        for (String s : a)
            output.append(delimiter).append(s.length()).append(":").append(s);
        System.out.println(output.toString());
        return output.toString();
    }

    public static String[] deserialize(String s, int size) {
        String[] output = new String[size];
        StringBuilder sb = new StringBuilder();
        StringBuilder num = new StringBuilder();
        int i = 0;
        while (s.charAt(i) != ':') {
            num.append(s.charAt(i));
            i++;
        }
        i++;
        int maxWordSize = Integer.valueOf(num.toString());
        num = new StringBuilder();
        boolean parsingNum = false;
        boolean parsingDelimiter = true;
        int charCount = 0;
        int nextWordLenght = 0;
        int wordCount = 0;
        while (i < s.length()) {
            if (parsingDelimiter) {
                while (charCount < maxWordSize) {
                    i++;
                    charCount++;
                }
                parsingDelimiter = false;
                parsingNum = true;
                charCount = 0;
            } else if (parsingNum) {
                while (s.charAt(i) != ':') {
                    num.append(s.charAt(i));
                    i++;
                }
                parsingNum = false;
                nextWordLenght = Integer.valueOf(num.toString());
                num = new StringBuilder(); // emptying
                    i++;
                } else {
                    while (nextWordLenght > 0) {
                        sb.append(s.charAt(i));
                        i++;
                        nextWordLenght--;
                    }
                    parsingDelimiter = true;
                    output[wordCount] = sb.toString();
                    wordCount++;
                    sb = new StringBuilder(); // Emptying.
                }
            }
            return output;
        }

        private static String generateRandString(int size) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < size; i++) {
                sb.append((char) (65 + (26 * Math.random())));
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            String[] a = { "this", "is", "very", "nice", "I", "like" };
            String s = serialize(a);
            String[] output = deserialize(s, a.length);
            for (String out : output)
                System.out.print(out + " ");
        }
    }

Question: Find the smallest range that includes at least one number from each of the k lists.

Example:
List 1: [4, 10, 15, 24, 26]
List 2: [0, 9, 12, 20]
List 3: [5, 18, 22, 30]
The smallest range here would be [20, 24], as it contains 24 from list 1, 20 from list 2, and 22 from list 3.

Full Implementation in Java:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.PriorityQueue;
    import java.util.SortedSet;
    import java.util.TreeSet;

    public class GoogleProblem {

        public static void main(String[] args) {
            List<List<Integer>> lists = new ArrayList<List<Integer>>();
            List<Integer> list1 = new ArrayList<Integer>();
            list1.add(4);
            list1.add(10);
            list1.add(15);
            list1.add(24);
            list1.add(26);
            List<Integer> list2 = new ArrayList<Integer>();
            list2.add(0);
            list2.add(9);
            list2.add(12);
            list2.add(20);
            List<Integer> list3 = new ArrayList<Integer>();
            list3.add(5);
            list3.add(18);
            list3.add(22);
            list3.add(30);
            lists.add(list1);
            lists.add(list2);
            lists.add(list3);
            Result result = findCoveringRange(lists);
            System.out.println(result.startRange + ", " + result.endRange);
        }

        public static Result findCoveringRange(List<List<Integer>> lists) {
            Result result = null;
            int start = -1, end = -1;
            int rDiff = Integer.MAX_VALUE;
            int k = lists.size();
            PriorityQueue<Data> pQueue = new PriorityQueue<Data>();
            SortedSet<Data> entries = new TreeSet<Data>();
            Map<Integer, Data> listNoAndEntry = new
    HashMap<Integer, Data>();
            for (int i = 0; i < k; i++)
                pQueue.add(new Data(lists.get(i).remove(0), i));
            while (!pQueue.isEmpty()) {
                Data minData = pQueue.remove();
                if (lists.get(minData.listNo).size() > 0)
                    pQueue.add(new Data(lists.get(minData.listNo).remove(0), minData.listNo));
                if (listNoAndEntry.size() == k) {
                    Data first = entries.first();
                    if ((entries.last().data - first.data) + 1 < rDiff) {
                        rDiff = (entries.last().data - first.data) + 1; // remember the best range so far
                        start = first.data;
                        end = entries.last().data;
                    }
                    entries.remove(first);
                    listNoAndEntry.remove(first.listNo);
                }
                if (listNoAndEntry.containsKey(minData.listNo))
                    entries.remove(listNoAndEntry.remove(minData.listNo));
                listNoAndEntry.put(minData.listNo, minData);
                entries.add(minData);
            }
            if (listNoAndEntry.size() == k) {
                Data first = entries.first();
                if ((entries.last().data - first.data) + 1 < rDiff) {
                    rDiff = (entries.last().data - first.data) + 1;
                    start = first.data;
                    end = entries.last().data;
                }
                entries.remove(first);
                listNoAndEntry.remove(first.listNo);
            }
            result = new Result(start, end);
            return result;
        }
    }

    class Result {
        public final int startRange, endRange;

        public Result(int startRange, int endRange) {
            this.startRange = startRange;
            this.endRange = endRange;
        }
    }

    class Data implements Comparable<Data> {
        public final int data;
        public final int listNo;

        public Data(int data, int listNo) {
            this.data = data;
            this.listNo = listNo;
        }

        @Override
        public int compareTo(Data o) {
            return data - o.data;
        }
    }

Question: Check if array can represent preorder traversal of binary search tree [Google]

Given an array of numbers, return true if the given array can represent the preorder traversal of a Binary Search Tree; otherwise return false. Expected time complexity is O(n).

Examples:

Input: pre[] = {2, 4, 3}
Output: true
Given array can represent preorder traversal of the tree below:

      2
       \
        4
       /
      3

Input: pre[] = {2, 4, 1}
Output: false
Given array cannot represent preorder traversal of a Binary Search Tree.
Input: pre[] = {40, 30, 35, 80, 100}
Output: true
Given array can represent preorder traversal of the tree below:

         40
        /  \
      30    80
        \     \
        35    100

Input: pre[] = {40, 30, 35, 20, 80, 100}
Output: false
Given array cannot represent preorder traversal of a Binary Search Tree.

A Simple Solution is to do the following for every node pre[i], starting from the first one:
1) Find the first greater value on the right side of the current node. Let the index of this node be j. Return true if the following conditions hold, else return false:
(i) All values after the above-found greater value are greater than the current node.
(ii) Recursive calls for the subarrays pre[i+1..j-1] and pre[j+1..n-1] also return true.
Time complexity of this solution is O(n²).

An Efficient Solution can solve this problem in O(n) time. The idea is to use a stack: we look for the next greater element, and if after finding it we see a smaller element, we return false.
1) Create an empty stack.
2) Initialize root as INT_MIN.
3) Do the following for every element pre[i]:
a) If pre[i] is smaller than the current root, return false.
b) Keep removing elements from the stack while pre[i] is greater than the stack top. Make the last removed item the new root (to be compared next).
At this point, pre[i] is greater than the removed root (that is why, if we see a smaller element in step a), we return false).
c) Push pre[i] to the stack (all elements in the stack are in decreasing order).

Full Implementation:

    // Java program for an efficient solution to check if
    // a given array can represent Preorder traversal of
    // a Binary Search Tree
    import java.util.Stack;

    class BinarySearchTree {

        boolean canRepresentBST(int pre[], int n) {
            // Create an empty stack
            Stack<Integer> s = new Stack<Integer>();

            // Initialize current root as minimum possible value
            int root = Integer.MIN_VALUE;

            // Traverse given array
            for (int i = 0; i < n; i++) {
                // If we find a node which is on the right side
                // and smaller than root, return false
                if (pre[i] < root) {
                    return false;
                }

                // If pre[i] is in the right subtree of the stack top,
                // keep removing items smaller than pre[i]
                // and make the last removed item the new root.
                while (!s.empty() && s.peek() < pre[i]) {
                    root = s.peek();
                    s.pop();
                }

                // At this point either the stack is empty or
                // pre[i] is smaller than the stack top; push pre[i]
                s.push(pre[i]);
            }
            return true;
        }

        public static void main(String args[]) {
            BinarySearchTree bst = new BinarySearchTree();

            int[] pre1 = new int[]{40, 30, 35, 80, 100};
            int n = pre1.length;
            if (bst.canRepresentBST(pre1, n) == true) {
                System.out.println("true");
            } else {
                System.out.println("false");
            }

            int[] pre2 = new int[]{40, 30, 35, 20, 80, 100};
            int n1 = pre2.length;
            if (bst.canRepresentBST(pre2, n1) == true) {
                System.out.println("true");
            } else {
                System.out.println("false");
            }
        }
    }

Output:
true
false

Question: How to merge two sorted Arrays in Java?

Algorithm: If both arrays are sorted in ascending order and we want the resulting array to maintain the same order, the algorithm to merge two arrays A[0..m-1] and B[0..n-1] into an array C[0..m+n-1] is as follows:
e.g. arr1 = {4, 6, 9, 20, 56}, arr2 = {1, 7, 25, 45, 70}
result = {1, 4, 6, 7, 9, 20, 25, 45, 56, 70}
- Introduce read-indices i, j to traverse arrays A and B, accordingly.
Introduce write-index k to store the position of the first free cell in the resulting array. By default i = j = k = 0.
- At each step: if both indices are in range (i < m and j < n), choose the minimum of (A[i], B[j]) and write it to C[k]. Otherwise go to step 4.
- Increase k, and the index of the array in which the algorithm located the minimal value, by one. Repeat step 2.
- Copy the remaining values from the array whose index is still in range to the resulting array.

Enhancements
The algorithm could be enhanced in many ways. For instance, it is reasonable to check whether A[m - 1] < B[0] or B[n - 1] < A[0]. In either of those cases there is no need to do more comparisons: the algorithm can just copy the source arrays into the resulting one in the right order. More complicated enhancements may include searching for interleaving parts and running the merge algorithm only on them; this could save much time when the sizes of the merged arrays differ by scores of times.

Complexity analysis
The merge algorithm's time complexity is O(n + m). Additionally, it requires O(n + m) additional space to store the resulting array.

Code snippets
Java implementation (a minimal sketch of the algorithm described above):

    static int[] merge(int[] a, int[] b) {
        int[] c = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        // While both indices are in range, copy the smaller head element.
        while (i < a.length && j < b.length) {
            c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        }
        // Copy the rest of whichever array is not yet exhausted.
        while (i < a.length) {
            c[k++] = a[i++];
        }
        while (j < b.length) {
            c[k++] = b[j++];
        }
        return c;
    }

Display ar1 : 1 3 7 11
Display ar2 : 2 5 8 22
Display merged array: 1 2 3 5 7 8 11 22

Question: You are presented with the following code example and asked to review and improve it.
    public int private printFirstThreeValues(){
        public int someArray[] = new int[1000];
        someArray[0]=1;
        someArray[1]=2;
        someArray[2]=3;
        System.out.println(someArray[1]+someArray[2]+someArray[3] )
    }

Answer: Remember, we are focusing on the use of arrays in this program. The following things can be improved:
- Access modifier: since the array lives inside a private method, there is no need to declare it as public (local variables cannot have access modifiers at all, so this would not even compile).
- Name of the array: remember clean-code principles and always use intention-revealing names; the name can be improved to e.g. integerArray.
- Size of the array: we are printing the first three values and only populating those three, so there is no need to create an array of 1000.
- Indexing: the values are stored at indices 0-2, but indices 1-3 are printed; someArray[3] was never assigned. The print statement is also missing its terminating semicolon.

Applying these points, the method might look like this (one possible cleanup):

    private void printFirstThreeValues() {
        int[] integerArray = {1, 2, 3};
        System.out.println(integerArray[0] + integerArray[1] + integerArray[2]);
    }

Download the Source Code for Java Examples:
- find largest and smallest number
- find missing number in array

Resources: Array Interview Questions From Around the Web

Although this guide covers the vast majority of array interview questions, you may still want to consult some more excellent resources. In this section I have collected high-quality articles, guides, and questions related to arrays in various programming languages.

Free Online Course About Java Arrays
MIT Exercises About Java Arrays

Best Array Interview Resources:
- Interview Cake: Java Arrays
- Geeks for Geeks: Array Data Structures
- Career Cup: Array Interview Questions
- Career Guru: Top 50 Array Interview Questions

C# Array Interview Questions:
- Arrays in C#
- Toptal: C# Array Interview Questions
- C# Array Interview Questions and Answers
- Basic C# Array Questions and Answers

C++ Array Interview Questions:

Keys To Interview Success

Arrays are important from an interview point of view because they are a fundamental data structure in every programming language.
In the real world a lot of programming problems are solved by using arrays and algorithms. If you want to increase your chances of success in a real interview, be prepared: there will be at least 1-3 questions about arrays, and a sorting-algorithm question is almost a must.

Don't forget to check related interview questions: AngularJS Interview Questions, Spring Interview Questions, Algorithms Interview Questions, Java Inheritance Interview Questions, Java Multithreading Interview Questions, DevOps Interview Questions

Array Interview Questions PDF

About The Author
http://www.codespaghetti.com/array-interview-questions/
Trying to concatenate multiple files in a root to a single file. These are the problems I face: I'm using Python 3 here, so what am I missing?

It seems like you're using the wrong encoding for the file in this case. If you write a file in one encoding and read it in a different encoding, you will end up getting nonsensical characters made up of the same bytes being interpreted in the wrong way. Hope this helps!
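One way to avoid such encoding mismatches entirely is to concatenate the files in binary mode, so the bytes are copied verbatim without any decode/encode step. Below is a minimal sketch; the `*.txt` pattern and the `combined.out` output name are made up for illustration:

```python
import glob
import shutil

# Concatenate every .txt file in the current directory into one output file.
# Opening both sides in binary mode ("rb"/"wb") copies raw bytes, so no
# characters can be mangled by a wrong encoding guess.
with open("combined.out", "wb") as out:
    for path in sorted(glob.glob("*.txt")):
        with open(path, "rb") as src:
            shutil.copyfileobj(src, out)
```

Note that the output file uses a different extension than the inputs, so the glob cannot accidentally pick up the file being written.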
https://www.edureka.co/community/30227/right-concatenate-files-appending-character-python-binary?show=30228
Starting with the .NET Compact Framework version 2.0, you can perform direct COM interop, use the MarshalAsAttribute, and have an increased set of types you can marshal.
Describes providing unmanaged code access to managed components.
Describes providing managed code access to COM components.
Discusses differences in the .NET Compact Framework for using platform invoke to call a native component such as a DLL file.
Discusses differences in the .NET Compact Framework for marshaling data types between managed and unmanaged code. Also describes how to set a registry key to perform interop logging.
Describes advanced programming techniques for interoperating with native controls.
Provides a class for implementing managed window procedures as described in How to: Subclass a TreeView by Using Native Callbacks.
Describes subclassing the TreeView control to create an implementation of the NodeMouseClick event.
Describes subclassing the Button control to display a colorful gradient fill.
Provides helper functions used for subclassing controls as described in How to: Subclass a TreeView by Using Native Callbacks.
Describes how to use the MessageWindow and Message classes in the Microsoft.WindowsCE.Forms namespace. This example does not require a native component.
Describes interoperability services in the full .NET Framework.
http://msdn.microsoft.com/en-us/library/k3f1t3ct(VS.80).aspx
Usage 🔋

At its core DSS is a simple compiler that takes regular CSS files and generates atomic CSS classes.

Pre-requisites
The compiler is written in JavaScript, and therefore you will need Node.js v7.6+ installed on your machine. Please read the how it works page before you continue.

dss-compiler
required - The DSS compiler. Add the compiler to your project:

    npm i dss-compiler

The easiest way to use the DSS compiler is via the CLI tool, which accepts a glob to match your css files to compile, a dist folder, and an optional bundle filename (by default it would write to index.css):

    dss ./components/*.css ./build --bundleName bundle.css

This will generate a bundle.css in the build folder. You can then include this bundle in your app using a simple link tag. DSS will also write the atomic CSS classname mappings to JSON files in the same build folder. For example, when compiling components/button/styles.css, DSS writes build/components/button/styles.css.json. This file contains mappings of selector to array of atomic classes.

Optionally, DSS can generate JavaScript modules instead of JSON files. Prefer this option if you are consuming the mappings in a JavaScript application, since it allows you to import from components/button/styles.css right away.

    dss ./components/*.css ./build --bundleName bundle.css --outType js

dss-compiler as a library
The compiler can be used as a library in two modes: singleton and multi instance. The multi-instance version is for when you are using asynchronous compilations, e.g. in a webpack loader.

    const fs = require('fs')
    const dss = require('dss-compiler')

    const src = `
      .btn { color: red }
    `

    dss.singleton(src).then(({ locals, css, flush }) => {
      fs.writeFileSync('./component1/styles.css.json', JSON.stringify(locals))
      fs.writeFileSync('./bundle.css', flush())
    })

For more details see the atomic-css page.

dss-classnames
This package implements the classnames helper required to consume the DSS styles.
Right now it contains only a JavaScript implementation, however we are planning to add implementations in other languages and always welcome user contributions! If you want to implement this helper in another language you can find more details on the classnames helper page.

    npm i dss-classnames

Which you can use similarly to the popular classnames library:

    import classNames from 'dss-classnames'
    import styles from './component1/styles.css'

    const test = `<div class="${classNames(styles.btn, styles.anotherClass, 'a-custom-class')}">
      hi
    </div>`

This helper accepts a mix of DSS tokens and regular CSS classnames and makes sure that styles are resolved deterministically. It accepts a list of comma-separated classes and you can even have conditions.

    classNames(styles.btn, isDisabled && styles.btnDisabled)

When using DSS with React you might want to pair this helper with babel-plugin-classnames, which imports classNames for you automatically and lets you write this instead:

    import styles from './component1/styles.css'

    <div className={[styles.btn, styles.anotherClass, 'a-custom-class']}>hi</div>

dss-webpack
DSS comes with a webpack loader and plugin, and since it works similarly to CSS Modules it can leverage existing tools like extract-text-webpack-plugin (webpack 3) and mini-css-extract-plugin (webpack 4) to allow you to easily compile your styles. For more details see the dedicated webpack page.

dss-next
If you use Next.js we prepared a simple plugin for you to seamlessly integrate DSS.

    npm i dss-next

In next.config.js:

    const withDSS = require('dss-next-dss')

    const localIdentName = process.env.NODE_ENV === 'production'
      ? 'DSS-[hash:base32]'
      : '[name]-[local]--[hash:base32:5]'

    module.exports = withDSS({
      dssLoaderOptions: {
        localIdentName,
        filename: 'static/index.css'
      }
    })

You will then need to add a link to /_next/static/index.css in pages/_document.js:

    <link rel="stylesheet" href="/_next/static/index.css" />
https://dss-lang.com/usage/
'Coding style' refers to the way source code is formatted. For C, this involves things like brace placement, indentation, and the way parentheses are used. GNOME has a mix of coding styles, and we do not enforce any one of them. The most important thing is for the code to be consistent within a program or library - code with sloppy formatting is not acceptable, since it is hard to read. When writing a new program or library, please follow a consistent style of brace placement and indentation. If you do not have any personal preference for a style, we recommend the Linux kernel coding style, or the GNU coding style. Read the (Standards)Writing C info node in the GNU documentation. Then, get the Linux kernel sources and read the file linux/Documentation/CodingStyle, and ignore Linus's jokes. These two documents will give you a good idea of what we recommend for GNOME code. For core GNOME code we prefer the Linux kernel indentation style. Use 8-space tabs for indentation. Using 8-space tabs for indentation provides a number of benefits. It makes the code easier to read, since the indentation is clearly marked. It also helps you keep your code honest by forcing you to split functions into more modular and well-defined chunks - if your indentation goes too far to the right, then it means your function is designed badly and you should split it to make it more modular or re-think it. 8-space tabs for indentation also helps you to design functions that fit nicely in a single screen, which means that people can understand the code without having to scroll back and forth in order to understand it. 
If you use Emacs, then you can select the Linux kernel indentation style by including this in your .emacs file: On newer Emacsen or with a newer cc-mode, you may be able to simply do this instead: If you use vim, then you can select the GNOME kernel indentation style by including this fragment in your ~/.vimrc file: The GNU indentation style is the default for Emacs, so you do not need to put anything in your .emacs to enable it. If you wish to select it explicitly, substitute "gnu" for "linux" in the example above. If you know how to customize indentation styles in other popular editors, please tell us about it so that we can expand this document. It is important to follow a good naming convention for the symbols in your programs. This is especially important for libraries, since they should not pollute the global namespace - it is very annoying when a library has sloppily-named symbols that clash with names you may want to use in your programs. Function names should be of the form module_submodule_operation, for example, gnome_canvas_set_scroll_region or gnome_mime_get_keys. This naming convention eliminates inter-module clashes of symbol names. This is very important for libraries. Symbols should have descriptive names. As Linus says, do not use cntusr(), use count_active_users() instead. This makes code very easy to read and almost self-documenting. Try to use the same naming conventions as in GTK+ and the GNOME libraries: Function names are lowercase, with underscores to separate words, like this: gnome_canvas_set_scroll_region(), gnome_mime_get_keys(). Macros and enumerations are uppercase, with underscores to separate words, like this: GNOMEUIINFO_SUBTREE() for a macro, and GNOME_INTERACT_NONE for an enumeration value. Types and structure names are mixed upper and lowercase, like this: GnomeCanvasItem, GnomeIconList. 
Using underscores to separate words makes the code less cramped and easier to edit, since you can use your editor's word commands to navigate quickly. If you are writing a library, then you may need to have exported symbols that are to be used only within the library. For example, two of the object files that compose the library libfoo.so may need to access symbols from each other, but these symbols are not meant to be used from user programs. In that case, put an underscore before the function name and make the first words follow the standard module/submodule convention. For example, you could have a function called _foo_internal_frobnicate(). It is important that your variables be consistently named. For example, a module that does a list manipulation may choose to name the variables that hold a list pointer "l", for terseness and simplicity. However, it is important that a module that manipulates widgets and sizes does not use variables called "w" for both widgets and widths (as in width/height values); this would make the code inconsistent and harder to read. Of course, these very short and terse names should only be used for the local variables of functions. Never call a global variable "x"; use a longer name that tells what it does. GNOME code should be as clean as possible. This implies using a consistent indentation style and good naming conventions, as described above. It also implies the following. Learn the correct use of the static keyword. Do not make all your symbols global. This has the advantage that you can use shorter names for internal functions within a single source file, since they are not globally visible and thus you do not need the module/submodule prefix. Learn the correct use of the const keyword. Use it consistently, as it can make the compiler catch a lot of stupid bugs for you. If you have a function that returns a pointer to internal data which the user is not supposed to free, you should use a const modifier.
This will warn the user if he tries to do something incorrect, for example (a generic illustration; the function name is invented):

    const char *foo_get_name (Foo *foo);

The compiler will warn the user if he tries to free the returned string. This can catch a lot of bugs. If you have random 'magic values' in your program or library, use macros to define them instead of hardcoding them where they are used:

    #define TOOLBAR_HEIGHT 48

If you have a list of possible values for a variable, do not use macros for them; use an enum instead and give it a type name - this lets you have symbolic names for those values in a debugger. Also, do not use an 'int' to store an enumeration value; use the enum type instead. This lets the compiler catch errors for you, allows the debugger to show proper values for these values and makes it obvious what values a variable can take. An example follows (again with invented names):

    typedef enum {
            FOO_MODE_READ,
            FOO_MODE_WRITE,
            FOO_MODE_READ_WRITE
    } FooMode;

If you define a set of values for a bit field, do it like this:

    typedef enum {
            FOO_FLAG_NONE   = 0,
            FOO_FLAG_BOLD   = 1 << 0,
            FOO_FLAG_ITALIC = 1 << 1
    } FooFlags;

This makes it easier to modify the list of values, and is less error-prone than specifying the values by hand. It also lets you use those values as symbols in a debugger. Do not write obfuscated code, but also try to be spartan. Do not use more parentheses than are necessary to clarify an expression. Use spaces before parentheses and after commas, and also around binary operators. Please do not put hacks in the code. Instead of writing an ugly hack, re-work the code so that it is clean, extensible and maintainable. Make sure your code compiles with absolutely no warnings from the compiler. These help you catch stupid bugs. Use function prototypes in header files consistently. Within GNOME you can use the GNOME_COMPILE_WARNINGS Autoconf macro in your configure.in. This will take care of turning on a good set of compiler warnings in a portable fashion. Comment your code. Please put a comment before each function that says what it does. Do not say how it does it unless it is absolutely necessary; this should be obvious from reading the code. If it is not, you may want to rework it until the code is easy to understand.
While documenting API functions for a library, please follow the guidelines specified in the file gnome-libs/devel-docs/api-comment-style.txt. This allows your source code to provide inline documentation that is later extracted by the gtk-doc system to create a DocBook manual automatically. GTK+ lets you do a lot of magic and obfuscation with signal handlers, passed closures, and datasets. If you find yourself doing a lot of gtk_object_set_data() all over the place, or passing state around in bizarre ways via signal handlers, please rework the code. If you need to attach a lot of data to a particular object, then it is a good candidate for a new derived class, which will not only make the code cleaner, but more extensible as well. A lot of heuristics in complicated event handlers can often be replaced by clean code in the form of a state machine. This is useful when you want to implement tricky things like selection and dragging behavior, and will make the code easier to debug and extend.
http://developer.gnome.org/doc/guides/programming-guidelines/code-style.html
Trying to brainstorm the best load/save options for a large RPG. BinaryFormatter seems like a great option, but I'm wondering if it can save more than just int, float, string. Is there an actual list of types that are safe to pass to BinaryFormatter?

What do you mean "safe to pass"? Any type can be serialized if marked with SerializableAttribute.

What I mean is, from my understanding, you cannot save a Vector3 with BinaryFormatter. But you are saying BinaryFormatter will save any Unity type, be it a Vector3, Quaternion, or anything else, as long as I slap a [System.Serializable] over it?

Answer by Cherno · May 02, 2019 at 02:40 AM

Types in the System namespace:
sbyte, short, int, long, byte, ushort, uint, ulong, float, double, decimal, char, bool, string, Object

Based off of that list, which does not mention anything about arrays, which I know can be saved with the BinaryFormatter... Can a List<> also be saved with the BinaryFormatter? Thanks Cherno, this led me to some more searching where I found all of your other posts. Appreciated!

Answer by Bunny83 · May 02, 2019 at 01:46 PM

The BinaryFormatter can save all types which are marked with the Serializable attribute. Unfortunately, many of Unity's internal types (like Vector3) do not have this attribute.
https://answers.unity.com/questions/1627713/is-there-a-list-of-types-binaryformatter-can-save.html
A python library for making ascii-art into network graphs.

Project description

Asciigraf is a python library that turns ascii diagrams of networks into network objects. It returns a networkx graph with a node for each alpha-numeric element in the input text; nodes are connected in the graph to match the edges represented in the diagram by -, /, \ and |.

Installation

Asciigraf can be installed from pypi using pip:

    ~/$ pip install asciigraf

Usage

Asciigraf expects a string containing a 2-d ascii diagram. Nodes can be an alphanumeric string composed of characters in A-Z, a-z, 0-9, and _, {, }. Edges can be composed of -, /, \ and |.

    import asciigraf

    network = asciigraf.graph_from_ascii("""
              NodeA-----
                       |
                       |---NodeB
    """)

    print(network)
    >>> <networkx.classes.graph.Graph at 0x7f24c3a8b470>

    print(network.edges())
    >>> [('NodeA', 'NodeB')]

    print(network.nodes())
    >>> ['NodeA', 'NodeB']

Networkx provides tools to attach data to nodes and edges, and asciigraf leverages these in a number of ways; in the example below you can see that asciigraf uses this to attach an (x, y) position tuple to each node indicating where on the (x, y) plane each node starts ((0, 0) is at the top-left). It also attaches a length attribute to each edge which matches the number of characters in that edge, as well as a list of positions for each character in an edge.

    print(network.nodes(data=True))
    >>> [('NodeA', {'position': (10, 1)}), ('NodeB', {'position': (23, 3)})]

    print(network.edges(data=True))
    >>> [('NodeA', 'NodeB', OrderedDict([('length', 10), ('points', [...])]))]

    print(network.edge['NodeA']['NodeB']['points'])
    >>> [(15, 1), (16, 1), (17, 1), (18, 1), (19, 1), (19, 2), (19, 3), (20, 3), (21, 3), (22, 3)]

Asciigraf also lets you annotate the edges of graphs using in-line labels, denoted by parentheses. The contents of the label will be attached to the edge on which it is drawn with the attribute name label.
    network = asciigraf.graph_from_ascii("""
        A---(nuts)----B----(string)---C
        |             |
        |             |
        D---(string)--E
    """)

    print(network.get_edge_data("A", "B")["label"])
    >>> nuts

    print(network.get_edge_data("B", "C")["label"])
    >>> string

    print(network.get_edge_data("D", "E")["label"])
    >>> string

    print(hasattr(network.get_edge_data("B", "D"), "label"))
    >>> False

Have fun!

    import asciigraf
    network = asciigraf.graph_from_ascii(""" s---p----1---nx / | | / | 0---f 6l-a c-- / | \--k / ua | 9e q \ | / \-r7z jud \ | m y \ | v-ow """)
https://pypi.org/project/asciigraf/
define a dynamic endpoint using a placeholder. Scenario We will enhance the flight scenario from the previous blog PI REST Adapter – JSON to XML conversion where a travel agency gathers flight details from an airline. Besides the flight details query, the travel agency likes to check the availability of flights. Depending on the respective service name as part of the endpoint URL, the travel agency either calls BAPI_FLIGHT_GETDETAIL or BAPI_FLIGHT_CHECKAVAILABILITY. In the SAP Process Integration Designer perspective of the NetWeaver Developer Studio (NWDS), we open the existing Integration Flow and add an interface split together with the new inbound interface. Select the interface split, and maintain the condition. In the condition, we use the adapter specific message attribute with name service and namespace. The attribute is set within the REST sender channel, see below. For service equals getdetail, the request is routed to BAPI_FLIGHT_GETDETAIL. For service equals checkavail, the request is routed to BAPI_FLIGHT_CHECKAVAILABILITY. Configuring the REST sender adapter Double-click on the sender channel of type REST, and switch to the REST Resources tab below the Adapter-Specific tab. We would like to pass the service name as part of the endpoint URL, so we define the custom pattern as /{service_part} whereas service_part is a placeholder which is filled during runtime. The REST adapter comes with a set of predefined adapter specific message attributes such as resource, service, and id which are commonly used among RESTful services. We use the service attribute to hold the name of the service. As dynamic attribute select REST Service (service) from the drop down menu. The Value Source is the beforehand defined URL Pattern Element service_part. The rest of the configuration remains unchanged. For details, refer to the blog PI REST Adapter – JSON to XML conversion. 
Running the scenario

For testing the scenario we use the Advanced REST Client application in the Google Chrome browser. The endpoint URL of the RESTful service starts with http://<host>:<port>/RESTAdapter, with host and port of the SAP PI system, followed by what we have defined in the sender channel, here /demo/flight/<service>. For either service, we enter the same request message in JSON format.

For checking the flight availability, we call the URL http://<host>:<port>/RESTAdapter/demo/flight/checkavail. The response provides us with the number of seats available in JSON format. For requesting the flight details, we call the URL http://<host>:<port>/RESTAdapter/demo/flight/getdetail. The response provides us with the flight details such as destination and schedule in JSON format.

You can access the dynamic attribute values from the DynamicConfiguration message header. In the message monitor, select the corresponding message, switch to tab Message Content, and open the DynamicConfiguration message header. Here you can see that the service attribute value is either checkavail or getdetail.

I hope this blog was helpful to understand the options that you have to define custom endpoints within the SAP PI REST adapter. If you would like to learn more, check out the other blogs in the series, accessible from the main blog PI REST Adapter – Blog Overview.

Hi Alex,

We are trying to create a dynamic URL for a REST receiver channel. Our channel configuration looks like below.{DYNAMIC}, and we have configured the channel to populate the value for the parameter DYNAMIC at runtime from the payload (using an XPath expression). The problem is, the value that comes in at runtime has reserved characters like = (equals) and ; (semicolon). Thus these values are getting URL-encoded and converted to %3D and %3B. Is there a way to avoid this? For example, the URL we need is;partnerName=xyz, while the URL generated at runtime is. Is there any setting that we can use to avoid the URL encoding for the dynamic part of the URL?
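The behaviour described in the comment is standard percent-encoding: = and ; are reserved characters in URLs, so a generic encoder turns them into %3D and %3B. A quick stdlib illustration of the mechanics (Python used purely as an analogy, with a made-up parameter value; this is not SAP PI code):

```python
from urllib.parse import quote

value = "date=20150101;partnerName=xyz"  # hypothetical dynamic part of the URL

# Generic encoding escapes the reserved characters
print(quote(value, safe=""))    # date%3D20150101%3BpartnerName%3Dxyz

# Declaring them "safe" keeps them literal, which is what the poster needs
print(quote(value, safe="=;"))  # date=20150101;partnerName=xyz
```

Whether the PI receiver channel exposes such an option is exactly the open question in the comment; the snippet only shows why the two URLs differ.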
https://blogs.sap.com/2014/12/18/pi-rest-adapter-defining-a-dynamic-endpoint/
CC-MAIN-2017-30
refinedweb
618
55.13
Why Wikipedia Articles Vary So Much In Quality 160

Hugh Pickens writes "A new study shows that the patterns of collaboration among Wikipedia contributors directly affect the quality of an article. 'We then clustered the articles based on these roles and examined the collaboration patterns within each cluster to see what kind of quality resulted,' says Ram. 'We found that all-round contributors dominated the best-quality entries. In the entries with the lowest quality, starters and casual contributors dominated.'"

Really? (Score:5, Funny)

Re:Really? (Score:5, Interesting)

Also updates vary. For example: Christina_Applegate [wikipedia.org]

Current career: Applegate starred in the ABC comedy, Samantha Who?, until it was canceled on May 18, 2009. The series costarred Jean Smart, Jennifer Esposito, and Melissa McCarthy. The series was about a 30-year-old who, after a hit-and-run accident, develops amnesia and has to rediscover her life, her relationships, and herself.[9] Shortly after the cancellation was announced, Applegate began a campaign to get the show back into production,[10] which was unsuccessful.

Also, it doesn't help that I am too lazy to edit the changes myself. Leave it up to the snobby community. I've tried to contribute before, it was the last time I made that mistake.

Re:Really? (Score:5, Insightful)

Quality isn't such a simple metric, never will be (Score:4, Insightful)

You can easily have an extremely high quality, 100% accurate and in-depth Wikipedia article without a single external reference. Therefore, the entire analysis is bullshit. Which is about what I've come to expect from anything that tries to meta Wikipedia. It's a mish-mosh. As long as article creation and revision is open, it will remain one. Legitimate attempts to characterize any article's quality can only be done by a true expert in the subject matter at hand, if one can even be found. 
Which is why Wikipedia's resident pedants utterly foul up so many excellent contributions. A-, B- and C-class articles, my ass. Re:Quality isn't such a simple metric, never will (Score:4, Insightful) You can easily have an extremely high quality, 100% accurate and in-depth Wikipedia article without a single external reference. No, you can't. Without references, a reader has no way of knowing whether the article is accurate or not; and an editor who writes an article who is unfamiliar with the references that could be cited is unlikely to be sufficiently knowledgeable to genuinely produce a high-quality article. Re:Quality isn't such a simple metric, never will (Score:4, Insightful) Take a look at the math articles. Heck most of the original content like episodes of BattleStar Galactica, information about cartoon characters or fringe political movements didn't have high quality references. Wikipedia built itself by specializing in materials for which only so / so or no references existed. Articles on wikipedia were higher quality that the same material on the same topics anywhere else. Re:Quality isn't such a simple metric, never will (Score:5, Insightful) Fixed that for you. If wikipedia allowed users to volunteer to sign pages, and in that signature were their qualifications, then some credence could be attached to the article -- referenced or not. Add to that, that all wikiadmins really should be identified on the site. If they are going to edit, delete, attack and defend content, as well as ban users, we really should know what their qualifications actually are. I'd be willing to bet 90% of the current problems with wikipedia would disappear overnight if the admins lost their anonymity. Much of the neofascist behavior, and agenda-ism, would certainly disappear. It solves the "who watches the watchers" problem overnight. 
While there are good reasons why articles can be submitted anonymously, those in charge of the site do NOT need to be anonymous -- and for the sake of transparency, honesty and ethical credibility, we NEED to know who they are. Are they afraid of the truth? What do they have to hide? Re: (Score:2) I'd be willing to bet 90% of the current problems with wikipedia would disappear overnight if the admins lost their anonymity. Much of the neofascist behavior, and agenda-ism, would certainly disappear. It solves the "who watches the watchers" problem overnight. I'm not so sure about that. If the admins lose there anonimity , it becomes very easy for someone with enough power , to pressure them. For example , you could write something about a local politician , which that politician doesn't like ( even if it's true ) . If your identity is known , that person can easily pressure you into changing your story. Re: (Score:2) Utter nonsense. The reader may, if they wish to verify anything, simply turn to Google and further educate themselves on the subject matter, or turn to researching it themselves. The article may, in fact, justify itself by explaining matters sufficiently. If the article is accurate and in-depth, it *is* high quality because the point is to impute correct information in its perusal, which such an arti Re:Quality isn't such a simple metric, never will (Score:4, Funny) You can easily have an extremely high quality, 100% accurate and in-depth Wikipedia article without a single external reference. [Citation needed.] :-P Re: (Score:2) Here you go : You can easily have an extremely high quality, 100% accurate and in-depth Wikipedia article without a single external reference. Re: (Score:2) However, good clear writing can be judged. 
The study points out that the best wikipedia entries are done by editors who are GOOD writers who know how to a) contribute new sentences (write a first draft), b) re-write sentences (re-drafting), c) add references (source checking), d) make grammatical and other edits (final drafting). The formula for writing good content has not changed. It's just the proportions (collaboration) that have made the process more efficient and provided more content which are in ne Re: (Score:3, Interesting). While true, one might say that stasis is the proper state for a repository of knowledge. Why should articles be under continual maintenance when the subject area is for the most part static? Politics, religion, and anything that passes for either are the least desirable things for Wiki. Any articles dealing in either area are essentially useless, bias magnets. But there is very little new information on the vast majority of subjects, so having 90% of them "dead" is just fine. Equally nonsensical are the seem Then add the citation (Score:4, Informative) Equally nonsensical are the seemingly random insertion of [citation needed] tags on things that are matters of public record. In that case, you can help Wikipedia by removing the {{citation needed}} and replacing it with <ref>name of the relevant public record</ref>. Re: (Score:2) What a bunch of nonsense. Why should ever mention of a persons name hold a reference to the county courthouse in which his birth records are stored? Re:Then add the citation (Score:4, Informative) Re: (Score:3, Insightful) "Which is why Wikipedia is doomed to a slow bit-rot into irrelevance." Wikipedia is still one of the most popular websites on the internet - claiming it is dead or dying is premature and probably wrong. There are more than enough editors to maintain the vast majority of popular articles. 
More esoteric topical articles such as a living actress will become stale every now and then (this has always been true on Wikipedia), but established topics have, well... established articles. And these types of articles wi Citizendium or Veropedia (Score:2) Wikipedia is prime to be "taken down" by a peer reviewed competitor (or simply by someone who can bother with basic copy-editing). Either Wikipedia provides that service themselves, for example by cleaning up and freezing articles, or eventually someone else will do it for them. Then why hasn't Citizendium or Veropedia already done this? Re: (Score:3, Insightful) Agreed, though it doesn't have to be that way. I see articles that need to be created or extensive revised all the time. But 4 years ago people worked together to create content. Now they work together to destroy content. Re: (Score:3, Insightful) Because 90% of Wikipedia is dead. People drive-by now and then and drop in a sentence or fix a spelling error, but for the most part nobody is editing the articles unless it's a politically contentious topic. Oh come on. 90% of articles probably concern topics that are either "finished" or are part of a domain in which scholarship is currently very slow moving. Once an article on a particular deceased author is written for example it shouldn't be updated unless some new insights are gained at some point. Likewise for some scientists and theories which have been superseded or are well established. Knowledge doesn't "bit-rot". Re: (Score:3, Funny) In January 2010, Apple announced the iPad. The iPad is a tablet form factor computer due to be released in 2010. Within one line of each other, one post is talking about the past tense in 2010 and the future tense in 2010... Oh, the horror! The month (Score:2). Consider an edit made on March 2009. January of that year was the past and November was the future. Re: (Score:2) Yes, the article seems to be just stating the obvious. 
"casual contributor" is defined, apparently, a somebody who adds text, but not citations or links. An "A" quality article is defined as one, among other things, incorporating a lot of citations and links. Surprise, the casual contributors mostly contribute to articles that aren't "A" quality! Re: (Score:1) That's the beauty of data mining; you can find things out that would have otherwise been totally unknown. TFA states that they will next be applying these techniques to determine whether water is wet... Re: (Score:3, Insightful) You might think this is obvious, but any Slashdot article on Wikipedia inevitably includes lots of comments saying "My drive-by edit was reverted and I'm never contributing again and Wikipedia is dying." Lots of people on Slashdot do seem to think that an agglomeration of off-the-cuff edits could somehow produce quality articles. Re: (Score:2) You might think this is obvious, but any Slashdot article on Wikipedia inevitably includes lots of comments saying "My drive-by edit was reverted and I'm never contributing again and Wikipedia is dying." I wouldn't go so far to support the unsupported generalization in the rest of your post, but this part does seem to be true, and it's becoming really annoying. This attitude appears so prevalent at times that we actually see such completely anecdotal posts, painfully devoid of anything resembling a rational argumentation common in the technical community, get upvoted as "Insightful" or "Informative", and a lot. A google search gave me an example within fifteen seconds: a comment with score +5, Informative [slashdot.org] Re: (Score:2) There's just a lot of people repeating the "LOL Wikipedia" line, so it's not surprising people pick it up seemingly randomly. It's easy to reinforce because there is a nugget of truth to it. Re: (Score:2) How do you want people to source allegations about an article deletion if Wikipedia hides the history of deleted pages as if they never existed? 
There's no citing a memory hole. You can always help people find any deleted article on Wikipedia simply by stating its exact name, and even better by linking its old location. For example, like this [wikipedia.org]. An anonymous user doesn't get much from that, but a logged-in user and especially a sysop sees additional information there that tells the entire history. Then any of those users can also paste whatever they deem useful for others to see. It only takes one single person to do that, out of many thousands that have the permission. AFAIR, an ar Re: (Score:2) That, or articles written by people with the most hours logged writing fanfiction about the subject. Re: (Score:2) Practice makes perfect? Political Correctness and Wikipedia (Score:2, Redundant) The so-called "study" is a farce. There is no mentioned of articles which fit the "PC" criteria that are filled with lies and deceits, such as one article where the government of a certain country has posted, blaming every problem they have on their former British colonial masters. There is no "collaboration" whatsoever, in term of the Wikipedia readers/editors, for every time anyone tried to edit that said article will get nullified, as the government of that country has employed a "cyber patrol" group which Oh. (Score:4, Funny) I always figured that some of the articles were poor because they were written by Americans, rather than much more intelligent Europeans or Asians. Re:Oh. (Score:4, Funny) I always figured that some of the articles were poor because they were written by Americans, rather than much more intelligent Europeans or Asians. At first I thought you were trolling, but then I checked the facts [wikipedia.org]! With how well-written that article was I can only assume it was someone from Hong Kong. Missing role: deleters (Score:5, Insightful) Seriously, I'm encountering more and more 'deleted' articles when I search Wikipedia. Can someone stop deleters? 
Or at least offer an option to view deleted articles (Deletionpedia works only for English language). Re:Missing role: deleters (Score:5, Funny) Re: (Score:2) Re: (Score:3, Insightful) Exactly. And then these people who revert -any- change without even looking at it. What? An anonymous contributor added a few words to make a phrase make sense? Revert it! Reason for edit: Change a word to make a phrase make since ;) Re:Missing role: deleters (Score:4, Funny) Re:Missing role: deleters (Score:4, Funny) Audacity is open source, so anyone can have it. Here's my citation: [wikipedia.org] Unreliable source (Score:2) Here's my citation: [wikipedia.org] Bad citation. An encyclopedia is not a reliable source, a wiki doubly not. I'll have to report you to the IRS [wikipedia.org]. Re: (Score:2, Insightful) They're only looking at people who contribute, not at the people who destroy. I'm with you on the deletionist troll issue though. Many interesting articles have been deleted outright and many wiki pages for interesting projects are deleted, just because someone, somewhere hasn't heard of it. The deletionism also makes the whole Wikipedia experience that much more annoying, because when you click on a link for &Name, obviously expecting a meaningful answer to how it ties into this article, you instead get Re: (Score:2, Interesting) Deletionpedia [dbatley.com] archives deleted wikipedia pages. Unfortunately, the site is mostly not working at the moment but they do say they're continuing to archive deleted pages while they get the site up again. Re:Missing role: deleters (Score:5, Insightful) Yes, this is really quite pathetic. On several occasions now I have wanted some information on a particular topic (e.g. a shitty old game I picked up, my mobile phone, or even a description of lemon party). 
I go to the wikipedia page, I can tell that several people went to the effort of writing an entry on that topic but the page was deleted by someone who decided that no-one would ever want to see that information. This is arrogance in the extreme - destroying some people's work because they incorrectly assumed that no-one would ever want to see it. Was the article getting in the way before it was deleted?! Surely Wikipedia could have a link to view pages that were 'deleted' for non-notability - what would be so bad about that? Re:Missing role: deleters (Score:4, Insightful) Re: (Score:3, Informative) Actually the policies themselves say that blogs and other self-published sources are never good enough unless they happen to be written by the subject of the article. Even then, they're not used for notability, but they can be used as reliable sources. The only time self-published sources can be used for anything not about the subject is in the case when its written by a recognized expert in the field. Even then, its reliable, but its usefulness in establishing notability is questionable. The threshold for Notability does not work that way (Score:2) This is arrogance in the extreme - destroying some people's work because they incorrectly assumed that no-one would ever want to see it. Notability does not work that way. Verifiability of each claim against reliable sources is Wikipedia's core content policy. "Reliable sources" is Wikipedia-speak for scholarly or mainstream media. Notability of a topic [wikipedia.org] is merely an upper bound on verifiability of claims made about a topic: whether it "has received significant coverage in reliable sources that are independent of the subject." But if you know of one or more reliable sources about the "particular topic" in question, try this: Re: (Score:2) Well, I just browsed through the list of articles proposed for deletion on Wikipedia. 
A lot of it, I'd say about 70% or so was articles about people or bands/albums/songs to be deleted on notability grounds. The rest were a mixed bag of general cleanup. The question is, notable compared to what? I can assure you that of all the samples I looked at, none would have qualified for an encyclopedia entry. None were anyone I'd be surprised to find missing. I think if you want to include people of less notability, Re: (Score:2) Seriously, I'm encountering more and more 'deleted' articles when I search Wikipedia. Wikipedia has to be careful not to fill up the Internet. Why Wikipedia Articles Vary So Much In Quality ? (Score:1, Funny) Re: (Score:3, Funny) Maybe looking at it the wrong way? (Score:4, Interesting) Re: (Score:2) True, and at a certain point the casual contributor no longer has anything to add and the very knowledgeable have to move in for any contributions to made and move the article from "just ok" to "good" and beyond. It's very unlikely because of demographics a good all-rounder with a lot of knowledge of a topic will move in and create an article from scratch. The casuals are more likely to get there first. Quality Ratings (Score:5, Insightful) In that scheme, excellent articles with posters who tend to brush up against some of wikipedia's more picky guidelines, would be rated lower. It's minor, because in general wikipedia's guidelines are there to make better articles, but it sometimes happens. It's like defining intelligence as the ability to do well on intelligence tests. It's certainly related, and there's not much of a better alternative, but you have to remember you aren't measuring the trait directly. Re:Quality Ratings (Score:4, Insightful) That flaw has always been there, and similar was included in every version of every printed encyclopedia. It's hard to get around that without thousands of editors working full time. The premise of Wikipedia is good, but if you want to trust some information you found on the Internet... 
errrmm, you need to validate it, corroborate it, and research it yourself if necessary. For me, Wikipedia makes a great starting point to learn about something, just as any single book on any given subject is a good place to *start*. The principle of trust but verify applies for many things, but caveat emptor equally applies. Personally, much of the content of Wikipedia is better than asking Yahoo! Answers and others. meh, it's a thing. If you were supposed to get all your answers from a single source, god wouldn't have made Al Gore invent the Internet. Get off my lawn! Re: (Score:2) That is kind of the point of wikipedia. While there are plenty of unsourced stubs out there, any article of substance is more than likely to have sources. You can go verify what is written there at other sources. Wikipedia is little more than an aggregate that people have tried to mold into something readable. Re: (Score:2) More in depth, I'm saying they're measuring something real, but they aren't measuring exactly what they claim. They're measuring how different types of contributors create articles that meet Wikipedia's internal quality standards... not how they actually create quality articles. Small but potentially important distinction, but yes, I used many words. Sorry about that; on a positive note, I'm sure your ability to handle that will grow over time. Because different people write the entries? (Score:2, Insightful) I've seen some shocking entries, but I can't commit to spending the 20 hours or so it'd take to write a new, decent article from scratch. I guess some people can't tell that the articles suck and go ahead and quote them or whatever. Re: (Score:3, Informative) Quality (Score:5, Funny) Re:Quality (Score:4, Interesting) Re: (Score:2) Wikipedia is great for anything involving mathematics or Star Wars. Everything else seems kind of suspect to me. Actually, I find that Wikipedia is absolutely terrible for math/science. 
It's fine if you're doing your thesis in the subject, but the majority of math/science articles are way beyond the comprehension of the average reader. The simple entries help, but there aren't enough of them yet.

My experience with WikiPedia (Score:5, Insightful)

Re: (Score:3, Informative)

As a couple people have pointed out, it is very difficult in some cases to get anything done. I've run into plenty of articles where some people will guard them religiously and even if there are issues on the page, any maintenance tag is immediately reverted, people are insulted, and even if 10 people showed up to claim the maintenance tag was necessary they'd fight tooth and nail until blocked, and unblocked only to continue. These people were viewed as "good" editors. Which meant dealing with them pointles

Re: (Score:2)

Because I don't have the weeks or months it takes to work through that byzantine procedure. Nor do I have any desire to do so, because it favors the 'side' with the most time on its hands and the most sockpuppets at its beck and call.

Wikipedia's Editors (Score:5, Insightful)

On the Nose (Score:5, Insightful)

To this I would add that Wikipedia's policies make it very difficult for Wikipedia to be anything more than a web aggregator and pop-culture barometer. I still use Wikipedia to satisfy trivial inquiries, but it's nowhere near as useful as it used to be.

Re: (Score:2)

articles that were written by experts in the relevant field. And how did you know they were experts? Other than them telling you of course. Cos you can totally believe what people tell you about themselves on the internet.

Degrees to Source (Score:3, Informative)

Well, I vaguely remember a great big "outing" of one big Wikipedia contributor who claimed to be an expert in all sorts of stuff but who turned out to have hardly any education at all. So your point is well taken. 
But it seems to me that injecting the additional level of, "Blog X says Professor Y is an expert, so he's an expert for purposes of Wikipedia" is not an improvement over "Professor Y says he's an expert who contributes directly to Wikipedia." In other words, the question of who is an expert is Reliable sources are scholarly or mainstream media (Score:2) But it seems to me that injecting the additional level of, "Blog X says Professor Y is an expert, so he's an expert for purposes of Wikipedia" is not an improvement over "Professor Y says he's an expert who contributes directly to Wikipedia." Which is why Wikipedia doesn't allow "Blog X says Professor Y is an expert"; instead it requires "Scholarly or mainstream media source X says Professor Y is an expert". Re: (Score:2) And of course what qualifies as "scholarly or mainstream" is equally obvious to everyone involved, I'm sure. [/sarcasm] In practice, what happens is that anything from virtually any third party on the web is treated as a usable source, as many Wikipedians see the blogosphere as equivalent to the "mainstream press" as long as it appears the b Re: (Score:2). That's a real dilemma though. Do you accept on faith that un-cited information from an anonymous source because it looks right ? Complete nonsense [museumofhoaxes.com] can be made to sound good. Or do you accept only a more limited set of information for which you can at least validate the sources so you have a fighting chance ? The only optimal solution would be to offer both with the article with citations being the preferred one but that adds unwanted complexity and cost. Personally I think your expert friends should have jus Re:Wikipedia's Editors (Score:5, Insightful) It's funny, some comments in here complain that many articles have gotten stale and aren't well-maintained. Others, like yours, complain that there aren't enough articles. 
These two complaints are at odds with each other - a fixed number of editors can either maintain a smaller, more important set of articles, or can devote their time to starting and watching new articles. Your criticism is largely overblown too: there are, on average, over 1000 new articles a day. I'd like to see any print Encyclopedia do this in a year. Frankly, I prefer less but higher-quality articles, because it minimizes the amount of misinformation (one of the biggest plagues in early Wikipedia). It helps minimize the number of esoteric articles from being started and then forgotten. The only real rule you need to know when starting an article is notability: the 22342342343 policies are only in place to remove subjectivity from the process. Common sense can get you most of the way there, but if you are in the habit of starting articles understanding the five "general notability guideline" will save you a lot of hassle. And only takes about five minutes. Re:Wikipedia's Editors (Score:4, Interesting) Re: (Score:2) Re: (Score:2) I don't specifically know about the Pokemon case, but I see several Pokemon species have their own pages and the rest have their own sections on "List of Pokemon" pages. I suspect that the deletions were due to this notability guideline ( or [wikipedia.org]). I agree that when articles get Re: (Score:2) A) It is one source for -everything- you don't have to hunt for 234242344 other sites to get the information B) No ads C) Fast loading D) Easy URLs, its pretty easy to find what article without having to find Re: (Score:2) The cost is not kilobytes and pennies, it's time. Watching "List of Pokemon" for vandalism is much easier than keeping an eye on 700+ individual articles; similarly, many of Wikipedia's policies are designed around making it possible for a small number of people to handle a large number of articles. Re: (Score:2) Yes, but Wikipedia doesn't have a fixed number of editors. 
I used to edit, but the deletions and the policy douchebaggery drove me away. S

Re:Wikipedia's Editors (Score:5, Informative)

Why is it that editors think deleting articles somehow makes it better? Because:

- if the quality of Wikipedia is measured by averaging the quality of all its articles, deleting the crap raises the quality of Wikipedia.
- crap inevitably attracts more crap. If the crap articles weren't deleted they would multiply.
- crap pages, written by people who mistake Wikipedia for a free web-host for their fan site, give Wikipedia a bad name.
- if you can't find the good articles for stumbling over the crap, you're likely to stop looking and go some place else.

If crap pages weren't deleted Wikipedia would drown under them. Regardless of infinite disk space, or unlimited bandwidth. Wikipedia is essentially a database. If you fill a database with too much garbage it becomes useless, no matter how much data of true value is in there also. The noise to signal ratio becomes unbearable.

Re:Wikipedia's Editors (Score:4, Insightful)

Why is it that editors think deleting articles somehow makes it better? Because ; - if the quality of Wikipedia is measured by averaging the quality of all its articles, deleting the crap raises the quality of Wikipedia.... [Emphasis mine.] Wow. So in your mind, 'not notable' is equivalent to 'crap'. That's quite a leap. Perhaps you should make that case first before you embark on any other argument.

Re: (Score:2)

Because you're doing it wrong. All these things could be improved by having: 1. Stable article revisions, which would contain merges of useful information from the unstable versions. The length of cycle should roughly depend on how many contributors are there in the article. 2. 
Notability score for articles (say, from 1 - the best of 100 - to 8 - non-notable) and ability to filter categories by these (and maybe, having articles with different notability score in different namespace, so they could be recognize Re: (Score:2) Re: (Score:2) The quality of wikipedia is the sum total of information it contains In that case I propose that Wikipedia simply takes a raw dump of google's index and stores it. Never mind the quality, how about that quantity! Sure, it takes a bit of looking to find exactly what you want, the quality of most of it is poor, and there's forty different copies of everything that says different things, but it's all in there somewhere! You seem to be confusing the role of Wikipedia with that of the internet. And not finding the article...I mean, how often have you gone looking for Poland and instead been sucked into Pubic Hair I don't know. Do you mean "Poland" the country, "Poland" the bar in downtown New Yor Re: (Score:2) Re: (Score:2) the notability requirement killed wikipedia (Score:2, Insightful) The notability requirement killed wikipedia. Encouraging people to provide ever better sources is something I agree with but to delete articles because the sources "aren't good enough" is ridiculous. Maybe for the George Bush article there ought to be some sort of minimum requirement but for an article on an alien race in a science fiction show? If the best source you have for a particular claim is an episode of that show than I don't see what the problem is. Besides, I'd consider an episode to be a bet "Study" (Score:1) It sounds like articles cared for by people that stick around turn out better than ones edited by drive-by people, eh? Interesting and all, but you know, this sorta studies get cited to support all kindsa wackjob social "theories", don't they? I mean, citing such studies are deemed "rigor" and whatnot. Less deletion (Score:2, Insightful) I'm with the rest who say too many articles are being deleted. 
Several times I've been able to, or thought I was able to, find an article on a subject I wanted information on. Then all I get is a deleted page, with no way to see what was deleted, and about as much clarity as to why it was deleted. At least send me to the page where you explain and quote why and what you deleted. Preferably, if you have more knowledge on the subject, write a better article and put that up as a replacement. Empty pages ben...

back in the day (Score:4, Informative)

there was a book called the cathedral and the bazaar [wikipedia.org]

it delineates the difference between bottom up and top down organization, specifically in regards to software development models like linux versus gnu

obviously, this overlaps thematically with wikipedia in that wikipedia was once a bazaar, and is now becoming a cathedral

regardless of which model is better for wikipedia, the pluses and minuses of the cathedral versus the bazaar models of software development should be instructive for what exactly wikipedia is winning, and losing, in its trade off between bazaar and cathedral

Wikipedia has always been a Cathedral (Score:3, Insightful)

Actually, a large part of the problem is that Wikipedia always has been a Cathedral. Cathedrals are venues where the decisions are made by a person or persons in a position of near-absolute power over the cathedral's output. That elite position exists on Wikipedia too. It's called The Last Guy To Edit. In wiki theory, it doesn't matter that every person to edit, at the time they are editing, is acting as the Supreme Ruler of the Cathedral. The theory is that any abuse of this power will be corrected becau...

One key flaw (Score:4, Interesting)

A, B, and C-class assessments are not Wikipedia-wide. They are assessed by individual Wikiprojects (of which there are literally hundreds). And each Wikiproject has its own standard of what it considers A, B, and C.
Some Wikiprojects are much easier, others are more rigorous (like WikiProject Military history [wikipedia.org]). Furthermore, C-class is relatively new, having been created just within the past two years or so; so there are probably still a lot of B-class articles that should be C-class.

Re:One key flaw (Score:5, Insightful)

What the holy shit are you talking about? Maybe the study should have been, "why are people working with Wikipedia completely unable to communicate in English to other people?" Shit, at the very least, why not tell us what your constantly-used GA and FA acronyms actually *mean*. Anybody care to translate that into English?

In increasing quality order: B, GA, A, FA (Score:2)

They totally neglected GA-class [...] A, B, and C-class assessments are not Wikipedia-wide

But what WikiProjects' assessment criteria have in common is that B is below GA and A is between GA and FA.

Roles (Score:4, Informative)

Why Wikipedia Articles Vary So Much In Quality? (Score:2)

My initial reaction was: because it's a free encyclopedia that anybody can edit.

Why delete anything? (Score:2)
(Sorry, answered to one person again. Reposting)

I like the bind() idea because it doesn't clutter the builtin namespace. It solves the import problem and feels very natural IMO.

The only issue is the name. In my mind, bind() conveys the idea that you are modifying the function itself, while partial() conveys the idea that you return a new function. It could also be written in C for performance.

On 20/09/2016 at 10:22, אלעזר wrote:
> Yeah I did say it was a strawman :)
>
> On Tue, Sep 20, 2016 at 11:17 AM Chris Angelico <ros...@gmail.com> wrote:
>
> On Tue, Sep 20, 2016 at 6:09 PM, אלעזר <elaz...@gmail.com> wrote:

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
Code of Conduct:
Intermediate Language (IL)

Common Intermediate Language (a.k.a. CIL, or more commonly just IL) is the lingua franca of the CLR. Managed programs and libraries are composed of metadata, whose job it is to (1) describe the physical and logical CTS components and abstractions of your code, and (2) represent the code that comprises those components in a way that the CLR may inspect and deeply understand, the latter of which utilizes IL. IL is an assembly-like language into which all managed languages are compiled. For example, the compilers for C#, VB, and C++/CLI transform the source language into metadata, which contains IL instructions to represent the executable portion. The CLR understands this metadata and IL, and uses the information contained within it to load and run the program's code. Execution does not happen by interpreting the IL at runtime; rather, it occurs by compiling the IL into native code and executing that instead. By default, this compilation occurs lazily at runtime when the code is needed, hence the name Just in Time (JIT); alternatively, you can generate native images ahead of time using a technology called NGen, saving the initial cost of jitting the code and enabling static optimization of the layout of native images. We will see a bit more on these technologies shortly and in detail in Chapter 4.

Example IL: "Hello, World!"

To illustrate how metadata and IL represent a managed program written in C#, consider a very small snippet of C# code:

using System;

class Program
{
    static void Main()
    {
        Console.WriteLine("Hello, World!");
    }
}

This is a canonical "Hello, World!" example, which, when run, simply prints the text Hello, World! to the console standard output. The C# compiler compiles the above program into the binary form of the following textual IL:

.assembly HelloWorld {}
.assembly extern mscorlib {}

.class Program extends [mscorlib]System.Object
{
    .method static void Main() cil managed
    {
        .entrypoint
        .maxstack 1
        ldstr "Hello, World!"
        call void [mscorlib]System.Console::WriteLine(string)
        ret
    }
}

The textual IL format used here is an easier-to-read representation of the actual binary format in which compiler-emitted code is stored. Of course, the C# compiler emits and the CLR executes binary-formatted IL. We will use the text format in this chapter for illustrative purposes. Seldom is it interesting to stop to consider the binary layout of assemblies, although we will note interesting aspects as appropriate.

Deconstructing the Example

There's a bit of information in the sample IL shown above. There are two kinds of .assembly directives — one to define the target of compilation, the HelloWorld assembly, and the other to indicate that our program depends on the external library mscorlib. mscorlib defines all of the core data types that nearly all managed programs depend on; we will assume a basic familiarity with it throughout the book, but detail these data types in Chapter 5. There are also .class and .method directives in the IL, whose job it is to identify the CTS abstractions in our program; if we had created any interfaces, fields, properties, or other types of abstractions, you'd see those in their respective locations, too. These bits of metadata describe to the CLR the structure of and operations exposed by data types, and are used by compilers to determine what legal programs they can create at compile time. Inside the .method directive, you will find a few additional directives, for example an .entrypoint, indicating the CLR loader should begin execution with this method (when dealing with an EXE), and .maxstack, indicating the maximum number of items that the evaluation stack will ever contain during execution of this function. Each directive can have a number of arguments and pseudo-custom attributes (keywords) associated with it. But directives are not actual IL instructions representing code; they are bits of metadata representing the components which compose our program.
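To make this more concrete, the following fragment sketches how additional CTS abstractions show up as metadata (illustrative only: the Person class, its name field, and its GetName method are hypothetical names, not part of the program above):

```
.class Person extends [mscorlib]System.Object
{
    // A .field directive describes instance state to the CLR.
    .field private string name

    // A .method directive describes an operation; its block holds real IL.
    .method public instance string GetName() cil managed
    {
        .maxstack 1
        ldarg.0                    // load 'this'
        ldfld string Person::name  // fetch the field's value
        ret
    }
}
```

The directives describe structure; only the instructions inside the .method block are executable IL.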
Everything else inside the .method directive's block is the actual IL implementing our program's executable behavior. The method body consists of three statements, each of which involves an instruction (sometimes called an opcode), for example ldstr, call, and ret. Instructions consume and produce some state on the execution stack, and can optionally take a set of input arguments. Each instruction differs in the stack state and arguments with which it works, and the side effects (if any) that result from its execution. When run, the CLR will compile this code into its jitted native code counterpart, and execute that. Any references to other DLLs will result in loading them and jitting those as needed.

Assembling and Disassembling IL

The ilasm.exe and ildasm.exe tools that ship with the Framework (ilasm.exe in the redist, ildasm.exe in the SDK) enable you to compile textual IL into its binary .NET assembly format and disassemble .NET assemblies into textual IL, respectively. They are indispensable tools for understanding the inner workings of the CLR and are great companions to this chapter.

Stack-Based Abstract Machine

IL is stack-based. Programs in the language work by pushing and popping operands onto and off the stack; each instruction is defined by the way it transforms the contents of the stack. Many instructions involve side effects and may take additional arguments, too, but the conventional way to communicate into and out of an instruction is via the stack. This is in contrast to many physical machines, whose execution relies on a combination of register and stack manipulation. The stack is sometimes further qualified by calling it the logical execution stack in order to differentiate it from the physical stack, a segment of memory managed by the OS and used for method calling.

Example IL: The Add Instruction

To illustrate the stack-based nature of IL, consider the following example. It uses the IL add instruction.
add pops two numbers off the stack and pushes a single number back on top, representing the result of adding the two popped numbers. We often describe instructions by the transformations they perform on the current stack state, called a stack transition diagram. This highlights the stack state the instruction expects and what state it leaves behind after execution. add's stack transition is: ..., op1, op2 => ..., sum. To the left of the arrow is the state of the evaluation stack prior to executing the instruction — in this case, two numbers op1 and op2 — and to the right is the state after execution — in this case, the number sum. The top of the stack is the rightmost value on either side of the arrow, and the ...s indicate that zero or more stack states can already exist, that such state is uninteresting to the instruction, and that the instruction leaves it unchanged. The following IL might be emitted by a high-level compiler for the statement 3 + 5, for example:

ldc.i4 3
ldc.i4 5
add

This sequence of IL starts off by loading two integers, 3 and 5, onto the stack (using the ldc instruction, more on that later). The add instruction then is invoked; internally, it pops 3 and 5 off the stack, adds them, and then leaves behind the result, 8. This transformation can be written as ..., 3, 5 => ..., 8, and is graphically depicted in Figure. Usually a real program would do some loading of fields or calling of methods to obtain the numeric values, followed by a store to some memory location, such as another local or field variable. While the execution stack is a nice abstraction, the JIT compiled native code is more efficient. If it knows the address of a value — for example, if it's relative to the current stack pointer — there will be as little copying of the arguments as possible. For instance, the add IL instruction will likely move the values into the respective registers and perform an add instruction in the underlying physical machine's instruction set.
It might omit a store to another location if it uses the value right away and knows it won't need it later on. The final result differs based on the instruction and undocumented implementation details, of course, but the high-level point is that the stack is a logical representation, not the physical representation. The JIT manages the physical representation for you.

Some instructions take arguments in addition to reading values on the stack. For example, many of the constant-loading instructions require that you pass an argument representing the literal constant to load onto the stack. Similarly, many instructions deal with integer constants, which represent metadata tokens. The call instruction is a perfect example. Notice in the "Hello, World!" example above, we passed the method reference (methodref) to void [mscorlib]System.Console::WriteLine(string), which actually compiles into an integer token in the binary format. The instruction uses this information, but it does not get pushed and popped off of the stack — it is passed directly as an argument.

Register-Based Machines

Stack-based machines are much easier for programmers to reason about than register-based machines. They enable the programmer and compiler writer to think at a higher level and in terms of a single storage abstraction. Each instruction can be defined simply by the arguments and stack operands it consumes, and the output that is left on the stack afterward. Less context is required to rationalize the state of the world. Register-based machines, on the other hand, are often more complex. This is primarily due to the plethora of implementation details that must be considered when emitting code, much like explicit memory management in many C-based languages. For illustration purposes, consider a few such complications:

- There is only a finite number of registers on a machine, which means that code must assign and manage them intelligently. Management of registers is a topic whose study can span an entire advanced undergraduate Computer Science course and is not easy to do correctly. For example, if we run out of registers, we might have to use the machine's stack (if one exists and if we've reserved enough space) or write back to main memory. Conversely, the logical stack is infinite and is managed by the CLR (even in the case when we run out of stack space).

- Instructions for register-based machines often use different registers for input and output. One instruction might read from R0 and R1 and store its result back in R0, whereas another might read from R0...R3, yet store its result in R4, for example, leaving the input registers unchanged. The intricacies must be understood well by compiler authors and are subject to differ between physical machines. All IL instructions, on the other hand, always work with the top n elements of the stack and modify it in some well-defined way.

- Different processors offer more or fewer registers than others, and have subtle semantic and structural differences in their instruction sets. Using as many registers as possible at any given moment is paramount to achieving efficiency. Unfortunately, if managed compilers generated code that tried to intelligently manage registers, it would complicate the CLR's capability to optimize for the target machine. And it's highly unlikely compiler authors would do better than the JIT Compiler does today.

With that said, a simple fact of life is that most target machines do use registers. The JIT Compiler takes care of optimizing and managing the use of these registers, using a combination of registers and the machine's stack to store and share items on the logical IL stack. Abstracting away this problem through the use of a stack enables the CLR to more efficiently manage and optimize storage.

Binary Instruction Size

Most instructions take up 1 byte worth of space in binary IL.
Some instructions take up 2 bytes due to exhaustion of all 128 possible single-byte encodings in the set, however, indicated by a special leading marker byte 0xFE. Many instructions additionally take arguments serialized in the instruction stream, in addition to their inputs on the stack, consuming even more space. This topic is mostly uninteresting to managed code developers but can be useful for compiler authors. As an example, br is encoded as the single-byte number 0x38 followed by a 4-byte jump offset. Thus, a single br instruction will take up 5 (1 + 4) bytes total in the IL body. To combat code bloat, many instructions offer short form variants to save on space; this is particularly true of instructions whose ordinary range of input is smaller than the maximum they can accept. For example, br.s is a variant of br that takes a single-byte target instead of a 4-byte one, meaning that it can squeeze into 2 bytes of total space. Nearly all branches are to sections of code close to the jump itself, meaning that br.s can be used in most scenarios. All good compilers optimize this usage when possible.

Consider the add sequence we saw above. As shown above (using ldc.i4 with arguments), it would consume 11 bytes to encode the IL. That's because the generic ldc.i4 instruction consumes 1 byte of space, plus 4 additional bytes for the argument (meaning 5 bytes each). However, an intelligent compiler can optimize this sequence using the shorthand ldc.i4.3 and ldc.i4.5 instructions. Each consumes only a single byte, takes no argument, and compresses the total IL stream to only 3 bytes. The resulting program looks as follows:

ldc.i4.3
ldc.i4.5
add

The IL byte encoding for both is shown in Figure.

A Word on Type Tracking

IL is a typed language. Each instruction consumes and produces state of well-defined types, often dependent on the values of its arguments (e.g., those that accept a method or type token).
In isolation, an instruction might not have a type, but when combined with legal sequences of IL, it does. The verifier is responsible for tracking such things to prove type safety (or the absence thereof). It will detect and report any violations of the type system rules. peverify.exe is a useful tool in the .NET Framework SDK that permits you to inspect the verifiability of an assembly's code. Running it against a managed assembly will report violations of any of the CLR's type safety rules, along with the guilty line numbers. This utility is a compiler writer's best friend. If you're a user of somebody else's compiler, you can use it to determine whether a program crash is due to a compiler bug (or to you intentionally stepping outside of the bounds of the CLR's type system, for example by using C++/CLI). Verifiability and the general notion of type safety were both discussed in more detail in Chapter 2.

Exploring the Instruction Set

There are over 200 IL instructions available. Most managed code developers can get by without deep knowledge of IL, still becoming quite productive in a higher-level language such as C# or VB. But a firm understanding of some of the most important instructions can prove instrumental in understanding how the CLR operates. With this rationale in mind, this section will walk through some important categories of instructions. Feel free to skim through it. My recommendation, of course, is to try to understand the details of most, as it will help you to understand how the platform is executing your code. Because of the large number of instructions, some will intentionally be omitted here; please refer to Appendix A for a complete guide to the entire instruction set.

To perform any interesting operation, we first have to get some data onto the stack.
There are a few pieces of data you might want to load, including constant values, data local to a method activation frame such as locals and arguments, fields stored in your program's objects, and various bits of metadata held by the runtime. The reverse is also important. That is, once you've manipulated the data on the stack, you will often want to store it somewhere so that it can be accessed later on. This section explores the mechanisms the CLR provides to do both.

It's also worth considering for a moment what it means to load and store something onto or from the stack. A core difference between reference and value types is that values are copied around as opaque bytes instead of as object references to the GC heap. Loading a value onto the stack results in a bitwise copy of the value's contents; this means that if you are loading from a local slot, a field reference, or some other location, any updates to the structure are not visible unless the original location is updated with them. Objects, however, are always accessed through a reference. So, for example, if an object is modified through a reference loaded onto the stack, the reference itself needn't be saved back to its original location; all accesses occurred via a reference to the shared object on the heap. As an example, consider this code:

ldarg.0 // load the 'this' pointer
ldfld int32 MyType::myField
ldstr "Some string object"

Loading an object's field of type int32 and a string literal will result in two very different things on the stack: a sequence of 32 bits representing the value of the integer, and a sequence of 32 or 64 bits (depending on whether you're on a 32-bit or 64-bit machine) representing the value of the object's reference, or 0 to represent null.
If the value type itself had updatable fields, it would have to be stored back into myField after these updates; conversely, if an object was used instead, it would not need to be restored, because the updates occurred through the reference to the shared object. Seldom do you need to think about IL at this low a level; your compiler does it all for you. The C# compiler, for example, ensures that the address of the value is used for modifications when possible, for example using the ld*a instructions. But understanding this point will help to solidify your understanding of reference and value types.

Constants

We've already seen a couple of instances of constant usage. The "Hello, World!" example above loaded a literal string using the ldstr instruction, and the addition example loaded a 4-byte integer using two variants of the ldc instruction. For obvious reasons, the discussion of constants involves only loads and no corresponding store instruction(s). The various types and their interesting members mentioned only in passing here, such as String, Int32, Double, and so forth, are described in Chapter 5.

Strings are simple data structures representing self-describing sequences of characters. ldstr loads a reference to one onto the stack. It takes an argument representing the string to load, in the form of a metadata token into the assembly's string table, which must be generated by the compiler and is composed of all of the unique strings in a binary. Executing ldstr doesn't modify any prior stack state; it simply pushes a reference to the heap-allocated String object onto the existing stack, allocating memory as necessary. Because of string interning, two ldstrs in the same program that use strings with identical characters will share the same object. For example, this code:

ldstr "Some random string"
ldstr "Some random string"
ceq

will evaluate to true, indicating that both strings are equal.
Even if the ldstrs take place in entirely different components of the program, the references will be identical. Similarly, the ldc instruction loads a numeric constant onto the stack and offers a number of variants based on the data type. The table below shows each. Convenient shorthand instructions exist for common constants, helping to make the size of programs smaller. We already saw an example above of how this can be used to reduce IL footprint. Lastly, the null constant can be used in place of object references, an instance of which can be loaded onto the stack using the ldnull instruction.

Arguments and Locals

When a function is called, an activation frame is logically constructed that contains all arguments supplied for the function's formal parameters, in addition to slots for all local data allocations. This frame is allocated by the JIT compiled code on the OS stack. Rather than referencing offsets into the physical stack in order to access arguments and locals — which is precisely what the generated native code must do — we simply refer to individual items by their 0-based sequence number. Much like the ldc instruction discussed above, both the ldarg and ldloc instructions have shorthand variants. Each has an ordinary ldarg or ldloc, which accepts an unsigned int16 argument representing the index of the item to load. Similarly, they have ldarg.s and ldloc.s versions, each of which takes a single-byte unsigned integer (instead of 2 bytes), saving some space in cases where the index fits in a single byte, which is highly likely. Lastly, each has a shorter ldarg.num and ldloc.num version, where num can be from 0 to 3, avoiding the need to pass an argument altogether. Note that for instance methods, the ldarg.0 instruction will load the this pointer — a reference to the target of the invocation.

Of course, both ldarg and ldloc have counterparts that store some state from the stack into a storage location: starg and stloc.
They have similar shortcuts to the load instructions, that is, starg.num and stloc.num, where num, once again, is an integer from 0 to 3. These instructions pop the top of the stack and store it in the target location. For this operation to be verifiable, clearly the top of the stack must be of a type compatible with the storage destination.

The CLR supports so-called byrefs. A byref enables you to pass the address of data in an activation frame — either an argument or a local — to other functions. This ability is supported only so long as the original frame in which the data resides is still active when it is accessed. To share data that survives a frame, you must use the GC heap. To support the byref feature, there exist ldarga, ldarga.s, ldloca, and ldloca.s instructions. Each takes an index argument (the .s versions take only a single byte) and will push a managed pointer to the desired item in the activation frame. Refer to details later in this section for storing to an address through a managed pointer, and to the coverage in Chapter 2 of the method passing mechanisms supported in the CTS.

Fields

Accessing the fields of objects or values is a common operation. For this, there are two instructions: ldfld and ldsfld. The former is used for instance fields and thus expects a target on the top of the stack in the form of an object reference, value, or pointer; the latter is for static fields, and therefore does not need a target. (Technically, you can pass a null as the object reference to ldfld and use it to access a static field, but ldsfld was developed to avoid the additional ldnull instruction.) Each takes a field token defining which field we are accessing. The result of executing the instruction is that the value of the field is left on the top of the stack. In other words, ldfld's stack transition is ..., target => ..., value, and ldsfld's is ... => ..., value.
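As an illustrative sketch (the Counter class and its count field are hypothetical names, not from the text above), loading an instance field from inside one of Counter's own instance methods might look like:

```
ldarg.0                     // load the 'this' pointer (the target ldfld pops)
ldfld int32 Counter::count  // ..., this => ..., value of count
```

The ldarg.0 supplies the target that ldfld consumes, leaving the field's current value behind on the stack for whatever instruction comes next.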
You can also use stfld and stsfld to store values from the top of the stack into instance and static fields. stfld expects two things on the stack — the target object and the value — and, just like the load version, accepts a single argument, a field token. Its stack transition is ..., target, value => .... stsfld expects only a single item on the stack — the new value to store — and similarly takes a token as input. Its stack transition is ..., value => .... These instructions take into account the accessibility of the target field, so attempting to access a private field from another class, for example, will result in a FieldAccessException. Similarly, if you attempt to access a nonexistent field, a MissingFieldException will be thrown.

Lastly, much like the ability to load an address for local variables described above, you can load the address of a field. ldflda works much like ldfld does, except that instead of loading the value of the field, it loads the address of the field in the form of a managed or native pointer (depending on the type the field refers to). A static-field instruction is also provided, named ldsflda.

Indirect Loads and Stores

We've seen a few ways to load pointers to data rather than the data itself. For example, ldarga can be used to load an argument's address for byref scenarios, and ldflda can be used to refer to an object's field. What if you wanted to use that address to access or manipulate a data structure's contents? The ldind and stind instructions do precisely that, standing for "load indirect" and "store indirect," respectively. They expect a managed or native pointer on the stack that refers to the target, and dereference it in order to perform the load or store. ldind and stind both have subtle variations depending on the type of data being accessed. Verifiability ensures that you only use the correct variants when accessing specific types of components in the CLR.
These variations are specified using a .<type> suffix after the instruction, that is, ldind.<type> and stind.<type>, where <type> indicates the type of the data and is one of the following values: i1 (int8), i2 (int16), i4 (int32), i8 (int64), r4 (float32), r8 (float64), i (native int), ref (object reference). ldind also permits the values u1 (unsigned int8), u2 (unsigned int16), u4 (unsigned int32), u8 (unsigned int64); stind performs the necessary coercions on the value on the stack to store into an unsigned target.

Basic Operations

Some basic operations are provided that all modern instruction sets must provide. These include arithmetic, bitwise, and comparison operations. Because of their simplicity, general purposefulness, and elementary nature, we'll only mention them in passing:

- Arithmetic: Addition (add), subtraction (sub), multiplication (mul), division (div). There are also various overflow and unsigned variants of these instructions. Each pops the two top items off the stack, performs the arithmetic operation on them, and then pushes the result back onto the stack. The remainder (rem) instruction computes the remainder resulting from the division of the items on the top of the stack, also called modulus (e.g., % in C#). These instructions work with both integral and floating point values. The neg instruction pops a single number off the stack, and pushes back the inverse of it.

- Bitwise operations: Binary and, or, xor and unary not. There are also shift operations for shifting left (shl) and right (with sign propagation [shr] and without [shr.un]).

- Comparisons: Compare equal (ceq), compare greater than (cgt), compare less than (clt). Each of these pops two items off the top of the stack, and leaves behind either 1 or 0 to indicate that the condition was true or false, respectively. You'll see shortly that the various branch instructions offer convenient ways to perform things like greater-than-or-equal-to checks.
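Putting a few of these together, here is a small illustrative sequence (not taken from the text) that computes 7 % 3 and then tests whether the remainder equals 1:

```
ldc.i4 7
ldc.i4 3
rem        // ..., 7, 3 => ..., 1
ldc.i4 1
ceq        // ..., 1, 1 => ..., 1 (true)
```

After rem executes, only the remainder remains on the stack; ceq then pops it along with the constant 1 and leaves behind 1, indicating the comparison held.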
Please refer to Appendix A for more complete coverage of these instructions.

Control Flow and Labels

All control flow in IL utilizes branch instructions, of which several variants exist. Each branch instruction takes a destination argument indicated by a signed offset from the instruction following the branch. Each branch-style instruction has an ordinary version that takes a 4-byte signed integer for its destination, and also a short version (suffixed by .s) that takes only a single-byte signed integer. In most cases, the branch uses a predicate based on the top of the stack to determine whether the branch occurs or not. If it doesn't occur, control falls through to the next instruction. The simplest of all of these is an unconditional branch, represented by the br instruction. For example, an infinite while loop might look like either of these programs (each line is a separate program):

br.s -2 // offset version

LOOP: br.s LOOP // label version

Because the branch target is calculated from the point immediately after the branch instruction, we jump backward 2 bytes (br.s -2 takes up 2 bytes). If it were a br instead of a br.s, it would be 5 bytes. And then the CLR executes the br.s instruction again, ad infinitum. Labels are often used in textual IL to make calculation of offsets like this easier. In reality, they are just an easier-to-work-with notation which compilers patch up to an offset in the resulting binary IL. For example, ilasm.exe will transform any references to labels into binary offsets in the resulting code. The second line is an example of a label version of this loop, using LOOP as the label we jump to; its binary encoding is identical to the first line's.

There are also brtrue (or its alias brinst) and brfalse (or one of its aliases, brnull, brzero) instructions, which take a single value off the top of the stack and jump if it is true or false, respectively.
These can be used to implement a C# if statement, for example:

    Foo f = /*...*/;
    if (f.SomeMethod()) {
        // Body of code if true.
    } else {
        // Body of code if false.
    }
    // Code after if stmt.

Using labels, this could be compiled to the following IL:

    ldloc.0 // Assume 'f' is stored as a local in slot #0.
    call instance bool Foo::SomeMethod()
    brfalse FALSE
    // Body of code if true.
    br.s AFTER
    FALSE:
    // Body of code if false.
    AFTER:
    // Code after if stmt.

The remaining branch instructions are all very similar. They each take two items off the top of the stack, compare them in some fashion, and then branch to the target. Their stack transition is ..., value1, value2 -> ..., and they are branch on equal (beq), branch on not equal or unordered (bne.un), branch on greater than (bgt and bgt.un), branch on greater than or equal to (bge and bge.un), branch on less than (blt and blt.un), and branch on less than or equal to (ble and ble.un). Each of these has a short version, that is, br.s, brfalse.s, beq.s, and so forth, which can be used when the target of the jump is close enough to be expressed as a 1-byte signed offset and, as expected, consumes less space.

Allocating and Initializing

In order for instances of reference and value types to be used from your program, they must first be allocated and initialized. The process differs depending on which type you are dealing with. Reference types are always initialized using the newobj instruction, which ends up invoking one of the class's constructors to initialize state. Value types, on the other hand, can use the initobj instruction instead, which zeroes out their state and avoids any constructor invocation overhead.

newobj takes an argument representing the constructor method token to invoke. It also expects n items on the stack, where n is the number of parameters the target constructor expects. In other words, the stack transition diagram is ..., arg1, ..., argN -> ..., obj.
It will allocate a new instance of the target type, zero out its state, invoke the constructor (passing the new instance as the this pointer and the constructor arguments from the stack), and then push the newly initialized instance onto the stack. In the case of value types, the bits are copied onto the stack, while reference types result in a managed reference to the object on the GC heap. This example snippet of code constructs a new System.Exception object:

    ldstr "A catastrophic failure has occurred!"
    newobj instance void [mscorlib]System.Exception::.ctor(string)
    // Right here we have a new Exception object on the stack.

initobj is useful for constructing new value types without invoking a constructor. It can also be used to set a location containing a reference type pointer to null, although the former is the much more common use. initobj expects a pointer on the top of the stack that refers to the destination to be initialized, and takes a type metadata token as an argument representing the target's type.

Boxing and Unboxing

Values are sequences of bits composing state. They lack self-description information — that is, a method table pointer — which objects on the heap make available. This has advantages, namely that values have less overhead. But there are clear disadvantages; for example, often we'd like to pass values around to methods that expect System.Objects, or perhaps to make an invocation on a method inherited from Object or ValueType. To do that on the CLR, you need something whose structure has a method-table pointer as the first DWORD, as explained in Chapter 2. Boxing a value with the box instruction allocates a new data structure on the GC heap to hold the value's data, copies the bits from the stack into it, and leaves a reference to it behind. This data structure also has a method-table pointer, meaning that it can then be used as described elsewhere.
box expects a value on the top of the stack and takes a type token argument representing the type of the value. Its stack transition is ..., value -> ..., obj.

Unboxing with the unbox instruction does the reverse; that is, it copies boxed data into an unboxed storage location. There are two variants of the operation: unbox and unbox.any, the latter of which was added in 2.0 and is used by C# exclusively over the other. unbox leaves behind a pointer to the unboxed data structure, usually computed simply as an interior pointer to the boxed value on the heap, which can then be accessed indirectly, for example using ldind. unbox.any, on the other hand, copies the actual value found inside the boxed instance to the stack. It can also be used against reference types (necessary when dealing with generics), in which case it equates to just loading a reference to the object.

There is an additional facet to the above description. A new feature in 2.0 called a nullable value type enables the wrapping of any value in a Nullable<T> data structure. The result gives ordinary values null semantics. Compilers — such as C# — treat instances of Nullable<T> in a way that permits programmers to realize null semantics, for example, when comparing an instance for nullability. When something is boxed, however, its runtime type becomes opaque. Thus, boxing a Nullable<T> that represents null (i.e., HasValue == false) results in a null reference; otherwise, a boxed T is left behind. The converse is also true: a null reference or boxed T may be unboxed into a Nullable<T>.

Calling and Returning from Methods

Calling a method is achieved using one of a few instructions: call, callvirt, and calli. Each one has its own distinct purpose and semantics. call and callvirt are used to make direct method invocations against a target method, using either static or virtual dispatch, respectively. calli is used to make a method call through a function pointer, hence its name, "call indirect."
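To make the box and unbox descriptions above concrete, here is a small illustrative sequence (my own sketch, not from the text): box a 32-bit integer, then retrieve the value again with unbox plus an indirect load. unbox.any would collapse the last two instructions into one:

```
ldc.i4 42
box [mscorlib]System.Int32     // ..., obj (boxed copy on the GC heap)
unbox [mscorlib]System.Int32   // ..., ptr (interior pointer to the value)
ldind.i4                       // ..., 42 (the unboxed value)

// equivalently, in 2.0 and later:
//   unbox.any [mscorlib]System.Int32   // ..., 42
```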
Both call and callvirt are supplied a method metadata token as an argument and expect to see the full set of method call arguments on the top of the stack, in left-to-right order. In other words, they have a transition diagram much like newobj's, i.e., ..., arg1, ..., argN -> ..., retval. The number of arguments popped off depends on the method metadata token supplied as an argument to the instruction. For instance (hasthis) methods, the first item pushed onto the stack must be the object that is the target of the invocation, which is then accessed with the ldarg.0 instruction from within the target method's body. Static methods instead use the 0th argument as their first real argument. The retval result pushed onto the stack is absent in cases where a method with a void return type has been called. Of course, to be verifiable, all arguments passed to the method must be polymorphically compatible with the expected parameter types.

The previous description was instruction agnostic; that is, it didn't differentiate between ordinary and virtual calls. The only difference is that callvirt performs a virtual method dispatch, which uses the runtime type of the this pointer to select the most-derived override. We described this selection process in Chapter 2.

Indirect Calls

The calli instruction stands for "call indirect" and can be used to call a method through a function pointer. This pointer might have been obtained using a ldftn or ldvirtftn instruction (both of which accept a method metadata token and return a pointer to its code), through native code interop, or perhaps from a constructed delegate, for example. The pointer to the method to invoke must be on the very top of the stack. Like the other call instructions, calli expects the this pointer for the method invocation to be first on the stack for instance methods, followed by the arguments laid out in left-to-right order.
To ensure type safety, a call-site description token must be passed as an argument, which the runtime uses to ensure that the items on the stack match, although it can't ensure at runtime that the target is actually expecting these items. If you mismatch the pointer and the description, a failure will occur at runtime (hopefully, unless you end up accidentally corrupting some memory instead).

Returning from a Method

Inside of a method's implementation, a return instruction, ret, must always be present to exit back to the caller. It takes a single argument on the top of the stack that is returned to the caller. A ret is required even if the return type of the method is void, although in that case no return value is pushed onto the stack prior to executing it. In all cases — after popping the return value in the case of non-void return types — the stack must be empty. Producing IL that contains stack state after a return indicates a compiler bug; as a user of one of those languages, you rarely need to worry about such things, although peverify.exe can be useful for diagnosing the error.

Tail Calls

A tail call is a commonly used term in functional languages (e.g., LISP, ML, Haskell), where recursion is usually preferred over iteration (as in Algol-derived languages, e.g., C, C++, C#). Recursion is simply a way to make repeated invocations of the same method, using modified values each time and a base case to terminate the recursive call chain. This piece of C# code demonstrates the difference:

    /* Iterative */
    void f(int n) {
        for (int i = n; i > 0; i--)
            Console.WriteLine(i);
    }

    /* Recursive */
    void f(int n) {
        if (n == 0) return;
        Console.WriteLine(n);
        f(n - 1);
    }

Each prints a descending sequence of numbers, although in different manners; the iterative version uses a for loop, while the recursive version calls itself and terminates when n == 0.
In languages where working with functions is more natural than introducing somewhat awkward C-style loop structures, this technique is very commonplace. Another example of a recursive algorithm might make this clearer. A factorial computation is often taught using the following algorithm, in C# as well as in functional languages:

    int fact(int n) { return fact(n, 1); }

    int fact(int n, int v) {
        if (n > 0)
            return fact(n - 1, n * v);
        else
            return v;
    }

One problem with recursion, as you might have noticed, is that the call stack continues to grow with every new function call — keeping around any temporary data on each stack frame — whereas iteration runs in constant stack space. This is simply a byproduct of the way calls to functions are made, not necessarily a result of inherent properties of the algorithm. But it means that the fact function, as written above, will run out of stack space when supplied with large values of n.

Tail calls enable recursive code to run in constant space even though the call stack is logically growing. A tail call can be made when the stack is empty immediately after the function call, or when the only value left is used as the caller's own return value (for non-void return types). The fact function above satisfies these criteria. A tail call is indicated by the tail. prefix in IL; if a tail. is found just prior to a call, callvirt, or calli, the CLR can reuse the current stack frame, overwriting the arguments just before making the call. This can be much more efficient in examples like those above, but it is usually a compiler-specific feature — seldom will you worry about it in user code. Interestingly, C# does not implement tail calls; iteration is more natural for its users, and therefore the compiler writers haven't made supporting them a priority. Most functional language compilers, such as F#, do, however.
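For illustration (this IL is my own sketch, assuming a static method named Program::fact matching the two-argument C# helper above), the recursive step could be emitted with the tail. prefix like so. Note that a ret must immediately follow the prefixed call:

```
// return fact(n - 1, n * v);  -- 'n' is argument 0, 'v' is argument 1
ldarg.0
ldc.i4.1
sub                // n - 1
ldarg.0
ldarg.1
mul                // n * v
tail.
call int32 Program::fact(int32, int32)
ret                // required immediately after a tail. call
```

With the prefix in place, the caller's frame is reused, so even very large values of n run in constant stack space.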
Constrained Calls

A topic we skirted above is how both ordinary and virtual method calls occur when the target is an unboxed value. We now know that value type instances are simply a sequence of bits that we interpret a certain way. Unlike objects on the heap, there is no easily accessible, self-describing method table. This makes resolving virtual methods based on type identity impossible. Furthermore, passing a value type as the this pointer to a method defined on System.Object won't result in the correct behavior, because such a method is expecting a reference to an object that has a type identity structure.

The consequence is that value types must be boxed in order to make calls to methods defined on Object, ValueType, or Enum, and to make any virtual calls. If a compiler knows the type of the target, this is easy to do; it just has to know the special rules, and it will insert the box instructions at the necessary locations in the IL. But in the case of generics, the compiler might not know the type when it generates the IL. For example, consider this C# example:

    static string GetToString<T>(T input) {
        return input.ToString();
    }

How does the compiler know whether to box input prior to emitting a callvirt to System.Object's virtual method ToString? It doesn't until T has been supplied, which isn't known when C# compiles the above code. Thus was born the constrained. prefix, which takes care of the relevant details. A constrained call essentially does the following:

- If the target of the constrained call is a reference type, simply call the method using the this pointer passed in to the call.
- If the target of the constrained call is a value whose value type has defined its own version of (has overridden) the method, call it on the value without boxing. (If the method calls its base version, that call will have to box.)
- Else, we have a value type with an implementation on either Object, ValueType, or Enum.
The CLR boxes the value and makes the call with the boxed instance in hand.

So in the example above, the compiler can simply emit the constrained. prefix just prior to the callvirt to ToString. This may or may not result in boxing of the input, based on the type of T.

Nonvirtual Calls to Virtual Methods

As we saw above, there are two primary instructions for making direct method calls: call and callvirt. It actually is possible to make a call to a virtual method without using the callvirt instruction. This might be surprising, but consider a few examples. First, in an overridden method in a subclass, developers often want to call the base class's implementation; this is done using the base keyword in C# and is compiled as a call to the base class's method. But it's virtual! Clearly, emitting a callvirt to the base class would be incorrect, leading to an infinite loop. You will also see code like this in C++ rather frequently:

    using namespace System;

    ref class A {
    public:
        virtual void f() { Console::WriteLine("A::f"); }
    };

    ref class B : public A {
    public:
        virtual void f() override { Console::WriteLine("B::f"); }
    };

    int main() {
        B b;
        b.f();
        b.A::f();
    }

That last line, b.A::f(), looks a little strange, but it uses the scoping operator <type>:: to bypass normal dynamic virtual dispatch and instead make a direct call to A's implementation of f. If compiled on the CLR, this too is implemented as a call to a virtual method.

Unfortunately, some authors of class libraries implicitly rely on security through inheritance. That is, they assume that because they've overridden a base class's method, the only way to call that method on an instance of their class is with a virtual call. This enables them to check invariants, perform security checks, and carry out any other validation before allowing the call to occur. To preserve this (to some extent), a change was made in 2.0 of the CLR to make nonvirtual calls to some virtual methods fail verification.
This means that untrusted code cannot use the idiom shown above, but fully trusted code can. Notice that I said "some virtual methods" in the preceding paragraph. The CLR still permits the ordinary "call to base" pattern, as C# and other languages use it quite extensively. What 2.0 now prohibits is nonvirtual calls to virtual methods on types entirely outside of the caller's type hierarchy. The verifier implements this by ensuring that the caller and callee are equivalent. In the C++ example above, the code would now fail verification because main is not defined on class A or B.

Type Identity

There are two related instructions that perform a runtime type identity check: castclass and isinst. They are used to inspect the runtime type identity of an object on the top of the stack using its method table. Values must be boxed prior to passing them through these instructions.

castclass doesn't modify the item on the top of the stack at all. It simply takes a metadata type token and checks that the item is compatible with that type. If the check succeeds, the type tracking for the IL stream is patched up so that the item can be treated as an instance of the checked type; otherwise, an InvalidCastException is generated by the runtime. Compatible in this case means the instance is of an identical or derived type; similarly, if the runtime type is B[] and the type token is A[], and if B can be cast to A, the check succeeds; lastly, if the runtime type is T and the type token is Nullable<T>, the check also succeeds. If the type token is a reference type and the instance is null, the check will succeed, because null is a valid instance of any reference type.

isinst is very similar in semantics to castclass. The only difference is its response to an incompatible item: rather than throwing an InvalidCastException, it leaves null behind on the stack.
Notice that the use of null to indicate failure here means that checking the type identity of a null reference will technically succeed (e.g., null is a valid instance of System.String), but code inspecting the result can't differentiate between success and failure.

C# Is, As, and Casts (Language Feature)

C# uses the isinst instruction to implement the is and as keywords:

    object o = /*...*/;

    string s1 = o as string;
    if (s1 != null)
        // Can work with 's1' as a valid string here.

    bool b = o is string;
    if (b)
        // Can cast 'o' to string without worry, etc.

    string s2 = (string)o;
    // Can work with 's2' here; InvalidCastException results if it's not a string.

as just emits an isinst and pushes the result as its return value; is does nearly the same thing but compares the result to null and leaves behind a bool value on the stack resulting from the equality check. castclass is used when performing a cast (assuming that no explicit conversion operator has been supplied) and results in an InvalidCastException if the cast fails. The following IL corresponds to the above C#:

    // C#: object o = /*...*/;
    // Assume 'o' is in local slot #0.

    // C#: string s1 = o as string;
    ldloc.0
    isinst [mscorlib]System.String
    stloc.1 // 's1' is stored in local slot #1 as a 'System.String.'

    // C#: bool b = o is string;
    ldloc.0
    isinst [mscorlib]System.String
    ldnull
    cgt.un
    stloc.2 // 'b' is stored in local slot #2 as a 'System.Boolean.'
    // (Control flow logic omitted.)

    // C#: string s2 = (string)o;
    ldloc.0
    castclass [mscorlib]System.String
    stloc.3 // 's2' is stored in local slot #3 as a 'System.String.'

Based on this example, we can briefly summarize how it will execute: if o's runtime type is System.String, then s1 will refer to that instance, b will be true, and the cast will succeed; otherwise, s1 will be null, b will be false, and an InvalidCastException will be generated by the cast.
Arrays

Arrays are unlike other collections of data in the runtime in that they have special IL instructions to access their elements and properties. Your compiler does not emit calls to methods and properties on the System.Array class — as it would for, say, System.Collections.Generic.List<T> — but rather specialized IL to deal with arrays in a more efficient manner.

First, new arrays are allocated using the newarr instruction; the System.Array class also permits dynamic allocation of arrays without using IL, but we defer discussion of those features to Chapter 6. newarr pops an integer off the top of the stack, representing the number of elements the array will hold. It also takes a type metadata token as an argument to indicate the type of each element. The act of creating a new array also zeroes out the array's memory, meaning that for value types each element will be the default value for that type (i.e., 0, false), and for reference types each element will be null. The following C# and corresponding IL demonstrate this (note that 0x1f4 is 500):

    // C#: int[] a = new int[500];
    ldc.i4 0x1f4
    newarr [mscorlib]System.Int32

Of course, once you have an instance of an array, you'll want to access its length and load and store elements from and to the array. There are dedicated instructions for each of these operations. ldlen pops a reference to an array off the stack and leaves behind the number of elements it contains. ldelem takes an array reference and an integer index on the stack, plus a type token representing the type of element expected from the array; it extracts that element and places it onto the stack. A ldelema instruction is also available that loads a managed pointer to a specific element in an array. Lastly, stelem takes an array reference, an integer index, and an object or value to store into the array; it, too, takes a type token.
There are a variety of variants of both ldelem and stelem (i.e., ldelem.<type> and stelem.<type>) that don't require a type token, but they are omitted here for brevity.
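As a closing sketch (my own example, assuming an int32[] sits in local slot #0), the typed variants can store an element, read it back, and query the length:

```
ldloc.0        // ..., array
ldc.i4.2       // ..., array, 2
ldc.i4.7       // ..., array, 2, 7
stelem.i4      // store: array[2] = 7

ldloc.0
ldc.i4.2
ldelem.i4      // ..., 7 (the element just stored)

ldloc.0
ldlen          // ..., 7, length of the array
```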
There is so much great code written in Python 2, yet time has passed, and these days you really need to be using Python 3 in almost all situations. Many people don't even have an installation of Python 2 they could use if they wanted to, so it becomes important to be able to keep all of that Python goodness and make it work with our current Python version. (At the time of writing, Python 3.8 is the latest main version available.)

You may well have come across some great Python 2 code that wouldn't run correctly in Python 3. The last time I had this problem was recently, looking at some really interesting code for an MIT algorithms course which was written in Python 2 and produces errors when run in Python 3.

UPDATE: since writing this article I have adopted an easier way to convert files from Python 2 to Python 3. Open a command line in the folder containing the file you wish to convert, and type python -m lib2to3 my_file.py -w (replacing my_file.py with your file name), and hey presto! You will have a Python 3 version of your file as well as a backup of the original should you need it.

Some people get around the problem of outdated Python code by using a virtual environment, which is certainly an option, but that approach has some drawbacks:

- You might not know how to set up a virtual environment
- Virtual environments can become memory-expensive and could well be overkill for a small project
- There may be parts of the code you wish to use in your own Python 3 projects which still need converting

There are a few things you can do to convert small files yourself, just by knowing some of the key differences between Python 2 and Python 3. Some of the "gotchas" commonly encountered are listed below.

Key Differences Between Python 2 and Python 3

- print "Hello World!" becomes print("Hello World!")
- raw_input() is now just input(). All those pesky instances of raw_input() will need editing.
- for i in xrange(10) becomes for i in range(10). xrange was a way to do loops more efficiently by using an iterable; the range object in Python 3 is an iterable by default.

It can soon become tedious to make these changes manually, so what can you do? One solution is to use an online converter, which can be handy for converting small files. One such converter I have used is

If a tool like that doesn't meet your needs, for example if you want to convert multiple files at once, fortunately Python 3 comes with a tool for performing Python 2 to Python 3 conversions for you automatically! It's located along with your Python installation. For example, on my system, it's at

    C:\Program Files (x86)\Python38-32\Tools\scripts\2to3.py

If for any reason you don't have the utility, you can run pip install 2to3 from a command line to get it.

One way you can easily find out where your Python installation is (on Windows) is by opening PowerShell and running:

    where.exe python

On my system this gives

    C:\Program Files (x86)\Python38-32\python.exe

On Mac/Linux, the equivalent command is which python.

NB: Make sure you back up any important files before trying out 2to3.py.

You can use this tool from the command line like so. Create a file called example.py:

    # 2to3 example.py
    name = raw_input("What is your name? ")
    for i in xrange(10):
        print name

Now from a command line in the same folder, run this:

    "C:\Program Files (x86)\Python38-32\Tools\scripts\2to3.py" example.py --output-dir=python3 -w -n

replacing the path to 2to3.py with the one on your system. You might need to dig around a bit to find 2to3.py. Use the path I have provided along with where.exe python as shown above to help locate the file. Once you have found it, you can use shift+right-click to get a context menu with "copy path" as an option. That is one approach, anyway.
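The xrange point in the list above deserves a quick demonstration. In Python 3, range is a lazy sequence object, so it behaves like Python 2's xrange (no giant list is built in memory) while still supporting len(), indexing, and membership tests:

```python
# Python 3's range is lazy, like Python 2's xrange: creating a huge range
# allocates no big list, yet it still supports len(), indexing and 'in'.
r = range(1_000_000_000)
print(len(r))    # 1000000000, computed without building a list
print(r[10])     # 10
print(500 in r)  # True

# It is still a perfectly good loop driver:
squares = [i * i for i in range(5)]
print(squares)   # [0, 1, 4, 9, 16]
```

This is why 2to3 simply rewrites xrange(...) to range(...): the semantics you cared about in Python 2 carry over for free.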
Sooner or later you will need to be able to do this kind of stuff, so if it's a bit tricky, now is your chance to learn about this kind of deep file system work...

...and ta-da! A new folder is created with the following file inside:

    # 2to3 example.py
    name = input("What is your name? ")
    for i in range(10):
        print(name)

Pretty neat, huh? A few details about what happened there:

- the -w flag was used to enable write-back, which applies the changes to the file. Without this you get just a "test run."
- the -n flag disables backups. You may wish to omit this.

You may also want to put 2to3.py in your system path if you want to use it a lot.

A Script to convert all Python 2 Files in a Folder to Python 3

The following script is used if you have a folder containing Python 2 files (along with maybe some other files) and wish to make a Python 3 version of the folder.

    import os
    import shutil

    def makepython3():
        """
        Transforms all python 2 files in the current folder into python 3
        files and places them in a new folder. Original files are not affected.
        """
        files = os.listdir('.')
        current_folder = '.'
        destination_folder = 'py3'
        if not os.path.exists(destination_folder):
            os.mkdir(destination_folder)
        for f in files:
            # shutil.copy works on every platform, unlike shelling out to 'cp'
            shutil.copy(current_folder + os.sep + f,
                        destination_folder + os.sep + f)
            if f.endswith('.py'):
                os.system(r'"C:\Program Files (x86)\Python38-32\Tools\scripts\2to3.py" -w -n --no-diffs {}'.format(
                    destination_folder + os.sep + f))
        print('All done!')

    if __name__ == '__main__':
        makepython3()

A few points about this code:

- You need to find the path to 2to3.py as discussed above and replace the hard-coded path in the os.system call with the correct one for your system
- It copies all files to a new folder and converts the Python files
- You can name the destination folder whatever you want
- The copying itself is cross-platform, since it uses shutil.copy and os.sep rather than hard-coded separators; the one platform-specific piece left is the path to 2to3.py

That may all seem like a lot of faff to get old code working, but I think it is worth the effort learning how to make these conversions, as there is so much great Python 2 code out there. I'd be curious to hear about any legacy code you have found that you converted to Python 3 – why not let me know in the comments? Happy Computing.
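The copy step of the script above can be exercised in isolation. Here is a small self-contained sketch (the folder names py2_demo and py3_demo are invented purely for the demonstration) that creates a sample Python 2 style file, mirrors the folder with shutil.copy, and lists the result:

```python
import os
import shutil

src = "py2_demo"
dst = "py3_demo"

# Create a source folder holding one Python 2 style file to copy.
os.makedirs(src, exist_ok=True)
with open(os.path.join(src, "hello.py"), "w") as fh:
    fh.write('print "hello"\n')

# Mirror the folder: os.path.join and shutil.copy work on every platform.
os.makedirs(dst, exist_ok=True)
for name in os.listdir(src):
    shutil.copy(os.path.join(src, name), os.path.join(dst, name))

print(sorted(os.listdir(dst)))  # ['hello.py']
```

From there, running 2to3 (or python -m lib2to3) over the copied .py files gives you the converted folder without touching the originals.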
NB-IoT, also known as Narrowband-IoT, is a new cellular technology that promises low cost, low power consumption, wide area coverage and long battery life. These characteristics help make "smart devices" a reality. T-Mobile has deployed NB-IoT coverage in the United States, and Twilio is the first company to provide an NB-IoT developer kit.

Twilio's Alfa Developer Kit features a development board created in collaboration with Seeed Studio. The development board can access the T-Mobile NB-IoT network using a Twilio Narrowband SIM (which comes in the kit). Once on the network, developers can exchange data between multiple NB-IoT kits using the Twilio Breakout SDK.

This post demonstrates how to connect to T-Mobile's NB-IoT network using Twilio's Developer Kit. Once connected, we'll send a "hello world" message over the network using the Breakout SDK. You can also find the completed project on GitHub under TwilioIoT. Ready to say "hello"? Let's connect!

Prerequisites to Connecting to Narrowband

Before you begin, you'll need to either create a new Twilio account or log in to an existing account. You can sign up for a new account for free. Beyond a Twilio account, here is all of the hardware and software you'll need to put in place to get connected.

Hardware Requirements

- Twilio Developer Kit for T-Mobile Narrowband
- Twilio Narrowband SIM
- LTE antenna
- Micro-USB cable
- Lithium battery

Software Requirements

Explore the Developer Kit

The Developer Kit ships with a development board that is specifically designed for connecting to T-Mobile's NB-IoT network. Also included are several hardware attachments by Seeed Studio that can be used to develop an NB-IoT "smart device". Open the Developer Kit box.
The kit contains:

- Twilio Narrowband SIM (full size, mini, micro, and nano)
- Development board
- LTE antenna
- GPS antenna
- Set of Grove sensors (pushbutton, ultrasonic, temperature/humidity)
- Lithium battery
- Micro-USB cable
- Additional cabling

Set up the Twilio Narrowband SIM

Remove the Twilio Narrowband SIM from the Developer Kit. Next, register and activate your Narrowband SIM in the Twilio Console. The process for the Narrowband SIM follows the same procedure as for the Twilio Programmable Wireless SIM.

Connecting the pieces

Break out the Nano SIM (the smallest size) from the Twilio SIM card. Remove the development board from the Developer Kit. Insert the Twilio Narrowband SIM into the SIM slot underneath the board. Next, attach the LTE antenna to the board. Attach the lithium battery; it is recommended to keep the battery plugged in at all times, since the USB power source does not provide sufficient power for the board at peak levels. Connect the development board to the computer using the Micro-USB cable provided. You are geared up to connect to the network.

Configure the NB-IoT Kit Firmware

Before we can start programming the board we need to update the board's firmware. To do this on a Macintosh we will need Homebrew to install dfu-util. Instructions for installing dfu-util for Windows and Linux can be found here. If you don't yet have it installed, open a terminal and paste the following to install Homebrew:

    /usr/bin/ruby -e "$(curl -fsSL)

Once the installation is complete, install the dfu-util package. This package is used to download and upload firmware to and from USB-connected devices. dfu-util 0.9 or greater is preferred if available.

    brew install dfu-util libusb

For Windows users there is a different set of USB drivers that are needed.

Set up the software environment

The development board uses the Arduino IDE to program the microcontroller. Twilio has developed an NB-IoT-specific software development kit called Breakout.
This SDK makes it possible for devices to send M2M Commands over the T-Mobile NB-IoT network. The Breakout SDK can be found on GitHub. Download the Breakout_Arduino_Library.zip from GitHub.

Open the Arduino IDE and add the .zip to the Arduino IDE library. Go to Sketch > Include Library > Add .ZIP Library and select the Breakout_Arduino_Library.zip.

After the .zip file has been installed we need to install a set of board cores. The development board is based on the STM32F4 chipset, so to develop on the board we need to download the STM32F4 cores in the Arduino IDE. Go to Arduino > Preferences. Copy the following URL into the Additional Boards Manager URLs field: Click OK. The STM32F4 boards will now be available in the Arduino IDE Boards Manager.

Next, open the Boards Manager to install the STM32F4 board cores. In the Boards Manager search for "Seeed". Find and select the "Seeed STM32F4 Boards" version "1.2.3+" and click Install. Restart the Arduino IDE. With the STM32F4 cores installed, the development board is now ready to be programmed. Next select the board and the board port:

- Click Tools > Boards > Wio Tracker LTE
- Click Tools > Port > {Your Modem Port Here}
  - OSX: /dev/{cu|tty}.usbmodem{XXXX}
  - Linux: /dev/ttyACM{X}
  - Windows: COM{X}

Configure the HelloWorld.ino file

Open the Hello World example provided by the Breakout SDK in the Arduino IDE:

- Click File > Examples > Breakout Arduino Library > HelloWorld

In the HelloWorld.ino we need to make a few modifications so we can connect to the T-Mobile NB-IoT network. Find the psk_key in the HelloWorld.ino file. Each development board has a unique SIM ICCID and Pre-Shared Key (psk). The psk for the board we are using needs to be copied into the HelloWorld.ino sketch. This key is required to connect to the T-Mobile Narrowband network.
- Navigate to Programmable Wireless in the Twilio Console
- Click SIMs
- Find the Narrowband SIM that was previously registered
- Under the tab Breakout SDK find Credentials
- Where it says Pre-Shared Key (psk) click the eye logo to reveal the key
- Copy the psk
- Paste your psk into the HelloWorld.ino file in the code above

After the psk is set, the sketch is ready to build. Below is the complete Arduino sketch. Further details on how to use the Breakout SDK can be found on GitHub.

#include <Seeed_ws2812.h>
#include <BreakoutSDK.h>

// Paste your Pre-Shared Key (psk) from the Twilio Console here
static const char *psk_key = "00112233445566778899aabbccddeeff";

Breakout *breakout = &Breakout::getInstance();
WS2812 strip = WS2812(1, RGB_LED_PIN);

// setup() reconstructed from the Breakout SDK HelloWorld example;
// see the Breakout repository on GitHub for the authoritative version
void setup() {
  strip.begin();

  breakout->setPSKKey(psk_key);
  breakout->setPollingInterval(1 * 60);  // Optional, set to 1 minute

  // Powering the modem and starting up the SDK
  LOG(L_WARN, "Powering on module and registering...\r\n");
  breakout->powerModuleOn();

  // Set RGB-LED to green
  strip.WS2812SetRGB(0, 0x00, 0x40, 0x00);
  strip.WS2812Send();

  LOG(L_WARN, "... done powering on and registering.\r\n");
  LOG(L_WARN, "Arduino loop() starting up\r\n");
}

void your_application_example() {
  if (breakout->hasWaitingCommand()) {
    char command[141];
    size_t commandLen = 0;
    bool isBinary = false;
    command_status_code_e code =
        breakout->receiveCommand(140, command, &commandLen, &isBinary);
    switch (code) {
      case COMMAND_STATUS_OK:
        LOG(L_INFO, "Rx-Command [%.*s]\r\n", commandLen, command);
        break;
      case COMMAND_STATUS_ERROR:
        LOG(L_INFO, "Rx-Command ERROR\r\n");
        break;
      case COMMAND_STATUS_BUFFER_TOO_SMALL:
        LOG(L_INFO, "Rx-Command BUFFER_TOO_SMALL\r\n");
        break;
      case COMMAND_STATUS_NO_COMMAND_WAITING:
        LOG(L_INFO, "Rx-Command NO_COMMAND_WAITING\r\n");
        break;
      default:
        LOG(L_INFO, "Rx-Command ERROR %d\r\n", code);
    }
  }
}

void loop() {
  your_application_example();
  breakout->spin();
  delay(50);
}

Enter Bootloader Mode

To upload code to the development board, the unit needs to be put into Bootloader mode.

- Press and hold the BOOT0 button underneath the Developer Board.
- Press and hold the RST on the top of the Developer Board.
- Release the RST on the top of the Developer Board.
- Release the BOOT0 button to enable Bootloader mode.
- Press Upload in the Arduino IDE.

After the code has been uploaded to the development board, press the RST button.
This will take the board out of Bootloader mode. The completed code can be found on the TwilioIoT GitHub.

Connect to the network and send a Command

After resetting the board, the NB-IoT network registration process starts. This will register the board on the network and allocate bandwidth for the device. During this process the Network Connectivity LED will glow orange. Open the Serial Monitor to observe the board registering and connecting to the network. When the development board successfully registers to the NB-IoT network, the Network Connectivity LED will glow blue. The following message will display in the Arduino Serial Monitor when the connection is stable:

When the board successfully connects to the NB-IoT network, the Breakout SDK will be initialized. This is the Serial Monitor output when the Breakout SDK sends a Command to Twilio. Every Command sent and received by the Breakout SDK is logged. Commands sent over the NB-IoT network can be found in the Twilio Console under Programmable Wireless.

- Navigate to Programmable Wireless in the Twilio Console
- Click SIMs
- Find the Narrowband SIM that was previously registered
- Click the Commands tab

Receive a Command with the Breakout SDK

The Breakout SDK will poll for a new Command every minute. Using cURL, you can send a Command to the NB-IoT board by using the Sim unique name (via the Programmable Wireless Commands endpoint):

curl -X POST https://wireless.twilio.com/v1/Commands \
  --data-urlencode "Sim=Breakout" \
  --data-urlencode "Command=this is a test" \
  -u ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:your_auth_token

Watch the Arduino IDE Serial Monitor to see the Command as it is received:

How does it feel to be one of the first pioneers of NB-IoT?

The future of "things" using NB-IoT

It's a special time for IoT development on both the hardware and software side. Many processes are becoming optimized… and many are still shrouded in mystery. These "things" impact our daily lives – often without us even realizing it.
Every segment from scooters to home automation now has some element of interconnectivity. NB-IoT takes it a step further still. The low cost, low power consumption, wide area coverage and long battery life of Narrowband make “Smart Devices” even smarter. Become an IoT pioneer with Twilio Narrowband. Let’s build dreams together. If you ever want to chat about IoT, hardware or modular synthesizers ping me anytime on Twitter or via Email. Let’s connect. - Github: cskonopka - Twitter: @cskonopka
http://www.seeedstudio.com/blog/2019/01/04/pioneer-nb-iot-with-twilios-alfa-development-kit/
CC-MAIN-2019-22
refinedweb
1,619
57.06
#include "avcodec.h"
#include "wma.h"
#include "wmadata.h"
#include <assert.h>

Go to the source code of this file.

Get the samples per frame for this stream.
Definition at line 76 of file wma.c.
Referenced by decode_init(), and ff_wma_init().

Decode an uncompressed coefficient. Consumes up to 34 bits to decode the length.
Definition at line 438 of file wma.c.
Referenced by decode_coeffs(), and ff_wma_run_level_decode().

Definition at line 109 of file wma.c.
Referenced by encode_init(), and wma_decode_init().

Decode run-level compressed coefficients (normal code, EOB, and escape cases).
NOTE: this is rather suboptimal; reading block_len_bits would be better.
NOTE: in escape decode, EOB can be omitted.
Definition at line 471 of file wma.c.
Referenced by decode_coeffs(), and wma_decode_block().

Definition at line 400 of file wma.c.
Referenced by encode_block(), and wma_decode_block().
http://www.ffmpeg.org/doxygen/0.6/wma_8c.html
CC-MAIN-2017-30
refinedweb
130
64.67
In today's Programming Praxis exercise, our goal is to calculate the number of ways a number can be expressed as a McNugget number. Let's get started, shall we?

A quick import:

import Control.Monad.Identity

We use the same basic technique of building up a table of numbers where each number is the sum of the number above it and the number x spaces to its left, with x being the size of the McNugget box. We construct it differently though; rather than explicitly setting array values we use a bit of laziness to express the whole thing as a fold. The first row is a 1 followed by zeroes. For each subsequent row, we use the same principle as for the typical implementation of the Fibonacci algorithm, namely zipping a list with itself (using the fix function to avoid having to name it). The first x spaces of the previous row are maintained by adding zero to them.

mcNuggetCount :: Num a => [Int] -> Int -> a
mcNuggetCount xs n = foldl (\a x -> fix $ zipWith (+) a . (replicate x 0 ++))
                           (1 : repeat 0) xs !! n

Some tests to see if everything works properly:

main :: IO ()
main = do print $ mcNuggetCount [6,9,20] 1000000 == 462964815
          print $ mcNuggetCount [1,5,10,25,50,100] 100 == 293
          print $ mcNuggetCount [1,2,5,10,20,50,100,200] 200 == 73682

Tags: bonsai, code, combinator, fix, Haskell, kata, mcnugget, numbers, praxis, programming, y
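For contrast with the lazy fold above, the same table can be built imperatively. Here is a Java sketch (the class and method names are mine, not from the post); each pass over a box size adds to every cell the count from the cell `size` positions to its left, which is exactly the row-by-row table the fold expresses:

```java
public class NuggetCount {
    // Count the ways n can be written as a non-negative combination of the
    // given box sizes. ways[i] after processing a size equals the entry of
    // the corresponding row in the post's table.
    static long count(int[] sizes, int n) {
        long[] ways = new long[n + 1];
        ways[0] = 1; // one way to make 0: the empty combination
        for (int size : sizes)
            for (int i = size; i <= n; i++)
                ways[i] += ways[i - size];
        return ways[n];
    }

    public static void main(String[] args) {
        System.out.println(count(new int[] {1, 5, 10, 25, 50, 100}, 100));       // prints 293
        System.out.println(count(new int[] {1, 2, 5, 10, 20, 50, 100, 200}, 200)); // prints 73682
        // The post reports 462964815 for mcNuggetCount [6,9,20] 1000000
        System.out.println(count(new int[] {6, 9, 20}, 1000000));
    }
}
```

The first two checks match the post's own test values (the classic coin-change counts).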
http://bonsaicode.wordpress.com/2012/04/13/programming-praxis-mcnugget-numbers-revisited/
CC-MAIN-2014-35
refinedweb
237
59.74
Send a pulse to a process

#include <sys/neutrino.h>

int MsgSendPulse( int coid, int priority, int code, int value );

int MsgSendPulse_r( int coid, int priority, int code, int value );

The MsgSendPulse() and MsgSendPulse_r() kernel calls send a short, nonblocking message to a process's channel via the connection identified by coid. Use these calls to send an integer value; for pointers, use MsgSendPulsePtr() or MsgSendPulsePtr_r(). These functions are identical except in the way they indicate errors. See the Returns section for details.

You can send a pulse to a process if: Or:

You can use MsgSendPulse() for many purposes; however, due to the small payload of data, you shouldn't use it for transmitting large amounts of bulk data by sending a great number of pulses. You can't use MsgSendPulse() to send pulses across the network.

The only difference between the MsgSendPulse() and MsgSendPulse_r() functions is the way they indicate errors. If the server faults on delivery, the pulse is either lost or an error is returned.
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.neutrino.lib_ref/topic/m/msgsendpulse.html
CC-MAIN-2022-27
refinedweb
161
60.04
From: Scott (cheesy4poofs_at_[hidden])
Date: 2006-06-14 12:45:02

Hi Chris,

Thanks very much for the reply.

> It sounds to me as though you're using the SSL example from the
> "boost layout" asio proposal (where everything is in the namespace
> boost::asio) with the headers from the non-boost package of asio
> (where the namespace is just asio). The SSL example is found in
> src/examples/ssl in the non-boost package.

You are correct. We grabbed the version from sourceforge several months ago (v3.6). I downloaded the latest from anonymous CVS - wow, quite a few changes there (demuxer is now io_service, etc.). I'm in the process of rewriting our code to the latest asio. When I'm done, I hope you (or someone with SSL experience) wouldn't mind answering a few questions about getting SSL working. The example client/server SSL seems unwieldy. It actually makes you type a pass phrase when the server starts. I really don't want that.

Thanks,
Scott

Boost list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
http://lists.boost.org/Archives/boost/2006/06/106325.php
crawl-001
refinedweb
198
78.04
utimes - set file access and modification times

#include <sys/time.h>

int utimes(const char *path, const struct timeval times[2]);

The utimes() function sets the access and modification times of the file pointed to by path to the values in the times argument; the first element of the array is the access time and the second is the modification time. If times is a null pointer, the access and modification times are set to the current time; the effective user ID of the process must match the owner of the file, or the process must have write permission for the file or appropriate privileges, to use the call in this manner. Upon completion, utimes() will mark the time of the last file status change, st_ctime, for update.

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error, and the file times will not be affected.

The utimes() function will fail if:

- [EACCES]
- Search permission is denied by a component of the path prefix; or the times argument is a null pointer and the effective user ID of the process does not match the owner of the file and write access is denied.
- :
- [ENAMETOOLONG]
- Pathname resolution of a symbolic link produced an intermediate result whose length exceeds {PATH_MAX}.

None. None. None. <sys/time.h>.
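utimes() itself is a C interface, but the same operation can be sketched in Java through NIO's BasicFileAttributeView (the helper and demo names below are mine, for illustration only):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.BasicFileAttributeView;
import java.nio.file.attribute.BasicFileAttributes;
import java.nio.file.attribute.FileTime;

public class SetFileTimes {
    // Analogue of utimes(path, times): set access and modification times.
    static void setTimes(Path p, FileTime access, FileTime modify) throws IOException {
        BasicFileAttributeView view =
            Files.getFileAttributeView(p, BasicFileAttributeView.class);
        // setTimes(lastModifiedTime, lastAccessTime, createTime);
        // a null argument leaves that time unchanged
        view.setTimes(modify, access, null);
    }

    // Round-trip check: set a known modification time and read it back.
    static boolean demo() {
        try {
            Path p = Files.createTempFile("utimes-demo", ".txt");
            FileTime t = FileTime.fromMillis(1_000_000_000_000L); // 2001-09-09
            setTimes(p, t, t);
            boolean ok = Files.readAttributes(p, BasicFileAttributes.class)
                              .lastModifiedTime().equals(t);
            Files.delete(p);
            return ok;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

Like utimes(), the caller needs write permission (or ownership) of the file for the update to succeed.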
http://pubs.opengroup.org/onlinepubs/7990989775/xsh/utimes.html
crawl-003
refinedweb
144
59.94
You can use mirroring with a DataGrid control by setting the layoutDirection property on the application. Like most controls, the DataGrid control inherits this value. Setting the layout direction to RTL causes the columns to be arranged from right to left (column 0 is the right-most). This also aligns the sort arrows to the left of the header text. If the DataGrid is editable and the user clicks on a cell, the focus starts on the right side of the grid.

If you use an RTL character set such as Hebrew or Arabic in the cells of a DataGrid control, you must select the "Use Flash Text Engine in MX components" check box in Flash Builder. Otherwise, the characters will not render correctly because the TextField control does not support bidirectionality.

To align the text in the cells to the right, reverse the direction of the header text, and reverse the text in the individual cells of a DataGrid, you must also set the direction style property to "rtl". You can set this on a parent container or on the DataGrid itself.

<?xml version="1.0" encoding="utf-8"?>
<!-- mirroring\MirroredDataGrid.mxml -->
<s:Application xmlns:fx=""
    xmlns:s="library://ns.adobe.com/flex/spark"
    xmlns:
    <fx:Script>
        <![CDATA[
            import mx.collections.ArrayCollection;

            [Bindable]
            private var dp:ArrayCollection = new ArrayCollection([
                {Artist:'Train', Album:'Drops of Jupiter', Price:13.99},
                {Artist:'Charred Walls of the Damned', Album:'Ghost Town', Price:8.99},
                {Artist:'Bleading Deacons', Album:'Rule the Night', Price:11.99},
                {Artist:'Three Stooges', Album:'Greatest Hits', Price:9.99}
            ]);
        ]]>
    </fx:Script>
    <s:DataGrid
</s:Application>

The executing SWF file for the previous example is shown below:
http://help.adobe.com/en_US/flex/using/WS19f279b149e7481c-11cc159f12ea10f6efe-7ffd.html
CC-MAIN-2017-13
refinedweb
301
55.84
After a lot of trouble, I finally got my code converted to Swift 3.0. But it seems like my incrementID function isn't working anymore. Any suggestions for how I can fix this?

My incrementID and primaryKey functions as they look right now:

override static func primaryKey() -> String? {
    return "id"
}

func incrementID() -> Int {
    let realm = try! Realm()
    let RetNext: NSArray = Array(realm.objects(Exercise.self).sorted(byProperty: "id")) as NSArray
    let last = RetNext.lastObject
    if RetNext.count > 0 {
        let valor = (last as AnyObject).value(forKey: "id") as? Int
        return valor! + 1
    } else {
        return 1
    }
}

There's no need to use KVC here, or to create a sorted array just to get the max value. You can just do:

func incrementID() -> Int {
    let realm = try! Realm()
    return (realm.objects(Exercise.self).max(ofProperty: "id") as Int? ?? 0) + 1
}
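The pattern in the accepted answer (take the max existing id, default to zero when there are no records, add one) isn't Realm-specific. A plain Java sketch of the same idea (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class AutoIncrement {
    // Next id = (current max id, or 0 if the collection is empty) + 1,
    // mirroring max(ofProperty:) ?? 0 in the Swift answer.
    static int nextId(List<Integer> existingIds) {
        return existingIds.stream().max(Integer::compare).orElse(0) + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextId(new ArrayList<>()));      // prints 1
        System.out.println(nextId(Arrays.asList(1, 2, 7))); // prints 8
    }
}
```

As with the Realm version, this is only safe if writes are serialized; two concurrent writers could compute the same id.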
https://codedump.io/share/9vdE5YJaaAfP/1/auto-increment-id-in-realm-swift-30
CC-MAIN-2017-26
refinedweb
137
55.5
Where does the speed go? You buy a really fast computer, and an application like iMovie or Photoshop seems really fast on it. Colors change as quickly as you change the slider. But larger pictures don't seem to process as quickly in Java as in Photoshop. Why?

In reality, computers do not understand Java, C, Visual Basic, Python, or any other language. The basic computer only understands one kind of language: machine language. Machine language instructions are just values in the bytes in memory, and they tell the computer to do very low-level activities. In a real sense, the computer doesn't even "understand" machine language. The computer is just a machine with lots of switches that make data flow this way or that. Machine language is just a bunch of switch settings that make other switches in the computer change. We interpret those data switchings to be addition, subtraction, loading data, and storing data.

Each kind of computer has its own machine language. Apple computers and computers that run Windows can't run one another's programs, not because of any philosophical or marketing differences, but because each kind of computer has its own processor (the core of the computer that actually executes the machine language). They literally don't understand one another. That's why an .exe program from Windows won't run on a Macintosh, and a Macintosh application won't run on a Windows computer. Those executable files are (almost always) machine language programs.

Machine language looks like a bunch of numbers; it's not particularly user-friendly. Assembler language is a set of words (or near-words) that corresponds one-to-one with machine language. Assembler instructions tell the computer to do things like store numbers into particular memory locations or into special locations (variables or registers) in the computer, test numbers for equality or comparison, or add numbers together or subtract them.
An assembler program (and the corresponding machine language generated by an assembler) to add two numbers together and store them somewhere might look like this:

LOAD #10,R0 ; Load special variable R0 with 10
LOAD #12,R1 ; Load special variable R1 with 12
SUM R0,R1   ; Add special variables R0 and R1
STOR R1,#45 ; Store the result into memory location #45

01 00 10
01 01 12
02 00 01
03 01 45

An assembler program that might make a decision could look like this:

LOAD R1,#65536  ; Get a character from keyboard
TEST R1,#13     ; Is it an ASCII 13 (Enter)?
JUMPTRUE #32768 ; If true, go to another part of the program
CALL #16384     ; If false, call func. to process the new line

05 01 255 255
10 01 13
20 127 255
122 63 255

Input and output devices are often just memory locations to the computer. Maybe when you store a 255 to location 65,542, suddenly the red component of the pixel at (101,345) is set to maximum intensity. Maybe each time that the computer reads from memory location 897,784, it's a new sample just read from the microphone. In this way, these simple loads and stores handle multimedia, too.

Machine language is executed very quickly. The computer on which this chapter is being typed has a 900 megahertz (MHz) processor. What that means exactly is hard to define, but roughly, it means that this computer processes 900 million machine language instructions per second. A 2-gigahertz (GHz) processor handles 2 billion instructions per second. A 12-byte machine language program that corresponds to something like a = b + c executes on this mid-range computer in something like 12/900,000,000 of a second.

Applications like Adobe Photoshop and Microsoft Word are typically compiled. That means that they were written in a computer language like C or C++ and then translated into machine language using a program called a compiler. Those programs then execute at the speed of that base processor.
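To make the "just switch settings" idea concrete, here is a toy Java interpreter for the four-byte-triplet machine in the first listing above. It is a sketch for illustration only: the opcode values (1 = LOAD, 2 = SUM, 3 = STOR) follow that listing's encoding of opcode, register, value, while the register count and memory size are invented:

```java
public class ToyMachine {
    // Runs a program of 3-int instructions: {opcode, operand1, operand2}.
    static int[] execute(int[] program) {
        int[] regs = new int[4];     // "special variables" R0..R3
        int[] memory = new int[256]; // a tiny main memory
        for (int pc = 0; pc < program.length; pc += 3) {
            int op = program[pc], a = program[pc + 1], b = program[pc + 2];
            switch (op) {
                case 1: regs[a] = b; break;         // LOAD #b,Ra
                case 2: regs[b] += regs[a]; break;  // SUM Ra,Rb
                case 3: memory[b] = regs[a]; break; // STOR Ra,#b
                default: throw new IllegalArgumentException("bad opcode " + op);
            }
        }
        return memory;
    }

    public static void main(String[] args) {
        // LOAD #10,R0; LOAD #12,R1; SUM R0,R1; STOR R1,#45
        int[] memory = execute(new int[] {1,0,10, 1,1,12, 2,0,1, 3,1,45});
        System.out.println(memory[45]); // prints 22
    }
}
```

Real processors do exactly this dispatch in hardware, which is why machine language runs so fast.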
However, programming languages like Python, Scheme, Squeak, Director, and Flash are actually (in most cases) interpreted. Java can be interpreted, too, but in a subtly different way that is explained later (Section 15.2.3). Interpreted programs execute at a slower speed. It's the difference between translating and then doing instructions versus simply doing the instructions. A detailed example might help. Consider this exercise from an earlier chapter:

Here's a solution to the exercise. The implementation here reads a file, a line at a time, into a string. The string is checked to see if it starts with "circle" or "line." Using split, it gets chopped into pieces, then each of the little strings (the numbers for the coordinates) is converted to an integer using Integer.parseInt().

import java.io.*;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;

/**
 * Class that reads in a file of graphics instructions, and
 * executes them, showing the result.
 */
public class GraphicsInterpreter {

  /**
   * Method to interpret the commands in the given file
   */
  public Picture interpretCommands(String fileName) {
    String line = null;
    Picture frame = new Picture(640,480);
    String [] params = null;
    int x1, y1, x2, y2, diameter;
    Graphics g = frame.getGraphics();
    g.setColor(Color.black);

    // try the following
    try {
      // read from the file
      BufferedReader reader =
        new BufferedReader(new FileReader(fileName));

      // loop through the lines of the file
      while ((line = reader.readLine()) != null) {
        if (line.startsWith("line")) {
          // Get the parameters for drawing the line
          params = line.split(" ");
          // params[0] should be "line"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          x2 = Integer.parseInt(params[3]);
          y2 = Integer.parseInt(params[4]);

          // Now, draw the line in
          g.drawLine(x1,y1,x2,y2);
        } else if (line.startsWith("circle")) {
          // Get the parameters for drawing the circle
          params = line.split(" ");
          // params[0] should be "circle"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          diameter = Integer.parseInt(params[3]);

          // Now, draw the circle in
          g.drawOval(x1,y1,diameter,diameter);
        } else {
          System.out.println("Uh-oh! Invalid command! "+line);
          return frame;
        }
      }
    } catch (FileNotFoundException ex) {
      System.out.println("Couldn't find file " + fileName);
      fileName = FileChooser.pickAFile();
      interpretCommands(fileName);
    } catch (Exception ex) {
      System.out.println("Error during read or write");
      ex.printStackTrace();
    }
    return frame;
  }

  public static void main(String[] args) {
    GraphicsInterpreter interpreter = new GraphicsInterpreter();
    String fileName = FileChooser.getMediaPath("graphics-commands.txt");
    Picture p = interpreter.interpretCommands(fileName);
    p.show();
  }
}

This solution works; see Figure 15.1, which results from executing this program with the graphics-commands.txt file containing:

circle 20 20 100
circle 300 20 100
line 210 120 210 320
line 210 320 310 320
line 20 350 400 350

The graphics commands are assumed to be in the file whose filename is passed to the interpretCommands method. We open a blank 640 x 480 frame for drawing on, then get the graphics context for drawing on. For each string line in the input file, we check to see if it starts with "line" or "circle." If it's a "line", we chop out the starting x and y coordinates and the ending x and y coordinates by using split on the string. Then we draw the line. If the command is a "circle," we get the two coordinates and the diameter, and draw the circle as an oval whose height and width are both the diameter. At the end, we return the resulting Picture object.

What we've just done is implement a new language for graphics. We have even created an interpreter that reads the instructions for our new language and creates the picture that goes along with it. In principle, this is just what Postscript, PDF, Flash, and AutoCAD are doing. Their file formats specify pictures in just the way that our graphics language does. When they draw (render) the image to the screen, they are interpreting the commands in that file.

While we probably can't tell from such a small example, this is a relatively slow language. Consider the program below.
Imagine that we compiled it and ran it. Would it run faster than reading the list of commands and interpreting them? Both this program and the list in Figure 15.1 generate the exact same picture.

import java.awt.*;

public class GeneratedDrawing{
  public static void main(String args[]){
    Picture frame = new Picture(640,480);
    Graphics g = frame.getGraphics();
    g.setColor(Color.black);
    g.drawOval(20,20,100,100);
    g.drawOval(300,20,100,100);
    g.drawLine(210,120,210,320);
    g.drawLine(210,320,310,320);
    g.drawLine(20,350,400,350);
    frame.show();
  } // end main()
} // end class

In general, we'd probably guess (correctly) that the direct instructions above will run faster than reading the list and interpreting it. Here's an analogy that might help. Mark took French in college, but he says that he is really bad at it. Let's say that someone gave Mark a list of instructions in French. He could meticulously look up each word, figure out the instructions, and do them. What if he was asked to do the instructions again? He would have to look up each word again. What if they asked him to do it 10 times? He would do 10 lookups of all the words. Now, let's imagine that he wrote down the English (his native language) translation of the French instructions. He can repeat doing the list of instructions as often as you like very quickly. It takes him no time to look up any words (though it probably depends on what he is being asked to do; brain surgery is out). In general, figuring out the language takes some time that is just overhead; just doing the instructions (or drawing the graphics) will always be faster.

Here's an idea: Could we generate the preceding program? Could we write a program that takes as input the graphics language we invented, then writes a Java program that draws the same pictures? This turns out not to be that hard. This would be a compiler for the graphics language.
import java.io.*;

/**
 * Class that reads in a file of graphics instructions, and
 * then generates a NEW Java Program that
 * does the same thing as the instructions.
 */
public class GraphicsCompiler {

  /** Method to write out the prologue for the new program:
   * All the imports, the class definition, main, etc.
   * @param file BufferedWriter to write the prologue to
   **/
  public void writePrologue(BufferedWriter file) {
    try {
      // Write out the prologue lines
      file.write("import java.awt.*;"); file.newLine();
      file.write("public class GeneratedDrawing{"); file.newLine();
      file.write(" public static void main(String args[]){"); file.newLine();
      file.write(" Picture frame = new Picture(640,480);"); file.newLine();
      file.write(" Graphics g = frame.getGraphics();"); file.newLine();
      file.write(" g.setColor(Color.black);"); file.newLine();
    } catch (Exception ex) {
      System.out.println("Error during write of prologue");
    }
  }

  /** Method to write out the epilogue for the new program:
   * Show the picture. Close the main and the class.
   * @param file BufferedWriter to write the epilogue to
   **/
  public void writeEpilogue(BufferedWriter file){
    try {
      // Write out the epilogue lines
      file.write(" frame.show();"); file.newLine();
      file.write(" } // end main()"); file.newLine();
      file.write("} // end class"); file.newLine();
    } catch (Exception ex) {
      System.out.println("Error during write of epilogue");
    }
  }

  /**
   * Method to compile the commands in the given file
   * @param fileName the file to read from
   */
  public void compileCommands(String fileName) {
    String line = null;
    String [] params = null;
    int x1, y1, x2, y2, diameter;

    // try the following
    try {
      // read from the file
      BufferedReader reader =
        new BufferedReader(new FileReader(fileName));
      BufferedWriter writer =
        new BufferedWriter(new FileWriter(
          FileChooser.getMediaPath("GeneratedDrawing.java")));
      writePrologue(writer);

      // loop through the lines of the file
      while ((line = reader.readLine()) != null) {
        if (line.startsWith("line")) {
          // Get the parameters for drawing the line
          params = line.split(" ");
          // params[0] should be "line"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          x2 = Integer.parseInt(params[3]);
          y2 = Integer.parseInt(params[4]);

          // Now, write the line that will LATER
          // draw the line
          writer.write(" g.drawLine("+x1+","+y1+", "+x2+","+y2+");");
          writer.newLine();
        } else if (line.startsWith("circle")) {
          // Get the parameters for drawing the circle
          params = line.split(" ");
          // params[0] should be "circle"
          x1 = Integer.parseInt(params[1]);
          y1 = Integer.parseInt(params[2]);
          diameter = Integer.parseInt(params[3]);

          // Now, write the line that will LATER draw the circle
          writer.write(" g.drawOval("+x1+","+y1+", "+diameter+","+diameter+");");
          writer.newLine();
        } else {
          System.out.println("Uh-oh! Invalid command! "+line);
          return;
        }
      }
      writeEpilogue(writer);
      writer.close();
    } catch (FileNotFoundException ex) {
      System.out.println("Couldn't find file " + fileName);
      fileName = FileChooser.pickAFile();
      compileCommands(fileName);
    } catch (Exception ex) {
      System.out.println("Error during read or write");
      ex.printStackTrace();
    }
  }

  public static void main(String[] args) {
    GraphicsCompiler compiler = new GraphicsCompiler();
    String fileName = FileChooser.getMediaPath("graphics-commands.txt");
    compiler.compileCommands(fileName);
  }
}

The compiler accepts the same input as the interpreter (a filename for a file that contains our graphics commands), but instead of opening a Picture to write to, we open a file named "GeneratedDrawing.java" in the current mediasources directory. We write to the file the start of a class and a main method using the writePrologue method: the public class GeneratedDrawing, and so on. We also write out the code to create a Picture and a graphics context. Note that we're not really making the Picture here; we're simply writing out the Java commands that will make the Picture. The commands will be executed later when the class GeneratedDrawing is compiled and its main method is executed.

Then, just like the interpreter, we figure out which graphics command it is ("line" or "circle") and we figure out the coordinates from the input string. Then we write out to the code file "GeneratedDrawing.java" the commands to do the drawing. Notice that we're reading the commands when executing the class GraphicsCompiler, and the result is that we're writing the class GeneratedDrawing that will be compiled and executed later. At the end of the method compileCommands, we write out commands to show the frame. Finally we close the file.

Now the compiler has a bunch of overhead, too. We still have to do the looking up of what the graphics commands mean. If we only have a small graphics program to run, and we only need it once, we might as well just run the interpreter.
But what if we needed to run the picture 10 times, or 100 times? Then we pay the overhead of compiling the program once, and the next nine or 99 times, we run it as fast as we possibly can. That will almost certainly be faster than doing the interpretation overhead 100 times. This is what compilers are all about. Applications like Photoshop and Word are written in languages like C or C++ and then are compiled to equivalent machine language programs. The machine language program does the same thing that the C language says to do, just as the graphics programs created from our compiler do the same things as our graphics language says to do. But the machine language program runs much faster than we could interpret the C or C++.

Compilers are one of the most magical things in computer science. Look again at the list of graphics commands that generated Figure 15.1. That's a program. Now look again at the Java program that GraphicsCompiler generated. Those are two completely different programs, but they do the same thing. A compiler writes an entirely new program in one language, given input in a different language. It's a program that writes programs.

Originally, Java programs were designed to be interpreted. Java programs didn't originally compile to machine language for whatever computer they were being run on. Java programs compiled to a machine language for a make-believe processor: a virtual machine. The Java Virtual Machine (JVM) doesn't really exist as a physical processor. It's a definition of a processor. What good is that? It turns out that since machine language is very simple, building a machine language interpreter is pretty easy. It's just like our GraphicsInterpreter except that it reads in the bytes of a machine language program for a JVM, then just does what they say. The result is that a JVM interpreter can be very easily made to run on just about any processor. That means that a program in Java is compiled once and then runs everywhere.
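The amortization argument at the start of this passage (pay translation overhead on every run versus pay a one-time compile cost) can be put into a toy cost model. All the unit costs below are made-up numbers, chosen only to illustrate the shape of the trade-off:

```java
public class BreakEven {
    // Hypothetical costs in arbitrary time units (invented for illustration):
    // an interpreter pays a translation cost of 5 plus an execution cost of 1
    // on EVERY run; a compiler pays a one-time cost of 20, then each run is
    // just the execution cost of 1.
    static double interpretedTotal(int runs) { return runs * (5.0 + 1.0); }
    static double compiledTotal(int runs)    { return 20.0 + runs * 1.0; }

    public static void main(String[] args) {
        for (int runs : new int[] {1, 10, 100}) {
            System.out.println("runs=" + runs
                + " interpreted=" + interpretedTotal(runs)
                + " compiled=" + compiledTotal(runs));
        }
    }
}
```

With these numbers, interpreting wins for a single run, but compiling wins long before 100 runs; the crossover point just depends on the actual overheads.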
Devices as small as wristwatches can run the same Java programs that run on large computers, because a JVM interpreter can run even on processors that live on one's wristwatch. There's also an economic argument for virtual machines. Imagine that you're writing software for a programmable toaster oven. If the manufacturer decides to change the processor in the toaster oven, you have to recompile your traditional C or C++ programs to run on the new processor. But if both the old and new processor have JVM interpreters, then your Java programs will run on both without change or recompilation. Thus, a virtual machine can mean that you're less bound to a given processor, and the manufacturer has more flexibility to buy the least-expensive processor available.

On most computers today, Java does execute as machine language. Java can be compiled to machine language. But even when Java is compiled to JVM machine language, modern JVM interpreters are actually JVM compilers. When you tell Java on a Windows or Macintosh computer today "Go run this JVM machine language program," what it actually does is pause a moment, compile the JVM code to native machine language, then run the native machine language. Computers are so fast today that you don't really notice the pause while it's compiling. That's the first part of the answer to the question "Why is Photoshop faster than Java for large programs?" Photoshop is running in native machine code, while our Java programs are running on a JVM interpreter, which, even if it does compile to native machine language first, is still slightly slower than straight machine language.

Then why have an interpreter at all? There are many good reasons. Here are two: Do you like the Interactions Pane? Did you even once ever type in some example code just to try it? That kind of interactive, exploratory, trying-things-out programming is available with interpreters. Compilers don't let you easily try things out line-by-line and print out results.
Interpreters are good for learners. Once a program is compiled to Java machine language, it can be used anywhere from huge computers to programmable toaster ovens, as is! That's a big savings for software developers. They only ship one program, and it runs on anything. Virtual machines are safer than running machine language. A program running in machine language might do all kinds of non-secure things. A virtual machine can carefully keep track of the programs that it is interpreting to make sure that they only do safe things, like use only valid indices in arrays.

The raw power of compiled vs. interpreted programs is only part of the answer of why Photoshop is faster. The deeper part, and one which can actually make interpreted programs faster than compiled programs, is in the design of the algorithms. There's a temptation to think, "Oh, it's okay if it's slow now. Wait 18 months, we'll get double the processor speed, and then it will be fine." There are some algorithms that are so slow, they will never end in your lifetime, and others that can't be written at all. Rewriting the algorithm to be smarter about what we ask the computer to do can make a dramatic impact on performance.

An algorithm is a description of behavior for solving a problem. A program (classes and methods in Java) is an executable interpretation of an algorithm. The same algorithm can be implemented in many different languages. There is always more than one algorithm to solve the same problem. Some computer scientists study algorithms and come up with ways to compare and decide which ones are better than others. We've seen several algorithms that appear in different ways but are really doing the same things:

- Sampling to scale up or down a picture or to lower or raise the frequency of a sound.
- Blending to merge two pictures or two sounds.
- Mirroring of sounds and pictures.

All of these process data in the same way. It's just the data that changes: pixels for pictures, samples for sounds.
We say that these are the same algorithms. We can compare algorithms based on several criteria. One is how much space the algorithm needs to run: how much memory does the algorithm require? That can become a significant issue for media computation, because so much memory is required to hold all that data. Think about how bad (unusable in normal situations) an algorithm would be that needed to hold all the frames of a movie in a list in memory at the same time. The most common criterion used to compare algorithms is time. How much time does the algorithm take? We don't literally mean clock time, but how many steps the algorithm requires. Computer scientists use Big-Oh notation, or O(), to refer to the magnitude of the running time of an algorithm. The idea of Big-Oh is to express how much slower the program gets as the input data get larger. If the data get twice as large, an O(n) algorithm would take twice as long to run, but an O(n^2) algorithm would take four times longer to run. Big-Oh notation tries to ignore differences between languages, even between compiled versus interpreted, and focus on the number of steps to be executed. Think about our basic picture and sound processing examples like increaseRed or increaseVolume. Some of the complexity of these programs is hidden in provided methods like getPixels() and getSamples(). In general, though, we refer to these as being O(n): the amount of time that the program takes to run is linearly proportional to the size of the input data. If the picture or sound doubled in size, we'd expect the program to take twice as long to run. When we figure out Big-Oh, we typically clump the body of the loop into one step. We think about those functions as processing each sample or pixel once, so the real time spent in those programs is the main loop, and it doesn't really matter how many statements are in that loop. Unless there is another loop in that loop body, that is. Loops are multiplicative in terms of time.
Nested loops multiply the amount of time that is needed to run the body. Think about this simple example:

> int count = 0;
> for (int x=0; x<5; x++)
    for (int y=0; y<3; y++) {
      count = count + 1;
      System.out.println("Ran " + count + " times: x=" + x + " y=" + y);
    }

When we run it, we see that it actually executes 15 times: five for the x's, three for the y's, and 5 * 3 = 15.

Ran 1 times: x=0 y=0
Ran 2 times: x=0 y=1
Ran 3 times: x=0 y=2
Ran 4 times: x=1 y=0
Ran 5 times: x=1 y=1
Ran 6 times: x=1 y=2
Ran 7 times: x=2 y=0
Ran 8 times: x=2 y=1
Ran 9 times: x=2 y=2
Ran 10 times: x=3 y=0
Ran 11 times: x=3 y=1
Ran 12 times: x=3 y=2
Ran 13 times: x=4 y=0
Ran 14 times: x=4 y=1
Ran 15 times: x=4 y=2

How about movie code? Since it takes so long to process, is it actually a more complex algorithm? No, not really. Movie code is just processing each pixel once, so it's still O(n). It's just that the n is really, REALLY big! Not all algorithms are O(n). There is a group of algorithms called sorting algorithms that are used to put data in alphabetical or numerical order. The simplest of these algorithms (like the bubble sort or insertion sort) has complexity O(n^2). If a list has 100 elements, it'll take on the order of 10,000 steps to sort the 100 elements with that kind of sort. However, there are smarter algorithms (like the quicksort) that have complexity O(n log n). That same list of 100 elements would take only about 460 steps to process. Those kinds of differences start to have huge real clock-time differences when you're talking about processing 10,000 customers to put them in order for reports... Consider how you might look up a word in the dictionary. One way is to check the first page, then the next page, then the next page, and so on. That's called a linear search, and it's O(n). It's not very efficient. The best case (the fastest the algorithm could possibly be) is that the problem is solved in one step: the word is on the first page.
The worst case is n steps, where n is the number of pages: the word could be on the last page. The average case is n/2 steps: the word is halfway through. We can implement this algorithm as a linear search of an array of strings.

/**
 * Class that demonstrates search algorithms
 * @author Mark Guzdial
 * @author Barb Ericson
 **/
public class Searcher {

  /**
   * Implement a linear search through the list
   **/
  public static String linearFind(String target, String[] list) {
    for (int index = 0; index < list.length; index++) {
      if (target.compareTo(list[index]) == 0) {
        return "Found it!";
      }
    }
    return "Not found";
  }

  /** main for testing linearFind */
  public static void main(String[] args) {
    String[] searchMe = {"apple", "bear", "cat", "dog", "elephant"};
    System.out.println(linearFind("apple", searchMe));
    System.out.println(linearFind("cat", searchMe));
    System.out.println(linearFind("giraffe", searchMe));
  }
}

When we run this, we get what we would expect:

> java Searcher
Found it!
Found it!
Not found

But let's use the fact that dictionaries are already in sorted order. We can be smarter about how we search for a word, and do it in O(log n) time (log n = x where 2^x = n). Split the dictionary in the middle. Is the word before or after the page you're looking at? If after, look from the middle to the end (i.e., split the book again, but between the middle and the end). If before, look from the start to the middle (split halfway between start and middle). Keep repeating until you find the word or it couldn't possibly be there. This is a more efficient algorithm. In the best case, it's in the first place you look. In the average and worst cases, it's log n steps: keep dividing the n pages in half, and you'll have at most log n splits. Here's a simple (i.e., not the best possible, but illustrative) implementation of this kind of search, called a binary search. Add it to the Searcher class, then modify the main method as shown below.
/**
 * Method to use a binary search to find a target string in a
 * sorted array of strings
 */
public static String binaryFind(String target, String[] list) {
  int start = 0;
  int end = list.length - 1;
  int checkpoint = 0;

  while (start <= end) { // While there are more to search
    // find the middle
    checkpoint = (int)((start + end) / 2.0);
    if (target.compareTo(list[checkpoint]) == 0) {
      return "Found it!";
    } else if (target.compareTo(list[checkpoint]) > 0) {
      start = checkpoint + 1;
    } else if (target.compareTo(list[checkpoint]) < 0) {
      end = checkpoint - 1;
    }
  }
  return "Not found";
}

/**
 * Main for testing binaryFind
 */
public static void main(String[] args) {
  String[] searchMe = {"apple", "bear", "cat", "dog", "elephant"};
  System.out.println(binaryFind("apple", searchMe));
  System.out.println(binaryFind("cat", searchMe));
  System.out.println(binaryFind("giraffe", searchMe));
}

We start with the low-end marker start at the beginning of the list, and end at the last index of the list (the length of the list minus one). As long as there is something between start and end, we continue to search. We compute checkpoint as halfway between start and end. We then check to see if we found it. If so, we're done and we return. If not, we figure out whether we have to move start up past checkpoint or end down below checkpoint, and we continue searching. If we ever get through the whole loop, we didn't take the "Found it!" return, so we return that we didn't find it. To test this, we stuck in a line after assigning checkpoint:

System.out.println("Checking at: " + checkpoint + " start=" + start + " end=" + end);

Here's the same main running. With this additional statement, we can see how the code narrows in on "apple", then "bear", and then never finds "giraffe."

Welcome to DrJava.
> java Searcher
Checking at: 2 start=0 end=4
Checking at: 0 start=0 end=1
Found it!
Checking at: 2 start=0 end=4
Found it!
Checking at: 2 start=0 end=4
Checking at: 3 start=3 end=4
Checking at: 4 start=4 end=4
Not found

Here's a thought experiment: Imagine that you want to write a program that will generate hit songs for you. Your program will recombine bits of sounds that are some of the best riffs you've ever heard on various instruments, some 60 of them. You want to generate every combination of these 60 bits (some in, some out; some earlier in the song, some later). You want to find the combination that is less than 2 minutes 30 seconds (for optimal radio play time) and has the right amount of high and low volume combinations (and you've got a checkSound() function to test that). How many combinations are there? Let's ignore order for right now. Say that you've got three sounds: a, b, and c. Your possible songs are a, b, c, ab, ac, bc, and abc. Try it with two sounds or four sounds, and you'll see that the pattern is the same one we saw earlier with bits: for n things, every combination of include-or-exclude gives 2^n possibilities. (If we ignore the fact that there is an empty song, it's 2^n - 1.) Therefore, our 60 sounds will result in 2^60 combinations to run through our length and sound checks. That's 1,152,921,504,606,846,976 combinations. Let's imagine that we can do the checks in only a single instruction (unbelievable, of course, but we're pretending). On a 1.5-gigahertz computer, we can handle that many combinations in 768,614,336 seconds. Spell that out: that's 12,810,238 minutes, which is 213,504 hours, which is 8,896 days. That's 24 YEARS to run that program. Now, since Moore's Law doubles processor rates every 18 months, we can soon run that program in much less time. Only 12 years! If we cared about order, too (e.g., abc vs. cba vs. bac), the number of combinations has 63 zeroes in it. Finding the absolute optimal combination of just about anything is always time expensive. A running time of O(2^n) like this is not uncommon for these kinds of algorithms.
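The arithmetic behind that thought experiment is easy to reproduce. Here's a quick sketch (in Python, chosen here for its big-integer arithmetic; the counts themselves are language-independent) that confirms the figures above. The floor divisions round down slightly, which is why the hour and day counts come out one lower than the book's rounded values:

```python
combinations = 2 ** 60               # every include-or-exclude choice for 60 sounds
print(combinations)                  # 1152921504606846976

checks_per_second = 1_500_000_000    # the book's 1.5-gigahertz machine, one check per cycle
seconds = combinations // checks_per_second
minutes = seconds // 60
hours = minutes // 60
days = hours // 24
years = days // 365

print(seconds, minutes, hours, days, years)
# 768614336 seconds, 12810238 minutes, 213503 hours, 8895 days, about 24 years
```

Doubling the processor speed, as Moore's Law promises, only halves those 24 years; adding a single 61st sound doubles them right back.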
But there are other problems that seem like they should be doable in a reasonable time, but aren't. One of these is the famous Traveling Salesman Problem. Imagine that you're a salesperson, and you're responsible for a bunch of different clients, let's say 30, half the size of the optimization problem above. To be efficient, you want to find the shortest path on the map that will let you visit each client exactly once, and not more than once. The best-known algorithm that gives an optimal solution for the Traveling Salesman Problem is O(n!). That's n factorial. To calculate the factorial of a number n, you multiply n by (n - 1), then by (n - 2), all the way down to 1. The factorial of 5 is 5 * 4 * 3 * 2 * 1 = 120. There are algorithms that take less time to run and give a good path, but that path isn't guaranteed to be the shortest. For 30 cities, the number of steps to execute an O(n!) algorithm that finds the shortest path is 30!, or 265,252,859,812,191,058,636,308,480,000,000. Go ahead and run that on a 1.5-gigahertz processor; it won't get done in your lifetime. The really aggravating part is that the Traveling Salesman Problem isn't some made-up toy problem. There really are people who have to plan shortest routes in the world, and there are similar problems that are basically the same algorithmically, like planning the route of a robot on a factory floor. This is a big, hard problem. Computer scientists classify problems into three piles: Many problems (like sorting) can be solved with an algorithm whose running time has a complexity that's a polynomial, like O(n^2). We call these class P (P for Polynomial) problems. Other problems, like the optimization problem above, have known algorithms (solutions to those problems), but the solutions are so hard and big that we know we just can't solve them in a reasonable amount of time even for reasonable amounts of data. We call these problems intractable.
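The 30! figure is just as easy to verify as the 2^60 one, and it shows how much faster factorial growth outruns even generous hardware assumptions. A quick check (again a Python sketch, using the same one-step-per-cycle assumption as the book):

```python
import math

steps = math.factorial(30)   # steps for the optimal O(n!) route through 30 clients
print(steps)                 # 265252859812191058636308480000000

# At one step per cycle on the book's 1.5-gigahertz machine:
seconds = steps // 1_500_000_000
years = seconds // (60 * 60 * 24 * 365)
print(years)  # on the order of 5.6 quadrillion years - far beyond any lifetime
```

For comparison, 60! (the song problem, if order mattered) is larger still by dozens of orders of magnitude, which is why "just wait for faster hardware" never rescues an intractable algorithm.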
Still other problems, like Traveling Salesman, seem intractable, but maybe there's a solution in class P that we just haven't found yet. We call these class NP. One of the biggest unsolved problems in theoretical computer science is either proving that class NP and class P are completely distinct (i.e., we'll never solve Traveling Salesman optimally in polynomial time), or that class NP is within class P. You might wonder, "How can we prove anything about algorithms?" There are so many different languages, and different ways of writing the same algorithm. How can we positively prove something is doable or not doable? We can, it turns out. In fact, Alan Turing proved that there are even algorithms that can't be written! The most famous algorithm that can't be written is the solution to the Halting Problem. We've already written programs that can read other programs and write out other programs. We can imagine a program that can read one program and tell us things about it (e.g., how many print statements are in it). Can we write a program that will input another program (e.g., from a file) then tell us if the program will ever stop? Think about the input program having some complex while loops where it's hard to tell if the expression in the while loop is ever false. Now imagine a bunch of these, all nested within one another. Alan Turing proved that such a program can never be written. He used proof by absurdity. He showed that if such a program (call it H) could ever be written, you could try feeding that program to itself as input. Now H takes input, a program, right? What if you modified H (call it H2) so that if H would say "This one halts!" H2 would instead loop forever (e.g., while (true)). Turing showed that such a setup would announce that the program would halt only if it loops forever, and would halt only if it announces that it would loop forever. 
The really amazing thing is that Turing came up with this proof in 1936, almost ten years before the first computers were ever built! He defined a mathematical model of a computer called a Turing machine that he was able to make such proofs about before physical computers ever existed. Here's another thought experiment for you: Is human intelligence computable? Our brains are executing some process that enables us to think, right? Can we write down that process as an algorithm? And if a computer executes that algorithm, is it thinking? Is a human reducible to a computer? This is one of the big questions in the field of artificial intelligence. We can now answer the question of why Photoshop is faster than our programs in Java. First, Photoshop is compiled, so it runs at raw machine language speeds. But the other part is that Photoshop has algorithms that are smarter than what we're doing. For example, think about the programs where we searched for colors, like in Chromakey or in making hair red. We know that the background color and the hair color were clumped next to one another. What if, instead of linearly searching all pixels, you just searched outward from where the color was what you were looking for, until you didn't find that color anymore, having reached the boundary? That would be a smarter search. That's the kind of thing that Photoshop does.
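That "search outward from where the color is" idea can be sketched as a region-growing search: start at a seed pixel, collect connected pixels of the target color, and stop at the boundary, never touching the rest of the image. The sketch below is my own toy illustration on a small grid of color codes (Photoshop's actual algorithms are, of course, far more sophisticated):

```python
from collections import deque

def region_from_seed(grid, seed, target):
    """Collect the connected clump of `target` values starting at `seed`.

    Visits only the clump and its immediate boundary rather than scanning
    every cell: the 'smarter search' idea from the text."""
    rows, cols = len(grid), len(grid[0])
    seen = set()
    region = []
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in seen or not (0 <= r < rows and 0 <= c < cols):
            continue
        seen.add((r, c))
        if grid[r][c] != target:
            continue  # hit the boundary of the clump; stop expanding here
        region.append((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

grid = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],  # the lone 1 here is a separate clump; it is never visited
]
print(sorted(region_from_seed(grid, (0, 1), 1)))  # [(0, 1), (0, 2), (1, 1)]
```

On a large image where the target color occupies a small clump, this touches only the clump plus its border, instead of every pixel, which is exactly the win over a linear scan.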
Viewer component which uses a virtual trackball to view the data.

#include <Inventor/Qt/viewers/SoQtExaminerViewer.h>

SbMatrix mx;
mx = pCamera->orientation.getValue();
SbVec3f viewVec(-mx[2][0], -mx[2][1], -mx[2][2]);
SbVec3f camPos = pCamera->position.getValue();
float focDist = pCamera->focalDistance.getValue();
SbVec3f focalPt = camPos + (focDist * viewVec);

Viewer controls:
Left Mouse: Rotate the virtual trackball.
Middle Mouse or Ctrl + Left Mouse: Translate up, down, left, right.
Ctrl + Middle Mouse or Left + Middle Mouse: Dolly in and out (gets closer to and further away from the object).

Because the examiner viewer is encapsulated in a QMainWindow, the QMenuBar may receive the ALT key press event before the viewer does. If this happens, the ALT key won't have any effect on the viewer's viewing mode.

See also: SoQtFullViewer, SoQtViewer, SoQtComponent, SoQtRenderArea, SoQtWalkViewer, SoQtFlyViewer, SoQtPlaneViewer

Examples: CorrectTransp, QtCustomViewer, QtMultiViewer, QtReadFile, QtSpaceMouse, QtTreeView

Constrained viewing mode. Viewing mode.

Constructor which specifies the viewer type. Please refer to SoQt.

Restores the camera values. Reimplemented from SoQtFullViewer.

Sets the constrained viewing mode. This method is useful to associate a key combination with a constrained mode.

Sets whether the viewer is allowed to change the cursor over the render area window. When disabled, the cursor is not defined by the viewer. Reimplemented from SoQtViewer.

Sets the point-of-rotation feedback size in pixels (default 20 pixels).

Shows/hides the point-of-rotation feedback, which appears only while in viewing mode (default is off).

Sets the viewer into/out of seek mode (default OFF). Actual seeking will not happen until the viewer decides to, for example, on a mouse click. Note: Setting the viewer out of seek mode while the camera is being animated will stop the animation at the current location. Reimplemented from SoQtViewer.

Sets whether the viewer is turned on or off. When turned on, events are consumed by the viewer. When viewing is off, events are processed by the viewer's render area. This means events will be sent down to the scene graph for processing (i.e., picking can occur). Note that if the application has registered an event callback, it will be invoked on every message. Reimplemented from SoQtViewer.
A very simple multithreading parallel URL fetching (without queue)

I spent a whole day looking for the simplest possible multithreaded URL fetcher in Python, but most scripts I found are using queues or multiprocessing or complex libraries. Finally I wrote one myself, which I am reporting as an answer. Please feel free to suggest any improvement. I guess other people might have been looking for something similar.

Using Python Threading and Returning Multiple Results (Tutorial): You can start potentially hundreds of threads that will operate in parallel. It's easy to learn, quick to implement, and solved my problem very quickly. Returning values from threads is not directly possible and, as such, in this example we load up the queue with the URLs to fetch and the index for each job (as a tuple).

Python Multithreading Tutorial: Concurrency and Parallelism: The scripts in these threading examples have been tested with Python 3.6.4. Threading is one of the most well-known approaches to attaining Python concurrency. On every iteration, the worker calls self.queue.get() to try and fetch a URL from a thread-safe queue. Therefore, this code is concurrent but not parallel.

Saving the whole JSON response from 100 URLs and then processing them one by one also looks incorrect. Can someone suggest what would be the best way of doing it? My question is related to the discussion below.

The main example in the concurrent.futures documentation does everything you want, a lot more simply.
Plus, it can handle huge numbers of URLs by only doing 5 at a time, and it handles errors much more nicely. Of course, this module is only built in with Python 3.2 or later… but if you're using 2.5-3.1, you can just install the backport, futures, off PyPI. All you need to change from the example code is to search-and-replace concurrent.futures with futures and, for 2.x, urllib.request with urllib2. Here's the sample backported to 2.x, modified to use your URL list and to add the times (note that urllib2 responses use read(), not readall()):

import futures
import urllib2
import time

start = time.time()
urls = ["", "", "", "", ""]

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    conn = urllib2.urlopen(url, timeout=timeout)
    return conn.read()

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    for url, page in zip(urls, executor.map(lambda u: load_url(u, 60), urls)):
        print '"%s" fetched in %ss' % (url, (time.time() - start))

print "Elapsed Time: %ss" % (time.time() - start)

But you can make this even simpler. Really, all you need is:

def load_url(url):
    conn = urllib2.urlopen(url, timeout=60)
    data = conn.read()
    print '"%s" fetched in %ss' % (url, (time.time() - start))
    return data

with futures.ThreadPoolExecutor(max_workers=5) as executor:
    pages = executor.map(load_url, urls)

print "Elapsed Time: %ss" % (time.time() - start)

Queue – A thread-safe FIFO implementation: The Queue class implements a basic first-in, first-out container:

q = Queue.Queue()
for i in range(5):
    q.put(i)
while not q.empty():
    print q.get()

To show how to use the Queue class with multiple threads, we can create a very simplistic podcasting client. For our example we hard-code the number of threads to use and the list of URLs to fetch.

Re: Multithreaded URL Fetching, jwenting (Jul 5, 2007, in response to 807605): what you're suggesting is very much frowned upon by people running websites, as it tends to seriously bog down their servers and bandwidth.
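On Python 3 there is no backport needed, and the same thread-pool pattern can be demonstrated without real network access. Here's a sketch with the download simulated by a sleep; the URLs are placeholders, and you'd swap the fake fetch for something like urllib.request.urlopen(url).read() to do real requests:

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    """Stand-in for a real download; sleeps briefly to simulate latency."""
    time.sleep(0.1)
    return "contents of %s" % url

urls = ["url-%d" % i for i in range(5)]  # placeholder URLs

start = time.time()
with ThreadPoolExecutor(max_workers=5) as executor:
    # Submit all jobs, then collect results as each one completes.
    future_to_url = {executor.submit(fetch, url): url for url in urls}
    results = {}
    for future in as_completed(future_to_url):
        results[future_to_url[future]] = future.result()
elapsed = time.time() - start

print(len(results))   # 5
print(elapsed < 1.0)  # the five 0.1 s "downloads" overlapped: True
```

With five workers, the five simulated 0.1-second fetches finish in roughly 0.1 seconds total rather than 0.5, which is the whole point of the thread pool.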
I am now publishing a different solution: having the worker threads be non-daemon and joining them to the main thread (which means blocking the main thread until all worker threads have finished), instead of notifying the end of execution of each worker thread with a callback to a global function (as I did in the previous answer), since in some comments it was noted that such a way is not thread-safe.

import threading
import urllib2
import time

start = time.time()
urls = ["", "", "", "", ""]

class FetchUrl(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url

    def run(self):
        urlHandler = urllib2.urlopen(self.url)
        html = urlHandler.read()
        print "'%s' fetched in %ss" % (self.url, (time.time() - start))

for url in urls:
    FetchUrl(url).start()

# Join all existing threads to main thread.
for thread in threading.enumerate():
    if thread is not threading.currentThread():
        thread.join()

print "Elapsed Time: %s" % (time.time() - start)

How to make Python code concurrent with 3 lines: The entire code runs on Python 3.2+ without external packages. Python iterates over 1000 URLs and calls each of them; on my computer this occupies 2% of the CPU and spends most of its time waiting for I/O, so we import the thread-pool API from concurrent.futures. The API is very simple to use.

This script fetches the content from a set of URLs defined in an array. It spawns a thread for each URL to be fetched, so it is meant to be used for a limited set of URLs. Instead of using a queue object, each thread notifies its end with a callback to a global function, which keeps count of the number of threads running.
import threading
import urllib2
import time

start = time.time()
urls = ["", "", "", "", ""]
left_to_fetch = len(urls)

class FetchUrl(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.setDaemon(True)  # note: "self.setDaemon = True" would overwrite the method, not set the flag
        self.url = url

    def run(self):
        urlHandler = urllib2.urlopen(self.url)
        html = urlHandler.read()
        finished_fetch_url(self.url)

def finished_fetch_url(url):
    "callback function called when a FetchUrl thread ends"
    print "\"%s\" fetched in %ss" % (url, (time.time() - start))
    global left_to_fetch
    left_to_fetch -= 1
    if left_to_fetch == 0:
        # all urls have been fetched
        print "Elapsed Time: %ss" % (time.time() - start)

# spawn a FetchUrl thread for each url to fetch
for url in urls:
    FetchUrl(url).start()

Advances in Computer Vision and Information Technology: Here the user has to give the number of links, i.e., the exact depth to which the crawler should go. Download time can be significantly reduced if many downloads are done in parallel. Java provides easy-to-use classes for both multithreading and handling of lists. In this case, we allow the crawler to only fetch URLs from queue 1 and add to it.

Multithreading runs parallel tasks in a shared memory space. The advantage of a shared memory space is reduced overhead: both threads can read from the same object without creating a duplicate copy. However, precautions have to be taken when writing an object back into memory.

The Python 3 Standard Library by Example: The program reads one or more RSS feeds, queues up the enclosures to be downloaded, and processes several downloads in parallel using threads. It does not have enough error handling for production use, but the skeleton illustrates the approach. The example uses hard-coded values for the number of threads and the list of URLs to fetch.

Multithreading and Kotlin:
To see whether I can create and keep threads around cheaply, I'll run a simple test and spawn a million of them. Similar to threads, coroutines can run in parallel or concurrently, and can wait for one another.

Multithreaded Python: slithering through an I/O bottleneck: Multiple threads in Python is a bit of a bitey subject (not sorry), in that there are real advantages afforded by running multiple tasks in parallel. Compare the difference between fetching from main memory and sending a simple packet over the Internet; this is what can really speed up I/O-bound (as opposed to compute-bound, or CPU-bound) tasks.

Beginning Excel Services: Workbook URL (default): the WFE will use a hash based on the workbook URL. In the basic case, a request is taken out of the queue and assigned a thread. The number of requests that get executed in parallel is the same as the number of threads; during an I/O or networking operation (such as fetching a file), a thread is not utilized.

Multithreaded Programming and Synchronization. Part 1: Simple Multi-threaded Programming using Pthreads; Part 2: Multi-threaded/Parallel Programming using OpenMP; Part 3: OpenMP Solution for Queue Scheduling Problem in Task 1.3. Files for part 1 are in the part_1/ directory.

Comments:
- Just to add: in Python's case, multithreading is not native to the core due to the GIL.
- It still looks like fetching the URLs in parallel is faster than doing it serially. Why is that? Is it due to the fact that (I assume) the Python interpreter is not running continuously during an HTTP request?
- What about if I want to parse the content of those web pages I fetch? Is it better to do the parsing within each thread, or should I do it sequentially after joining the worker threads to the main thread?
- I made sure to claim that this was simplified "as far as possible", because that's the best way to make sure someone clever comes along and finds a way to simplify it even further just to make me look silly. :)
- I believe it's not easy to beat that!
:-) It's a great improvement since the first version I published here.
- Maybe we can combine the first 2 loops into one, by instantiating and starting the threads in the same for loop?
- @DanieleB: Well, then you have to change the list comprehension into an explicit loop around append, like this. Or, alternatively, write a wrapper which creates, starts, and returns a thread, like this. Either way, I think it's less simple (although the second one is a useful way to refactor complicated cases, it doesn't work when things are already simple).
- @DanieleB: In a different language, however, you could do that. If thread.start() returned the thread, you could put the creation and start together into a single expression. In C++ or JavaScript, you'd probably do that. The problem is that, while method chaining and other "fluent programming" techniques make things more concise, they can also break down the expression/statement boundary, and are often ambiguous. So Python goes in almost the exact opposite direction, and almost no methods or operators return the object they operate on. See en.wikipedia.org/wiki/Fluent_interface.
- I have a question regarding the code: does the print in the fourth line from the bottom really return the time it took to fetch the URL, or the time it takes to return the URL from the 'results' object? In my understanding the timestamp should be printed in the fetch_url() function, not in the result-printing part.
- @UweZiegenhagen imap_unordered() returns each result as soon as it is ready. I assume the overhead is negligible compared to the time it takes to make the HTTP request.
- Thank you, I am using it in a modified form to compile LaTeX files in parallel: uweziegenhagen.de/?p=3501
- This is by far the best, fastest and simplest way to go. I have been trying twisted, scrapy and others using both Python 2 and Python 3, and this is simpler and better.
- Thanks! Is there a way to add a delay between the calls?
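On the delay question in the last comment: one simple approach (my own sketch, not from the thread) is to stagger the thread launches, sleeping briefly before starting each worker so the requests don't all hit the server at once. The fetch here is simulated and the URLs are placeholders:

```python
import threading
import time

def fetch(url, results, lock):
    """Stand-in for the real download; records when each fetch started."""
    with lock:
        results.append((url, time.time()))

urls = ["url-%d" % i for i in range(3)]  # placeholder URLs
results = []
lock = threading.Lock()
threads = []

for url in urls:
    t = threading.Thread(target=fetch, args=(url, results, lock))
    t.start()
    threads.append(t)
    time.sleep(0.05)  # delay between launching successive fetches

for t in threads:
    t.join()

starts = sorted(ts for _, ts in results)
print(len(results))                    # 3
print(starts[-1] - starts[0] >= 0.04)  # launches were spaced out: True
```

The fetches still overlap once launched, so you keep the parallelism while being gentler on the remote server; for stricter rate limiting you'd use a semaphore or a token-bucket scheme instead.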
Internationalizing a React Application using Polyglot

Internationalizing an application always looks simple at first glance. It only means applying a translate function on strings to be translated, right? This function is generally a mapping function, linking an input key (the string to be translated) to the returned translated string. Sounds simple? What about handling plural forms? It doesn't consist of just adding an s at the end of the word (see mouse and mice, for instance). And what about some Slavic languages where there are several plural forms? For instance, in Russian, there is the singular form for a single element, the dual form for between 2 and 4 elements, and the plural form for 5 or more elements. It already becomes quite complex, and yet we've just talked about strings, not even date or currency formats. Fortunately, we can rely on some open-source i18n libraries. The most famous in the React ecosystem are probably node-polyglot and react-i18next. I won't cover the differences between these two libraries, as I don't know react-i18next well enough. Indeed, Polyglot was already present on the project we maintain and does the job perfectly. Why switch to another lib if everything works well and is simple to develop with? Just be pragmatic!

Discovering Polyglot

First, we need to grab the Polyglot dependency:

npm install node-polyglot

Be careful: the Polyglot we retrieve here is the Airbnb one, called node-polyglot. There is also a polyglot package, but it is not the one covered by this post.
A First Translation using Polyglot

Reading the official documentation, we can achieve our first translation very easily, using code similar to:

import Polyglot from 'node-polyglot';

const locale = 'fr';
const phrases = {
    'actions.fullscreen': 'Voir en plein écran',
};

const polyglot = new Polyglot({ locale, phrases });

console.log(polyglot.t('actions.fullscreen')); // Voir en plein écran

We instantiate a Polyglot instance passing it two properties:

- locale: the current locale, used only for pluralization,
- phrases: a list of translations.

Then, translating a string is as simple as calling the t function and passing it the string to be translated.

String Interpolation

Let's imagine we need to display a customized welcome message to our users. We would need a username variable in our string. That would be especially useful to handle punctuation properly. Indeed, assuming we want to display Welcome Jonathan! once logged in, we may use code like:

const phrases = {
    "home.welcome": "Welcome",
};

const polyglot = new Polyglot({ locale, phrases });

const login = "Jonathan";

// Welcome Jonathan!
console.log(polyglot.t("home.welcome") + login + "!");

It works for English, but not in French, where there is always a space before exclamation points. So we need to embed the login into our translation. Polyglot supports string interpolation, easing our job:

const phrases = {
    "home.welcome": "Welcome %{login}!",
};

const polyglot = new Polyglot({ locale, phrases });

const login = "Jonathan";

// Welcome Jonathan!
console.log(polyglot.t("home.welcome", { login }));

Polyglot replaces all instances of %{variableName} by the value of variableName, allowing us to embed our punctuation directly into our translated strings. In French, it would have been Bienvenue %{login} !.

Handling Plural Forms

As explained above, handling plural forms is not as trivial as it may sound.
Fortunately, Polyglot handles it natively:

```js
const phrases = {
    numberChildren: '%{smart_count} child |||| %{smart_count} children',
};

const polyglot = new Polyglot({ locale: 'fr', phrases });

// 1 child
console.log(polyglot.t('numberChildren', { smart_count: 1 }));

// 3 children
console.log(polyglot.t('numberChildren', { smart_count: 3 }));
```

Note the |||| symbol, which is the plural form separator. As Polyglot is configured in French (via the locale attribute), it splits the translation string on this symbol, considering the first part as the singular form and the last one as the plural form. If we need to support Russian, for instance, we can have several |||| separators in the same translated string. The number of items is read from the special smart_count variable.

That was the getting started of Polyglot. But how do we use it in a real-world application, where we need it in several files?

Using Polyglot in a Real-World Application

There is generally a huge gap between a straightforward getting-started tutorial and the integration into a real-world application. Polyglot is no exception to the rule. Just following the tutorial, how can we use it in all our React components without instantiating it several times? The quick and dirty solution would be to declare it globally via a global.translate property. Yet that is not fully satisfying, as we would inevitably face trouble using global variables.

Explaining how to implement Polyglot in a real-world application requires a real-world application. I bootstrapped a really basic video list application. You can grab the whole source code on GitHub, each commit bringing an improvement compared to the previous one. Our sample application contains three components: a <VideoList /> displaying a list of <Video /> components, each one having a <Metas /> component (for duration and number of views). The code is basic React, and I'll assume you have basic knowledge of this awesome lib.
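The |||| mechanism can be pictured with a small framework-free sketch. The pluralization rules below are deliberately simplified (English-like: only 1 is singular; French-like: 0 and 1 are singular) and only illustrate the idea; the real node-polyglot ships a full rule table per language family:

```javascript
// Simplified sketch of Polyglot-style plural selection, for illustration only.
function pluralIndex(locale, count) {
    if (locale === 'fr') {
        // In French, 0 and 1 take the singular form.
        return count > 1 ? 1 : 0;
    }
    // English-like default: only exactly 1 is singular.
    return count === 1 ? 0 : 1;
}

function translatePlural(phrase, locale, smartCount) {
    // Split on the |||| separator, then pick the form for this count.
    const forms = phrase.split('||||').map(form => form.trim());
    const form = forms[Math.min(pluralIndex(locale, smartCount), forms.length - 1)];
    return form.replace(/%\{smart_count\}/g, String(smartCount));
}

const childPhrase = '%{smart_count} child |||| %{smart_count} children';
console.log(translatePlural(childPhrase, 'fr', 1)); // 1 child
console.log(translatePlural(childPhrase, 'fr', 3)); // 3 children
console.log(translatePlural(childPhrase, 'fr', 0)); // 0 child (singular in French)
```

A language with more plural forms, such as Russian, would simply provide more ||||-separated parts and a richer pluralIndex rule.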
That's why I won't cover the setup part and will focus on the internationalization part.

The Naive Solution

The first naive solution would be to declare a Polyglot instance once, in our top-level script, and then to pass it manually via props to all children. For instance, we may write the following code:

```js
import React from 'react';
import ReactDOM from 'react-dom';
import Polyglot from 'node-polyglot';

import videos from './data';
import messages from './messages';
import VideoList from './VideoList';

const locale = window.localStorage.getItem('locale') || 'fr';

const polyglot = new Polyglot({
    locale,
    phrases: messages[locale],
});

const translate = polyglot.t.bind(polyglot);

ReactDOM.render(
    <VideoList videos={videos} translate={translate} />,
    document.getElementById('root')
);
```

```js
export const VideoList = ({ translate, videos }) => (
    <div className="videos-list">
        {videos.map(video => (
            <Video key={video.title} video={video} translate={translate} />
        ))}
    </div>
);

export const Video = ({ video, translate }) => (
    <div className="video">
        <img src={video.picture} alt={video.title} />
        <div className="infos">
            <h2 className="title">{video.title}</h2>
            <Metas metas={video.metas} translate={translate} />
        </div>
    </div>
);

export const Metas = ({ metas, translate }) => (
    <div className="video-metas">
        <div className="duration">
            {translate('minutes', { smart_count: metas.duration })}
        </div>
        <div className="views">
            {translate('views', { smart_count: metas.views })}
        </div>
    </div>
);
```

This naive implementation works, but it is incredibly cumbersome and thus error-prone, due to the numerous translate prop transfers. This basic sample handles only three levels of components. What about a more complex application with sometimes a dozen levels of depth? We need a better solution, especially as only the <Metas> component actually needs the translate prop.

What About Using Context?

To Use or Not To Use Context?

Fortunately, React provides a solution.
Reading the official documentation:

"In some cases, you want to pass data through the component tree without having to pass the props down manually at every level. You can do this directly in React with the powerful 'context' API."

That's exactly what we need. Yet, a few lines later on the same page, we can read:

"If you want your application to be stable, don't use context. It is an experimental API and it is likely to break in future releases of React."

Not really encouraging. So it perfectly fits our need but is not recommended. What should we do? When facing such questions, the best solution is to refer to the developers' collective intelligence. Dan Abramov, a highly skilled developer you should follow if you are interested in the React ecosystem, shared a code snippet on Twitter:

```js
function shouldIUseReactContextFeature() {
    if (amIALibraryAuthor() && doINeedToPassSomethingDownDeeply()) {
        // A custom <Option> component might want to talk to its <Select>.
        // This is OK but note that context is experimental API and doesn't update
        // correctly in some cases so you might want to roll your own subscriptions.
        return amIFineWith(API_CHANGES && BUGGY_UPDATES);
    }

    if (myUseCase === 'theming' || myUseCase === 'localization') {
        // In apps, context can be used for "global" variables that rarely change.
        // If you insist on using it, provide a higher order component.
        // This way when we change the API, you will only need to update one place.
        return iPromiseToWriteHOCInsteadOfUsingItDirectly();
    }

    if (libraryAsksMeToUseContext()) {
        // Ask them to provide a higher order component!
        throw new Error('File an issue with this library.');
    }

    // Good luck.
    return yolo();
}
```

So, using context for localization is fine, but only with a Higher-Order Component (HOC). Let's focus on how to use context for now.
Using context consists of creating a Provider component, which should fulfill three different requirements:

- declare the data structure of the data passed through context,
- fill the context data,
- render children components.

Writing our First Provider

In our case, we are going to pass the translate function to the context, but also the locale as a string. Indeed, we may need it if we want to localize dates using moment, for instance. So, let's declare our new context data types:

```js
import { Children, Component, PropTypes } from 'react';

class I18nProvider extends Component {}

I18nProvider.childContextTypes = {
    locale: PropTypes.string.isRequired,
    translate: PropTypes.func.isRequired,
};

export default I18nProvider;
```

Now we need to fill the new context attributes, via the getChildContext method. That's where we need to instantiate Polyglot:

```js
// [...]
class I18nProvider extends Component {
    getChildContext() {
        const { locale } = this.props;

        const polyglot = new Polyglot({
            locale,
            phrases: messages[locale],
        });

        const translate = polyglot.t.bind(polyglot);

        return { locale, translate };
    }
}
// [...]
```

We use the locale prop passed to the provider instead of retrieving it directly from local storage. Indeed, this logic should not be embedded in such a "dumb" provider. We can now use our provider in our application. So, let's change our index.js render script:

```js
const locale = window.localStorage.getItem('locale') || 'fr';

ReactDOM.render(
    <I18nProvider locale={locale}>
        <VideoList videos={videos} />
    </I18nProvider>,
    document.getElementById('root')
);
```

We wrapped the <VideoList> component in <I18nProvider>, adding a locale property to it. That's the only change required. All the instantiation logic is moved to the provider, keeping our code really readable.

The <I18nProvider> component has no render method yet. This method should just act as a proxy and render its child component:

```js
import { Children } from 'react';
// [...]
```
```js
// [...]
class I18nProvider extends Component {
    render() {
        return Children.only(this.props.children);
    }
}
// [...]
```

But We Promised Dan to Write an HOC...

If we remember Dan's tweet correctly, we promised him to write a Higher-Order Component (aka HOC). But what is it?

"A higher-order component (HOC) is a function that takes a component and returns a new component."

In our case, an HOC is useful as it allows us to map our base component to our new context attributes. It returns a new component passing translate and locale as props. Such an HOC would look like:

```js
import React, { PropTypes } from 'react';

export const translate = BaseComponent => {
    const TranslatedComponent = (props, context) => (
        <BaseComponent
            translate={context.translate}
            locale={context.locale}
            {...props}
        />
    );

    TranslatedComponent.contextTypes = {
        translate: PropTypes.func.isRequired,
        locale: PropTypes.string.isRequired,
    };

    return TranslatedComponent;
};

export default translate;
```

We map our component to the context thanks to the contextTypes property. It takes the same data structure as our I18nProvider.childContextTypes. Then we can retrieve the context through the second argument of our functional component.

Now that we have an HOC, we just need to remove all the former translate props from components and wrap the <Metas> component with the translate function:

```js
import translate from './translate';

export const Metas = ({ metas, translate }) => (
    // [...]
);

export default translate(Metas);
```

Our internationalization is now quite straightforward, just requiring a call to our translate HOC to access Polyglot. And we also learned how to create a provider. Level up!

The final code is available on GitHub as the penultimate commit. The HEAD of the repository uses recompose to simplify the code slightly.
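Stripped of React specifics, the HOC pattern is just function composition over components. Here is a minimal framework-free sketch, with "components" modeled as plain functions from props to strings; all names and shapes are invented for illustration:

```javascript
// Framework-free sketch of the HOC idea: a function that takes a "component"
// (a plain function from props to a string here) and returns a new component
// with extra props injected from a context object.
const withTranslate = context => BaseComponent => props =>
    BaseComponent({ ...props, translate: context.translate, locale: context.locale });

// A fake context, standing in for what the provider would expose.
const fakeContext = {
    locale: 'fr',
    translate: key => ({ 'home.welcome': 'Bienvenue !' }[key] || key),
};

// A "component" that needs the translate prop.
const Welcome = ({ translate }) => translate('home.welcome');

// The wrapped component no longer needs translate passed down by its parent.
const TranslatedWelcome = withTranslate(fakeContext)(Welcome);

console.log(TranslatedWelcome({})); // Bienvenue !
```

This is why the HOC keeps intermediate components clean: only the wrapping function knows where translate comes from, so changing the context API later means updating one place.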
https://marmelab.com/blog/2017/05/16/internationalizing-react-application-using-polyglot.html
(I'm hoping not) I should be ok too - will do few minor updates to the client api plus fix few other minor issues Cheers, Sergey Keep in mind, I'm then hoping to do a 2.2.1 fairly quickly (4 weeks RE: [VOTE] Release Apache CXF 2.2 +1 -Original Message- From: Daniel Kulp [mailto:dk...@apache.org] Sent: 15 March 2009 18:58 To: dev@cxf.apache.org Subject: [VOTE] Release Apache CXF 2.2 This is a vote to release CXF 2.2 This release is a major step forward for CXF with several new features including: * Re: Working towards a DOSGi 1.0 release Hi, I'm quite keen to emded a JAXRS component into DOSGI as I reckon we now have all the pieces in place (proxy based client api support, and Benson's Aegis provider) so it should, fingers crossed, be a fairly straighforward exercise - but then you never know what could actually happen at the Re: Working towards a DOSGi 1.0 release ? If its a bit further out, why not do a DOSGi 1.0 release based on CXF 2.2 and then do another 1.1 release with the JAXRS stuff as soon as 2.2.1 is out? Cheers, David 2009/3/26 Sergey Beryozkin sbery...@progress.com: Hi, I'm quite keen to emded a JAXRS component into DOSGI as I reckon we now Re: JSON in CXF Hi Gary I ended up writing my own converter for JSON that uses its own annotations seperate from JAXB. It's a pretty quick implementation, and only does what I need it to do. It depends on the JSON objects from json.org, which are also included in Jettison under a different package. The Re: JSON in CXF for GSoC, if it's not too late to submit projects. Rewriting Sun libraries? Working with JSON? Hells yeah. Cheers, Gary -Original Message- From: Sergey Beryozkin [mailto:sbery...@progress.com] Sent: 06 April 2009 12:01 To: dev@cxf.apache.org Subject: Re: JSON in CXF Hi Gary I ended up Re: JSON in CXF you want to give this to someone from GSoC? This could actually be a pretty sweet project. 
-Original Message- From: Sergey Beryozkin [mailto:sbery...@progress.com] Sent: 07 April 2009 12:14 To: dev@cxf.apache.org Subject: Re: JSON in CXF Hi Gary If you give me until next week I should Re: svn commit: r763339 - in /cxf/trunk: rt/frontend/jaxrs/src/main/java/org/apache/cxf/jaxrs/servlet/CXFNonSpringJaxrsServlet.java Thanks Dan, indeed I forgot to add one file - will do shortly cheers, Sergey - Original Message - From: dk...@apache.org To: comm...@cxf.apache.org Sent: Wednesday, April 08, 2009 7:25 PM Subject: svn commit: r763339 - in /cxf/trunk: Re: JAX-RS test failure with IBM JDK... Hi Dan I'm working on trying to get things building/passing with the IBM JDK again. (mostly to attempt to get the Progress AIX builds to actually succeed) I'm getting this failure in JAX-RS: testSchemeHostPortQueryFragment(org.apache.cxf.jaxrs.impl.UriBuilderImplTest) Time elapsed: 0.003 Re: svn commit: r763742 - /cxf/trunk/systests/src/test/java/org/apache/cxf/systest/jaxrs/GenericHandlerWriter.java When running system tests, I often do mvn test -Dtest=JAXRS*, to save the time, if I'm confident no other CXF functionality has been affected by the current JAXRS related changes. I always do now 'mvn clean install' when building frontend/jaxrs, as I've already been bitten by Eclipse and Maven Re: Distributed OSGi Discovery implementation in CXF Hi, Hi all, Over the past while I have done some experimentation around a possible implementation of the Distributed OSGi RFC 119 Discovery service. The current CXF-DOSGi codebase only contains the Distribution Software (DSW) component, which means that you need to configure the location of
https://www.mail-archive.com/search?l=dev@cxf.apache.org&q=from:%22Sergey+Beryozkin%22
JScript .NET, Part VII: Consuming add from ASP.NET: Creating a Proxy - Doc JavaScript

Let's create a proxy for the add Web service from Column 112. We put all files in d:\aspDemo, including the Web service definition, simpleCalc.asmx. Now you need to open the IIS control window. In Windows XP, click Start -> Control Panel -> Performance and Maintenance -> Administrative Tools -> Internet Information Services. Expand the menu by clicking the + button until you get the Default Web Sites entry.

Let's create a virtual directory now under Default Web Sites. Right-click the Default Web Sites entry and pick New Virtual Directory. A wizard will guide you through two entries and a set of options. The first entry is the alias of this Web site. The alias we chose is reflected in the right window of the IIS control panel. You can check now that you can indeed see the add Web service (assuming you put simpleCalc.asmx in d:\aspDemo).

Now you can start creating the proxy to the add Web service. It's a two-step process. The first step creates the proxy code in JScript .NET and adds your class to the specified namespace. The executable that does this is wsdl.exe. Your .NET Framework should support this executable by including its directory in your path. Open a Command Prompt window and cd (change directory) to d:\aspDemo. Type wsdl and verify you get the help for this executable. Type the full command now (in one line):

wsdl /l:js /namespace:calcService /out:calcProxy.js

The last entry on this line is the input to the wsdl command: the full path to the Web service definition. The other switches are:

/l: specifies the language of the generated proxy file. Put js for JScript.
/namespace: specifies to which namespace you want to add your new simpleCalc class (calcService in our case).

The following Command Prompt window shows the content of simpleCalc.asmx and the echo of the wsdl command.

Next: How to create a .dll file
Produced by Yehuda Shiran and Tomer Shiran
Created: June 30, 2002
Revised: June 30, 2002
URL:
http://www.webreference.com/js/column113/4.html
import "github.com/spf13/hugo/tpl/strings"

init.go regexp.go strings.go truncate.go

Namespace provides template functions for the "strings" namespace. Most functions mimic the Go stdlib, but the order of the parameters may be different to ease their use in the Go template system.

New returns a new instance of the strings-namespaced template functions.

Chomp returns a copy of s with all trailing newline characters removed.

Contains reports whether substr is in s.

ContainsAny reports whether any Unicode code points in chars are within s.

CountRunes returns the number of runes in s, excluding whitespace.

CountWords returns the approximate word count in s.

func (ns *Namespace) FindRE(expr string, content interface{}, limit ...interface{}) ([]string, error)

FindRE returns a list of strings that match the regular expression. By default all matches will be included. The number of matches can be limited with an optional third parameter.

HasPrefix tests whether the input s begins with prefix.

HasSuffix tests whether the input s ends with suffix.

Replace returns a copy of the string s with all occurrences of old replaced with new.

ReplaceRE returns a copy of s, replacing all matches of the regular expression pattern with the replacement text repl.

SliceString slices a string by specifying a half-open range with two indices, start and end. 1 and 4 creates a slice including elements 1 through 3. The end index can be omitted; it defaults to the string's length.

Split slices an input string into all substrings separated by delimiter.

Title returns a copy of the input s with all Unicode letters that begin words mapped to their title case.

ToLower returns a copy of the input s with all Unicode letters mapped to their lower case.

ToUpper returns a copy of the input s with all Unicode letters mapped to their upper case.

Trim returns a string with all leading and trailing characters contained in cutset removed.

TrimPrefix returns s without the provided leading prefix string. If s doesn't start with prefix, s is returned unchanged.

TrimSuffix returns s without the provided trailing suffix string. If s doesn't end with suffix, s is returned unchanged.

Package strings imports 13 packages and is imported by 1 package. Updated 2017-05-22.
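The half-open SliceString range and FindRE's optional match limit can be sketched with the standard library alone. This is a behavioural sketch of the documented semantics, not Hugo's implementation, and the helper names are mine:

```go
package main

import (
	"fmt"
	"regexp"
)

// sliceString mimics the documented SliceString semantics: a half-open
// [start, end) range over runes, so (1, 4) keeps elements 1 through 3.
// A negative end stands in for "omitted" and defaults to the full length.
func sliceString(s string, start, end int) string {
	r := []rune(s)
	if end < 0 || end > len(r) {
		end = len(r)
	}
	if start < 0 {
		start = 0
	}
	if start > end {
		start = end
	}
	return string(r[start:end])
}

// findRE mirrors the documented FindRE: all matches by default, with an
// optional cap on the number of matches (limit < 0 means no limit).
func findRE(expr, content string, limit int) ([]string, error) {
	re, err := regexp.Compile(expr)
	if err != nil {
		return nil, err
	}
	return re.FindAllString(content, limit), nil
}

func main() {
	fmt.Println(sliceString("Hello", 1, 4)) // prints "ell": runes 1 through 3
	matches, _ := findRE("o+", "foo boo", -1)
	fmt.Println(matches) // prints "[oo oo]"
}
```

Working on runes rather than bytes keeps the slice safe for multi-byte UTF-8 input, which is why the sketch converts to `[]rune` first.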
https://godoc.org/github.com/spf13/hugo/tpl/strings
Red Hat Bugzilla – Bug 166067
Can not write a buffer beyond 2 gigabytes in a C program (if kernel version >= 2.6.11)
Last modified: 2007-11-30 17:11:11 EST

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.7.8) Gecko/20050718 Firefox/1.0.4 (Debian package 1.0.4-2sarge1)

Description of problem:
After updating to a kernel version >= 2.6.11 (so FC3 has the same problem), in a C program, when I use write() or fwrite() (or some other low-level C write function) to write a buffer of 2147483648 bytes or more, the program fails with the error message: invalid argument. The solution I use now is just downgrading to a 2.6.10 kernel. My Athlon64 box config: 4 gigabytes of memory with an Athlon64 3500+.

Sample program:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

int main (int argc, char *argv[])
{
    int fd;
    size_t size_to_wrt;
    void *buffer_to_wrt;
    ssize_t size_wrten;

    size_to_wrt = 2147483648; /* can not go beyond 2147483647 bytes */
    buffer_to_wrt = malloc(size_to_wrt);
    if (buffer_to_wrt == NULL)
        printf("Malloc error\n");
    fd = open(argv[1], O_CREAT | O_RDWR, S_IRUSR|S_IWUSR);
    size_wrten = write(fd, buffer_to_wrt, size_to_wrt);
    if (size_wrten != size_to_wrt)
        printf("Error message: %s\n", strerror(errno));
    free(buffer_to_wrt);
    close(fd);
    return 0;
}

Version-Release number of selected component (if applicable): kernel version >= 2.6.11

How reproducible: Always

Steps to Reproduce:
1. An Athlon64 PC with more than 2 GB of memory.
2. Compile the sample C program (cc -o test_64bit_write test_64bit_write.c).
3. Execute with: test_64bit_write foo

Actual Results: error message "test_64bit_write: invalid argument".

Expected Results: Program passes.

Additional info:

Is there any genuine need for this? This change was a deliberate upstream decision, made after one or two code paths in the kernel were found which didn't handle >=2GB IOs correctly. Our policy is to follow upstream behaviour in all such cases.

In much industrial code we do need some kind of IO stream copy between memory and a storage device; especially for a 64-bit program this kind of application will arrive soon. Certainly we can loop the I/O in buffers of 2 GB, but this should not have to be done in a high-level application; it belongs in some low-level system IO management layer. Thanks for any opinion. Anyway, the error message should be more explicit --- I had spent hours to find this.

POSIX simply does not provide such fine-grained error codes; EINVAL is the closest one available. Unix in general just doesn't have a mechanism for reporting errors with such specificity. Our behaviour in this case is going to continue to match upstream. It's certainly worth asking on the linux kernel mailing lists if you think this restriction is inappropriate.
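The "loop the I/O in 2 GB buffers" workaround mentioned above is only a few lines in any language. Here is a sketch in Python (the helper name `write_all` is mine; the same loop applies to write(2) in C, and nothing here is part of the original bug report):

```python
import os
import tempfile

def write_all(fd, buf, chunk=2**31 - 1):
    """Write buf to fd in chunks no larger than `chunk` bytes.

    Loops until every byte is written, so a request larger than the
    kernel's per-call cap still completes; also handles short writes.
    """
    view = memoryview(buf)
    total = 0
    while total < len(view):
        written = os.write(fd, view[total:total + chunk])
        total += written
    return total

# Demonstrate with a tiny chunk size so the loop runs several times.
fd, path = tempfile.mkstemp()
try:
    n = write_all(fd, b"x" * 1000, chunk=64)
finally:
    os.close(fd)
    os.remove(path)
```

The `memoryview` avoids copying the buffer on each slice, which matters when the buffer really is gigabytes in size.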
https://bugzilla.redhat.com/show_bug.cgi?id=166067
In this walkthrough, I will demonstrate how to convert an existing ASP.NET/jQuery application that consumes data from a Windows Communication Foundation (WCF) service to Silverlight. Here are some topics that we will cover: - How to use the Visual Studio 2010 Silverlight Designer - XAML and Silverlight control concepts - How WCF services can be integrated into Silverlight applications - Silverlight data binding techniques - How to make asynchronous calls to services - How to work with cross domain services - Similarities between ASP.NET and Silverlight applications Step 1: Explore the ASP.NET Web Forms application Let’s take a look at the code that we’ll be migrating. To start exploring the ASP.NET Web Forms application, follow the following steps: - Open Visual Studio 2010, and from the File menu and select Open Project. The Open Project dialog box is displayed. - Open the following Visual Studio Solution file from the downloaded offline kit: “Firestarter\Labs\02 – ASP.NET\Source\Starting Point\VB\CustomerViewer.sln” - The following projects are available in the application: - CustomerService.Model – This project contains the entities and data repository classes that are used to access the AdventureWorks LT database. - CustomersService – This project is a WCF service application that displays the entities to various applications. - CustomerViewer – This is a Windows Forms project that takes data from a WCF service. - CustomerViewer.Web – This is an ASP.NET Web Forms project that uses jQuery to make RESTful calls to a WCF service. - In Solution Explorer, right-click CustomerService.svc in the CustomersService project. - From the popup menu select View in Browser. This will start a local WCF server and show a test page. - Go back to Visual Studio application. - In Solution Explorer, right-click the CustomerViewer project. - From the popup menu select Set as StartUp Project. - Press F5 to run the application. 
The first time the application runs there will be a short delay before data is loaded. - Select a customer from the drop-down list. The details of the customer are displayed in the form, allowing the data to be updated or deleted using the AJAX techniques. - Go back to the Visual Studio application. - To see the Entity Framework 4 model, in Solution Explorer, double-click the AdventureWorksLT.edmx file in the CustomerService.Model project. The entity model contains a Customer object that is used by the ASP.NET application. - From the Repository folder, open the CustomerRepository.vb page and review the code that interacts with the entity model. The RepositoryBase class is responsible for all communication with Entity Framework and acts as a reusable repository layer in the application. - Open the ICustomerService.vb page. The methods in the page are used to load the customer objects and handle the update and delete operations. Some of the operations support RESTful calls. Note: The ASP.NET project currently uses a WCF service proxy object as well as jQuery to communicate with the different service operations. These service calls are forwarded to the CustomerRepository class. The WCF services work well in environments where data must be exposed to different types of clients without requiring a specific technology or framework. This application uses WCF services to promote data reuse, allow different types of clients to consume data, and provide a standards-compliant way to access the data. - In the CustomerViewer.Web project, right-click Default.aspx. From the popup menu select View Code.
Review the code and note the following: - A WCF service proxy is used to call a service that supplies customer data - If an error occurs loading customer data, a script is sent to the client and used to display an alert - Open the Default.aspx page and note the following: - A stylesheet named Default.css is used to add CSS styles into the page - A script named Default.js is loaded by the page - div tags are used to arrange HTML controls in the page - Open the Default.js page review the jQuery code and note the following features: - jQuery selectors are used to locate controls in the DOM and access their values - jQuery AJAX functions such as getJSON are used to communicate with a cross domain WCF service Step 2: Migrate the ASP.NET application to Silverlight Now let’s migrate the application to Silverlight. We’ll create a new Silverlight project, work with XAML, create a WCF service proxy to interact with the service, and design a user interface that mirrors the existing ASP.NET user interface. - To add a new Silverlight Application, right-click the application name and add a new project. The Add New Project dialog box is displayed. - Browse to the Silverlight node and select the Silverlight Application template. - Enter the name for the project as SilverlightCustomerViewer and click OK. The New Silverlight Application dialog box is displayed. - Select <New Web Project> from the drop-down list options and confirm that the New Web project name is displayed as SilverlightCustomerViewer.Web. This project will be used to host the Silverlight application in a web page. - Click OK to proceed. The MainPage.xaml page of the SilverlightCustomerViewer project is displayed. 
- Replace the <UserControl> tag with the following code:

<UserControl x:Class="SilverlightCustomerViewer.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d"
    d:DesignHeight="545" d:DesignWidth="550"
    Width="545" Height="550">
    <Grid x:Name="LayoutRoot" Background="White">
    </Grid>
</UserControl>

Note: The d:DesignHeight and d:DesignWidth attributes control the size of the design surface in design mode. However, they don't have any effect at runtime. The Height and Width attributes constrain the size of the Silverlight screen at runtime. If you don't define the Height and Width attributes, Silverlight will automatically fill the entire area of its container.

- From the toolbox, drag and drop 9 TextBlock controls, 1 ComboBox control, 5 TextBox controls and 2 Button controls to the designer surface and arrange them as the following:

Note: The TextBlock control is similar to the Label control in ASP.NET. The Silverlight Toolkit provides a Label control that can be used in Silverlight applications. You can download the Silverlight Toolkit from here.
- Right-click the Rectangle control, and from the popup menu select Order, and then select Send to Back. - Resize and arrange the Rectangle control as shown in the following figure: - From the toolbox, drag and drop a Border control to the design surface. - Select the Border control and change its Background property to “White” and its BorderBrush property to “White”. - From the Toolbox, drag and drop a TextBlock control into the Border control. - Select the TextBlock control and change the Text property to a value of Customer Details. - Right-click the Customer Details TextBlock and from the popup menu select Reset Layout, and then select Size. - The user interface should look like the following: Step 3: Call a WCF Service and Bind Data Now let’s create a WCF service proxy that can be used to call an existing WCF service. We’ll also use a clientaccesspolicy.xml file to handle cross domain issues and bind data to controls. - Right-click on the SilverlightCustomerViewer project and then select Add Service Reference. Add Service Reference dialog box is displayed. - Click Discover to browse and locate the WCF services. - To expand the CustomerService.svc service, click on the icon next to CustomerService.svc service. Drill down to browse to the ICustomerService contract. Click the contract name and ensure that it has several service operations available. - Enter the namespace as CustomerService.Proxies. - To create the WCF service proxy, click OK. - Add a new Customer class to the SilverlightCustomerViewer project. - Add the following namespace of the new class: Namespace CustomerService.Proxies Note: This namespace is added so as to match the namespace of the new class with that of the namespace generated by the WCF proxy. 
- To display the FullName property in the ComboBox control, add the following code in the Customer class:

Partial Public Class Customer
    Public ReadOnly Property FullName() As String
        Get
            Return FirstName & " " & LastName
        End Get
    End Property
End Class

- To import the proxy namespace, open the MainPage.xaml.vb page and add the following code:

Imports CustomerService.Proxies

- To hook the Loaded event to an event handler, add the following code within the constructor:

AddHandler Loaded, AddressOf MainPage_Loaded

- To use the WCF service proxy and make an asynchronous data request, add a MainPage_Loaded method with the following code:

Private Sub MainPage_Loaded(ByVal sender As Object, ByVal e As RoutedEventArgs)
    Dim proxy = New CustomerServiceClient()
    AddHandler proxy.GetCustomersCompleted, AddressOf proxy_GetCustomersCompleted
    proxy.GetCustomersAsync()
End Sub

- Add the following method and associated code to handle the asynchronous callback, which will be made when the data from the WCF service is returned to the Silverlight application:

Private Sub proxy_GetCustomersCompleted(ByVal sender As Object, ByVal e As GetCustomersCompletedEventArgs)
    CustomersComboBox.ItemsSource = e.Result
End Sub

Note: Once the WCF service proxy returns data it can be accessed through the GetCustomersCompletedEventArgs object's Result property, which is typed as an ObservableCollection of Customer. This collection is assigned to the ItemsSource property of the ComboBox.

- Open the MainPage.xaml page.
- Select the TextBlock control next to Customer ID and select Properties from the menu.
- Remove the text from the Text property, click the icon next to the property, and from the popup menu select Apply Data Binding. The Data Binding property dialog box is displayed.
- Click ElementName and then select CustomersComboBox to set the ComboBox as the data binding source as shown in the following figure:
- Click the Path area and select SelectedItem from the properties as shown in the following figure:
- In the XAML editor, locate the TextBlock control modified in the previous step and change the Text property value to the following:

Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.CustomerID}"

- Similarly add data bindings to all of the TextBox controls in the designer surface. For this you'll have to modify the Text property of each control within the XAML as done in the previous step to specify the appropriate property of the SelectedItem to bind to. To set the properties for each TextBox, add the following XAML code:

<TextBox Height="23" HorizontalAlignment="Left" Margin="158,225,0,0" VerticalAlignment="Top" Width="219" Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.FirstName, Mode=TwoWay}" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="158,270,0,0" VerticalAlignment="Top" Width="219" Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.LastName, Mode=TwoWay}" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="158,316,0,0" VerticalAlignment="Top" Width="219" Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.CompanyName, Mode=TwoWay}" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="158,366,0,0" VerticalAlignment="Top" Width="219" Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.EmailAddress, Mode=TwoWay}" />
<TextBox Height="23" HorizontalAlignment="Left" Margin="158,416,0,0" VerticalAlignment="Top" Width="219" Text="{Binding ElementName=CustomersComboBox, Path=SelectedItem.Phone, Mode=TwoWay}" />

Note: Each TextBox binding has Mode=TwoWay added to it. This allows any change made to a TextBox control to be propagated back to the bound property automatically.
- Right-click the SilverlightCustomerViewer.Web project and select Set as StartUp Project.
- To set the html page in the project as the startup page, right-click the appropriate file and select Set As Start Page.
- Press F5 to compile and run the application.

Note: An error will occur once the Silverlight application loads. This is due to a cross-domain call that is being made from Silverlight to the WCF service. This service uses a different port than the Silverlight host Web project, which causes this cross-domain exception to be thrown.

- To fix this cross-domain issue, rename the existing clientaccesspolicy.exclude file in the CustomersService project to clientaccesspolicy.xml.
- Press F5 to compile and run the application again. Now the data loads in the ComboBox control.
- Select a customer from the drop-down list. The data from it is bound to the appropriate TextBlock and TextBox controls.
- Go back to the Visual Studio application to add the Click event handlers.
- To add the event handler for the Update button, add the following code in the Update button's click event handler:
- To add the event handler for the Delete button, add the following code in the Delete button's click event handler:

Dim proxy = New CustomerServiceClient()
Dim cust = TryCast)

- Run the application and test the update and delete functionality.

Summary.
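For reference, the clientaccesspolicy.xml file that Silverlight looks for at the service root during the cross-domain step typically grants access like this. This is a standard, intentionally permissive policy sketch, not the file shipped with this sample; restrict the domain list for production use:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow Silverlight clients from any origin; narrow this in production. -->
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```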
https://blogs.msdn.microsoft.com/vbteam/2011/04/19/silverlight-4-firestarter-series-2-how-to-migrate-an-asp-net-web-forms-application-to-silverlight/
LAMP and the Spread Toolkit

Now that you have the Spread daemon running, you can try to access it from code. The latest version of the Spread module for Python (1.5) is available from Zope, and older versions from the original Python Spread page. Download and extract the contents of the distribution (see Resources), then from the created directory, run:

python setup.py build
sudo python setup.py install

Test that the installation was successful by following the same steps as you did with spuser (join a group, send a message to the group):

Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import spread
>>> c = spread.connect('4804')
>>> c.join('test')
>>> c.receive()
<MembershipMsg object at 0xb7d342c0>
>>> c.multicast(spread.RELIABLE_MESS, 'test', 'test message from python')
24
>>> msg = c.receive()
>>> msg.message
'test message from python'
>>> c.disconnect()

Note that you may get an error message when importing the spread module:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: libtspread.so: cannot open shared object file: No such file or directory

If so, try setting your LD_LIBRARY_PATH to the location of the Spread library files. For example:

export LD_LIBRARY_PATH=/usr/local/lib

PHP is a tad more challenging, given that I was completely unsuccessful in getting the PHP Spread extension package (available from PECL) to compile, build, run, or even look at me slightly askance. If you're a PHP extension expert, you'll probably immediately see the problem and have the package installed in a few moments.
For others, unable to find 15 sacrificial PHP virgins to dance the Rites of Extension Installation with, I've included a reworked version of the module in the Resources section of this article. Given that the last time I had to touch C code in anger (and it was in anger, I recall) was slightly more than a decade ago, please either ignore the rather embarrassing attempt at a Makefile or just snicker quietly from behind a small pot-plant somewhere. In addition, I've only tested it with PHP 5, so I'm keen to hear if anyone has any success on earlier versions of PHP (and what changes it needs to work).

To build and install this extension, you need to know the directory of the Spread include and library files, the location of includes for PHP, and the directory in which PHP expects its extensions. In the case of my Kubuntu machine, the Makefile variables look like:

INSTALL_TO = /usr/lib/php5/20051025
SPREAD_INCLUDE = /usr/local/include
SPREAD_LIB = /usr/local/lib
PHP_INCLUDE = /usr/include/php5

Compile and install the extension by running:

make
sudo make install

PHP also needs to know about the shared object, so find the php.ini for your distribution (mine is in /etc/php5/apache2, or /etc/php5/cli if you want to add the extension for command-line execution of PHP scripts). Add the line extension=spread.so. (Look for other references to extension if you can't find the extension directory.)

Assuming a successful build and install, you can now try an integration test--sending a message from PHP to Python. Restart Apache httpd and try a PHP page:

<html>
<body>
<?php
$id = spread_connect('4804', 'PHPtest');
if ($id != null) {
    spread_join($id, 'test');
    $msg = spread_receive($id, 120000);
    echo "<p>received message " . $msg['message'] .
"</p>"; spread_leave($id, 'test'); spread_disconnect($id); } else { echo "<p>Failed to connect</p>"; } ?> </body> </html> While the browser hangs, waiting for a message, open another console, and try: Python 2.4.3 (#2, Apr 27 2006, 14:43:58) >>> import spread >>> c = spread.connect('4804', 'mytest', 0, 0) >>> c.multicast(spread.RELIABLE_MESS, 'test', 'hello there from python') With any luck, the browser should display your message. It should hopefully be obvious from the previous examples that Spread is refreshingly free of any restrictions, which means you can choose your own strategies for its use. Joining a group and sending a message corresponds with the publish/subscribe cycle of messaging, and point-to-point is available using the private name of a connection. For example, in one Python console type: Python 2.4.3 (#2, Apr 27 2006, 14:43:58) >>> import spread >>> c = spread.connect('4804', 'testname1', 0, 0) >>> print c.receive().message In another console: Python 2.4.3 (#2, Apr 27 2006, 14:43:58) >>> import spread >>> c = spread.connect('4804', 'testname2', 0, 0) >>> c.multicast(spread.RELIABLE_MESS, '#testname1#machine1', 'this is a point-to-point message') Where "#testname1#machine1" is the unique private name given to the first connection. #testname1#machine1 Where something like JMS specifies the format of a transmitted message, with Spread you are free to choose your own protocol. Because HTTP is such a well-known protocol, it makes sense, to me at least, to use a similar format for messaging. Thus I've decided to include headers at the beginning of a message, with the body containing whatever content I need to transmit. The first steps, then, are to define the libraries for creating and consuming messages in this format. You can then use these libraries to send an uploaded file from a PHP app to a Python app, which, for the simple purposes of this article, will just document the receipt. 
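Such a header-plus-body format is easy to prototype. The following sketch is an invented illustration, not the article's actual spreadutils code; in particular, the blank line separating headers from body and the class details are my assumptions:

```python
# An HTTP-like message: "name: value" header lines, a blank line,
# then an arbitrary body. Invented illustration only -- not the
# spreadutils implementation used in this article.

class SketchMessage:
    def __init__(self, headers=None, content='', parse_msg=None):
        if parse_msg is not None:
            # Parse a serialized message back into headers + body.
            head, _, body = parse_msg.partition('\n\n')
            self.headers = dict(line.split(': ', 1)
                                for line in head.splitlines())
            self.content = body
        else:
            self.headers = dict(headers or {})
            self.content = content

    def __str__(self):
        head = '\n'.join('%s: %s' % item
                         for item in sorted(self.headers.items()))
        return head + '\n\n' + self.content

    def __eq__(self, other):
        return (self.headers, self.content) == (other.headers, other.content)

msg = SketchMessage({'header1': 'val1', 'header2': 'val2'},
                    'this is a test message with headers')
round_trip = SketchMessage(parse_msg=str(msg))
print(round_trip == msg)  # -> True
```

The round trip (create, serialize, parse, compare) mirrors the test the article performs with its own Message class below.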
For my Python applications, I use a custom Message class, defined in spreadutils.py, that is capable of both creating and parsing messages sent via Spread (once again, see the Resources section for the source to this and other code mentioned in this article). My test code is:

    import spread
    from spreadutils import *

    c = spread.connect('4804', 'mytest', 0, 0)
    msg = Message({ 'header1' : 'val1', 'header2' : 'val2' }, \
                  'this is a test message with headers')
    c.multicast(spread.RELIABLE_MESS, '#mytest#machine1', str(msg))
    smsg = c.receive()
    recmsg = Message(parse_msg=smsg.message)
    print 'sent == received == %s' % (recmsg == msg)

This script sends a message back to the same connection (point-to-point) using the Message class both to create and then consume--and then to check the equality of sent and received messages. The PHP version of the class is only capable of message creation, as I'm not convinced it makes sense to receive messages in my PHP applications (despite the fact that the Spread extension for PHP allows for this behavior). The concept of sending an asynchronous message only to hang while awaiting a response doesn't sit well in my personal view of the world.

For the moment, you can test sending a message from a PHP page, and receipt to a Python app, using two scripts. For Python (test2.py):

    import spread
    from spreadutils import *

    c = spread.connect('4804', 'mytest', 0, 0)
    c.join('testgroup')
    smsg = c.receive()
    recmsg = Message(parse_msg = smsg.message)
    print str(recmsg)

For PHP (spreadtest.php.txt):

    <?php
    require_once('spreadutils.php');
    $id = spread_connect('4804', 'phptest');
    if ($id != null) {
        $msg = new Message();
        $msg->set_header('test1', 'test2');
        $msg->set_header('test2', 'test3');
        $msg->set_content('this is a test PHP message');
        spread_multicast('testgroup', $msg->str());
        spread_disconnect($id);
    } else {
        echo "<p>Failed to connect</p>";
    }
    ?>

Run the Python script first--python test2.py--and then run the PHP script--php spreadtest.php.
(Don't forget to start the Spread daemon first, if it's not already running.) The output from the Python script should be something like:

    test1: test2
    test2: test3
    this is a test PHP message

© 2017, O'Reilly Media, Inc.
http://www.onlamp.com/pub/a/onlamp/2006/11/30/lamp-and-spread.html?page=2
CC-MAIN-2017-13
refinedweb
1,317
64.41
Unix/Linux code to test whether "today" is a weekend day (or weekday)

If you ever need an example of a Unix/Linux shell script where you need to determine whether today is a weekend day, I can confirm that this code works:

As a quick note, I was just able to get the URI of the current page (node) in a Drupal 8 Twig template theme file using this code:

    {% set uri = path('entity.node.canonical', {'node': node.id}) %}

As an example, I'm rendering some different content based on the URI of the current node, so I first use that code to set the uri field, then I have a little Twig if/then/else condition like this:

    {% if uri starts with '/foo' %}
        <div>Option Foo here ...</div>
    {% elseif uri starts with '/bar' %}
        <div>Option Bar here ...</div>
    {% else %}
        <div>Option Baz here ...</div>
    {% endif %}

In summary, if you wanted to see how to get the URI of the current page/node when using a Drupal 8 Twig theme template file, I hope this example is helpful.

This image comes from this Perl.org web page, and shows the Perl "file test" characters. As I show on this page, an if file test looks like this in Perl:

This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 3.13, "How to add 'if' expressions (guards) to case statements." You want to add qualifying logic to a case statement in a match expression, such as allowing a range of numbers, or matching a pattern, but only if that pattern matches some additional criteria. Add an if guard to your case statement. Use it to match a range of numbers:

This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 3.6, "How to use a Scala if/then statement like a ternary operator." You'd like to use a Scala if expression like a ternary operator to solve a problem in a concise, expressive way.
This is a bit of a trick problem, because unlike Java, in Scala there is no special ternary operator; just use an if/else expression:

This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 3.3, "How to use a 'for' loop with embedded 'if' statements (guards)." You want to add one or more conditional clauses to a for loop, typically to filter out some elements in a collection while working on the others. Add an if statement after your generator, like this:

This post contains a collection of Scala control structures examples. I initially created most of these in the process of writing the Scala Cookbook. Unlike the Cookbook, I don't describe them much here, I just show the examples, mostly as a reference for myself (and anyone else who can benefit from them). Here are some examples of the Scala if/then control structure:

Scala FAQ: Can you share some examples of the Scala for loop syntax? Sure. I'm going to start with a comparison to Java for loops, because that's what I was just thinking about. In Java you might write a for loop with a counter like this:

    for (int i = 0; i < 10; i++) {
        System.out.println(i);
    }

The equivalent for loop in Scala looks like this:

    for (i <- 1 to 10) {
        println(i)
    }

(The use of parentheses isn't necessary in either example, but most for loops will be longer.)

If you ever need to use a Scala for loop (for comprehension) with one or more embedded if statements, I hope the following example is helpful:

Scala FAQ: Can you share some examples of the Scala if/then/else syntax? Also, can you show a function that returns a value from an if/then/else statement? In its most basic use, the Scala if/then/else syntax is very similar to Java:
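The weekend-day shell script mentioned in the first snippet above is not reproduced on this index page. As a rough equivalent of that check, here is a sketch in Python rather than shell (the function name is my own, invented for this example):

```python
# Determine whether a given date falls on a weekend.
# date.weekday() returns Monday as 0 through Sunday as 6,
# so Saturday and Sunday are the values 5 and 6.
from datetime import date

def is_weekend(d):
    return d.weekday() >= 5

print(is_weekend(date(2024, 1, 6)))  # a Saturday -> True
print(is_weekend(date(2024, 1, 8)))  # a Monday   -> False
```

Calling `is_weekend(date.today())` gives the "is today a weekend day" answer the shell script computes.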
https://alvinalexander.com/taxonomy/term/1262
Provides classes for working with timecodes (as used in the video industry).

Project Description

# django-timecode

A python class to store and manipulate timecodes with accompanying Django field.

## Examples

Timecodes can be created using a string representation

    >>> from timecode import Timecode
    >>> start = Timecode('09:59:50:00', fps=25)
    >>> end = Timecode('10:06:05:12', fps=25)

They will print themselves

    >>> start
    Timecode('09:59:50:00', fps=25)
    >>> str(start)
    '09:59:50:00'

They can add and subtract

    >>> delta = end - start
    >>> delta
    Timecode('00:06:15:12', fps=25)

Or you can get at the exact frames using the total_frames attribute

    >>> delta.total_frames
    9387

## In a Django model

### models.py

    from timecode.fields import TimecodeField
    from django.db import models

    class TestModel(models.Model):
        timecode = TimecodeField()

You can then store the timecode objects in the database.
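The arithmetic behind those examples is plain frame counting: a `HH:MM:SS:FF` timecode is converted to a total frame count at the given frame rate, and subtraction works on those counts. The sketch below illustrates the idea; it is my own simplified illustration, not django-timecode's actual implementation:

```python
# Minimal sketch of timecode frame arithmetic (illustration only,
# not the django-timecode package's real code).

class SimpleTimecode:
    def __init__(self, text, fps=25):
        # Split "HH:MM:SS:FF" into its four integer fields.
        hh, mm, ss, ff = (int(part) for part in text.split(':'))
        self.fps = fps
        self.total_frames = ((hh * 60 + mm) * 60 + ss) * fps + ff

    def __sub__(self, other):
        delta = SimpleTimecode('00:00:00:00', fps=self.fps)
        delta.total_frames = self.total_frames - other.total_frames
        return delta

    def __str__(self):
        seconds, ff = divmod(self.total_frames, self.fps)
        minutes, ss = divmod(seconds, 60)
        hh, mm = divmod(minutes, 60)
        return '%02d:%02d:%02d:%02d' % (hh, mm, ss, ff)

start = SimpleTimecode('09:59:50:00', fps=25)
end = SimpleTimecode('10:06:05:12', fps=25)
delta = end - start
print(delta)               # -> 00:06:15:12
print(delta.total_frames)  # -> 9387
```

The result matches the README's example: 6 minutes 15 seconds at 25 fps is 9375 frames, plus the 12 leftover frames, giving 9387.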
https://pypi.org/project/django-timecode/
The following terms are used throughout this book. A list associated with a file that contains information about which users or groups have permission to access or modify the file. A Windows naming service that runs on a domain controller to protect network objects from unauthorized access. This service also replicates objects across a network so that data is not lost if one domain controller fails. A transient share of a user's home directory that is created when the user logs in and is removed when the user logs out. Software that enables a system to access CIFS shares from a CIFS server. Software that enables a system to make CIFS shares available to CIFS clients. A protocol that follows the client-server model to share files and services over the network, and which is based on the Server Message Block (SMB) protocol. A rule that maps between a Windows group and a Solaris user and between a Solaris group and a Windows user. These mappings are needed when Windows uses a group identity as a file owner, or a user identity as a file group. A way to use name mapping information that is stored in user or group objects in the Active Directory (AD), in the native LDAP directory service, or both to map users and groups. A service that provides the naming policy and mechanisms for mapping domain and machine names to addresses outside of the enterprise, such as those on the Internet. DNS is the network information service used by the Internet. A service that is provided with AD that enables a client to dynamically update its entries in the DNS database. A dynamic UID or GID mapping for an SID that is not already mapped by name. A forest can have one or more trees that do not form a contiguous namespace. A logical structure that enables you to interconnect two or more Windows domains by bringing them into bidirectional, chained trust relationships. See also tree and forest. Each tree in this model has a unique name, while a forest does not need to be named. 
The trees in a forest form a hierarchy for the purposes of the trust relationships. In this model, a single tree can constitute a forest. Each tree within a forest can be independent of the others. You might use this model to run multiple environments under separate DNS namespaces. An unsigned 32-bit identifier that is associated with a Solaris group. A process that enables Windows clients to transparently access CIFS shares and remote services from the Solaris CIFS server. A standard, extensible directory access protocol that enables clients and servers that use LDAP naming services to communicate with each other. A directory to which you mount a file system or a share that exists on a remote system. A way to associate Windows users and groups with equivalent Solaris users and groups by name rather than by identifier. A name-based mapping can consist of directory-based mappings and rule-based mappings. The name of a host or workgroup used by NetBIOS. A valid domain name as defined by DNS. You use a NetBIOS scope identifier to identify logical NetBIOS networks that are on the same physical network. When you specify a NetBIOS scope identifier, the server will only be able to communicate with other systems that have the same scope defined. The value is a text string that represents a domain name and is limited to 16 characters. By default, no value is set. You might specify a NetBIOS scope if you want to divide a large Windows workgroup into smaller groups. If you use a scope, the scope ID must follow NetBIOS name conventions or domain name conventions. The ID is limited to 16 characters. Most environments do not require the use of the NetBIOS scope feature. If you must use this feature, ensure that you track the scope identifier assigned to each node. A distributed database that contains key information about the systems and the users on the network. The NIS database is stored on the master server and all the replica or slave servers. 
A protocol that enables a client to automatically synchronize its system clock with a time server. The clock is synchronized each time the client is booted and any time it contacts the time server. A stored password that enables a Solaris CIFS client to mount CIFS shares without having to authenticate each mount action. This password remains in storage until removed by the smbutil logout or smbutil logoutall command. A 32-bit identifier similar to a Solaris user identifier (UID) or group identifier (GID) that identifies a user, group, system, or domain. A way to use rules to associate Windows users and groups with equivalent Solaris users and groups by name rather than by identifier. An open source service that enables UNIX servers to provide CIFS/SMB file-sharing and printing services to CIFS clients. A database in which Windows users and groups are defined. The SAM database is managed on a Windows domain controller. A variable length structure that uniquely identifies a user or group both within the local domain and across all possible Windows domains. A protocol that enables clients to access files and to request services of a server on the network. A local resource on a server that is accessible to clients on the network. On a Solaris CIFS server, a share is typically a directory. Each share is identified by a name on the network. To clients on the network, the share does not expose the local directory path directly above the root of the share. Most shares have a type of disk because the shares are directories. A share of type pipe represents a device, such as an IPC share or a printer. A named collection of domains that share the same network configuration, schema, and global catalog. An unsigned 32-bit identifier that is associated with a Solaris user. A centrally administered group of computers and accounts that share a common security and administration policy and database. 
Computer, user, and group accounts are centrally managed by using servers known as domain controllers. In order to participate in a Windows domain, a computer must join the domain and become a domain member. A Windows system that is used to provide authentication services for its Windows domain. A service that resolves NetBIOS names to IP addresses. A group of standalone computers that are independently administered. Each computer has independent, local user and group accounts, and security and policy database. In a Windows workgroup, computers cooperate through the use of a common workgroup name but this is a peer-to-peer model with no formal membership mechanism.
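Several of the definitions above concern rule-based name mapping between Windows and Solaris identities. As a toy illustration of the idea only — the rule syntax below is invented and is not the actual Solaris idmap configuration syntax:

```python
# Toy illustration of rule-based name mapping: Windows "DOMAIN\name"
# identities are matched against an ordered rule list, where '*' is a
# wildcard that carries the matched name through to the Solaris side.
# The rules here are invented examples, not real idmap configuration.

RULES = [
    ('SALES\\jdoe', 'john'),  # exact rule: one Windows user -> one Solaris user
    ('SALES\\*', '*'),        # wildcard rule: keep the user part of the name
]

def map_windows_user(winname):
    domain, user = winname.split('\\', 1)
    for pattern, target in RULES:
        pat_domain, pat_user = pattern.split('\\', 1)
        if pat_domain in ('*', domain) and pat_user in ('*', user):
            return user if target == '*' else target
    # No rule matched; a real system might fall back to an
    # ephemeral (dynamic) mapping here.
    return None

print(map_windows_user('SALES\\jdoe'))   # -> john
print(map_windows_user('SALES\\alice'))  # -> alice
```

Directory-based mapping stores equivalent information in AD or native LDAP user and group objects instead of in a local rule list.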
http://docs.oracle.com/cd/E19082-01/820-2429/glossary/index.html
HI Priya, Congratulations, Could you suggest any reading material for the Part II Exam. Regards
--------------------
SCJP,SCWCD,SCEA Part I

Hi Priya, Congratulations ! Can you pls share ur assignment of SCEA Part-2. Thanks in Advance. S.S. ssarch@yahoo.co

Congratulations, Priya. Could you answer me a question? What's your opinion about the 1-1 relationship between the segment and the flight? As I think, a segment is a part of the itinerary reservation. And a flight is a trip of an airplane (equipment) from one city to another at a specific date and time. So it is reasonable that every segment corresponds to one flight. But how could a flight correspond to just one segment (reservation)? Regards

Originally posted by Priya Patel: I would agree with your thoughts above. I did exactly that, i.e. changed the name of equipment to airplane; equipment didn't add any value to my class diagram, so I renamed it 'Airplane' and stated this in the assumptions. A flight cannot just correspond to one segment. You need something in the middle [hint]. If you were to think back to your E.R. modelling days and you had a many-to-many relationship between two entities, you would need to decompose that? [hint] Priya

Harvey Shen: Congratulations! Priya. I have a question about the "deliverables" of Part II, here is the statement => Create either a Sequence or Collaboration diagram for each use case provided. What confused me is that there're four "Detailed Use Cases" provided in the assignment, however there're three extra Use Cases in the "Use Case Diagram", e.g. "Create profile", "Log in", "View frequent flyer miles". Did you provide sequence diagrams for those three extra use cases? Thanks a lot.

Thank you very much for your hints. But by "A flight cannot just correspond to one segment", I think you mean to change the BDM even if some intermediate object is added.
I tried another explanation and it seems compliant with the BDM. But it seems a little strange.

An itinerary: A route or proposed route (of a journey). (It's a route related only to the source and destination city, and not related to customer reservations.)
A segment: A part of a route between 2 cities.
A flight: A series of trips of airplanes between 2 cities (e.g. flight number 235 flying between NY and SF every day at 8 AM).
Equipment: An airplane.
A seat: A seat in the airplane.

What's your opinion? Regards

Walter Wang wrote: Hello, I have a question. If you look at the use case specification, the Flow of Events section contains 2 parts: one is the basic flow, another is the alternative flows. What does "alternative flow" mean here? Which flow should I adhere to for my diagram? rgds
--------------------
public class Walter{ public boolean is_Working_Now(boolean is_boss_Coming){ return is_boss_Coming; } }

Dhiren Joshi wrote: Congratulations Priya. Could you give some hint as to what you did for the change itinerary use case? I think the use case is overkill when it says to go back to prepare itinerary. Any suggestions? I can't figure out why removing a segment would necessitate the entire itinerary preparation being called all over again. Thanks, Dhiren
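Priya's hint about decomposing a many-to-many relationship can be sketched in code: the entity "in the middle" turns one many-to-many link into two one-to-many links. The class and attribute names below are invented for illustration and are not part of any actual SCEA assignment solution:

```python
# A many-to-many relationship between Itinerary and Flight is decomposed
# by an association entity (here called Segment): each Segment belongs to
# exactly one Itinerary and references exactly one Flight, so a Flight can
# appear on many itineraries and an Itinerary can span many flights.

class Flight:
    def __init__(self, number):
        self.number = number

class Segment:
    def __init__(self, itinerary, flight):
        self.itinerary = itinerary  # one itinerary per segment
        self.flight = flight        # one flight per segment

class Itinerary:
    def __init__(self):
        self.segments = []          # many segments per itinerary

    def add_segment(self, flight):
        self.segments.append(Segment(self, flight))

f235 = Flight('235')
f712 = Flight('712')

trip = Itinerary()
trip.add_segment(f235)
trip.add_segment(f712)

print([s.flight.number for s in trip.segments])  # -> ['235', '712']
```

Nothing stops another itinerary from also adding a segment for `f235`, which is exactly the many-to-many behavior the decomposition preserves.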
http://www.coderanch.com/t/152090/java-Architect-SCEA/certification/Passed-SCEA-II-III
Service Bus comes in Basic, Standard and Premium (preview) tiers. Here's how they compare:

An operation is any API call to the Service Bus service. Premium tier is currently in preview and the price below reflects a 50% preview discount.

Number of AMQP connections or HTTP calls to Service Bus.

Relays are available only in the Standard tier and are charged by message volume and relay hours.

Microsoft charges for the peak number of concurrent brokered connections that exceed the included quantity (1,000 in the Standard and Premium tiers).

Yes they do. There are no connection charges for sending events using HTTP, regardless of the number of sending systems. AMQP connections can be used to achieve more efficient event streaming and to enable bi-directional communication with thousands of devices, with improved availability.

The Premium tier uses a dedicated resource allocation model to provide workload isolation and consistent performance.

No. Because Premium runs in an isolated runtime, it cannot be downgraded to Standard or Basic. Also, Standard and Basic namespaces cannot be upgraded to Premium.

Please see this MSDN article for additional Service Bus billing FAQs.
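The brokered-connection rule above charges only for the portion of the peak concurrent connections that exceeds the included 1,000. That overage calculation can be written down directly (the function name is mine, and no actual Azure rates are used):

```python
# Billable brokered connections: only the portion of the peak
# concurrent connections exceeding the included quantity is charged.
INCLUDED_CONNECTIONS = 1000

def billable_connections(peak_concurrent):
    return max(0, peak_concurrent - INCLUDED_CONNECTIONS)

# A peak of 1,500 concurrent connections is billed for 500;
# a peak of 800 incurs no brokered-connection charge at all.
print(billable_connections(1500))  # -> 500
print(billable_connections(800))   # -> 0
```

The `max(0, ...)` clamp is what makes the first 1,000 connections free rather than merely discounted.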
https://azure.microsoft.com/en-us/pricing/details/service-bus/
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also

    #include <slp.h>

    SLPError SLPFindAttrs(SLPHandle hSLP, const char *pcURL,
        const char *pcScopeList, const char *pcAttrIds,
        SLPAttrCallback *callback, void *pvCookie);

The SLPFindAttrs() function returns service attributes matching the attribute tags for the indicated full or partial URL. If pcURL is a complete URL, the attribute information returned is for that particular service in the language locale of the SLPHandle. If pcURL is a service type, then all attributes for the service type are returned, regardless of the language of registration. Results are returned through the callback parameter. The result is filtered with an SLP attribute request filter string parameter, the syntax of which is described in RFC 2608. If the filter string is the empty string, "", all attributes are returned. If an error occurs in starting the operation, one of the SLPError codes is returned.

hSLP
    The language-specific SLPHandle on which to search for attributes. It cannot be NULL.

pcURL
    The full or partial URL. See RFC 2608 for partial URL syntax. It cannot be NULL.

pcScopeList
    A pointer to a char containing a comma-separated list of scope names. It cannot be NULL or an empty string, "".

pcAttrIds
    The filter string indicating which attribute values to return. Use the empty string, "", to indicate all values. Wildcards matching all attribute ids having a particular prefix or suffix are also possible. It cannot be NULL.

callback
    A callback function through which the results of the operation are reported. It cannot be NULL.

pvCookie
    Memory passed to the callback code from the client. It may be NULL.

This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP).
Use the following example to return the attributes "location" and "dpi" for the URL "service:printer:lpr://serv/queue1" through the callback attrReturn:

    SLPHandle hSLP;
    SLPAttrCallback attrReturn;
    SLPError err;

    err = SLPFindAttrs(hSLP, "service:printer:lpr://serv/queue1",
                       "default", "location,dpi", attrReturn, NULL);

Use the following example to return the attributes "location" and "dpi" for all service URLs having type "service:printer:lpr":

    err = SLPFindAttrs(hSLP, "service:printer:lpr",
                       "default", "location,dpi", attrReturn, NULL);
https://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et3f/index.html
Here are some of my thoughts on the issue of module systems. I see the evolution of module systems as something like this:

 * We start off with modules as they are in, say, Java or Perl. They are just packages that contain functions or classes.

 * Add parametric modules. This requires separating the specification of a module (its signature) from its implementation. This leaves you with something like ML's functors. Unfortunately, this has the disadvantage of disallowing mutual recursion between modules in some cases, such as between two modules that take each other as parameters.

 * PLT Scheme's module system makes two further changes:

   * Firstly, mutual recursion between modules is allowed by creating a new linking construct which allows circular references, rather than just having functor application as the linking construct.

   * Modules are made first class objects. This enables dynamic linking (and is not, incidentally, a barrier to efficient compilation). More subtly, it removes the universal namespace for modules, meaning that modules become more like capabilities and can be used to enforce security boundaries (this is something Java got wrong -- its universal namespace for packages means it needs to use an ACL scheme for security).

 * The next steps that need to be made are:

   * Export of syntax.

   * A programmatic interface to module linking. The problem with PLT Units is that module linking clauses can get quite big, and it becomes a pain maintaining them by hand. Since the clauses are syntax, it is hard to make parts of them conditional. What is needed is an interface via functions, rather than via syntax. This will make it easier to link programs automatically, using whatever implementations are available (recovering one of the advantages of Java/Perl-style package systems -- brevity).

Dybvig's system can handle mutual recursion between modules. It can probably just about handle parametric modules.
I am not sure if it can handle mutual recursion and parametric modules at the same time (and if it can, whether it is efficient, i.e. doesn't instantiate functions more than necessary). Even if it can handle those things, it can't handle modules being first class objects, dynamic linking, or a programmatic interface to linking, because Dybvig's system is just a static transformation. Of course, you could then voice the objection that Dybvig's system *can* handle these, because you can implement PLT units in it...

Mikael Djurfeldt <address@hidden> writes:
> Dybvig's module system is actually not the usual thing we mean by a
> module system, but rather a tool to control and abstract over scope.

This is very true. Dybvig's system is basically a sophisticated library to help in writing macros. The implementation of PLT core units that Dybvig gives is amazingly concise, but if you look at it, it's just passing boxes around to perform the linking. This is how you can implement it anyway -- it's not actually using the extra expressive power that Dybvig's system has over normal macros. The way Dybvig's units implementation links units is analogous to linking C programs by passing around structs of function pointers -- there are more efficient ways of doing it (such as the way C linkers link), which could be exploited by the Scheme implementation.

Speaking of C, one thing in favour of units is that the concept generalises well to other languages. The PLT team are working on `Knit', which is basically units for C; there's a paper on it somewhere. However, I think it is untyped, doesn't allow units to export macros, and hasn't been released yet. Being untyped is a shame because otherwise it would make it easy to generate bindings for other languages; and the lack of macros means the same thing, as well as making it harder to wrap existing programs. SWIG is too ad-hoc -- units for C should be a cleaner way of doing things. (I plan to work on it some time.)
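The box-passing linking described above can be sketched outside Scheme. Here is a rough Python analogue (entirely my own illustration, not code from any of the systems discussed): each "unit" is a function over boxes (one-element lists), and circular linking works by allocating the boxes before filling them in, much like patching up a struct of function pointers.

```python
# Units as functions over boxes. A box is a one-element list, so a
# link can exist before the value it carries does -- this is what
# permits mutual recursion between separately constructed units.

def make_even_unit(odd_box):
    # The imported name arrives as a box; dereferencing happens at
    # call time, so the box may be filled in after this unit exists.
    def is_even(n):
        return True if n == 0 else odd_box[0](n - 1)
    return [is_even]  # exports are boxed too

def make_odd_unit(even_box):
    def is_odd(n):
        return False if n == 0 else even_box[0](n - 1)
    return [is_odd]

# Circular linking: allocate empty boxes first, then fill them in.
even_box = [None]
odd_box = [None]
even_box[0] = make_even_unit(odd_box)[0]
odd_box[0] = make_odd_unit(even_box)[0]

print(even_box[0](10))  # -> True
print(odd_box[0](10))   # -> False
```

A C linker can instead patch call sites directly, which is the "more efficient ways of doing it" the post alludes to.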
This is quite important since Guile is supposed to be an extension language.

Lastly, I'll point to my own portable implementation of units: I'll be uploading it very soon (should be there by the time you read this) in the file primrose-0.0.3.tar.gz to <> and <>. It's part of (or a diversion from :-) ) a music recognition program I'm writing in Scheme. The files of interest are:

 * unit-core.scm: Core units.
 * unit-sig.scm: Signed units.
 * unit-dlink.scm: This is an extension to the interface provided in PLT Scheme. It provides a programmatic interface to module linking, which you can see in use in `link.scm'.
 * (Also there's `unit.scm', which is an old and slow implementation of core units.)

I'd quite like to see this included in Guile. Shall I wrap it up in a Guile module and submit it? The main omission is that it doesn't allow modules to export macros yet. I hope to work on this soon, but it should just be a case of a macro-exporting unit providing a function to perform the macro's transformation along with another unit to link against the free variables the macro introduced.

So basically I think units are quite a good module system, once some extras are added. :-) Although Dybvig's system is quite neat, and it would be worthwhile implementing it anyway (could his sample implementation be used?). In particular, identifier macros are very useful.

--
Mark Seaborn - address@hidden - - ``Water boils at a lower temperature at high altitude, which partly accounts for the nasty taste of coffee on the summit of Mauna Kea''
http://lists.gnu.org/archive/html/guile-devel/2000-12/msg00147.html
Turning off the screen
From OESF

Howto turn off the screen.

Intro

For some applications, such as audio applications, you don't want to waste battery life on the screen when only the audio is being used. There are two ways to turn the screen on and off. One uses QCOP messages, does not take much code, and can be done from the command line. The other way uses IOCTLs, which is useful especially when Qtopia is not running.

QCOP Method

Below are the two lines of code required in a Qtopia app to blank the screen:

    QCopEnvelope e("QPE/System", "setBlankLCD(int)");
    e << 1;

To do the same on the command line run this command:

    qcop QPE/System 'setBlankLCD(int)' 1

In both cases, only the Cancel button will turn the screen on again.

IOCTL Method

Here is a sample application that can be modified. It both turns the screen off and on.

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    #define VESA_NO_BLANKING   0
    #define VESA_VSYNC_SUSPEND 1
    #define VESA_HSYNC_SUSPEND 2
    #define VESA_POWERDOWN     3
    #define FBIOBLANK 0x4611 /* arg: 0 or vesa level + 1 */

    int main(int argc, char *argv[])
    {
        int fd, mode = -1;

        if (argc > 1) {
            if (strcmp(argv[1], "-on") == 0)
                mode = 0;
            if (strcmp(argv[1], "-off") == 0)
                mode = 1;
        }
        if (mode == -1) {
            printf("blank -on\n");
            printf("      -off\n");
            exit(1);
        }

        fd = open("/dev/fb0", O_RDWR);
        if (mode == 1)
            ioctl(fd, FBIOBLANK, VESA_POWERDOWN);
        else
            ioctl(fd, FBIOBLANK, VESA_NO_BLANKING);
        close(fd);
        return 0;
    }
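The command-line QCOP method can also be driven from a script. Here is a small Python sketch that assembles and runs the same qcop invocation shown above; it obviously only does something useful on a Qtopia device with qcop on the PATH, and the helper names are my own:

```python
# Build (and optionally run) the qcop command shown above.
# Only the argument construction can be exercised off-device.
import subprocess

def blank_lcd_command():
    return ['qcop', 'QPE/System', 'setBlankLCD(int)', '1']

def blank_lcd():
    # Returns qcop's exit status; requires a Qtopia environment.
    return subprocess.call(blank_lcd_command())

print(blank_lcd_command())
```

Passing the arguments as a list avoids the shell quoting that the interactive command needs around 'setBlankLCD(int)'.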
http://www.oesf.org/index.php?title=Turning_off_the_screen&diff=9039&oldid=2717
Although some other alternatives are available, such as the rkt container engine, Docker Engine has become the de facto containerization platform in the past 2-3 years. In "Using Java with Docker Engine," we discussed creating a Java application with Docker Engine. Docker Engine makes better use of the operating system kernel in comparison to a virtualization platform such as VirtualBox or VMWare because a single Docker container does not make use of a whole OS kernel, whereas a single virtual machine does. Each Docker container includes its own filesystem and networking, which makes it an isolated process on the Docker Engine. A single Docker Engine with multiple Docker containers running in isolation makes it feasible to run different applications and even have some containers make use of other containers. One of the main benefits of Docker Engine is the ease of installation and configuration of software. In this tutorial, we shall discuss using C++ on Docker Engine. This tutorial has the following sections:

- Setting the Environment
- Creating a C++ Application
- Creating and Running a Docker Image with the g++ Compiler
- Creating and Running a Docker Image with the gcc Compiler
- Compiling and Running C++ Applications Separately
- Removing Docker Containers and Images

Setting the Environment

Docker is pre-installed on some OSes, such as CoreOS, and needs to be installed if some other OS—such as Ubuntu, Amazon Linux, or Redhat Linux—is used. We shall be using CoreOS, specifically the CoreOS Linux (Stable) offering on EC2, which may be accessed at. Click Continue. Select 1-Click Launch and the default Version, which is the latest available. Select a Region and an EC2 Instance Type.

Figure 1: Launching

Select the "default" security group, which provides access from all Source IPs. Select a Key Pair, which may have been created previously.

Figure 2: Selecting a security group

Click Launch with 1-click.
Figure 3: Launching with 1-click

A single instance of CoreOS Linux gets started on EC2. Obtain the Public DNS or the Public IP address for the CoreOS instance from the Console.

Figure 4: Obtaining the public information

Using the Key Pair and the Public IP (or Public DNS), log in to the CoreOS instance with SSH.

ssh -i "coreos.pem" core@54.197.150.238

The CoreOS instance gets logged into and the command prompt gets displayed.

Figure 5: Displaying the command prompt

Creating a C++ Application

We shall use a Hello World C++ application to demonstrate the use of C++ on Docker; the application HelloWorld.cpp is listed:

#include <iostream>
using namespace std;

int main() {
    cout << "Hello world" << endl;
    cout << "From a C++ Program" << endl;
    return 0;
}

Create a file called HelloWorld.cpp in a vi editor and copy the listing to the file.

Figure 6: Creating the HelloWorld file

Two options are available to run the C++ application:

- Create a Docker image and subsequently run a Docker container.
- First, compile the C++ application and subsequently run the application.

Creating and Running a Docker Image with the g++ Compiler

We shall be using the Docker image "gcc" available on the Docker Hub. The "gcc" Docker image is the GNU Compiler Collection with support for several languages, including C and C++. Two of the main commands that could be used for C/C++ applications are gcc and g++. The differences between the two are discussed later in this tutorial. Next, we shall use the g++ command to run a C++ application. Create a Dockerfile (in a vi editor) in the same directory as the HelloWorld.cpp file. A Dockerfile contains instructions to build a Docker image that could be used to run a Docker container. Copy the following listing to the Dockerfile.

FROM gcc:4.9
COPY . /HelloWorld
WORKDIR /HelloWorld
RUN g++ -o HelloWorld HelloWorld.cpp
CMD ["./HelloWorld"]

The Dockerfile instructions are as follows: FROM sets the base image, COPY copies the current directory into /HelloWorld in the image, WORKDIR sets the working directory, RUN compiles the application, and CMD runs the resulting binary when a container starts. The Dockerfile is shown in the vi editor.
Figure 7: The Dockerfile

The root directory should list two files: Dockerfile and HelloWorld.cpp.

Figure 8: The two files in the root directory

Before creating a Docker image, create the directory /HelloWorld and set its permissions to global (777).

sudo mkdir /HelloWorld
chmod 777 /HelloWorld

Run the docker build command to create a Docker image called helloworld:v1 from the Dockerfile.

docker build -t helloworld:v1 .

The Docker image gets created.

Figure 9: The newly created Docker image

Subsequently, list the Docker images.

docker images

The helloworld image tag v1 gets listed.

Figure 10: The new helloworld image tag

Having created the Docker image, run a Docker container with the docker run command. The Docker container may optionally be named, "HelloWorld" for example, with the --name option. If the --name option is not used, a random name is used for the Docker container. The --rm option is called the "clean up" option and removes the Docker container and the filesystem and volumes associated with the container after it has run. Run the following docker run command for the Docker image helloworld:v1.

docker run -it --rm --name HelloWorld helloworld:v1

The C++ application in the Docker image runs to produce an output.

Figure 11: The Docker image producing output

Creating and Running a Docker Image with the gcc Compiler

It was mentioned previously that the g++ command is used for C++ applications and the gcc command is used for C applications. The main difference between the two is that gcc does not link the standard C++ library. But the standard C++ library could be linked explicitly when using the gcc command; this is what we shall discuss in this section. Before creating a Docker image using the gcc command, remove the Docker image helloworld:v1, because we will create a Docker image with the same name using gcc.

docker rmi helloworld:v1

The standard C++ library could be linked when using the gcc command with the -lstdc++ option.
Modify the Dockerfile to replace the g++ command with the gcc command, as in the following listing.

FROM gcc:4.9
COPY . /HelloWorld
WORKDIR /HelloWorld
RUN gcc -o HelloWorld HelloWorld.cpp -lstdc++
CMD ["./HelloWorld"]

The modified Dockerfile is shown in the vi editor.

Figure 12: The modified Dockerfile

The same two files, Dockerfile and HelloWorld.cpp, should get listed.

Figure 13: Listing the same two files

Run the same docker build command to create the Docker image helloworld:v1.

docker build -t helloworld:v1 .

A Docker image gets generated, but it contains a different command, the gcc command instead of the g++ command, to compile the C++ application. List the Docker images, and the helloworld:v1 Docker image should get listed in addition to the gcc Docker image.

Figure 14: Listing the Docker images

Next, run the same docker run command to run a Docker container for the Docker image helloworld:v1.

docker run -it --rm --name HelloWorld helloworld:v1

The same output gets generated.

Figure 15: Generating the same output

The Docker containers get removed when using the --rm option. List the running and exited Docker containers with the following commands, respectively:

docker ps
docker ps -a

Because the Docker containers have been removed, no Docker container should get listed as running or exited.

Figure 16: No Docker container is listed

In the previous two sections, we compiled the C++ application into a Docker image with docker build and subsequently ran a Docker container with docker run. In the next section, we shall not create a Docker image but instead only compile the HelloWorld.cpp file into a runnable application with docker run, with the g++ command being invoked as a command argument to the docker run command. Subsequently, we shall run the application binary HelloWorld separately.
Compiling and Running C++ Applications Separately

To compile only the HelloWorld.cpp application into the HelloWorld runnable application, run the following command, which invokes the g++ GNU C++ compiler.

docker run --rm -v "$PWD":/HelloWorld -w /HelloWorld gcc:4.9 g++ -o HelloWorld HelloWorld.cpp

The --rm option removes the container after it has run, but does not remove the application binaries generated. The -v option mounts the current directory as a volume and the -w option sets the working directory to the volume.

Figure 17: Removing the container

Subsequently, listing the files lists the HelloWorld application generated.

Figure 18: Listing the files to see HelloWorld

Next, run the HelloWorld application.

./HelloWorld

The C++ application output gets generated.

Figure 19: Generating the C++ output

We used the g++ compiler with the docker run command to generate the application binaries, but the gcc command with the -lstdc++ option may be used just as well.

docker run --rm -v "$PWD":/HelloWorld -w /HelloWorld gcc:4.9 gcc -o HelloWorld HelloWorld.cpp -lstdc++

Removing Docker Containers and Images

The Docker images and containers, if any, may be removed after the C++ application has been run. Remove all Docker containers that have exited.

sudo docker rm $(sudo docker ps -a -q)

All exited Docker containers should get removed.

Figure 20: The Docker containers are removed

Remove the Docker image helloworld:v1 with the docker rmi command.

docker rmi helloworld:v1

The Docker image gets removed.

Figure 21: The Docker images are removed

Listing the Docker images may still list some images, and some images could be called <none>.

docker images

The <none> images are called dangling images. These are images that did not get downloaded or built properly.

Figure 22: Showing the dangling images

Remove all the dangling images.

sudo docker rmi $(sudo docker images -f "dangling=true" -q)

All the dangling images should get removed.
Figure 23: The dangling images are now gone

Listing the images lists only the gcc:4.9 Docker image, which could be kept for subsequent use.

Figure 24: The remaining gcc:4.9 Docker image

Conclusion

In this tutorial, we introduced using C++ with Docker Engine.
https://mobile.codeguru.com/cpp/cpp/algorithms/using-c-with-docker-engine.html
celery.utils.debug

Sampling Memory Usage

This module can be used to diagnose and sample the memory usage used by parts of your application. For example, to sample the memory usage of calling tasks you can do this:

from celery.utils.debug import sample_mem, memdump
from tasks import add

try:
    for i in range(100):
        for j in range(100):
            add.delay(i, j)
        sample_mem()
finally:
    memdump()

API Reference

Utilities for debugging memory usage, blocking calls, etc.

celery.utils.debug.sample_mem()
    Sample RSS memory usage. Statistics can then be output by calling memdump().

celery.utils.debug.memdump(samples=10, file=None)
    Dump memory statistics. Will print a sample of all RSS memory samples added by calling sample_mem(), and in addition print used RSS memory after gc.collect().

celery.utils.debug.sample(x, n, k=0)
    Given a list x, a sample of length n of that list is returned. For example, if n is 10, and x has 100 items, a list of every tenth item is returned. k can be used as an offset.
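To make the sampling behaviour of sample() concrete, here is a small stand-alone reimplementation of the same idea (an illustration written for this page, not celery's actual source):

```python
def sample(x, n, k=0):
    # Take n evenly spaced items from x, starting at offset k.
    # With 100 items and n=10, the step is 10, i.e. every tenth item.
    step = len(x) // n
    return [x[i] for i in range(k, k + n * step, step)]

print(sample(list(range(100)), 10))       # every tenth item: 0, 10, ..., 90
print(sample(list(range(100)), 10, k=5))  # same spacing, shifted by the offset
```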
https://docs.celeryq.dev/en/v5.1.0/reference/celery.utils.debug.html
where $\eta$ is a $2^n$-th root of unity, with n being the number of qubits in the register, i.e.

$$\eta = e^{2\pi i / 2^n}$$

How can we effectively create this state with a quantum circuit? The key to this is the observation (see the references below) that the result of the quantum Fourier transform can be written as a product state, namely as

$$\frac{1}{\sqrt{2^n}} \big(|0\rangle + \eta^{2^{n-1}x}|1\rangle\big) \otimes \big(|0\rangle + \eta^{2^{n-2}x}|1\rangle\big) \otimes \cdots \otimes \big(|0\rangle + \eta^{x}|1\rangle\big)$$

which you can easily verify by multiplying out the product and collecting terms. Here we use the tensor product order that is prescribed by OpenQASM, i.e. the most significant bit is q[n-1]. This bit is therefore given by

$$\frac{1}{\sqrt{2}}\big(|0\rangle + \eta^{2^{n-1}x}|1\rangle\big)$$

Let us analyse this expression further. For that purpose, we decompose x into its representation as a binary number, i.e. we write

$$x = \sum_{i=0}^{n-1} x_i 2^i$$

with $x_i$ being the binary digits of x. If we now multiply this by $2^{n-1}$, we will get $2^{n-1}x_0$ plus a multiple of $2^n$. As $\eta$ is a $2^n$-th root of unity, this multiple cancels out and we obtain that

$$\eta^{2^{n-1}x} = \eta^{2^{n-1}x_0} = (-1)^{x_0}$$

Thus, we can write the most significant qubit of the Fourier transform as

$$\frac{1}{\sqrt{2}}\big(|0\rangle + (-1)^{x_0}|1\rangle\big)$$

which is nothing but $H|x_0\rangle$. Thus we obtain the most significant qubit of the quantum Fourier transform by simply applying a Hadamard gate to the least significant qubit of the input. This is nice and simple, but what about the next qubit, i.e. qubit n-2? From the decomposition above, we can see that this is simply

$$\frac{1}{\sqrt{2}}\big(|0\rangle + \eta^{2^{n-2}x}|1\rangle\big)$$

Using exactly the same arguments as for the most significant qubit, we easily find that this is

$$\frac{1}{\sqrt{2}}\big(|0\rangle + (-1)^{x_1} e^{\pi i x_0/2}|1\rangle\big)$$

Thus we obtain this qubit from qubit 1 by first applying a Hadamard gate and then a conditional phase gate, i.e. a conditional rotation around the z-axis, conditioned on the value of $x_0$. In general, qubit n-j is

$$\frac{1}{\sqrt{2}}\big(|0\rangle + (-1)^{x_{j-1}} e^{\pi i x_{j-2}/2} \cdots e^{\pi i x_0/2^{j-1}}|1\rangle\big)$$

which is a Hadamard gate followed by a sequence of conditional rotations around the z-axis, conditioned on the qubits with lower significance. So we find that each qubit of the Fourier transform is obtained by applying a Hadamard followed by a sequence of conditional rotations. However, the order of the qubits in the output is reversed, i.e. qubit n-j is obtained by letting gates act on qubit j.
Therefore, at the end of the circuit, we need to revert the order of the qubits. In OpenQASM and Qiskit, a conditional rotation around the z-axis is called CU1, and there are swap gates that we can use to implement the final reversal of the qubits. Thus, we can use the following code to build a quantum Fourier transform circuit acting on n qubits.

def nBitQFT(q, c, n):
    circuit = QuantumCircuit(q, c)
    #
    # We start with the most significant bit
    #
    for k in range(n):
        j = n - k
        # Add the Hadamard to qubit j-1
        circuit.h(q[j-1])
        #
        # there is one conditional rotation for
        # each qubit with lower significance
        for i in reversed(range(j-1)):
            circuit.cu1(2*np.pi/2**(j-i), q[i], q[j-1])
    #
    # Finally we need to swap qubits
    #
    for i in range(n//2):
        circuit.swap(q[i], q[n-i-1])
    return circuit

Here is the circuit that this code produces for n=4. We can clearly see the structure – on each qubit, we first act with a Hadamard gate, followed by a sequence of conditional rotations with decreasing angle, conditioned on the less significant qubits, and finally we reorder the qubits.

This is already a fairly complex circuit, and we need to find a way to test it. Let us look at the options we have. First, a quantum circuit is a unitary transformation and can be described by a matrix. In our case, it is especially easy to figure out what this matrix should be. Looking at the formula for the quantum Fourier transform, we find that the matrix describing this transformation with respect to the computational basis has the elements

$$U_{xy} = \frac{1}{\sqrt{2^n}} \eta^{xy}$$

The Qiskit framework comes with a simulator called the unitary_simulator that accepts a quantum circuit as input and returns the matrix describing that circuit. Thus, one possible test approach could be to build the circuit, run the unitary simulator on it, and compare the resulting unitary matrix with the expected result given by the formula above.
In Python, the expected result is produced by the following code

def qftMatrix(n):
    qft = np.zeros([2**n, 2**n], dtype=complex)
    for i in range(2**n):
        for j in range(2**n):
            qft[i,j] = np.exp(i*j*2*1j*np.pi/(2**n))
    return 1/np.sqrt(2**n)*qft

and the test can be done using the following function

def testCircuit(n):
    q = QuantumRegister(n, "x")
    c = ClassicalRegister(n, "c")
    circuit = nBitQFT(q, c, n)
    backend = Aer.get_backend('unitary_simulator')
    job = execute(circuit, backend)
    actual = job.result().get_unitary()
    expected = qftMatrix(n)
    delta = actual - expected
    print("Deviation: ", round(np.linalg.norm(delta), 10))
    return circuit

The outcome is reassuring – we find that the matrices are the same within the usual floating point rounding differences.

After passing this test, a next reasonable validation step could be to run the algorithm on a specific input. We know that the QFT will map the state $|0\rangle$ into an equal superposition of all elements of the computational basis. Conversely, we therefore expect that if we start with such a superposition, the QFT will map the superposition onto $|0\rangle$, at least up to a phase. Let us try this out. Our test circuit will consist of a layer of Hadamard gates to create the equal superposition, followed by the QFT circuit, followed by a measurement. The resulting circuit for n=4 is displayed below.

If we run this circuit on the QASM simulator embedded into Qiskit, the result is as expected – for 1024 shots, we get 1024 times the output ‘0000’. So our circuit works – at least theoretically. But what about real hardware? Let us compile and run the circuit targeting the IBM Q Experience 14 qubit device. If we dump the QASM code after compilation, we see that the overall circuit will have roughly 140 gates. This is already significant, and we expect to see some noise. To see how bad it is, I have conducted several test runs and plotted the results as a histogram (if you want to play with this yourself, you will find the notebook on Github).
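As an aside, the expected matrix produced by qftMatrix can be cross-checked without any quantum SDK at all: up to normalization it is the inverse discrete Fourier transform matrix, so NumPy's FFT provides an independent reference (a stand-alone sanity check, not part of the original notebook):

```python
import numpy as np

def qftMatrix(n):
    # Same construction as above: U[x, y] = eta^(x*y) / sqrt(2^n)
    N = 2**n
    qft = np.zeros([N, N], dtype=complex)
    for i in range(N):
        for j in range(N):
            qft[i, j] = np.exp(i * j * 2 * 1j * np.pi / N)
    return qft / np.sqrt(N)

U = qftMatrix(3)
# U is unitary ...
assert np.allclose(U.conj().T @ U, np.eye(8))
# ... and equals the inverse DFT matrix up to the 1/N vs 1/sqrt(N) normalization
assert np.allclose(U, np.sqrt(8) * np.fft.ifft(np.eye(8)))
# The QFT maps the uniform superposition back onto |0>
uniform = np.ones(8) / np.sqrt(8)
assert np.allclose(U @ uniform, np.eye(8)[0])
```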
Here is the output of a run with n=4. We still see a clear peak at the expected result, but also see that the noise level is close to making the result unusable – if we did not know the result upfront, we would probably not dare to postulate anything from this output. With only three qubits, the situation becomes slightly better but is still far from satisfactory.

Of course we could now start to optimize the circuit – remove cancelling Hadamard gates, remove the final swap gates, reorder qubits to take the coupling map into account and so on – but it becomes clear that with the current noise level, we are quickly reaching a point where even a comparatively simple circuit will inflict a level of noise that is at best difficult to handle. Hopefully, this is what you expected after reading my posts on quantum error correction, but I found it instructive to see noise in action in this example.

References

1. M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press 2010
2. R. Cleve, A. Ekert, C. Macchiavello, M. Mosca, Quantum Algorithms revisited, arXiv:9708016
https://leftasexercise.com/2019/02/25/implementing-the-quantum-fourier-transform-with-qiskit/
Dear All,

I'm using the expression ${date:now:yyyyMMdd 'at' hhmmss z} to get the date, time and the time zone. The result I'm getting is 20170201 at 101744 GMT. How can I get the IST time zone? Can you please suggest.

Regards
Ramesh

Hello Ramesh,

Below will help you to convert the time to IST.

import com.sap.gateway.ip.core.customdev.util.Message;
import java.util.HashMap;
import java.text.SimpleDateFormat;
import java.util.Date;

def Message processData(Message message) {
    def body = message.getBody();
    // Get the current date and time.
    def currentUTCTime = new Date()
    // Declare an empty string to hold the converted time
    def convertedISTTime = '';
    // Convert and format it into the required IST format
    convertedISTTime = currentUTCTime.format("yyyyMMdd ' at 'HH:mm:ss XXX", TimeZone.getTimeZone('IST'))
    // Set a property for future use in the integration process
    message.setProperty("P_CurrentISTTime", convertedISTTime);
    // Set the value to the message body if required.
    message.setBody(convertedISTTime);
    return message;
}

Regards,
Sriprasad Shivaram Bhat

Dear Sri,

Great again. First I would like to appreciate your contribution in SCN; thank you so much for your contribution in the HCI space. It's working fine.

Regards
Ramesh

Ramesh, where do you use this expression?

Hi Jasirrani, I'm using this in my content container, and the same content is called in the mail adapter's body by using the ${body} expression.

Regards
Ramesh

Hello Ramesh, I have tried the above case and it seems it's not possible to change it using expressions only. You can use the script and get the time converted to IST.

Regards,
Sriprasad Shivaram Bhat

Hi Sri, thanks for your time to check this option at your end.

Regards
Ramesh

Can anyone share the script for the same?
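For reference, the date logic of the Groovy script above can be reproduced outside SAP CPI in plain Java (a stand-alone illustration; the class name is made up, and Asia/Kolkata is used as the unambiguous zone ID for IST):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class IstTimeDemo {
    public static void main(String[] args) {
        // Same idea as the Groovy script: format "now" with the zone forced to IST
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd' at 'HHmmss z");
        fmt.setTimeZone(TimeZone.getTimeZone("Asia/Kolkata"));
        String ist = fmt.format(new Date());
        System.out.println(ist); // e.g. 20170201 at 154714 IST
    }
}
```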
https://answers.sap.com/questions/118126/how-to-get-ist-time.html
How can a Dataframe be converted to a SpatialGridDataFrame using the R maptools library? I am new to Rpy2, so this might be a very basic question.

The R code is:

coordinates(dataf)=~X+Y

In Python:

import rpy2
import rpy2.robjects as robjects

r = robjects.r

# Create a test dataframe
d = {'TEST': robjects.IntVector((221, 412, 332)),
     'X': robjects.IntVector(('25', '31', '44')),
     'Y': robjects.IntVector(('25', '35', '14'))}
dataf = robjects.r['data.frame'](**d)

r.library('maptools')

# Then I could not manage to write the above-mentioned R code using the Rpy2 documentation

Apart from this particular question, I would be pleased to get some feedback on a more general idea: my final goal would be to do regression-kriging with spatial data using the gstat library. The R script is working fine, but I would like to call my script from Python/ArcGIS. What do you think about this task, is this possible via rpy2?

Thanks a lot! Richard

In some cases, Rpy2 is still unable to dynamically (and automagically) generate smart bindings. An analysis of the R code will help:

coordinates(dataf)=~X+Y

This can be more explicitly written as:

dataf <- "coordinates<-"(dataf, formula("~X+Y"))

That last expression makes the Python/rpy2 version straightforward:

from rpy2.robjects.packages import importr
sp = importr('sp')   # "coordinates<-()" is there

from rpy2.robjects import baseenv, Formula
maptools_set = baseenv.get('coordinates<-')
dataf = maptools_set(dataf, Formula(' ~ X + Y'))

To be (wisely) explicit about where "coordinates<-" is coming from, use:

maptools_set = getattr(sp, 'coordinates<-')
http://www.dlxedu.com/askdetail/3/6326b47f40f52ea278f7bb032be079dd.html
Log message: #include <limits> to fix the build on netbsd-8/amd64.

Log message: libebml: updated to 1.3.6

v1.3.6.
* Converted the build system from autoconf/automake to cmake. Patches by Github user "evpobr" with fixes by myself.
* Fixed undefined behavior when reading signed integers with negative values from files (though compilers implemented this the way we wanted them to already).
* Fixed a small memory leak when reading an element runs into an I/O exception (e.g. due to having reached the end of the file).
* Fixed the EbmlMaster::GetDataStart() function returning wrong values for elements with an infinite/unknown size.
* Fixed finding the next element ID when garbage data is encountered during the scan for the ID.
* Fixed several potential situations where reading child element data could exceed the parent element's size.
* Added a code of conduct to the project.

Log message: devel/libebml: update to 1.3.5

Released v1.3.5.
* The function EbmlMaster::CheckMandatory() will now only return false if a mandatory element is missing for which there's no default value in the specifications. This means that callers such as EbmlMaster::UpdateSize() and by extension EbmlMaster::Render() will not insist on all mandatory elements being present anymore, but only those for which there's no default value.
* Added a template function `FindNextChild`. Patch by C.W. Betts.
* Fix reading an EBML element even though the ID was not found within
* Fixed an instance of undefined behavior in EbmlElement::GetSemantic() due to binding a dereferenced null pointer to a reference.
* Replaced the outdated address of the Free Software Foundation with their current one.

Log message: Follow some http redirects.

Log message: Updated libebml to 1.3.4.

2016-07-02 Moritz Bunkus <moritz@bunkus.org>
* Released v1.3.4.
2015-11-21 Moritz Bunkus <moritz@bunkus.org>
* EbmlVersion.cpp: in order to enable deterministic builds the EbmlCodeDate variable has been set to "Unknown" instead of the date and time of compilation. Patch by Ed Schouten <ed@nuxi.nl>.

2015-11-18 Moritz Bunkus <moritz@bunkus.org>
* libebml_t.h: use C99-style integer typedefs instead of BSD-style ones. Patch by Ed Schouten <ed@nuxi.nl>.

2015-10-24 Moritz Bunkus <moritz@bunkus.org>
* EbmlBinary.h: add #include <cstdlib> for compilation with clang and libc++. Patch by Thomas Klausner <wiz@NetBSD.org>.

Log message: Remove duplicate SHA512 digests that crept:

Log message: Add upstream bug report URL.
http://pkgsrc.se/devel/libebml
Advent of Code 2019 Day 1

Part 1

The first challenge is a simple one; I will copy the whole challenge below:

— Day 1: The Tyranny of the Rocket Equation —

At the first Go / No Go poll, every Elf is Go until the Fuel Counter-Upper. They haven't determined the amount of fuel required yet. Fuel required to launch a given module is based on its mass. Specifically, to find the fuel required for a module, take its mass, divide by three, round down, and subtract 2.

For example:

- For a mass of 12, divide by 3 and round down to get 4, then subtract 2 to get 2.
- For a mass of 14, dividing by 3 and rounding down still yields 4, so the fuel required is also 2.
- For a mass of 1969, the fuel required is 654.
- For a mass of 100756, the fuel required is 33583.

The Fuel Counter-Upper needs to know the total fuel requirement. To find it, individually calculate the fuel needed for the mass of each module (your puzzle input), then add together all the fuel values.

What is the sum of the fuel requirements for all of the modules on your spacecraft?

As I said, this is simple enough; we need to calculate the fuel requirement based on the given formula for each module and sum them all together for our answer. The formula given is: fuel = floor(mass / 3) - 2 (floor is just a function that rounds down the input). We are given a puzzle input of a text file where each line is a number denoting the mass of a single module, e.g.
86870
94449
119448
53472
140668
64989
112056
88880
131335
94943

We can load this into Scala and apply the formula to each line, then sum the answer, using code similar to the following:

import scala.io.Source
import scala.math.floor

val filename = "input.txt"

// Open the input file
val bufferedSource = Source.fromFile(filename)

// For each line:
val total = bufferedSource.getLines()
  // Convert it to a Long
  .map(line => line.toLong)
  // Apply the formula we were given
  .map(mass => floor(mass / 3) - 2)
  // Sum all results together
  .sum

// Display the total
println(s"Total: $total")

// Close the resource
bufferedSource.close()

Once we have a result it's on to part 2 of Day 1.

Day 1: Part 2

The puzzle reads as:

— Part Two —

During the second Go / No Go poll, the Elf in charge of the Rocket Equation Double-Checker stops the launch sequence. Apparently, you forgot to include additional fuel for the fuel you just added. Fuel itself requires fuel just like a module - take its mass, divide by three, round down, and subtract 2. However, that fuel also requires fuel, and that fuel requires fuel, and so on. Any mass that would require negative fuel should instead be treated as if it requires zero fuel; the remaining mass, if any, is instead handled by wishing really hard, which has no mass and is outside the scope of this calculation.

So, for each module mass, calculate its fuel and add it to the total. Then, treat the fuel amount you just calculated as the input mass and repeat the process, continuing until a fuel requirement is zero or negative. For example:

- A module of mass 14 requires 2 fuel. This fuel requires no further fuel (2 divided by 3 and rounded down is 0, which would call for a negative fuel), so the total fuel required is still just 2.

This is a little harder than Part 1. Now for each module we need to calculate the fuel required for not just the module but also the fuel to carry the additional fuel! Luckily the way we structure our Scala code makes this easy to do.
We can replace our simple fuel calculation with a call to a more complex function before our sum function call:

def calculateFuel(mass: Long): Long = {
  // define the fuel function we will be using
  val fuelFunction = (mass: Long) => (floor(mass / 3) - 2).toLong

  // calculate the initial fuel we need for the given mass
  val initialFuel: Long = fuelFunction(mass)
  var total: Long = initialFuel

  // Loop round adding any additional fuel required until it reaches 0 or less
  var additional: Long = fuelFunction(initialFuel)
  while (additional > 0) {
    total += additional
    additional = fuelFunction(additional)
  }

  // return the total
  total
}

// For each line:
val total = bufferedSource.getLines()
  // Convert it to a Long
  .map(line => line.toLong)
  // Apply the formula we were given
  .map(mass => calculateFuel(mass))
  // Sum all results together
  .sum

This will return us our result, applying that function to all the masses we are given before totalling up. This completes Day 1 of Advent of Code 2019!
https://lyndon.codes/2019/12/02/advent-of-code-2019-day-1/
Predict Movie Earnings with Posters

Identify the genre and earnings of a movie from its poster

If you have a summer blockbuster or a short film, what would be the best way to capture your audience's attention and interest? The 2 most prominent methods are apparent: posters and trailers. Movie posters communicate essential information about the movie, such as the title, theme, characters and cast, as well as the producers involved in the movie. Movie posters serve to inform the audience what genre of movie they are watching, so given a movie poster, can a machine learning model tell the genre of the movie?

Movie posters are a crucial source of promotion, with a great poster design being advantageous to appeal to as extensive a viewership as possible. We want to find out: given a movie poster, can we predict if the movie is going to do well at the box office? In this article, we will explore the data preparation and use convolutional neural networks to build machine learning models to answer these questions.

Dataset

We collected metadata for 45466 movies from The Movie Database (TMDb). There is a wide variety of attributes we can get from TMDb, but for this experiment, we are only interested in the following fields: 1) title, 2) genre, 3) poster, 4) popularity, 5) budget, 6) revenue. Since a movie can fall into multiple genres, we will only pick the first genre of each movie so that each movie has exactly 1 genre. As we intend to predict whether a movie will do well at the box office, we will use the revenue/budget ratio, defined such that the movie is making money if the value is greater than 1; otherwise, it is not.

Here is the sample dataset loaded into a Pandas data frame:

Data analysis and filtering

We won't download all 45466 images right away. Instead, we will do some analysis, filter out those with data issues and select the list of movie posters to download.
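As an illustration of the filtering and labelling just described, a minimal Pandas sketch might look like the following (the column names, toy values and helper frame are assumptions for this sketch, not the authors' actual code):

```python
import pandas as pd

# Toy stand-in for the TMDb metadata frame described above
df = pd.DataFrame({
    "title":   ["Toy Story", "Flop", "No Budget"],
    "budget":  [30_000_000, 50_000_000, 0],
    "revenue": [373_000_000, 20_000_000, 1_000],
})

# Drop rows with missing budget or revenue information
df = df[(df["budget"] > 0) & (df["revenue"] > 0)]

# Label: a movie is "making money" if revenue / budget > 1
df["did_well"] = (df["revenue"] / df["budget"]) > 1

print(df[["title", "did_well"]])
```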
Firstly, we will remove those with missing information:

- blank title after removing all non-alphanumeric characters
- no genre
- no poster URL
- no budget
- no revenue

After filtering out the undesirable data, there are 40727 movies. Below is the distribution of the number of movies in each genre:

For our genre prediction task, we want to predict between 10 classes, so we will select the top 10 genres and remove the rest. We then select the top 1000 most popular movies in each genre based on popularity. These are the movie posters we will be downloading: 10,000 images across 10 genres.

Download the movie posters

From the data frame shown above, the poster_path is the name of the file. To get the image URL for the Toy Story poster, we append it to the poster base URL, for example:

We can download all the images with the Requests library. I would suggest adding a 1-second delay between each image download. This code is to download and save the images into respective genre folders for predicting the genre of the movie:

Image processing

In order to make use of pretrained models, we first need to transform our rectangular posters into squares. Furthermore, to reduce the computation cost, the images are resized to 224 by 224. We have identified 4 image processing methods to achieve these requirements:

- PIL library resize
- center crop
- padding
- random crop and resize

Method #1: PIL library resize

Use the PIL library to resize the images to 224x224.

from PIL import Image

image = Image.open(PATHOFIMAGE)
image = image.resize((224, 224), Image.BILINEAR)
image.save(NEWPATH)

The processed image was distorted by the resize, as shown below:

Method #2: center crop

We will transform the images using PyTorch's Torchvision.
do_transforms = transforms.Compose([
    transforms.CenterCrop(input_size),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
dataset = datasets.ImageFolder(PATH, transform=do_transforms)

The processing cropped both the top and bottom of the image.

Method #3: padding

As most movie posters are portrait orientated, we decided to add black padding on the left and right. This avoids any distortion and cropping of the original poster image. Since black padding is zeros in RGB, it has minimal effect on our convolutional neural networks.

from skimage.transform import resize

def resize_image_to_square(img, side, pad_cval=0, dtype=np.float64):
    h, w, ch = img.shape
    if h == w:
        padded = img.copy()
    elif h > w:
        padded = np.full((h, h, ch), pad_cval, dtype=dtype)
        l = int(h / 2 - w / 2)
        r = l + w
        padded[:, l:r, :] = img.copy()
    else:
        padded = np.full((w, w, ch), pad_cval, dtype=dtype)
        l = int(w / 2 - h / 2)
        r = l + h
        padded[l:r, :, :] = img.copy()
    resized_img = resize(padded, output_shape=(side, side))
    return resized_img

The processed image after applying padding:

Method #4: random crop and resize

We will transform the images using PyTorch's Torchvision.

do_transforms = transforms.Compose([
    transforms.RandomCrop((280, 280), padding=None, pad_if_needed=True, fill=0, padding_mode='constant'),
    transforms.Resize(input_size, interpolation=2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
dataset = datasets.ImageFolder(PATH, transform=do_transforms)

The processed image after random crop and resize:
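The padding logic used in resize_image_to_square can be sanity-checked with NumPy alone. Below is a simplified re-implementation of just the padding step (the skimage resize is omitted; the function name is ours, not the article's):

```python
import numpy as np

def pad_to_square(img, pad_cval=0):
    # Add equal black borders left/right (or top/bottom) so that h == w,
    # keeping the original pixels centered
    h, w, ch = img.shape
    side = max(h, w)
    padded = np.full((side, side, ch), pad_cval, dtype=img.dtype)
    top = (side - h) // 2
    left = (side - w) // 2
    padded[top:top + h, left:left + w, :] = img
    return padded

# A tall 6x2 "poster" becomes a 6x6 square with 2 black columns on each side
img = np.ones((6, 2, 3), dtype=np.uint8)
sq = pad_to_square(img)
print(sq.shape)  # (6, 6, 3)
```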
To ensure our comparison is fair, we did the following:

- used the same set of movies for training and the same set for validation
- set the seed number
- loaded the same pretrained ResNet18 from PyTorch's Torchvision

Model accuracy with the different image processing methods is as follows:

- PIL library resize: approximately 80%
- Center crop: approximately 80%
- Padding: approximately 85%
- Random crop and resize: approximately 85%

The random crop and resize method performs best in both model accuracy and processing speed. This makes sense, as the position of an object in an image does not matter much to a convolutional neural network.

Can we tell the genre of the movie by its poster?

With our preprocessing, we can achieve approximately 85% accuracy for classification between 2 classes: comedy and horror. We chose comedy and horror because their posters are distinctly different: comedy posters generally use brighter colours, while horror posters tend to be darker. Here are some of our test cases, which are unseen by the model:

Interestingly, the model can learn to differentiate between these 2 genres. The model likely picks up posters with skull designs and associates them with horror movies. The 4th image shows that not all posters with a white background are comedy movies, and the model's prediction is still correct.

However, not all posters follow the design conventions typical of their genre, and such posters can cause the model to misread the designs. Subsequently, the model may misclassify these movies into the opposite genre. Below are some examples of movie posters deviating from the general designs associated with their respective genres. The first contains many regions of white and generally looks cheerful, while the second contains large regions of black, which makes the poster look dark despite its cartoonish designs and fonts. These layouts misled the model, resulting in wrong predictions.
Model identifying between 10 genres

In our dataset, we have 10 genres; each genre contains 1,000 movie posters. We performed an 80/20 split: 8,000 images for training and 2,000 held out for validation. We utilised weights from the pretrained ResNet18 model to train a classifier that predicts the genre of a movie based on its poster. These are the accuracies and losses during training.

The validation accuracy is approximately 32%. Our model can learn and overfit on the train set, but is unable to generalise to the validation dataset. The top-3 accuracy is approximately 65%. This leads us to ask: what could be causing all the misclassifications, and how could we further improve accuracy?

Below is a heatmap showing all the misclassifications for the top-1 model:

What we realised is that the model has difficulty differentiating between horror and thriller posters. If you think about it, the same is true even for us humans: we might not be able to tell the difference between a horror and a thriller poster. The same result is observed for comedy and romance, as both genres' posters tend toward a lighter mood and contain smiling human faces.

Can we tell if the movie will make money at the box office by its poster?

Since posters are a marketing tool for a movie, we want to find out whether certain movie posters attract more viewers. Can a model identify if a particular type of poster tends to do better at the box office? In our experiment, we define how well a movie is doing by its revenue-to-budget ratio; a higher-budget movie requires higher revenue to break even. The higher the ratio, the better the movie is doing. We created 2 classes from the revenue-to-budget ratio: "did well" and "didn't do well". Movies with a ratio of 1 or higher "did well"; otherwise, they are classified as "didn't do well".

Pretrained ResNet18

Yes!
Our pretrained ResNet18 model can correctly identify whether a movie will potentially make money approximately 68% of the time. Can we do better than this? I could switch to a deeper ResNet, but that would not be very interesting, so here are a few other experiments that we tried.

Bag of Tricks for Image Classification with Convolutional Neural Networks

A paper by Tong He et al. suggested ResNet tweaks that improve accuracy by preserving more information in the downsampling blocks. The authors used these tweaks to improve the ResNet50 model's top-1 accuracy on ImageNet from 75.3% to 79.29%.

Mish activation function

Mish is an activation function that is unbounded above, bounded below, smooth and non-monotonic. The positive range of the Mish activation function closely resembles the most popular activation function, ReLU. Being bounded below gives a regularisation effect. The negative range preserves small negative inputs, which improves expressivity and gradient flow. Read more in this article about Mish by Diganta Misra.

Data augmentation

Recent advances in model accuracy have been attributed to generating more data via data augmentation, which significantly increases the diversity of data available for training.
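Before looking at the augmentation pipeline, here is what the Mish function described above looks like numerically. This is a minimal NumPy sketch for intuition only; our training used a framework implementation, and the function name here is mine:

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x)), where softplus(x) = ln(1 + e^x).
    # log1p keeps precision when e^x is small; very large x would need
    # a safer softplus formulation than this sketch uses.
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(np.round(mish(x), 4))
```

Note the shape: near-identity for large positive inputs (ReLU-like), zero at zero, and small but non-zero outputs for negative inputs, which is the "preserves small negative inputs" property mentioned above.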
    from torchvision import transforms

    image_transforms = {
        # Train uses data augmentation
        'train': transforms.Compose([
            transforms.RandomResizedCrop(size=256, scale=(0.8, 1.0)),
            transforms.RandomRotation(degrees=15),
            transforms.ColorJitter(),
            transforms.RandomHorizontalFlip(),
            transforms.CenterCrop(size=224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
        # Validation does not use augmentation
        'validate': transforms.Compose([
            transforms.Resize(size=256),
            transforms.CenterCrop(size=224),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ]),
    }

Homemade deep and wide ResNet

This is inspired by the Wide & Deep model for recommender systems: we combine a pretrained ResNet with a pretrained wide-ResNet. Firstly, we loaded both the pretrained ResNet and the pretrained wide-ResNet and removed their last fully connected layers used for ImageNet classification. We then added a 3x3 convolution, batch normalisation and ReLU between the inputs and both ResNets. Lastly, we concatenated the outputs from both ResNets, followed by another 3x3 convolution and a fully connected layer for classification.

Classification results with various experiments

Here are our results:

Mish gives a 3% improvement because of its regularisation effect, so the model generalises to unseen data a little better. I would explore this activation function further in future. Data augmentation also gives a 3% improvement; in fact, I am a little surprised that data augmentation helps on this problem.

Conclusion

Predicting the earnings and popularity of a movie based on its poster alone can be a daunting task. This issue rings true even for distribution companies and investors, who have hundreds of experts and analysts working for them to ensure that their investments are not in vain and reap rich returns.
Models like ours may in future aid these analysts and companies in making more detailed and sound predictions. With further experimentation, delving into a deeper ResNet model might increase performance. In our experiments, we applied the Mish activation and various tweaks from research papers; the results are promising, and this is a path worth exploring further. Training the AI model is half the battle; it is worth noting that real-world data is "dirty" and "unrefined", meaning not all data is accurate and present. For machine learning to work, we must first understand our data well and understand what is needed for our model to succeed.
Using CocosSharp in Xamarin.Forms

last updated: 2016-05

CocosSharp can be used to add precise shape, image, and text rendering to an application for advanced visualization.

Evolve 2016: Cocos# in Xamarin.Forms

Overview

CocosSharp is a flexible, powerful technology for displaying graphics, reading touch input, playing audio, and managing content. This guide explains how to add CocosSharp to a Xamarin.Forms application. It covers the following:

- What is CocosSharp?
- Adding the CocosSharp NuGet packages
- Walkthrough: Adding CocosSharp to a Xamarin.Forms app

What is CocosSharp?

CocosSharp is an open source game engine that is available on the Xamarin platform. CocosSharp is a runtime-efficient library which includes the following features:

- Image rendering using the CCSprite class
- Shape rendering using the CCDrawNode class
- Every-frame logic using the CCNode.Schedule method
- Content management (loading and unloading of resources such as .png files) using the CCTextureCache class
- Animations using the CCAction class

CocosSharp's primary focus is to simplify the creation of cross-platform 2D games; however, it can also be a great addition to Xamarin.Forms applications. Since games typically require efficient rendering and precise control over visuals, CocosSharp can be used to add powerful visualization and effects to non-game applications.

Xamarin.Forms is built upon native, platform-specific UI systems. For example, Buttons appear differently on iOS and Android, and may even differ by operating system version. By contrast, CocosSharp does not use any platform-specific visual objects, so all visual objects appear identical on all platforms. Of course, resolution and aspect ratio differ between devices, and this can impact how CocosSharp renders its visuals. These details will be discussed later in this guide.
More detailed information can be found in the CocosSharp section.

Adding the CocosSharp NuGet packages

Before using CocosSharp, developers need to make a few additions to their Xamarin.Forms solution. This guide assumes a Xamarin.Forms solution with an iOS, an Android, and a PCL project. All of the code will be written in the PCL project; however, libraries must be added to the iOS and Android projects.

The CocosSharp NuGet package contains all of the objects needed to create CocosSharp objects. The CocosSharp.Forms NuGet package includes the CocosSharpView class, which is used to host CocosSharp in Xamarin.Forms. Add the CocosSharp.Forms NuGet and CocosSharp will be added automatically as well. To do this, right-click on the PCL's Packages folder and select Add Packages.... Enter the search term CocosSharp.Forms, select CocosSharp for Xamarin.Forms, then click Add Package. Both the CocosSharp and CocosSharp.Forms NuGet packages will be added to the project:

Repeat the above steps for the platform-specific projects (such as iOS and Android).

Walkthrough: Adding CocosSharp to a Xamarin.Forms app

Follow these steps to add a simple CocosSharp view to a Xamarin.Forms app:

- Creating a Xamarin Forms Page
- Adding a CocosSharpView
- Creating the GameScene
- Adding a Circle
- Interacting with CocosSharp

Once you've successfully added a CocosSharp view to a Xamarin.Forms app, visit the CocosSharp documentation to learn more about creating content with CocosSharp.

1. Creating a Xamarin Forms Page

CocosSharp can be hosted in any Xamarin.Forms container. The sample for this page uses a page called HomePage, which is split in half by a Grid to show how Xamarin.Forms and CocosSharp can be rendered simultaneously on the same page.
First, set up the Page so it contains a Grid and two Button instances:

    public class HomePage : ContentPage
    {
        public HomePage ()
        {
            // This is the top-level grid, which will split our page in half
            var grid = new Grid ();
            this.Content = grid;
            grid.RowDefinitions = new RowDefinitionCollection {
                // Each half will be the same size:
                new RowDefinition{ Height = new GridLength(1, GridUnitType.Star)},
                new RowDefinition{ Height = new GridLength(1, GridUnitType.Star)},
            };
            CreateTopHalf (grid);
            CreateBottomHalf (grid);
        }

        void CreateTopHalf(Grid grid)
        {
            // We'll be adding our CocosSharpView here:
        }

        void CreateBottomHalf(Grid grid)
        {
            // We'll use a StackLayout to organize our buttons
            var stackLayout = new StackLayout();

            // The first button will move the circle to the left when it is clicked:
            var moveLeftButton = new Button { Text = "Move Circle Left" };
            stackLayout.Children.Add (moveLeftButton);

            // The second button will move the circle to the right when clicked:
            var moveCircleRight = new Button { Text = "Move Circle Right" };
            stackLayout.Children.Add (moveCircleRight);

            // The stack layout will be in the bottom half (row 1):
            grid.Children.Add (stackLayout, 0, 1);
        }
    }

On iOS, the HomePage appears as shown in the following image:

2. Adding a CocosSharpView

The CocosSharpView class is used to embed CocosSharp into a Xamarin.Forms app. Since CocosSharpView inherits from the Xamarin.Forms.View class, it provides a familiar interface for layout, and it can be used within layout containers such as Xamarin.Forms.Grid. Add a new CocosSharpView to the project by completing the CreateTopHalf method:

    void CreateTopHalf(Grid grid)
    {
        // This hosts our game view.
        var gameView = new CocosSharpView () {
            // Notice it has the same properties as other XamarinForms Views
            HorizontalOptions = LayoutOptions.FillAndExpand,
            VerticalOptions = LayoutOptions.FillAndExpand,
            // This gets called after CocosSharp starts up:
            ViewCreated = HandleViewCreated
        };
        // We'll add it to the top half (row 0)
        grid.Children.Add (gameView, 0, 0);
    }

CocosSharp initialization is not immediate, so register an event for when the CocosSharpView has finished its creation. Do this in the HandleViewCreated method:

    void HandleViewCreated (object sender, EventArgs e)
    {
        var gameView = sender as CCGameView;
        if (gameView != null)
        {
            // This sets the game "world" resolution to 100x100:
            gameView.DesignResolution = new CCSizeI (100, 100);
            // GameScene is the root of the CocosSharp rendering hierarchy:
            gameScene = new GameScene (gameView);
            // Starts CocosSharp:
            gameView.RunWithScene (gameScene);
        }
    }

The HandleViewCreated method has two important details that we'll be looking at. The first is the GameScene class, which will be created in the next section. It's important to note that the app will not compile until the GameScene is created and the gameScene instance reference is resolved. The second important detail is the DesignResolution property, which defines the game's visible area for CocosSharp objects. The DesignResolution property will be looked at after creating GameScene.

3. Creating the GameScene

The GameScene class inherits from CocosSharp's CCScene. GameScene is the first point where we deal purely with CocosSharp. Code contained in GameScene will function in any CocosSharp app, whether or not it is housed within a Xamarin.Forms project. The CCScene class is the visual root of all CocosSharp rendering. Any visible CocosSharp object must be contained within a CCScene. More specifically, visual objects must be added to CCLayer instances, and those CCLayer instances must be added to a CCScene.
The following graph can help visualize a typical CocosSharp hierarchy:

Only one CCScene can be active at one time. Most games use multiple CCLayer instances to sort content, but our application uses only one. Similarly, most games use multiple visual objects, but we'll only have one in our app. A more detailed discussion of the CocosSharp visual hierarchy can be found in the Bouncing Game walkthrough.

Initially the GameScene class will be nearly empty – we'll just create it to satisfy the reference in HomePage. Add a new class to your PCL named GameScene. It should inherit from the CCScene class as follows:

    public class GameScene : CCScene
    {
        public GameScene (CCGameView gameView) : base(gameView)
        {
        }
    }

Now that GameScene is defined, we can return to HomePage and add a field:

    // Keep the GameScene at class scope
    // so the button click events can access it:
    GameScene gameScene;

We can now compile our project and run it to see CocosSharp running. We haven't added anything to our GameScene, so the top half of our page is black – the default color of a CocosSharp scene:

4. Adding a Circle

The app currently has a running instance of the CocosSharp engine, displaying an empty CCScene. Next, we'll add a visual object: a circle. The CCDrawNode class can be used to draw a variety of geometric shapes, as outlined in the Drawing Geometry with CCDrawNode guide.
Add a circle to our GameScene class and instantiate it in the constructor as shown in the following code:

    public class GameScene : CCScene
    {
        CCDrawNode circle;

        public GameScene (CCGameView gameView) : base(gameView)
        {
            var layer = new CCLayer ();
            this.AddLayer (layer);

            circle = new CCDrawNode ();
            layer.AddChild (circle);

            circle.DrawCircle (
                // The center to use when drawing the circle,
                // relative to the CCDrawNode:
                new CCPoint (0, 0),
                radius:15,
                color:CCColor4B.White);

            circle.PositionX = 20;
            circle.PositionY = 50;
        }
    }

Running the app now shows a circle on the left side of the CocosSharp display area:

Understanding DesignResolution

Now that a visual CocosSharp object is displayed, we can investigate the DesignResolution property. The DesignResolution represents the width and height of the CocosSharp area for placing and sizing objects. The actual resolution of the area is measured in pixels, while the DesignResolution is measured in world units. The following diagram shows the resolution of various parts of the view as displayed on an iPhone 5 with a screen resolution of 640x1136 pixels:

The diagram above displays pixel dimensions on the outside of the screen in black text. Units are displayed on the inside of the diagram in white text. Here are some important details displayed above:

- The origin of the CocosSharp display is at the bottom left. Moving to the right increases the X value, and moving up increases the Y value. Notice that the Y value is inverted compared to some other 2D layout engines, where (0,0) is the top-left of the canvas.
- The default behavior of CocosSharp is to maintain the aspect ratio of its view. Since the first row in the grid is wider than it is tall, CocosSharp does not fill the entire width of its cell, as shown by the dotted white rectangle. This behavior can be changed, as described in the Handling Multiple Resolutions in CocosSharp guide.
- In this example, CocosSharp will maintain a display area of 100 units wide and tall regardless of the size or aspect ratio of its device. This means that code can assume that X=100 represents the far-right bound of the CocosSharp display, keeping layout consistent on all devices.

CCDrawNode Details

Our simple app uses the CCDrawNode class to draw a circle. This class can be very useful for business apps since it provides vector-based geometry rendering – a feature missing from Xamarin.Forms. In addition to circles, the CCDrawNode class can be used to draw rectangles, splines, lines, and custom polygons. CCDrawNode is also easy to use since it does not require the use of image files (such as .png). A more detailed discussion of CCDrawNode can be found in the Drawing Geometry with CCDrawNode guide.

5. Interacting with CocosSharp

CocosSharp visual elements (such as CCDrawNode) inherit from the CCNode class. CCNode provides two properties which can be used to position an object relative to its parent: PositionX and PositionY. Our code currently uses these two properties to position the center of the circle, as shown in this code snippet:

    circle.PositionX = 20;
    circle.PositionY = 50;

It's important to note that CocosSharp objects are positioned by explicit position values, as opposed to most Xamarin.Forms views, which are automatically positioned according to the behavior of their parent layout controls.

We'll add code to allow the user to click one of the two buttons to move the circle to the left or to the right by 10 units (not pixels, since the circle draws in the CocosSharp world unit space). First we'll create two public methods in the GameScene class:

    public void MoveCircleLeft()
    {
        circle.PositionX -= 10;
    }

    public void MoveCircleRight()
    {
        circle.PositionX += 10;
    }

Next, we'll add handlers to the two buttons in HomePage to respond to clicks.
When finished, our CreateBottomHalf method contains the following code:

    void CreateBottomHalf(Grid grid)
    {
        // We'll use a StackLayout to organize our buttons
        var stackLayout = new StackLayout();

        // The first button will move the circle to the left when it is clicked:
        var moveLeftButton = new Button { Text = "Move Circle Left" };
        moveLeftButton.Clicked += (sender, e) => gameScene.MoveCircleLeft ();
        stackLayout.Children.Add (moveLeftButton);

        // The second button will move the circle to the right when clicked:
        var moveCircleRight = new Button { Text = "Move Circle Right" };
        moveCircleRight.Clicked += (sender, e) => gameScene.MoveCircleRight ();
        stackLayout.Children.Add (moveCircleRight);

        // The stack layout will be in the bottom half (row 1):
        grid.Children.Add (stackLayout, 0, 1);
    }

The CocosSharp circle now moves in response to clicks. We can also clearly see the boundaries of the CocosSharp canvas by moving the circle far enough to the left or right:

Summary

This guide shows how to add CocosSharp to an existing Xamarin.Forms project, how to create interaction between Xamarin.Forms and CocosSharp, and discusses various considerations when creating layouts in CocosSharp. The CocosSharp game engine offers a lot of functionality and depth, so this guide only scratches the surface of what CocosSharp can do. Developers interested in reading more about CocosSharp can find many articles in the CocosSharp documentation.
Per your collective requests, here is one of the documents with my collective observations from learning Python. This article was posted to the Perl newsgroup, but I have duplicated the posting here without crossposting to avoid flame wars.

--tom

SIMILARITY: When Python talks about tuples, lists, and dictionaries, Perl people should think of lists, arrays, and hashes respectively. Except that tuples in python are 1st-class citizens. (Unclear whether this is good; see below.)

COOLNESS: Python's print() function is more like the Perl debugger's "x" command; that is, it's recursive and pretty-printed.

GOTCHA: (low) You can't use "in" on dicts. Instead, you must use

    d = { "fred":"wilma", "barney":"betty" }
    if d.has_key("fred"):   # legal
    if "fred" in d:         # ILLEGAL

I don't understand why this was done.

SIMILARITY: Both "and" and "or" return their last evaluated value in both Python and Perl.

GOTCHA: (high) Local variables are never declared. They just *happen*. If a variable is assigned to, then it belongs to that "scope". If it's only looked at, it may be from the current global scope.

SIMILARITY: The number zero, the empty string, and the special value None are all false. Non-zero numbers and non-empty strings are all true. This is all like Perl. But here an empty tuple () is false, as is an empty list [] or an empty dictionary {}. In Perl, the last two are true if you write them that way, because they're references.

GOTCHA: (low) There are no compound assignment operators, like +=.

DISSIMILARITY: There is no "test at the bottom" loop as in C's do{}while.

DISSIMILARITY: There is no "three part for" loop from C.

SIMILARITY: Both Python and Perl are 0-based for indexing.

COOLNESS: You can "foreach" across multiple items.

    a = [ [1,2], [3,4], [5,6] ]
    for (i,j) in a:
        print i+j
    3
    7
    11

COOLNESS: Python's "longs" are actually built-in, arbitrary precision integers (i.e. BigInts).

COOLNESS: (?) Named parameters are built into the language.
So are default parameters and variadic ones, although order matters of course. It's quite elaborate.

DISSIMILARITY: A class method call gets no class name as its implicit extra argument the way an instance method would.

DISSIMILARITY: In Python, there is no difference between single and double quotes: both interpolate C-style escapes like \t, and neither interpolates variables. In Perl, single quotes do neither escapes nor variables, but double quotes do both.

GOTCHA: (low) Single and double quoted strings can't cross line boundaries. You need triple quotes for that!

SIMILARITY: Python does a depth-first recursive search on class ancestors looking for resolution of a method call. So does Perl.

DISSIMILARITY: Python also does an inheritance search for data members that are missing. This is probably cool, however. There is no big difference between a function and a data member.

SIMILARITY: Constructors and destructors (__init__ and __del__) aren't automagically called in parent classes in Python. Neither in Perl.

DISSIMILARITY: Because there are no variable markers in Python, there is no variable interpolation in strings. That makes something like

    print "this $x is $y here\n"

become in python

    print "this %d is %d here\n" % (x, y)

But since "stringification" is handy, Python usurps backticks for that purpose:

    print "this is", `x`, "here"

is going to call the x object's stringification method (x.__str__) to get a print value. In perl:

    print "this is $x here"

also does so, although the stringification method is oddly named in Perl:

    use overload '""' => ....

GOTCHA: (medium) Things that return lists can't be used where a tuple is expected. A function that returns a list must be coerced into a tuple to use this, though.

    def fn():
        return [1,2]
    print "this %d is %d here\n" % fn()        # ILLEGAL
    print "this %d is %d here\n" % tuple(fn())

The illegal part points out that this is a

    TypeError: illegal argument type for built-in operation

which isn't very helpful.
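This last gotcha is easy to reproduce; a quick demonstration, written in modern Python 3 syntax (so print is a function here), using the same toy fn as above:

```python
def fn():
    return [1, 2]

# A list on the right of % is treated as ONE argument, so this fails:
try:
    print("this %d is %d here" % fn())
except TypeError as err:
    print("TypeError:", err)

# Coercing the list to a tuple makes both values available:
print("this %d is %d here" % tuple(fn()))  # this 1 is 2 here
```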
GOTCHA: (high) Python has no manpages! The horror!!!!!!!!!!!!!!!!!!!!!! ENODOC

GOTCHA: (low) Often Python's error messages leave something to be desired.

GOTCHA: (medium) All ranges are up to but *not including* that point. So range(3) is the list [0,1,2]. This is also true in slices, which is the real gotcha part. A slice t[2:5] does not include t[5] in it.

SIMILARITY: Negative subscripts count back from the right. Same as in Perl.

COOLNESS: Slices can omit either the starting point or the ending point.

    t[2:5]   # elts 2 through 4 (but not 5!)
    t[:5]    # elts up to 4 (but not 5!)
    t[:-1]   # all but the last element

COOLNESS: Python has a reduce() built-in that works somewhat like map().

COOLNESS: Python's filter() [same-ish as Perl's grep()] and its map() operators can operate on items from separate sequences in parallel.

DISSIMILARITY: This also means that instead of substr() in Perl, you slice a string as in

    s = "string"
    print s[2:4]
    ri

Yes, that's all you got. Strange, eh? See below on ranges.

GOTCHA: (medium) Slices in Python must be contiguous ranges of a sequence. In Perl, there's no such restriction.

GOTCHA: (medium) You can't slice dictionaries at all in Python. In Perl, it's easy to slice a hash, and just as sensible.

GOTCHA: (high) As we saw with lists, because everything is a reference, and there's no way to dereference that reference, this means that again there is also no built-in, intuitive way to copy a dictionary. Instead, the suggested work-around is to write a loop:

    new = {}
    for key in old.keys():
        new[key] = old[key]

But this is guaranteed slower, because it's not at the C level. It also shows that dictionaries aren't first class citizens in python as they are in Perl:

    %old = ( "fred" => "wilma", "barney" => "betty" );
    %new = %old;

or even with references:

    $old = { "fred" => "wilma", "barney" => "betty" };
    $new = { %$old };

or

    %new = %$old;
So while this works on two dictionaries: new = {} for key in old.keys: new[key] = old[key] This fails on two lists: new = [] for i in old: new[i] = old[i] Because Python refuses to grow the list as it did the dictionary. Perl will grow both arrays and hashes just fine. GOTCHA: (medium) There's no way to set up a permitted exports list. The caller may have anything they ask for. COOLNESS: DBM files seem (?) to automatically know about nested datatypes. GOTCHA: (medium) Importing a variable only gives you a local copy. In Perl, it makes two names for the same object.. GOTCHA: (medium) You can't cut and paste these examples because of the issue of white space significance. :-( GOTCHA: (low) List objects have built-in methods, like l.append(x) But string objects don't. You have to import from the string module to get them, and then they're functions, not methods.. DISSIMILARITY: Where Python says "elif", Perl says "elsif". SIMILARITY: Neither Perl nor Python supports a case or switch statement, requiring multiway if's or a lookup of a function dispatch table. DISSIMILARITY: Where Python says "break" and "continue", Perl says "last" and "next". DISSIMILARITY: Where Perl says "continue", Python uses "else" (for a loop block). DISSIMILARITY: Constrast Python: import os data = os.popen('grep %s %s' % (patstr, srcfile)).read() with Perl: $data = `grep $patstr $srcfile`; GOTCHA: (high) There doesn't seem to be away other than using low-level hand-rolling of posix functions to supply things like os.popen and os.system a list of shell-proof arguments. Looks like it always goes through the shell. This has security ramifications.. GOTCHA: (medium) The expression 3/4 is 0, which is false. In Perl, 3/4 is 0.75, which is what you'd expect. You need to force the floatness. Sometimes Python is just too tied to C, other time not enough. This is one of the former.. 
GOTCHA: (low) An out-of-bounds list reference raises an "IndexError: list index out of range" exception, but not if you use a slice to get at it!

    t = range(5)
    print t[2:17]
    [2, 3, 4]

COOLNESS: Relationals stack:

    x < y < z

means

    x < y and y < z

COOLNESS: There's no distinction between the mechanism used for operator overloading and tying, as there is in Perl. Both use special method names.

GOTCHA: (high) Python's lambdas aren't really lambdas, because they are only expressions, not full functions, and because they cannot see their enclosing scope.

DISSIMILARITY: What Perl calls eval(), Python calls exec(). What Perl calls exec(), Python calls os.execv().

GOTCHA: (low) Python's eval() only works on expressions, not full code blocks full of statements. You need exec() for the whole thing.

GOTCHA: (medium) This is a tuple of two elements:

    (1,2)

But this is a tuple of one element:

    (1,)

Whereas this is merely the expression 1:

    (1)

Yes, the trailing comma counts.

GOTCHA: (low) Normally, a print statement in python adds a newline. If you don't want one, you need a trailing comma!

DISSIMILARITY: Python uses a pow() function for Perl's ** operator. Wait, no, they just added "**" later, with the same semantics as in Perl, except that something like 2**3**4 in Perl gives you an answer, and in python, an exception.

GOTCHA: (low) Python has a round() function, but it seems to ignore the IEEE rules of rounding. However, its sprintf operator doesn't:

    >>> print "int of %f is %.0f" % (1.5, 1.5)
    int of 1.500000 is 2
    >>> print round(1.5,0)
    2.0
    >>> print "int of %f is %.0f" % (2.5, 2.5)
    int of 2.500000 is 2
    >>> print round(2.5,0)
    3.0

And I'd jolly well like to know why I wasn't allowed to use

    print "int of %f is %.0f" % (2.5) * 2

or if needed,

    print "int of %f is %.0f" % ( (2.5) * 2 )
This is an error: import sys # otherwise, no argv for you, buddy print "First arg %s and second %s" % sys.argv[1:3] because you need print "First arg %s and second %s" % tuple(sys.argv[1:3]) GOTCHA: (low) I can't figure out how to write a class destructor for at exit handling the way I can with Perl's END{} GOTCHA: (medium) Python has no 2nd GC pass at thread shutdown time to find lost objects and destruct them. This is a problem on embedded systems, and breaks correctness issues. SIMILARITY: Like Perl, Python requires access to members through a self reference. This gets rid of C++'s icky hidden scope. This is good. COOLNESS: I believe that Python checks the prototype signature on methods as well as on functions. I wonder whether Tim Bunce's ideas run-time evaluation of prototypes for Perl might be able to do this. GOTCHA: (low) sort and reverse are in-place. This leads to verbose code, such as: old = [1,2,3] new = old new.reverse() Likewise for sort. GOTCHA: (low) You have to compiled the definition for a function before you may compile the call it. This is ok def fn(): print "hi" fn() But putting the fn() call first: fn() def fn(): print "hi" Produces the ever helpful: NameError: fn error message. So much for autoloading. Then again, this would save me from the darned legnth() bug in Perl, that is, switching characters. Maybe a "no autoload" pragma for perl? I always though "use strict subs" shoud ldo this. But another python mystery is why it's ok to compile code to methods that you don't know whether will be legal, but not functions. ob = some_fn() ob.scattter() That will compile, even though I used an extra 't'. It will die on run-time, just as in Perl. But the fn() case at the start of this looked like a compile-time death. ADDENDA: It turns out that def() happens *AT RUN TIME* in python. That's the root of this problem. In Perl, a sub{} is a compile-time thing. 
GOTCHA: (low) When you need to make an empty copy of the same type, you write

    x = y[:0]

So much for being typeless. Sigh.

SIMILARITY: In theory, the Python rexec module should be like Perl's Safe module. I haven't tested to see whether they chroot the namespace, or whether the evil method name leakages occur through the box.

COOLNESS: Any function or class can have a significant string called a "doc string" that can be accessed as whatever.__doc__.

QUESTION: What is and what is not thread safe?

SIMILARITY: Python uses None in many places where Perl uses undef(). But not all. You can't over-subscript an array and get away with it.

--
I know it's weird, but it does make it easier to write poetry in perl. :-)
    --Larry Wall in <7865 at jpl-devvax.JPL.NASA.GOV>
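Two of the items above, doc strings and the refusal to over-subscript, are easy to check; a Python 3 sketch (the function and class here are illustrative, not from the post):

```python
def greet(name):
    """Return a friendly greeting for name."""
    return "Hello, " + name

class Greeter:
    """A tiny class whose docstring is introspectable too."""

# Both functions and classes expose their doc string via __doc__:
print(greet.__doc__)        # Return a friendly greeting for name.
print(Greeter.__doc__)      # A tiny class whose docstring is introspectable too.

# Unlike Perl's undef auto-extension, assigning past the end of
# a list raises instead of silently growing the list:
items = [None, None]
try:
    items[5] = "x"
except IndexError as e:
    print(e)                # list assignment index out of range
```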
https://mail.python.org/pipermail/python-list/1999-August/013099.html
API Changes in 2.0.0

Deprecation and removal

Color of Axes

The axisbg and axis_bgcolor properties on Axes have been deprecated in favor of facecolor.

GTK and GDK backends deprecated

The GDK and GTK backends have been deprecated. These obsolete backends allow figures to be rendered via the GDK API to files and GTK2 figures. They are untested and known to be broken, and their use has been discouraged for some time. Instead, use the GTKAgg and GTKCairo backends for rendering to GTK2 windows.

WX backend deprecated

The WX backend has been deprecated. It is untested, and its use has been discouraged for some time. Instead, use the WXAgg backend for rendering figures to WX windows.

CocoaAgg backend removed

The deprecated and not fully functional CocoaAgg backend has been removed.

round removed from TkAgg Backend

The TkAgg backend had its own implementation of the round function. This was unused internally and has been removed. Instead, use either the round builtin function or numpy.around.

'hold' functionality deprecated

The 'hold' keyword argument and all functions and methods related to it are deprecated, along with the axes.hold rcParams entry. The behavior will remain consistent with the default hold=True state that has long been in place. Instead of using a function or keyword argument (hold=False) to change that behavior, explicitly clear the axes or figure as needed prior to subsequent plotting commands.

Artist.update has a return value

The methods matplotlib.artist.Artist.set, matplotlib.artist.Artist.update, and the function matplotlib.artist.setp now use a common codepath to look up how to update the given artist properties (either using the setter methods or an attribute/property). The behavior of matplotlib.artist.Artist.update is slightly changed to return a list of the values returned from the setter methods, to avoid changing the API of matplotlib.artist.Artist.set and matplotlib.artist.setp.
The keys passed into matplotlib.artist.Artist.update are now converted to lower case before being processed, to match the behavior of matplotlib.artist.Artist.set and matplotlib.artist.setp. This should not break any user code because there are no set methods with capitals in their names, but this puts a constraint on naming properties in the future.

Legend initializers gain edgecolor and facecolor keyword arguments

The Legend background patch (or 'frame') can have its edgecolor and facecolor determined by the corresponding keyword arguments to the matplotlib.legend.Legend initializer, or to any of the methods or functions that call that initializer. If left to their default values of None, their values will be taken from matplotlib.rcParams. The previously-existing framealpha kwarg still controls the alpha transparency of the patch.

Qualitative colormaps

Colorbrewer's qualitative/discrete colormaps ("Accent", "Dark2", "Paired", "Pastel1", "Pastel2", "Set1", "Set2", "Set3") are now implemented as ListedColormap instead of LinearSegmentedColormap. To use these for images where categories are specified as integers, for instance, use:

    plt.imshow(x, cmap='Dark2', norm=colors.NoNorm())

Change in the draw_image backend API

The draw_image method implemented by backends has changed its interface. This change is only relevant if the backend declares that it is able to transform images by returning True from option_scale_image. See the draw_image docstring for more information.

matplotlib.ticker.LinearLocator algorithm update

The matplotlib.ticker.LinearLocator is used to define the range and location of axis ticks when the user wants an exact number of ticks. LinearLocator thus differs from the default locator MaxNLocator, for which the user specifies a maximum number of intervals rather than a precise number of ticks. The view range algorithm in matplotlib.ticker.LinearLocator has been changed so that more convenient tick locations are chosen.
The new algorithm returns a plot view range that is a multiple of the user-requested number of ticks. This ensures tick marks will be located at whole integers more consistently. For example, when both y-axes of a twinx plot use matplotlib.ticker.LinearLocator with the same number of ticks, their y-tick locations and grid lines will coincide.

matplotlib.ticker.LogLocator gains numticks kwarg

The maximum number of ticks generated by the LogLocator can now be controlled explicitly via setting the new numticks kwarg to an integer. By default the kwarg is None, which internally sets it to the 'auto' string, triggering a new algorithm for adjusting the maximum according to the axis length relative to the ticklabel font size.

matplotlib.ticker.LogFormatter: two new kwargs

Previously, minor ticks on log-scaled axes were not labeled by default. An algorithm has been added to the LogFormatter to control the labeling of ticks between integer powers of the base. The algorithm uses two parameters supplied in a kwarg tuple named minor_thresholds. See the docstring for further explanation.

To improve support for axes using SymmetricalLogLocator, a linthresh keyword argument was added.

New defaults for 3D quiver function in mpl_toolkits.mplot3d.axes3d

Matplotlib has both a 2D and a 3D quiver function. These changes affect only the 3D function and make the default behavior of the 3D function match the 2D version. There are two changes:

- The 3D quiver function previously normalized the arrows to be the same length, which makes it unusable for situations where the arrows should be different lengths and does not match the behavior of the 2D function. This normalization behavior is now controlled with the normalize keyword, which defaults to False.
- The pivot keyword now defaults to tail instead of tip. This was done in order to match the default behavior of the 2D quiver function.
To obtain the previous behavior with the 3D quiver function, one can call the function with

    ax.quiver(x, y, z, u, v, w, normalize=True, pivot='tip')

where "ax" is an Axes3d object created with something like

    import mpl_toolkits.mplot3d.axes3d
    ax = plt.subplot(111, projection='3d')

Stale figure behavior

Attempting to draw the figure will now mark it as not stale (independent of whether the draw succeeds). This change is to prevent repeatedly trying to re-draw a figure which is raising an error on draw. The previous behavior would only mark a figure as not stale after a full re-draw succeeded.

The spectral colormap is now nipy_spectral

The colormaps formerly known as spectral and spectral_r have been replaced by nipy_spectral and nipy_spectral_r since Matplotlib 1.3.0. Even though the colormap was deprecated in Matplotlib 1.3.0, it never raised a warning. As of Matplotlib 2.0.0, using the old names raises a deprecation warning. In the future, using the old names will raise an error.

Default install no longer includes test images

To reduce the size of wheels and source installs, the tests and baseline images are no longer included by default. To restore installing the tests and images, use a setup.cfg with

    [packages]
    tests = True
    toolkits_tests = True

in the source directory at build/install time.
https://matplotlib.org/3.4.3/api/prev_api_changes/api_changes_2.0.0.html
I just did:

    $ gnulib-tool --import stdlib

Now i see in stdlib_h.m4 the definition:

    AC_DEFUN([gl_STDLIB_MODULE_INDICATOR], ...)

Some questions on how to properly use this module:

- What do i need to do wrt `gl_STDLIB_MODULE_INDICATOR'?

- Is it enough to

  - add to configure.in gl_STDLIB_H

  - convert the implementation file construct from:

        #ifdef STDC_HEADERS
        # include <stdlib.h>
        ...
        #endif

    to:

        #include <stdlib.h>
        #ifdef STDC_HEADERS
        ...
        #endif

    (that is, move the #include <stdlib.h> to top level, without any surrounding preprocessor conditionals)?

- Is it wise to remove `AC_HEADER_STDC' from configure.in?

I see (info "(gnulib.info) Various Kinds of Modules") gives an example of substituted-function usage. Once i get the answers to these questions, i'll submit a doc patch (presuming that the module `stdlib' is like other substituted-headers modules).

thi
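For context, gnulib imports are normally wired into the build with the standard gl_EARLY/gl_INIT macro pair; a sketch of the usual configure.ac glue (the macro placement shown here is the conventional gnulib pattern, not something stated in this thread, and gnulib-tool prints the exact instructions for a given import):

```m4
dnl configure.ac sketch -- hypothetical project layout
AC_PROG_CC
gl_EARLY            dnl must come right after the compiler checks
dnl ... other checks ...
gl_INIT             dnl expands the tests for all imported modules
AC_CONFIG_FILES([Makefile lib/Makefile])
```

With that in place, per-module macros such as the gl_STDLIB_H asked about above are typically invoked for you by gl_INIT rather than added to configure.in by hand.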
http://lists.gnu.org/archive/html/bug-gnulib/2008-08/msg00129.html
Get the next character from a file

Synopsis:

    #include <stdio.h>
    int getc_unlocked( FILE *fp );

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The getc_unlocked() function is a thread-unsafe version of getc(). You can use it safely only when the invoking thread has locked fp using flockfile() (or ftrylockfile()) and funlockfile().

Returns:

The next character from the input stream pointed to by fp, or EOF if an end-of-file or error condition occurs (errno is set).

Classification:

POSIX 1003.1 TSF

See also:

feof(), ferror(), flockfile(), getc(), getchar(), getchar_unlocked(), putc(), putc_unlocked(), putchar(), putchar_unlocked()
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/g/getc_unlocked.html
I use Ubuntu's 10.04 32-bit desktop edition, and Python 2.6.5. We can use GUIs in Python in many ways, such as the PyZenity module, the Tkinter toolkit and PyQt (a Nokia framework). In this article, we are going to talk about the PyZenity module and the Tkinter toolkit.

Creating GUIs using PyZenity

This is the fastest and simplest way to create a GUI. It mixes the Zenity utility with Python. Before going directly to PyZenity, I would like to demonstrate the Zenity utility. It comes preinstalled on most systems. Just go to the terminal and type

    zenity --title="Demo" --info --text="hello"

and you will see something like what's shown in Figure 1 — an information box, with its title being "Demo", and the message "Hello". Explore the Zenity man page for more details on Zenity.

The PyZenity module is an easy way to use the Zenity utility in Python. To use it, you have to download and compile it manually. You can check out the source code here. I'm using version 0.1.4. Follow the steps given below to install the tar.gz file:

    tar zxf PyZenity-0.1.4.tar.gz
    cd PyZenity-0.1.4
    python setup.py install

After installing, I would recommend you first visit the documentation for this module. This Web page lists all the possible GUI items you can create using this module, and their various parameters. For example, you can make info boxes, menu boxes, error dialogue boxes, progress bars, etc. After you scan the available functions, let's try making an input box, using the GetText function. Enter the following code, save it as demo1.py, and run it (the output is shown in Figure 2):

    import PyZenity
    a=PyZenity.GetText(text="Enter the string : ", entry_text="", password=False)
    print a

This will accept "Hello" as your input string, and will display it on the console as soon as you press Enter or the OK button. It will return None if you press the Cancel button. Let's try and understand the program better. We call the GetText function following the general syntax of module.function().
The text option is the message to be shown; entry_text stands for the initial value of the text-box, which is an empty string in this case. password=False means that you want the text to be visible to the user, not masked with asterisks. This is the beauty of this module: just 3-4 lines of programming and your GUI is ready.

While I was using this module, I had some difficulty in understanding its List function, because of the confusing parameters. Let's quickly take a look at it (save as demo2.py). The output is as shown in Figure 3.

    import PyZenity
    x=PyZenity.List(["choices","test1","test2"], title="selection",
                    boolstyle="checklist", editable=False, select_col="ALL",
                    sep='|', data=[["","hello","hi"],["","1","2"]])
    print x

Let's review the parameters quickly:

- The first parameter is a tuple that names the columns of your list (here — choices, test1 and test2).
- The title for the dialogue box ("selection").
- boolstyle: This is the checklist or radiolist in the first column of the list.
- editable: Offers the option of whether the list should be editable or not.
- select_col: The column number whose value is to be returned if you select a row.
- sep: The row separator during the return.
- data: Must be in tuples, or in tuples of tuples. Fills out the columns row-wise.

Since we have selected boolstyle=checklist, the first column should be empty, otherwise it will overwrite the selection boxes. There should be an input for every column and every row. A List dialogue box will give you an output in the form of a tuple only. You can check that out by running the above program. To explore more interfaces, read the documentation.

Drawbacks of PyZenity

Even though PyZenity is very fast and simple, there are a limited number of GUI forms that you can make using this module. You cannot modify the basic GUI according to your needs. If you need two buttons in the info box, you can't have it with this module — which provides you with an info box with one button only.
And you cannot modify that info box, since it is a predefined GUI. So, PyZenity is good, but to a limited extent.

Create GUIs using Tkinter

Tkinter is my favourite among all the toolkits. It's like creating the GUI from scratch. It takes some time to learn to use it, but believe me, it is worth spending every second of that time. It is the most portable GUI toolkit for Python. It is also known as the top, thin, object-oriented layer of Tcl/Tk.

Get started with this module. The Tkinter toolkit comes with the Python interpreter built in, so you don't need to manually download and install it. To use Tkinter, you have to import the classes from this package by using from Tkinter import * at the top of your program. Let's write a "hello world" GUI with it (demo3.py; the output is as shown in Figure 4):

    from Tkinter import *
    root=Tk()                                    #1
    root.title("demo")                           #2
    l=Label(root, text="Hello World", width=20)  #3
    l.pack()                                     #4
    root.mainloop()                              #5

Let's understand, one line at a time, what has happened above:

- We have created a Tk root widget (it must be created in almost every program). It acts as a parent to all the widgets, and is a simple window with a title bar.
- We set the title text of the window here.
- We created a label using the Label class, with root as its parent widget (containing it).
- We call the pack method on this widget, which tells it to size itself to fit the given text, and make itself visible.
- We entered the Tkinter main loop. The program will stay in the event loop until you close the window. The application window will appear only after you enter the main loop.

Tkinter offers many widget classes, such as Button, Label, Entry, Checkbutton, Bitmap (for .xbm images), etc., which let you create GUIs as per your need. These classes have many functions available to modify the style of your widget. Some of the important functions are grid(), pack(), configure(); each function contains a lot of parameters/arguments, like width, height, sticky, etc.
Refer to the Web for more functions and parameters. We can also bind an event to a particular widget, if we want to. Let's create a dialogue box with the help of a class, which accepts the first and last name from the user. This program will demonstrate many useful aspects of Tkinter (demo4.py; the output is as shown in Figure 5).

Let's try to understand the program, with reference to the numbered lines:

- We defined the constructor of the class diag, in which root is the parent widget.
- We called the bind function to handle an event. When Enter was hit, it called the ok function; for Esc, it called the quit function (defined in the class).
- We have defined the place of the widget label by assigning it row number 0. You can also assign a column to the widget according to your need.
- We used an Entry class to let the user enter text.
- This set the focus to the Entry widget (making it the active control) when the program is run.
- We used the Button class to create buttons.
- And defined the ok and quit functions to handle the respective events.

It's so good to program using Tkinter. You define every single step according to your need, and can play with the widget in your own way. One more piece of good news is that you can use the most popular canvas widget too, by using the Canvas class — it provides structured graphical facilities for Tkinter, and is mostly for drawings like graphs, etc.
Here's a very small demo of how to make a stats analysis using the Canvas class (demo5.py; output shown in Figure 6):

    from Tkinter import *
    root=Tk()
    c=Canvas(root)
    c.pack()
    xy=20, 20, 300, 180
    # create an arc enclosed by the given rectangle
    c.create_arc(xy, start=0, extent=270, fill="red", outline="yellow")
    c.create_text(100, 100, text="75%")
    c.create_arc(xy, start=270, extent=60, fill="blue", outline="yellow")
    c.create_text(180, 140, text="16.5%")
    c.create_arc(xy, start=330, extent=30, fill="green", outline="yellow")
    c.create_text(280, 120, text="8.5%")
    root.mainloop()

In this program, xy holds the end-points of the rectangle in which the arcs are to be enclosed. Then we called the Canvas class, and using that widget, we created arcs with appropriate text positions. For more details regarding Canvas, refer to the Web.

I hope you liked playing around with GUIs using the Tkinter toolkit. You can also combine the use of both PyZenity and Tkinter in the same program. Queries and suggestions are always welcome, so feel free to ping me!

Comments

Thanks for this post. I want to develop a cross-platform Python GUI application that will also use a MySQL database. Which GUI package do you suggest?

Very useful for beginners.

The look of Tkinter dialogs doesn't fit the rest of my system. That's why I use PyGtk.

Hi, I tested demo2.py and I got a problem with boolstyle at line 188 of PyZenity.py. The solution was to edit PyZenity.py line 186 to:

    if not (boolstyle == 'checklist' or boolstyle == 'radiolist'):

I'm using Python 2.6.4 under FC 13 (2.6.34.9-69.fc13.i686.PAE). Anyway, this page has been very useful. Thx.

I'm certainly no expert or I wouldn't have found this page, but the demo4 code won't work as shown. About half of it is duplication. This works:

    from Tkinter import *

    class diag:
        def __init__(self, parent):                               #1
            self.parent=parent
            self.parent.bind("<Return>", self.ok)                 #2
            self.parent.bind("<Escape>", self.quit)
            self.l1=Label(self.parent, text="First name")         #3
            self.l1.grid(row=0, column=0, sticky=W)
            self.e1=Entry(self.parent)                            #4
            self.e1.grid(row=0, column=1)
            self.e1.focus_set()                                   #5
            self.l2=Label(self.parent, text="Last name")
            self.l2.grid(row=1, column=0, sticky=W)
            self.e2=Entry(self.parent)
            self.e2.grid(row=1, column=1)
            self.b1=Button(self.parent, borderwidth=2, text="OK", width=5)   #6
            self.b1.grid(row=2, column=1, sticky=W)
            self.b1.bind("<Button-1>", self.ok)
            self.b2=Button(self.parent, borderwidth=2, text="Cancel", width=5)
            self.b2.grid(row=2, column=2, sticky=W)
            self.b2.bind("<Button-1>", self.quit)
        def ok(self, event=None):                                 #7
            print "value is : ", self.e1.get(), self.e2.get()
            self.parent.destroy()
        def quit(self, event=None):
            self.parent.destroy()

    root=Tk()
    d=diag(root)
    root.mainloop()

I think.
https://www.opensourceforu.com/2011/03/quickly-write-gui-using-python/
Created on 2010-08-02 00:31 by lukasz.langa, last changed 2010-08-09 12:53 by fdrake. This issue is now closed.

Overview
--------

It's a fairly common need in configuration parsing to take configuration from a string or a Python data structure (most commonly, a dictionary). The attached patch introduces two new methods to RawConfigParser that serve this purpose: readstring and readdict. In the process, two behavioral bugs were detected and fixed.

Detailed information about the patch
------------------------------------

Source changes:

* added readstring and readdict methods
* fixed a bug in SafeConfigParser.set when setting a None value would raise an exception (the code tried to remove interpolation escapes from None)
* added a new exception DuplicateOptionError, raised when during parsing of a single file, string or dictionary the same option occurs twice. This catches misspellings and case-sensitivity errors (parsers are by default case-insensitive in regard to options).
* added checking for the option duplicates described above in _read

Test changes:

* self.fromstring is using readstring now
* self.cf removed because it was bad design; now config parser instances are explicit everywhere
** changed definition of get_error and parse_error
* split test_basic into two parts: config parser building and the actual test
** introduced a config parser built from the dictionary
* added a duplicate option checking case for string and dictionary based reading

Documentation changes:

* documented readstring and readdict
* documented DuplicateOptionError
* explicit remark about the case-insensitivity
* corrected remark about leading whitespace being removed from values (also trailing whitespace is, and for keys as well)

There goes the patch.

Patch updated after review by Ezio Melotti.
To answer a common question that came up in the review: all atypical names and implementation details are there due to consistency with existing configparser code, e.g.:

* readstring ~= readfp (no _ between words)
* DuplicateOptionError ~= DuplicateSectionError (not Duplicated)
* all exceptions use old style BaseClass.__init__ and not super()

The API won't change, so this has to remain that way. Exceptions may be refactored in one go at a later stage.

Although you say this is fairly common, I haven't heard of anyone using or requesting this type of feature. Do you have any real-world use cases for this? Before we start adding more read methods I think we should know who wants them and why.

I'm not sure duplicates should raise exceptions. To me, the current behavior of using the last read section/option is fine. It's predictable and it works. Halting a program's operation due to duplicate sections/options seems a bit harsh to me.

Good questions, thanks! The answers will come useful for documentation and later hype :)

READING CONFIGURATION FROM A DATA STRUCTURE
-------------------------------------------

This is all about templating a decent set of default values. The major use case I'm using this for (with a homebrew SafeConfigParser subclass at the moment) is to provide *in one place* a set of defaults for the whole configuration. The so-called `defaults=` that we have at the moment don't fit this space well enough, because they provide values that can (and will) jump into every section. This made them useless for me twice:

- when configuring access to external components in a fairly complex system; abstracting out the useless details, the template I was looking for was:

      [name-server]
      port=
      protocol=
      verbose=

      [workflow-manager]
      port=
      protocol=
      verbose=

      [legacy-integration]
      port=
      protocol=
      verbose=

      # there were about 15 of these

- second case was a legacy CSV translation system (don't ask!).
An abstract of a config with conflicting keys:

    [company1-report]
    delimiter=,
    amount_column=
    amount_type=
    description_column=
    description_type=
    ignore_first_line=True

    [company2-report]
    delimiter=;
    amount_column=
    amount_type=
    description_column=
    description_type=
    ignore_first_line=False

    # and so on for ~10 entries

As you can see, in both examples `defaults=` couldn't be a good enough template. The reason I wanted these default values to be specified in the program was two-fold:

1. to be able to use the configuration without worrying about NoOptionErrors or fallback values on each get() invocation
2. to handle customers with existing configuration files which didn't include specific sections; if they didn't need customization they could simply use the defaults provided

I personally like the dictionary reading method, but this is a matter of taste. Plus, .fromstring() is already used in unit tests :)

DUPLICATE OPTION VALIDATION
---------------------------

Firstly, I'd like to stress that this validation does NOT mean that we cannot update keys once they appear in configuration. Duplicate option detection works only while parsing a single file, string or dictionary. In this case duplicates are a configuration error and should be notified to the user.

You are right that for a programmer, accepting the last value provided is acceptable. In this case the impact should be on the user, who might not feel the same. If his configuration is ambiguous, it's best to use the Zen: "In the face of ambiguity, refuse the temptation to guess."

This is very much the case for large configuration files (take /etc/ssh/sshd_config or any real life ftpd config, etc.) when users might miss the fact that one option is uncommented in the body or thrown in at the end of the file by another admin or even the user himself. Users might also be unaware of the case-insensitivity.
These two problems are even more likely to cause headaches for the dictionary reading algorithm, where there actually isn't an order to the keys within a section, and you can specify a couple of values that represent the same key because of the case-insensitivity. Plus, this is going to be even more visible once we introduce mapping protocol access, when you can add a whole section with keys using the dictionary syntax.

Another argument is that there is already section duplicate validation, but it doesn't work when reading from files. This means that the user might add two sections of the same name with contradicting options.

SUMMARY
-------

Reading from strings or dictionaries provides an additional way to feed the parser with values. Judging from the light complexity of both methods, I would argue that it's beneficial to configparser users to have well-factored, unit-tested methods for these tasks so they don't have to reimplement them over and over again when the need arises.

In terms of validation, after your remark and thinking about it for a while, I think that the best path may be to let programmers choose during parser initialization whether they want validation or not. This would also be a good place to include section duplicate validation during file reading. Should I provide an updated patch?

After a couple of years of experience with external customers configuring software, I find it better for the software to aid me in customer support. This is the best solution when users can help themselves. And customers (and we ourselves, too!) do stupid things all the time. And so, specifying a default set of sane values AND checking for duplicates within the same section helps with that.

Reading from a string is certainly fairly common, though I'm pretty happy with using an io.StringIO; that seems reasonable and straightforward. I've never stumbled over the need to "read" from dictionaries as described.

Corrected a simple mistake in the patch.
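The duplicate validation argued for here did land in the stdlib: since Python 3.2, parsers are strict by default, and a repeated option within a single read raises DuplicateOptionError. A quick sketch (the section and option names are illustrative):

```python
import configparser

ambiguous = """
[server]
port = 8080
Port = 9090
"""

# strict=True (the default) refuses to guess between the duplicates;
# option names are case-insensitive, so 'port' and 'Port' collide.
strict = configparser.ConfigParser()          # strict=True by default
try:
    strict.read_string(ambiguous)
except configparser.DuplicateOptionError as e:
    print(e.section, e.option)                # server port

# strict=False restores the old last-value-wins behavior:
lax = configparser.ConfigParser(strict=False)
lax.read_string(ambiguous)
print(lax["server"]["port"])                  # 9090
```

Note that the exception carries the section and the (case-folded) option name, so a program can report exactly which duplicate the user needs to resolve.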
Updated patch after discussion on #python-dev:

- PEP 8 compliant names used: read_file, read_string, read_dict. readfp has been PendingDeprecated
- documentation updates
- option validation is now optional with the use of the `strict=` argument in the parser's __init__
- new unit tests introduced to check for the new behaviour

FTR, some people questioned the purpose of read_dict(). Let me summarize this very briefly here:

- the API is using dictionaries similar to those in defaults= but has one level of depth more (sections)
- initializing a parser with a dictionary produces syntax that is more natural in code
- having a single method implementing reading a dictionary, with unit tests, support for proper duplicate handling, etc., frees users from writing their own
- we need that anyway for the upcoming mapping protocol access discussed in #5412
- more detailed use cases in msg112429

Rietveld review link:

I agree that the existing defaults={...} should never have been added to the stdlib. It made sense in the originating application, but should have been implemented differently to keep application-specific behavior out of what eventually was added to the stdlib. Will think further about the rest of this when I'm on my own computer and can read & play with the patch in a more usable environment.

Patch updated after review by Ezio Melotti and Éric Araujo. Thanks guys. (Apparently I don't have the right permissions on Rietveld.)
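The read_dict() templating use case described earlier in this issue maps directly onto the API that shipped: defaults for every section in one place, overridden only where the user's file says so. A sketch with made-up section names in the spirit of the examples above:

```python
import configparser

# Provide per-section defaults in one place, in code ...
defaults = {
    "name-server":      {"port": "53", "protocol": "udp", "verbose": "no"},
    "workflow-manager": {"port": "80", "protocol": "tcp", "verbose": "no"},
}

parser = configparser.ConfigParser()
parser.read_dict(defaults)

# ... then let a user-supplied config override only what it mentions:
parser.read_string("""
[name-server]
verbose = yes
""")

print(parser["name-server"]["verbose"])    # yes  (overridden by the file)
print(parser["name-server"]["port"])       # 53   (still the coded default)
print(parser["workflow-manager"]["port"])  # 80   (section untouched by the file)
```

This is exactly why duplicate detection is scoped to a single read: values may be freely updated across successive read_dict()/read_string() calls, while duplicates within one source are still flagged.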
Most of these are in docstrings, so just re-wrapping should be sufficient for most. - Changing the test structure to avoid self.cf may have been convenient, but is unrelated to the actual functionality changes. In the future such refactorings should be performed in separate patches. (Ordering dependencies are fine, so long as they're noted in the relevant issues.) - DuplicateOptionError is missing from __all__. - Changing the constructor to use keyword-only arguments carries some backward-compatibility risk. That can be avvoided by removing that change and adding strict=False at the end of the parameter list. It's unlikely that this is a significant risk, since these parameters generally lend themselves to keyword usage. I think this should have been several separate patches: - refactoring (the self.cf changes in the tests) - addition of the DuplicateOptionError - the read_* methods (including the readfp deprecation) - the new "strict" option Don't change that at this point, but please consider smaller chunks in the future. Updated patch after review by Fred Drake. Thanks, it was terrific! Status: > Docstrings should be written in the standard PEP-8 way (single line > summary + additional explanation as needed following a blank line). Corrected where applicable. Is it OK if the one-sentence summary is occasionally longer than one line? Check out DuplicateSectionError, could it have a summary as complete as this that would fit a single line? On a similar note, an inconsistency of configparser.py is that sometimes a blank line is placed after the docstring and sometimes there is none. How would you want this to look vi in the end? > read_sting and read_dict should still take a `filename` argument for > use in messages, with <string> and something like <data in ...> (with > the caller's __file__ being filled in if not provided). Are you sure about that? `read_string` might have some remote use for an argument like that, but I'd call it source= anyway. 
As for `read_dict`, the implementation does not even use _read(), so I don't know where this potential `filename=` could be used.

> Indentation in the last read_dict of
> test.test_cfgparser.BasicTestCase.test_basic_from_dict is
> inconsistent with the previous read_dict in the same test.

Updated. All in all, some major unit test lipsticking should be done as a separate patch.

> Lines over 79 characters should be shortened. Most of these are in
> docstrings, so just re-wrapping should be sufficient for most.

Corrected; I even made my Vim show me these kinds of formatting problems. I also corrected a couple of these which were there before the change.

> Changing the test structure to avoid self.cf may have been convenient,
> but is unrelated to the actual functionality changes. In the future
> such refactorings should be performed in separate patches. (Ordering
> dependencies are fine, so long as they're noted in the relevant
> issues.)

Good point, thanks.

> DuplicateOptionError is missing from __all__.

Corrected.

> Changing the constructor to use keyword-only arguments carries some
> backward-compatibility risk. That can be avoided by removing that
> change and adding strict=False at the end of the parameter list.

All of these arguments are new in trunk, so there is no backwards compatibility here to think about. The arguments that are not new (defaults and dict_type) are placed before the asterisk.

> Don't change that at this point, but please consider smaller chunks in
> the future.

I will, thanks.

Ah, forgot to remind you that I don't have commit privileges yet.

>> Docstrings should be written in the standard PEP 8 way (single-line
>> summary + additional explanation as needed, following a blank line).

> Corrected where applicable. Is it OK if the one-sentence summary is
> occasionally longer than one line?

It's a one-line summary, not one sentence. PEP 257 has all the details you're looking for, and more.
- Summary lines in docstrings are one line, as Éric points out. They're summaries, so need not be complete. Use elaboration text as needed, and omit anything that's not relevant in context. An alternate wording to consider:

      """Raised by strict parsers for options repeated in an input source."""

  In particular, there's no mention of what the kinds of sources are, since that's not relevant for the exception itself.
- I like the blank line before the ending """, others don't. More important is consistency within the module, so feel free to coerce it either way.
- Perhaps read_* should all call the extra argument source= instead of filename=. ``readfp`` should take `filename` and pass it as `source` for compatibility. This still makes sense for ``read_file`` since the "file" part of the name refers to the kind of object that's passed in, not that the file represents a simple file on the filesystem.
- Since the Duplicate*Error exceptions can now be raised when "reading" (_read itself is an implementation detail, so that's irrelevant), they should grow filename, lineno, line constructor arguments. These should be filled in as much as possible when those exceptions are raised.
- Adding a ``source`` attribute to exceptions that have a ``filename`` attribute is reasonable; they should have the same value. (One should be a property that mirrors the other.) The ``filename`` parameter and attribute name must remain for compatibility.
- There's at least one way to make vim show the too-long text with a violent red background. Appropriate for code, sucks when reading logfiles. But I'm not a vim user. Whatever tool makes it show up for you is fine.
- The constructors in Python 3.2 should be compatible with those from Python 2.7 (IMO). They're currently not: `allow_no_value` should be allowed positionally and come right after `dict_type`. (This problem existed before your patch; it should be corrected first.)
- In the docs, "Parse configuration from a dictionary" should probably be "Load configuration from a dictionary"; most of the parsing effort is skipped in this case, so this reads a little oddly.
- Indentation in the DuplicateOptionError constructor is a bit weird in the superclass constructor call. Something like this would be better:

      def __init__(self, section, option):
          Error.__init__(
              self,
              "Option %r in section %r already exists" % (option, section))
          self.section = section
          self.option = option
          self.args = (section, option)

Patch updated. All docstrings now have a one-line summary. All multiline docstrings now have a newline character before the closing """. No method docstrings now include any additional newlines between them and the code. Most of them were okay, a couple were edited for consistency.

All read_* methods now have a source= argument. read_file defaults to <???>, read_string to <string>, read_dict to <dict>. Didn't provide any additional introspection because it's not trivial and I want this patch to be committed at last. This might be an additional advantage because a generic <string> or <dict> name may motivate programmers to specify a more context-specific source name of their own.

As for Duplicate*Error, I've added source= and lineno= to both. Didn't add line= because it's useless, the error is very specific about what is wrong. Reading from a dictionary does not pass the lineno for obvious reasons. Reading from files and strings does. The `filename' attribute and __init__ argument in ParsingError were PendingDeprecated and a `source' attribute was introduced for consistency.

`allow_no_value` was moved to the 3rd place in the argument list for backwards compatibility with Python 2.7. I didn't notice your change made to Python 2.7 as well; all other new arguments were added by me. This ensures there's no backwards compatibility involved in their case. That's why I left all of the new arguments as keyword-only.

Documentation and unit tests updated.
BTW, if `allow_no_value` made it to 2.7, this means it has a bug in SafeConfigParser.set. Try to set a valueless option; line 694 will raise an exception. Should I provide you with a separate patch only to fix this for 2.7.1?

PS. I made Vim show me too-long lines with a delicate red background. Way better than the violent alternative ;) Plus, I only set it for the Python filetype.

Patch committed on py3k branch in r83889.
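For reference, the API hashed out in this thread is what eventually shipped in the standard library's configparser (Python 3.2 and later). A minimal sketch of the strict/source behavior discussed above:

```python
import configparser

# A strict parser rejects duplicate options within a section.
strict = configparser.ConfigParser(strict=True)
try:
    strict.read_string("[sec]\nopt = 1\nopt = 2\n", source="<example>")
except configparser.DuplicateOptionError as exc:
    # The exception carries the source= and lineno= discussed above.
    print(exc.source, exc.lineno)   # <example> 3

# A non-strict parser keeps the last value, as the old parser did.
lenient = configparser.ConfigParser(strict=False)
lenient.read_string("[sec]\nopt = 1\nopt = 2\n")
print(lenient["sec"]["opt"])        # 2

# read_dict() skips line-based parsing entirely, so no lineno is involved.
lenient.read_dict({"other": {"a": "1"}}, source="<dict>")
```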
https://bugs.python.org/issue9452
The left is the original picture; the right is after filling.

For contour filling, we first need to extract the contours in the image and then fill their interiors with color. I can't tell the difference myself between threshold segmentation and filling; if anyone knows, share it, I will be very grateful! The flood fill algorithm is commonly used in contour filling!

Hole filling works on the binarized image: "white dots" or "black spots" inside the image will affect our calculation of the area inside the contour! Hole filling here aims at a binary image, not a grayscale image!

Steps for implementing imfill in OpenCV (please refer to the figure below when reading the following steps):

* Read the picture.
* Binarize the input image.
* Flood-fill with color starting from pixel (0, 0). Please note that the difference between the outputs of step 2 and step 3 is that in step 3 the background of the image is now white.
* Invert the flood-filled image (that is, black becomes white, white becomes black).
* Use bitwise OR to combine the thresholded image with the inverted flood-filled image and obtain the final foreground mask with holes filled. The image in step 4 has some black areas within the boundary; combined with the step-2 image, the holes are filled. Therefore, we combine the two to get a foreground mask.

The left is a binary inverse image; on the right is a binary image. In short, they are two images with the same background: reverse the color inside the outline, merge the two images again, and the "holes" inside the contour get filled in. The method is clumsy, but it works!

Here's the point: on to the code!

import cv2
import numpy as np

'''
Image description: the image is a binary image; 255 (white) is the target,
0 (black) is the background. The goal is to fill the black holes in the
white target.
'''
imgPath = "H:/image.jpg"
im_in = cv2.imread(imgPath, cv2.IMREAD_GRAYSCALE)

# Copy the im_in image
im_floodfill = im_in.copy()

# Mask used by floodFill; the official requirement is height and width + 2
h, w = im_in.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)

# The seedPoint passed to floodFill has to be on the background
isbreak = False
for i in range(im_floodfill.shape[0]):
    for j in range(im_floodfill.shape[1]):
        if im_floodfill[i][j] == 0:
            # floodFill expects the seed as (x, y), i.e. (column, row)
            seedPoint = (j, i)
            isbreak = True
            break
    if isbreak:
        break

# Obtain im_floodfill
cv2.floodFill(im_floodfill, mask, seedPoint, 255)

# Obtain im_floodfill_inv, the inverse of im_floodfill
im_floodfill_inv = cv2.bitwise_not(im_floodfill)

# Combine im_in and im_floodfill_inv to get the foreground
im_out = im_in | im_floodfill_inv

cv2.imshow('de', im_out)
cv2.waitKey(0)
cv2.destroyAllWindows()

Thank you for your comments!
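To see why OR-ing the original with the inverted flood fill closes the holes, here is a tiny pure-Python sketch of the same three steps (the fill_holes helper and the toy 5x5 image are mine, for illustration only; like the article's seed search, it assumes a background pixel sits at a border position such as (0, 0)):

```python
from collections import deque

def fill_holes(img):
    """Fill enclosed holes in a binary image given as a list of rows of
    0 (background) and 255 (foreground). Mirrors the article's recipe:
    flood-fill the background from a border seed, invert, then OR."""
    h, w = len(img), len(img[0])
    flooded = [row[:] for row in img]
    q = deque([(0, 0)])            # assumes pixel (0, 0) is background
    while q:                       # BFS flood fill of the outer background
        i, j = q.popleft()
        if 0 <= i < h and 0 <= j < w and flooded[i][j] == 0:
            flooded[i][j] = 255
            q.extend(((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)))
    # Anything still 0 in `flooded` is an enclosed hole; invert it and OR.
    return [[img[i][j] | (0 if flooded[i][j] else 255) for j in range(w)]
            for i in range(h)]

# A 5x5 image: a white 3x3 block with a one-pixel hole in the middle.
ring = [[0] * 5 for _ in range(5)]
for i in range(1, 4):
    for j in range(1, 4):
        ring[i][j] = 255
ring[2][2] = 0                      # the hole
filled = fill_holes(ring)
print(filled[2][2])                 # 255: the hole is gone
```

The outer background is reachable from the border, so the flood fill paints it white; only the enclosed hole stays black, and inverting it yields exactly the patch that the final OR adds back.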
https://www.toolsou.com/en/article/210466759
To get week-number from a date

Hi, how do I get the week number from a date on MicroPython? I've tried utime but didn't find it! Can somebody kindly help me? Many thanks.

@kjm But will that always work though? I have a feeling this might break in some edge cases (week 53, new year week not starting on Monday), or is that just in my head? Gijs

@StefanoF You could try time.localtime()[7], which gives you the day of the year, then calculate the week number from that. For example, week_number = int(time.localtime()[7] / 7) should give weeks 0-51.

Sorry, I did not notice that! You could take a look at how they create the week number and copy only that part. Best, Gijs

@Gijs said in To get week-number from a date: import datetime

Hi Gijs, many thanks, but I can't fit that on the LoPy; at 73 KB it's very big...

It is indeed not built in by default. I suggest you look at something like this. You can upload this in the lib folder and include it like import datetime. This will be similar to the datetime Python library.
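The edge cases mentioned above (week 53, a new-year week that does not start on Monday) are real, so the simple yday // 7 formula drifts at year boundaries. Here is a sketch of an ISO-8601 week computation that needs only the fields struct_time already provides (the helper names are mine, not from any library; it assumes time.localtime() uses the usual layout where index 6 is the weekday with 0 = Monday and index 7 is the 1-based day of the year):

```python
def _p(year):
    # Day-of-week offset of 31 December; helper from the ISO week-date rules.
    return (year + year // 4 - year // 100 + year // 400) % 7

def weeks_in_year(year):
    # An ISO year has 53 weeks when it starts or ends on a Thursday.
    return 52 + (_p(year) == 4 or _p(year - 1) == 3)

def iso_week_number(year, yday, iso_wday):
    """ISO-8601 week number from year, 1-based day of year,
    and ISO weekday (1 = Monday .. 7 = Sunday)."""
    week = (yday - iso_wday + 10) // 7
    if week < 1:
        return weeks_in_year(year - 1)   # still the last week of last year
    if week > weeks_in_year(year):
        return 1                         # already week 1 of next year
    return week

# With time.localtime(): t[6] is weekday (0 = Monday), t[7] is day of year,
# so: week = iso_week_number(t[0], t[7], t[6] + 1)
```

For example, 2021-01-01 (a Friday) correctly lands in week 53 of ISO year 2020 rather than week 0.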
https://forum.pycom.io/topic/6390/to-get-week-number-from-a-date
Hi, new to Swift and I am really excited that Apple has open sourced it! I use Macs at home and Linux at work, so now I may actually be able to develop in one language on both platforms (and not use Java). That being said, is there a way to open a file for either reading or writing line by line that is done purely in Swift and is cross-platform? I did go through the Getting Started page and saw that one was able to import Glibc on Linux, which implements the C fopen, etc., but when I tried to import it in the REPL on OS X, I got the error:

repl.swift:1:8: error: no such module 'Glibc'
import Glibc
       ^

swift --version returns:

Chateau-Louise:/ gskluzacek$ swift --version
Apple Swift version 2.1 (swiftlang-700.1.101.6 clang-700.1.76)
Target: x86_64-apple-darwin14.5.0
Chateau-Louise:/ gskluzacek$

Thanks,
-- Greg
https://forums.swift.org/t/pure-swift-cross-platform-way-to-open-read-write-files/290
Building a Web App with Symfony 2: Bootstrapping

Introduction

The Symfony PHP Framework is powerful, scalable and flexible. Yet it is considered by many, especially those new to frameworks, to have a very steep learning curve. This is true to a certain extent. At first glance, Models, Views, Controllers, Entities, Repositories, Routing, Templating, etc., altogether can appear very terrifying and confusing. However, if you have grasped the fundamentals of PHP and HTML, have a basic understanding of modern web site development (in particular, pretty URIs and MVC), and know how to CRUD a database/table, you are not far from developing a fairly good website, be it for your personal usage or business.

I will use my personal book collection site as the starting point. It is up and running, so the project I use here is not a demo but a real running site (hosted at, in pure Chinese). The final source code of the site for this part of the tutorial can be found at the Github repository. Any comments and queries, please feel free to contact me.

Quick setup

Setting up Symfony is fairly easy. My favorite way is to download the Symfony Standard without vendors package. Visit the official download site and make sure you choose the right package. Unzip/untar the archives to your planned web root directory and you are almost there (my project for this series will be located at f:\www\rsywx_test). After unzipping the Symfony package, you will see a directory structure not unlike this one:

Be sure the above directory is correct (you may not see the .hg directory, as this is for version control only). The next step is to download the PHP packaging system called Composer. Most modern PHP frameworks use Composer for package management.
If you have the cURL utility installed, you can issue the following command:

curl -S | php

or if not (but you really should install cURL), punch in the following:

php -r "eval('?>'.file_get_contents(''));"

There will be one new file downloaded, called composer.phar. This file is the entry point for our PHP package management. As the name of the archive suggests, the files we have now contain no libraries (or bundles) to make Symfony really work. To do that, you need to run:

php composer.phar update

The above will install the latest and necessary bundles normally required. If you run into any errors (missing utilities, critical dependencies that cannot be met...), Composer will kindly let you know. The installation procedure should be done within a couple of minutes at most, depending on your connection speed. If you have set up your web server (I am using Apache) correctly, you can visit the site already:

With Symfony, it is common practice to use app_dev.php as the entry page during development. In a production environment, app_dev.php would be omitted. Feel free to navigate around to get a feeling of the framework and familiarize yourself with the toolbars and debug messages it offers. Detailed setup instructions can be found here, in case you need more help.

It is always a good idea to run php composer.phar selfupdate periodically to get the latest composer.phar distribution.

Note: The Symfony package always has a built-in bundle called AcmeDemoBundle. That bundle is a demo app that proves Symfony is up and running, and serves as an example which you can use to build your own app. To remove that bundle from your web site setup, please follow the instructions in this article. After removing, if you visit the site again, you will see a 404 page (No route found for "GET /") as we have not started to create our own application yet.

Bundle, Controllers, Views, Models (Entities)

In short, Symfony relies on bundles (often called modules in other frameworks).
A bundle can be treated as a collection of files – it serves as the container for data retrieving, logic control and presentation of a particular function set or sets in your website. My site has only one bundle and serves to list all of my books, the detail of one book, the articles I wrote after reading a book, my favorite NBA team (Lakers) scores, etc. These functions (or modules, as they're counterintuitively called in Symfony) are all encapsulated in one bundle (trrsywxBundle). A bundle will contain controllers, views and entity files (models). These constitute the foundation of an MVC-structured website. To generate a bundle and start our application development, issue the below command:

php app/console generate:bundle

Before creating the bundle, the console will ask you several questions:

- Bundle namespace: in this example, I am using tr\rsywxBundle. Be sure to have "Bundle" at the end.
- Bundle name: use the suggested name derived from the namespace. In this case, "trrsywxBundle".
- Target directory: use the suggested location ("F:/www/rsywx_test/src" in this case).
- Configuration format: there are 4 options available: PHP, YAML, XML and annotation. Choose the one you prefer and feel comfortable with. If you'd like to follow along, I will use YAML, so type "yml" as your choice.
- Do you want to generate the whole directory structure: not necessary.
- Confirm the generation, kernel update and routing generation.

More detailed instructions about this process can be found here.

Routing

Consider routing as a mapping mechanism from the HTTP request to the module/function that really handles it (processing the request, grabbing data, and rendering the proper response back to the web browser). Like most other frameworks, Symfony has a routing feature to support pretty URIs. This is done by creating routes in routing.yml (and by having the right .htaccess configuration, which is located in the web/ directory of your Symfony setup).
To make the encapsulation stronger, I do recommend you add your own routes in the bundle's routing.yml file (located under path-to-your-site-root/src/tr/rsywxBundle/Resources/config). This comes in handy when you eventually want to port the whole bundle into another site. Please note how the namespace of the bundle is reflected in the directory structure.

A common pitfall when defining routes is that routes have their precedence. The earlier a route appears, the earlier it will be matched. In other words, if the routes are not arranged properly, weird, difficult-to-debug problems will occur. For example, in an earlier version of my site, I had two routes looking like this:

tag_list:
    pattern: /books/{tag}/{page}
    defaults: {_controller: trrsywxBundle:Book:taglist, page:1}
...
reading_list:
    pattern: /books/readings/{page}
    defaults: {_controller: trrsywxBundle:Reading:list, page:1}

See the problem? /books/readings will actually be mapped to the tag_list route, in which the parameter tag will take readings as its value and direct us to a totally different controller. I can't simply switch the order of these two routes (which would solve most of the issues, but crash when someone is really looking for books containing 'readings' as a tag). The only way to get around this is to change the route pattern for the first one:

tag_list:
    pattern: /books/tag/{page}/{tag}
    defaults: {_controller: trrsywxBundle:Book:taglist, page:1}

Derived from the fact that the earlier a route appears, the earlier it is matched, you need to put the most frequently used routes at the beginning of the routing.yml file. This has the added benefit of being faster, since the framework has fewer routes to check before hitting a match.

I am not covering the whole aspect of routing in Symfony here. The official documentation says it all.

Database

In this project, the database used is relatively simple.
A few tables are created to reflect the book information (title, author, publisher, purchase date, location, etc.), my reading thoughts, the Lakers scores, Quote Of The Day, etc. Most of the tables (books, readings, publisher, purchase place, etc.) are related to each other with a One-To-Many relationship. The tags table and books table are linked with Many-To-Many relationships. The rsywx.sql file in the project's misc folder will let you import the database (MariaDB or MySQL) – simply import it via the mysql command line tool, or through a web GUI like PhpMyAdmin.

Though Symfony official documentation suggests databases/tables be created from within the Symfony framework, I strongly suggest using a 3rd party tool. The advantages of using a 3rd party tool include:

- More intuitive and straightforward definition of the database;
- Less typing when defining the data types, relationships, indexes, primary keys, etc.

We can now link up the database by configuring app/config/parameters.yml:

parameters:
    database_driver: pdo_mysql
    database_host: 127.0.0.1
    database_port: null
    database_name: symfony
    database_user: root
    database_password: null

Normally, you will only change the database name, user and password to reflect your own choices. After creating the database/tables, it is easy to import these database structures into Symfony:

php app/console doctrine:mapping:import

And then create the corresponding entities by:

php app/console doctrine:generate:entities tr

(where tr is the namespace of the bundle).

Note: Symfony supports two ORMs for now: Doctrine and Propel. I am using Doctrine in this application. In general, the import will succeed with next to no problems.
However, there are at least two things to notice:

- The tool is not good at mapping a table with a compound primary key (i.e., a primary key with 2 or more fields) in which, at the same time, one of the fields is also a foreign key to another table.
- You can edit the generated entity files (located under path-to-your-site-root/src/tr/rsywxBundle/Entity/). But if you modify the field names, types and other information, be sure to "reflect" those changes back to the database via the php app/console doctrine:schema:update --force command. Don't worry, Symfony is stronger and smarter now. The command will not destroy the data you loaded before.

Conclusion

In this part of the tutorial, we set the project and the Symfony framework up, put the database in place and did a clean-up job (removing the built-in AcmeDemoBundle). Having the framework ready for proper further development is often the most difficult part of starting a project, especially when dealing with a framework you've never dealt with before. So far, I hope the setup is still easy and straightforward enough. Symfony is the most well-documented framework that I have ever come across, so if you still have problems, be sure to visit the Official Documentation and The Cookbook for help, or just get in touch.

Again, the final code of this part of the tutorial is available on Github. Next time, we'll create routes, controllers (to guide the application flow and logic), entities/repositories (to retrieve data), and templates (for presentation) to make the site functional.
https://www.sitepoint.com/building-a-web-app-with-symfony-2-bootstrapping/
Office Open XML Formats: Retrieving Lists of Excel 2007 Worksheets

Summary: Learn how to retrieve lists of worksheets from Excel programmatically.
Applies to: 2007 Microsoft Office System, Microsoft Office Excel 2007, Microsoft Visual Studio 2005
Ken Getz, MCW Consulting, Inc.
March 2007

Code It | Read It | Explore It

Code It

To help you get started, you can download a set of forty code snippets for Visual Studio 2005, each of which demonstrates various techniques for working with the Office 2007 Open XML File Formats. After you install the code snippets, create a sample Excel workbook to test with. Add extra worksheets, and change the names of the sheets, if you like. (See the Read It section for reference.)

Create a new Windows Application project in Visual Studio 2005, open the code editor, right-click, select Insert Snippet, and select Excel: Get sheet info from the list of available snippets for the 2007 Office system. The GetSheetInfo snippet delves programmatically into the various parts and relationships between the parts to retrieve a list of SheetInfo types (note that SheetInfo is a type inserted by the snippet—it includes information about the sheet name and type).

To test the snippet, store your sample workbook somewhere easy to find (for example, C:\Test.xlsx). In a Microsoft Windows application, insert the XLGetSheetInfo snippet, and then call it, modifying the name of the workbook to meet your needs. You see the list of sheets in the Output window.

The snippet code starts with the following block, which defines the SheetInfo type. This code starts by creating a constant that is used to refer to the relationship required by the procedure. Assuming that it found the document part, the code loads the part. After working through the remainder of the code, the procedure returns the list of SheetInfo instances.

public List<SheetInfo> XLGetSheetInfo(string fileName)
{
    // Return a generic list containing info about all the sheets.
    const string documentRelationshipType =
        "http://schemas.openxmlformats.org/officeDocument/" +
        "2006/relationships/officeDocument";

    // Fill this collection with a list of all the sheets.
    List<SheetInfo> sheets = new List<SheetInfo>();

    using (Package xlPackage = Package.Open(fileName,
        FileMode.Open, FileAccess.Read))
    {
        // Get the main document part (workbook.xml).
        foreach (System.IO.Packaging.PackageRelationship relationship in
            xlPackage.GetRelationshipsByType(documentRelationshipType))
        {
            // There should only be one document part in the package.
            Uri documentUri = PackUriHelper.ResolvePartUri(
                new Uri("/", UriKind.Relative), relationship.TargetUri);
            PackagePart documentPart = xlPackage.GetPart(documentUri);

            // Next code block goes here.

            // There's only one document part.
            break;
        }
    }
    return sheets;
}

Next, the code creates an XmlDocument instance to contain the contents of the workbook. It loads the XML content and creates an XmlNamespaceManager instance loaded with the namespace. This data is used later to perform searches. Note that because the namespace the code searches in is the default namespace for the XML content, it must fabricate an abbreviation—this code uses the name default.

// Load the contents of the workbook, which is all you
// need to retrieve the names and types of the sheets:
XmlDocument doc = new XmlDocument();
doc.Load(documentPart.GetStream());
XmlNamespaceManager nsManager = new XmlNamespaceManager(doc.NameTable);
nsManager.AddNamespace("default", doc.DocumentElement.NamespaceURI);

// Next code block goes here.

The final block loops through all the worksheet nodes it finds within the workbook part. For each one, it retrieves both the name and the type attributes (the default type is "worksheet") and stores the information in a new SheetInfo instance. Finally, the code adds the new SheetInfo to the list that the procedure returns.

// Loop through all the nodes, retrieving the information
// about each sheet:
foreach (System.Xml.XmlNode node in doc.SelectNodes(
    "//default:sheets/default:sheet", nsManager))
{
    string sheetName = string.Empty;
    string sheetType = "worksheet";
    sheetName = node.Attributes["name"].Value;
    XmlAttribute typeAttr = node.Attributes["type"];
    if (typeAttr != null)
    {
        sheetType = typeAttr.Value;
    }
    sheets.Add(new SheetInfo(sheetName, sheetType));
}

Read It

It's important to understand the file structure of a simple Excel document, so that you can find the data you need—in this case, you want the list of all the sheets in the workbook. To do that, create an Excel workbook with several sheets. Change the names of a few sheets, as well. I named my document Test.xlsx, and it contains four sheets, as shown in Figure 1.

Figure 1. The sample document contains four sheets.

In _rels\.rels, you use relations between parts to find specific parts. Open xl\workbook.xml, shown in Figure 3. The highlighted element contains a list of all the sheets—it's from here that you gather all the information you need.

Figure 3. In xl\workbook.xml, the document part contains a list of worksheets.

Close the tool you used to investigate the workbook, and rename the file with a .XLSX extension.
http://msdn.microsoft.com/en-us/library/bb332456(v=office.12).aspx
C# 4.0 IN A NUTSHELL C# 4.0 IN A NUTSHELL Fourth Edition Joseph Albahari and Ben Albahari Beijing • Cambridge • Farnham • Köln • Sebastopol • Taipei • Tokyo C# 4.0 in a Nutshell, Fourth Edition by Joseph Albahari and Ben Albahari Copyright © 2010 Joseph Albahari and Ben Albah infor- mation, contact our corporate/institutional sales department: (800) 998-9938 or corporate@oreilly.com. Editor:Laurel R.T. Ruma Production Editor:Loranah Dimant Copyeditor:Audrey Doyle Proofreader:Colleen Toporek Indexer:John Bickelhaupt Cover Designer:Karen Montgomery Interior Designer:David Futato Illustrator:Robert Romano Printing History: March 2002:First Edition. August 2003: Second Edition. September 2007:Third Edition. January 2010:Fourth Edition. Nutshell Handbook, the Nutshell Handbook logo, and the O’Reilly logo are registered trade- marks of O’Reilly Media, Inc. C# 4.0 in a Nutshell, the image of a Numidian-80095-6 [M] 1263924338 Table of Contents Preface .. .......................................................... xiii 1.Introducing C# and the .NET Framework ............................ 1 Object Orientation 1 Type Safety 2 Memory Management 2 Platform Support 3 C#’s Relationship with the CLR 3 The CLR and .NET Framework 3 What’s New in C# 4.0 5 2.C# Language Basics .............................................. 7 A First C# Program 7 Syntax 10 Type Basics 12 Numeric Types 21 Boolean Type and Operators 28 Strings and Characters 30 Arrays 32 Variables and Parameters 36 Expressions and Operators 44 Statements 48 Namespaces 56 3.Creating Types in C# ............................................ 63 Classes 63 Inheritance 76 The object Type 85 v Structs 89 Access Modifiers 90 Interfaces 92 Enums 97 Nested Types 100 Generics 101 4.Advanced C# ................................................. 
115 Delegates 115 Events 124 Lambda Expressions 130 Anonymous Methods 134 try Statements and Exceptions 134 Enumeration and Iterators 143 Nullable Types 148 Operator Overloading 153 Extension Methods 157 Anonymous Types 160 Dynamic Binding 161 Attributes 169 Unsafe Code and Pointers 170 Preprocessor Directives 174 XML Documentation 176 5. Framework Overview .......................................... 181 The CLR and Core Framework 183 Applied Technologies 187 6.Framework Fundamentals ...................................... 193 String and Text Handling 193 Dates and Times 206 Dates and Time Zones 213 Formatting and Parsing 219 Standard Format Strings and Parsing Flags 225 Other Conversion Mechanisms 232 Globalization 235 Working with Numbers 237 Enums 240 Tuples 244 The Guid Struct 245 Equality Comparison 245 Order Comparison 255 Utility Classes 258 vi | Table of Contents 7.Collections ................................................... 263 Enumeration 263 The ICollection and IList Interfaces 271 The Array Class 273 Lists, Queues, Stacks, and Sets 282 Dictionaries 292 Customizable Collections and Proxies 298 Plugging in Equality and Order 304 8.LINQ Queries ................................................. 311 Getting Started 311 Fluent Syntax 314 Query Expressions 320 Deferred Execution 324 Subqueries 330 Composition Strategies 333 Projection Strategies 337 Interpreted Queries 339 LINQ to SQL and Entity Framework 346 Building Query Expressions 361 9. LINQ Operators ............................................... 367 Overview 369 Filtering 371 Projecting 375 Joining 387 Ordering 394 Grouping 397 Set Operators 400 The Zip Operator 401 Conversion Methods 402 Element Operators 404 Aggregation Methods 406 Quantifiers 411 Generation Methods 412 10.LINQ to XML .................................................. 
Architectural Overview; X-DOM Overview; Instantiating an X-DOM; Navigating and Querying; Updating an X-DOM; Working with Values; Documents and Declarations; Names and Namespaces; Annotations; Projecting into an X-DOM

11. Other XML Technologies: XmlReader; XmlWriter; Patterns for Using XmlReader/XmlWriter; XmlDocument; XPath; XSD and Schema Validation; XSLT

12. Disposal and Garbage Collection: IDisposable, Dispose, and Close; Automatic Garbage Collection; Finalizers; How the Garbage Collector Works; Managed Memory Leaks; Weak References

13. Diagnostics and Code Contracts: Conditional Compilation; Debug and Trace Classes; Code Contracts Overview; Preconditions; Postconditions; Assertions and Object Invariants; Contracts on Interfaces and Abstract Methods; Dealing with Contract Failure; Selectively Enforcing Contracts; Static Contract Checking; Debugger Integration; Processes and Process Threads; StackTrace and StackFrame; Windows Event Logs; Performance Counters; The Stopwatch Class

14. Streams and I/O: Stream Architecture; Using Streams; Stream Adapters; File and Directory Operations; Memory-Mapped Files; Compression; Isolated Storage

15. Networking: Network Architecture; Addresses and Ports; URIs; Request/Response Architecture; HTTP-Specific Support; Writing an HTTP Server; Using FTP; Using DNS; Sending Mail with SmtpClient; Using TCP; Receiving POP3 Mail with TCP

16. Serialization: Serialization Concepts; The Data Contract Serializer; Data Contracts and Collections; Extending Data Contracts; The Binary Serializer; Binary Serialization Attributes; Binary Serialization with ISerializable; XML Serialization

17. Assemblies: What's in an Assembly?; Strong Names and Assembly Signing; Assembly Names; Authenticode Signing; The Global Assembly Cache; Resources and Satellite Assemblies; Resolving and Loading Assemblies; Deploying Assemblies Outside the Base Folder; Packing a Single-File Executable; Working with Unreferenced Assemblies

18. Reflection and Metadata: Reflecting and Activating Types; Reflecting and Invoking Members; Reflecting Assemblies; Working with Attributes; Dynamic Code Generation; Emitting Assemblies and Types; Emitting Type Members; Emitting Generic Methods and Types; Awkward Emission Targets; Parsing IL

19. Dynamic Programming: The Dynamic Language Runtime; Numeric Type Unification; Dynamic Member Overload Resolution; Implementing Dynamic Objects; Interoperating with Dynamic Languages

20. Security: Permissions; Code Access Security (CAS); Allowing Partially Trusted Callers; The Transparency Model in CLR 4.0; Sandboxing Another Assembly; Operating System Security; Identity and Role Security; Cryptography Overview; Windows Data Protection; Hashing; Symmetric Encryption; Public Key Encryption and Signing

21. Threading: Threading's Uses and Misuses; Getting Started; Thread Pooling; Synchronization; Locking; Thread Safety; Nonblocking Synchronization; Signaling with Event Wait Handles; Signaling with Wait and Pulse; The Barrier Class; The Event-Based Asynchronous Pattern; BackgroundWorker; Interrupt and Abort; Safe Cancellation; Lazy Initialization; Thread-Local Storage; Reader/Writer Locks; Timers

22. Parallel Programming: Why PFX?; PLINQ; The Parallel Class; Task Parallelism; Working with AggregateException; Concurrent Collections; SpinLock and SpinWait

23. Asynchronous Methods: Why Asynchronous Methods Exist; Asynchronous Method Signatures; Asynchronous Methods Versus Asynchronous Delegates; Using Asynchronous Methods; Asynchronous Methods and Tasks; Writing Asynchronous Methods; Fake Asynchronous Methods; Alternatives to Asynchronous Methods

24. Application Domains: Application Domain Architecture; Creating and Destroying Application Domains; Using Multiple Application Domains; Using DoCallBack; Monitoring Application Domains; Domains and Threads; Sharing Data Between Domains

25. Native and COM Interoperability: Calling into Native DLLs; Type Marshaling; Callbacks from Unmanaged Code; Simulating a C Union; Shared Memory; Mapping a Struct to Unmanaged Memory; COM Interoperability; Calling a COM Component from C#; Embedding Interop Types; Primary Interop Assemblies; Exposing C# Objects to COM

26. Regular Expressions:
Regular Expression Basics; Quantifiers; Zero-Width Assertions; Groups; Replacing and Splitting Text; Cookbook Regular Expressions; Regular Expressions Language Reference

Appendix: C# Keywords

Index

Preface

C# 4.0 further enhances Microsoft's flagship programming language with much-requested features—including support for dynamic programming, type parameter variance, and optional and named parameters. At the same time, the CLR and .NET Framework have grown to include a rich set of features for parallel programming, code contracts, and a new code security model. The price of this growth is that there's more than ever to learn. Although tools such as Microsoft's IntelliSense—and online references—are excellent in helping you on the job, they presume an existing map of conceptual knowledge. This book provides exactly that map of knowledge in a concise and unified style—free of clutter and long introductions.

Like the previous edition, C# 4.0 in a Nutshell is organized entirely around concepts and use cases, making it friendly both to sequential reading and to random browsing. It also plumbs significant depths while assuming only basic background knowledge—making it accessible to intermediate as well as advanced readers.

This book covers C#, the CLR, and the core Framework assemblies. We've chosen this focus to allow space for difficult topics such as concurrency, security, and application domains—without compromising depth or readability. Features new to C# 4.0 and the associated Framework are flagged so that you can also use this book as a C# 3.0 reference.

Intended Audience

This book targets intermediate to advanced audiences. No prior knowledge of C# is required, but some general programming experience is necessary.
For the beginner, this book complements, rather than replaces, a tutorial-style introduction to programming.

If you're already familiar with C# 3.0, you'll find more than 100 pages dedicated to the new features of C# 4.0 and Framework 4.0. In addition, many chapters have been enhanced from the previous edition, most notably the chapters on the C# language, .NET Framework fundamentals, memory management, threading, and COM interoperability. We've also updated the LINQ chapters to make the examples friendly to both LINQ to SQL and Entity Framework programmers.

This book is an ideal companion to any of the vast array of books that focus on an applied technology such as WPF, ASP.NET, or WCF. The areas of the language and .NET Framework that such books omit, C# 4.0 in a Nutshell covers in detail—and vice versa.

If you're looking for a book that skims every .NET Framework technology, this is not for you. This book is also unsuitable if you want a replacement for IntelliSense (i.e., the alphabetical listings of types and type members that appeared in the C# 1.1 edition of this book).

How This Book Is Organized

The first three chapters after the introduction concentrate purely on C#, starting with the basics of syntax, types, and variables, and finishing with advanced topics such as unsafe code and preprocessor directives. If you're new to the language, you should read these chapters sequentially.

The remaining chapters cover the core .NET Framework, including such topics as LINQ, XML, collections, I/O and networking, memory management, reflection, dynamic programming, attributes, security, concurrency, application domains, and native interoperability. You can read most of these chapters randomly, except for Chapters 6 and 7, which lay a foundation for subsequent topics. The three chapters on LINQ are also best read in sequence.

What You Need to Use This Book

The examples in this book require a C# 4.0 compiler and Microsoft .NET Framework 4.0.
You will also find Microsoft's .NET documentation useful to look up individual types and members. The easiest way to get all three—along with an integrated development environment—is to install Microsoft Visual Studio 2010. Any edition is suitable for what's taught in this book, including Visual Studio Express (a free download). Visual Studio also includes an express edition of SQL Server, required by some of the examples.

The samples include everything in those chapters, from simple expressions to complete programs, and are fully editable, allowing you to learn interactively. You can download LINQPad from its website; to obtain the additional samples, click "Download more samples" in the Samples tab at the bottom left. You can then advance through each sample with a single click.

Conventions Used in This Book

The book uses basic UML notation to illustrate relationships between types, as shown in Figure P-1. A slanted rectangle means an abstract class; a circle means an interface. A line with a hollow triangle denotes inheritance, with the triangle pointing to the base type. A line with an arrow denotes a one-way association; a line without an arrow denotes a two-way association.

Figure P-1. Sample diagram

The following typographical conventions are used in this book:

Italic
    Indicates new terms, URIs, filenames, and directories
Constant width
    Indicates C# code, keywords and identifiers, and program output
Constant width bold
    Shows a highlighted section of code
Constant width italic
    Shows text that should be replaced with user-supplied values

An attribution usually includes the title, author, publisher, and ISBN. For example: "C# 4.0 in a Nutshell by Joseph Albahari and Ben Albahari.
Copyright 2010 Joseph Albahari and Ben Albahari, 978-0-596-80095-6."

Code listings and additional resources are provided on the book's website.

To comment or ask technical questions about this book, send email to the following, quoting the book's ISBN (9780596800956): bookquestions@oreilly.com

For more information about our books, conferences, Resource Centers, and the O'Reilly Network, see our website.

Acknowledgments

Joseph Albahari

First, I want to thank my brother and coauthor, Ben Albahari, for initially persuading me to take on what has become a highly successful project. I particularly enjoy working with Ben in probing difficult topics: he shares my willingness to question conventional wisdom, and the tenacity to pull things apart until it becomes clear how they really work.

I am most indebted to the superb technical reviewers. Starting with the reviewers at Microsoft, the extensive input from Stephen Toub (Parallel Programming team) and Chris Burrows (C# Compiler team) significantly enhanced the chapters on concurrency, dynamic programming, and the C# language. From the CLR team, I received invaluable input on security and memory management from Shawn Farkas, Brian Grunkemeyer, Maoni Stephens, and David DeWinter. And on Code Contracts, the feedback from Brian Grunkemeyer, Mike Barnett, and Melitta Andersen raised this chapter to the next quality bar. Thank you, people—both for your prompt feedback and for answering all my questions. I really appreciate it!

I have the highest praise for Jon Skeet (author of C# in Depth and Stack Overflow extraordinaire), whose perceptive suggestions enhanced numerous chapters (you work for Google, but we'll forgive you!). I'm similarly grateful for the keen eye of C# MVP Nicholas Paldino, who spotted errors and omissions that others missed. I'd also like to thank C# MVPs Mitch Wheat and Brian Peek, and reviewers of the 3.0 edition upon which this book was based.
This includes the aforementioned Nicholas Paldino, who applied his thoroughness and breadth of knowledge to most chapters of the book, and Krzysztof Cwalina, Matt Warren, Joel Pobar, Glyn Griffiths, Ion Vasilian, Brad Abrams, Sam Gentile, and Adam Nathan.

Finally, I want to thank the O'Reilly team, including my prompt and efficient editor, Laurel Ruma, my publicist, Kathryn Barrett, my copyeditor, Audrey Doyle, and members of my family, Miri and Sonia.

Ben Albahari

Because my brother wrote his acknowledgments first, you can infer most of what I want to say :) We've actually both been programming since we were kids (we shared an Apple IIe; he was writing his own operating system while I was writing Hangman), so it's cool that we're now writing books together. I hope the enriching experience we had writing the book will translate into an enriching experience for you reading the book.

I'd also like to thank my former colleagues at Microsoft. Many smart people work there, not just in terms of intellect but also in a broader emotional sense, and I miss working with them. In particular, I learned a lot from Brian Beckman, to whom I am indebted.

1. Introducing C# and the .NET Framework

C# is a general-purpose, type-safe, object-oriented programming language for the Microsoft .NET Framework.

Object Orientation

C# is a rich implementation of the object-orientation paradigm, whose features include a unified type system, classes and interfaces, and properties, methods, and events.

Type Safety

C# is primarily a type-safe language, meaning that types can interact only through protocols they define, thereby ensuring each type's internal consistency. For instance, C# prevents you from interacting with a string type as though it were an integer type.

C# supports static typing and is a predominantly statically typed language, meaning most type checking is performed at compile time. C# is called a strongly typed language because its type rules (whether enforced statically or dynamically) are very strict. For instance, you cannot call a function that's designed to accept an integer with a floating-point number, unless you first explicitly convert the floating-point number to an integer. This helps prevent mistakes.
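The rule about explicit conversion can be seen in a minimal sketch of our own (the method name Square is an illustrative choice; the cast syntax is standard C#):

```csharp
using System;

class Test
{
    static int Square (int x) { return x * x; }   // accepts an integer only

    static void Main()
    {
        double d = 3.9;
        // Square (d);               // would not compile: no implicit double-to-int conversion
        int i = (int) d;             // explicit conversion (truncates toward zero, giving 3)
        Console.WriteLine (Square (i));  // 9
    }
}
```

Requiring the cast makes the (lossy) truncation visible in the source, which is exactly the kind of mistake strict type rules are designed to surface.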
Strong typing also plays a role in enabling C# code to run in a sandbox—an environment where every aspect of security is controlled by the host. In a sandbox, it is important that you cannot arbitrarily corrupt the state of an object by bypassing its type rules.

Memory Management

C# relies on the runtime to perform automatic memory management. The CLR has a garbage collector that executes as part of your program, reclaiming memory for objects that are no longer referenced. This frees programmers from explicitly deallocating the memory for an object, eliminating the problem of incorrect pointers encountered in languages such as C++.

C# does not eliminate pointers: it merely makes them unnecessary for most programming tasks. For performance-critical hotspots and interoperability, pointers may be used, but they are permitted only in blocks that are explicitly marked unsafe.

Platform Support

• C# code may run on the server and dish up DHTML that can run on any platform. This is precisely the case for ASP.NET.
• C# code may run on a runtime other than the Microsoft Common Language Runtime. The most notable example is the Mono project, which has its own C# compiler and runtime, running on Linux, Solaris, Mac OS X, and Windows.
• C# code may run on a host that supports Microsoft Silverlight (supported for Windows and Mac OS X). This is a new technology that is analogous to Adobe's Flash Player.

C#'s Relationship with the CLR

C# depends on a runtime equipped with a host of features such as automatic memory management and exception handling.

The CLR and .NET Framework

The .NET Framework consists of a runtime called the Common Language Runtime (CLR) and a vast set of libraries. The libraries consist of core libraries (which this book is concerned with) and applied libraries, which depend on the core libraries. Figure 1-1.
This depicts the topics covered in this book and the chapters in which they are found. The names of specialized frameworks and class libraries beyond the scope of this book are grayed out and displayed outside the boundaries of The Nutshell.

What's New in C# 4.0

The new features in C# 4.0 are:

• Dynamic binding
• Type variance with generic interfaces and delegates
• Optional parameters
• Named arguments
• COM interoperability improvements

The COM interoperability improvements are particularly useful in conjunction with optional parameters, simplifying code such as the C# 3.0 code required to open a Word document.

2. C# Language Basics

In this chapter, we introduce the basics of the C# language. Each statement is terminated by a semicolon:

int x = 12 * 30;
Console.WriteLine (x);

A method comprises a series of statements. We defined a single method named Main:

static void Main() { ... }

Writing higher-level functions that call upon lower-level functions simplifies a program. A method can receive input data from the caller by specifying parameters, and output data back to the caller by specifying a return type. We defined a method called FeetToInches that has a parameter for inputting feet, and a return type for outputting inches:

static int FeetToInches (int feet) {...}

The literals 30 and 100 are the arguments passed to the FeetToInches method. The Main method in our example has empty parentheses because it has no parameters, and is void because it doesn't return any value to its caller:

static void Main()

C# recognizes a method called Main as signaling the default entry point of execution. The Main method may optionally return an integer (rather than void) in order to return a value to the execution environment. The Main method can also optionally accept an array of strings as a parameter (that will be populated with any arguments passed to the executable).
For example:

static int Main (string[] args) {...}

An array (such as string[]) represents a fixed number of elements of a particular type. Arrays are specified by placing square brackets after the element type and are described in "Arrays" on page 32.

Methods are one of several kinds of functions in C#. Another kind of function we used was the * operator, used to perform multiplication. There are also constructors, properties, events, indexers, and finalizers.

In our example, the two methods are grouped into a class. A class groups function members and data members to form an object-oriented building block. The Console class groups members that handle command-line input/output functionality, such as the WriteLine method. Our Test class groups two methods—the Main method and the FeetToInches method. A class is a kind of type, which we will examine in "Type Basics" on page 12.

At the outermost level of a program, types are organized into namespaces. The using directive was used to make the System namespace available to our application, to use the Console class. We could define all our classes within the TestPrograms namespace, as follows:

using System;
namespace TestPrograms
{
  class Test {...}
  class Test2 {...}
}

The .NET Framework is organized into nested namespaces. For example, this is the namespace that contains types for handling text:

using System.Text;

The using directive is there for convenience; you can also refer to a type by its fully qualified name, which is the type name prefixed with its namespace, such as System.Text.StringBuilder.

Compilation

The C# compiler compiles source code, specified as a set of files with the .cs extension, into an assembly. An assembly is the unit of packaging and deployment in .NET. An assembly can be either an application or a library. A normal console or Windows application has a Main method and is an .exe file.
A library is a .dll and is equivalent to an .exe without an entry point. Its purpose is to be called upon (referenced) by an application or by other libraries. The .NET Framework is a set of libraries.

The name of the C# compiler is csc.exe. You can either use an IDE such as Visual Studio to compile, or call csc manually from the command line. To compile manually, first save a program to a file such as MyFirstProgram.cs, and then go to the command line and invoke csc (located under %SystemRoot%\Microsoft.NET\Framework\<framework-version>, where %SystemRoot% is your Windows directory) as follows:

csc MyFirstProgram.cs

This produces an application named MyFirstProgram.exe. To produce a library (.dll), do the following:

csc /target:library MyFirstProgram.cs

We explain assemblies in detail in Chapter 17.

Syntax

C# syntax is based on C and C++ syntax. In this section, we will describe C#'s elements of syntax, using the following program:

using System;

class Test
{
  static void Main()
  {
    int x = 12 * 30;
    Console.WriteLine (x);
  }
}

Identifiers are the names that programmers choose for their classes, methods, variables, and so on. By convention, parameters, local variables, and private fields should be in camel case (e.g., myVariable), and all other identifiers should be in Pascal case (e.g., MyMethod).

Keywords are names reserved by the compiler that you can't use as identifiers. These are the keywords in our example program:

using class static void int

Here is the full list of C# keywords:

abstract as base bool break byte case catch char checked class const continue decimal default delegate do double else enum event explicit extern false finally fixed float for foreach goto if implicit in int interface internal is lock long namespace new null object operator out override params private protected public readonly ref return sbyte sealed short sizeof stackalloc static string struct switch this throw true try typeof uint ulong unchecked unsafe ushort using virtual void volatile while

Avoiding conflicts

If you really want to use an identifier that clashes with a keyword, you can do so by qualifying it with the @ prefix. For instance:

class class {...} // Illegal
class @class {...} // Legal

The @ symbol doesn't form part of the identifier itself. So @myVariable is the same as myVariable. The @ prefix can be useful when consuming libraries written in other .NET languages that have different keywords.
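For reference, the complete example program that this chapter keeps referring to (its full listing is elided from this excerpt) can be reconstructed along these lines—a sketch consistent with the fragments quoted in the chapter, using Main, FeetToInches, and the literals 30 and 100:

```csharp
using System;

class Test
{
    static void Main()
    {
        Console.WriteLine (FeetToInches (30));   // 360
        Console.WriteLine (FeetToInches (100));  // 1200
    }

    static int FeetToInches (int feet)
    {
        int inches = feet * 12;  // 12 inches per foot
        return inches;
    }
}
```

All of the identifiers, keywords, literals, punctuators, and operators discussed in the surrounding sections can be located in this short listing.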
Contextual keywords

Some keywords are contextual, meaning that they can also be used as identifiers—without an @ symbol. These are:

add ascending by descending dynamic equals from get global group in into join let on orderby partial remove select set value var where yield

With contextual keywords, ambiguity cannot arise within the context in which they are used.

Literals, Punctuators, and Operators

Literals are primitive pieces of data statically embedded into the program. The literals we used in our example program are 12 and 30.

Punctuators help demarcate the structure of the program. These are the punctuators we used in our example program: ; { }

The semicolon is used to terminate a statement. This means that statements can wrap multiple lines:

Console.WriteLine (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10);

The braces are used to group multiple statements into a statement block.

An operator transforms and combines expressions. Most operators in C# are denoted with a symbol, such as the multiplication operator, *. We will discuss operators in more detail later in the chapter. These are the operators we used in our example program: . () * =

The period denotes a member of something (or a decimal point with numeric literals). The parentheses are used when declaring or calling a method; empty parentheses are used when the method accepts no arguments. The equals sign is used for assignment (the double equals sign, ==, is used for equality comparison, as we'll see later).

C# offers two different styles of source-code documentation: single-line comments and multiline comments. A single-line comment begins with a double forward slash and continues until the end of the line. For example:

int x = 3; // Comment about assigning 3 to x

A multiline comment begins with /* and ends with */. For example:

int x = 3; /* This is a comment that
spans two lines */

Comments can be embedded in XML documentation tags, which we explain in "XML Documentation" on page 176 in Chapter 4.

Type Basics

A type defines the blueprint for a value. A value is a storage location denoted by a variable or a constant. A variable represents a value that can change, whereas a
A value is a storage location denoted by a variable or a constant. A variable represents a value that can change, whereas a 12 | Chapter 2: C# Language Basics constant represents an invariant (we will visit constants later in the chapter). We created a local variable named x in our first program: static void Main() { int x = 12 * 30; Console2 31 to 2 31 −1. We can perform functions such as arithmetic with instances of the int type as follows: int x = 12 * 30; Another predefined C# type is string. The string type represents a sequence of characters, such as “.NET” or “”. We can work with strings by calling functions on them as follows: string message = "Hello world"; string upperMessage = message.ToUpper(); Console.WriteLine (upperMessage); // HELLO WORLD int x = 2010; message = message + x.ToString(); Console.WriteLine (message); // Hello world2010 The predefined bool type has exactly two possible values: true and false. The bool type is commonly used to conditionally branch execution flow based with an if statement. For example: bool simpleVar = false; if (simpleVar) Console.WriteLine ("This will not print"); int x = 5000; bool lessThanAMile = x < 5280; if (lessThanAMile) Console.WriteLine ("This will print"); In C#, predefined types (also referred to as built-in types) are recognized with a C# keyword. The System namespace in the .NET Framework contains many important types that are not predefined by C# (e.g., DateTime). Type Basics | 13 C# Basics Custom Type Examples Just as we can build complex functions from simple functions, we can build complex types from primitive types. 
In this example, we will define a custom type named UnitConverter—a class that serves as a blueprint for unit conversions:

using System;

public class UnitConverter
{
  int ratio;                                                    // Field
  public UnitConverter (int unitRatio) { ratio = unitRatio; }   // Constructor
  public int Convert (int unit)        { return unit * ratio; } // Method
}

class Test
{
  static void Main()
  {
    UnitConverter feetToInchesConverter = new UnitConverter (12);
    UnitConverter milesToFeetConverter  = new UnitConverter (5280);

    Console.WriteLine (feetToInchesConverter.Convert(30));    // 360
    Console.WriteLine (feetToInchesConverter.Convert(100));   // 1200
    Console.WriteLine (feetToInchesConverter.Convert(
                       milesToFeetConverter.Convert(1)));     // 63360
  }
}

Members of a type

A type contains data members and function members. The data member of UnitConverter is the field called ratio. The function members of UnitConverter are the Convert method and the UnitConverter's constructor.

Symmetry of predefined types and custom types

A beautiful aspect of C# is that predefined types and custom types have few differences. The predefined int type serves as a blueprint for integers. It holds data—32 bits—and provides function members that use that data, such as ToString. Similarly, our custom UnitConverter type acts as a blueprint for unit conversions. It holds data—the ratio—and provides function members to use that data.

Constructors and instantiation

Data is created by instantiating a type. Predefined types can be instantiated simply by using a literal. For example, the following line instantiates two integers (12 and 30), which are used to compute a third instance, x:

int x = 12 * 30;

The new operator is needed to create a new instance of a custom type.
We created and declared an instance of the UnitConverter type with this statement:

UnitConverter feetToInchesConverter = new UnitConverter (12);

Immediately after the new operator instantiates an object, the object's constructor is called to perform initialization. A constructor is defined like a method, except that the method name and return type are reduced to the name of the enclosing type:

public class UnitConverter
{
  ...
  public UnitConverter (int unitRatio) { ratio = unitRatio; }
  ...
}

Instance versus static members

The data members and function members that operate on the instance of the type are called instance members. The UnitConverter's Convert method and the int's ToString method are examples of instance members. By default, members are instance members.

Data members and function members that don't operate on the instance of the type, but rather on the type itself, must be marked as static. The Test.Main and Console.WriteLine methods are static methods. The Console class is actually a static class, which means all its members are static. You never actually create instances of a Console—one console is shared across the whole application.
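The point about static classes can be checked directly—a small sketch of our own:

```csharp
using System;

class Test
{
    static void Main()
    {
        Console.WriteLine ("hello");   // call a static member through the type itself

        // Console c = new Console();  // would not compile: Console is a static
                                       // class and cannot be instantiated
    }
}
```

Static classes such as Console exist purely to group static members; the compiler rejects any attempt to instantiate them.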
To contrast instance from static members, in the following code the instance field Name pertains to an instance of a particular Panda, whereas Population pertains to the set of all Panda instances:

public class Panda
{
  public string Name;              // Instance field
  public static int Population;    // Static field

  public Panda (string n)          // Constructor
  {
    Name = n;                      // Assign the instance field
    Population = Population + 1;   // Increment the static Population field
  }
}

The following code creates two instances of the Panda, prints their names, and then prints the total population:

using System;

class Program
{
  static void Main()
  {
    Panda p1 = new Panda ("Pan Dee");
    Panda p2 = new Panda ("Pan Dah");

    Console.WriteLine (p1.Name);           // Pan Dee
    Console.WriteLine (p2.Name);           // Pan Dah
    Console.WriteLine (Panda.Population);  // 2
  }
}

The public keyword

The public keyword exposes members to other classes. In this example, if the Name field in Panda was not public, the Test class could not access it. Marking a member public is how a type communicates: "Here is what I want other types to see—everything else is my own private implementation details." In object-oriented terms, we say that the public members encapsulate the private members of the class.

Conversions

C# can convert between instances of compatible types. A conversion always creates a new value from an existing one. Conversions can be either implicit or explicit: implicit conversions happen automatically, and explicit conversions require a cast.
In the following example, we implicitly cast an int to a long type (which has twice the bitwise capacity of an int) and explicitly cast an int to a short type (which has half the capacity of an int):

int x = 12345;        // int is a 32-bit integer
long y = x;           // Implicit conversion to 64-bit integer
short z = (short)x;   // Explicit conversion to 16-bit integer

Implicit conversions are allowed when both of the following are true:

• The compiler can guarantee they will always succeed.
• No information is lost in conversion.*

Conversely, explicit conversions are required when one of the following is true:

• The compiler cannot guarantee they will always succeed.
• Information may be lost during conversion.

* A minor caveat is that very large long values lose some precision when converted to double.

The numeric conversions that we just saw are built into the language. C# also supports reference conversions and boxing conversions (see Chapter 3) as well as custom conversions (see "Operator Overloading" on page 153 in Chapter 4). The compiler doesn't enforce the aforementioned rules with custom conversions, so it's possible for badly designed types to behave otherwise.

Value Types Versus Reference Types

All C# types fall into the following categories:

• Value types
• Reference types
• Generic type parameters
• Pointer types

In this section, we'll describe value types and reference types. In "Generics" on page 101 in Chapter 3, we'll cover generic type parameters, and in "Unsafe Code and Pointers" on page 170 in Chapter 4, we'll cover pointer types.

Value types comprise most built-in types (specifically, all numeric types, the char type, and the bool type) as well as custom struct and enum types. Reference types comprise all class, array, delegate, and interface types. The fundamental difference between value types and reference types is how they are handled in memory.
Value types

The content of a value type variable or constant is simply a value. For example, the content of the built-in value type, int, is 32 bits of data. You can define a custom value type with the struct keyword (see Figure 2-1):

public struct Point { public int X, Y; }

Figure 2-1. A value type instance in memory

The assignment of a value type instance always copies the instance. For example:

static void Main()
{
  Point p1 = new Point();
  p1.X = 7;

  Point p2 = p1;             // Assignment causes copy

  Console.WriteLine (p1.X);  // 7
  Console.WriteLine (p2.X);  // 7

  p1.X = 9;                  // Change p1.X

  Console.WriteLine (p1.X);  // 9
  Console.WriteLine (p2.X);  // 7
}

Figure 2-2 shows that p1 and p2 have independent storage.

Reference types

A reference type is more complex than a value type, having two parts: an object and the reference to that object. The content of a reference-type variable or constant is a reference to an object that contains the value. Here is the Point type from our previous example rewritten as a class, rather than a struct (shown in Figure 2-3):

public class Point { public int X, Y; }

Figure 2-3. A reference-type instance in memory

Assigning a reference-type variable copies the reference, not the object instance. This allows multiple variables to refer to the same object—something not ordinarily possible with value types. If we repeat the previous example, but with Point now a class, an operation on p1 affects p2:

static void Main()
{
  Point p1 = new Point();
  p1.X = 7;

  Point p2 = p1;             // Copies p1 reference

  Console.WriteLine (p1.X);  // 7
  Console.WriteLine (p2.X);  // 7

  p1.X = 9;                  // Change p1.X

  Console.WriteLine (p1.X);  // 9
  Console.WriteLine (p2.X);  // 9
}

Figure 2-4 shows that p1 and p2 are two references that point to the same object.

Figure 2-4.
Assignment copies a reference

Null

A reference can be assigned the literal null, indicating that the reference points to no object:

class Point {...}
...
Point p = null;
Console.WriteLine (p == null);   // True

// The following line generates a runtime error
// (a NullReferenceException is thrown):
Console.WriteLine (p.X);

In contrast, a value type cannot ordinarily have a null value:

struct Point {...}
...
Point p = null;  // Compile-time error
int x = null;    // Compile-time error

C# also has a construct called nullable types for representing value-type nulls (see "Nullable Types" on page 148 in Chapter 4).

Storage overhead

Value-type instances occupy precisely the memory required to store their fields. In this example, Point takes eight bytes of memory:

struct Point
{
  int x;  // 4 bytes
  int y;  // 4 bytes
}

Technically, the CLR positions fields within the type at an address that's a multiple of the fields' size (up to a maximum of 8 bytes). Thus, a struct containing a byte field followed by a long field actually consumes 16 bytes of memory, because the long is aligned to an 8-byte boundary.

Predefined Type Taxonomy

The predefined types in C# are:

Value types
• Numeric
  —Signed integer (sbyte, short, int, long)
  —Unsigned integer (byte, ushort, uint, ulong)
  —Real number (float, double, decimal)
• Logical (bool)
• Character (char)

Reference types
• String (string)
• Object (object)

Predefined types in C# alias Framework types in the System namespace. There is only a syntactic difference between these two statements:

int i = 5;
System.Int32 i = 5;

The set of predefined value types excluding decimal are known as primitive types in the CLR. Primitive types are so called because they are supported directly via instructions in compiled code, and this usually translates to direct support on the underlying processor.
For example:

// Underlying hexadecimal representation
int i = 7;       // 0x7
bool b = true;   // 0x1
char c = 'A';    // 0x41
float f = 0.5f;  // uses IEEE floating-point encoding

The System.IntPtr and System.UIntPtr types are also primitive (see Chapter 25).

Numeric Types

C# has the predefined numeric types shown in Table 2-1.

Table 2-1. Predefined numeric types in C#

C# type | System type | Suffix | Size    | Range
Integral (signed):
sbyte   | SByte       |        | 8 bits  | −2^7 to 2^7−1
short   | Int16       |        | 16 bits | −2^15 to 2^15−1
int     | Int32       |        | 32 bits | −2^31 to 2^31−1
long    | Int64       | L      | 64 bits | −2^63 to 2^63−1
Integral (unsigned):
byte    | Byte        |        | 8 bits  | 0 to 2^8−1
ushort  | UInt16      |        | 16 bits | 0 to 2^16−1
uint    | UInt32      | U      | 32 bits | 0 to 2^32−1
ulong   | UInt64      | UL     | 64 bits | 0 to 2^64−1
Real:
float   | Single      | F      | 32 bits | ±(~10^−45 to 10^38)
double  | Double      | D      | 64 bits | ±(~10^−324 to 10^308)
decimal | Decimal     | M      | 128 bits| ±(~10^−28 to 10^28)

Of the integral types, int and long are first-class citizens and are favored by both C# and the runtime. The other integral types are typically used for interoperability or when space efficiency is paramount.

Of the real number types, float and double are called floating-point types† and are typically used for scientific calculations. The decimal type is typically used for financial calculations, where base-10-accurate arithmetic and high precision are required.

Numeric Literals

Integral literals can use decimal or hexadecimal notation; hexadecimal is denoted with the 0x prefix. For example:

int x = 127;
long y = 0x7F;

Real literals can use decimal and/or exponential notation. For example:

double d = 1.5;
double million = 1E06;

† Technically, decimal is a floating-point type too, although it's not referred to as such in the C# language specification.
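These literal conventions carry over to most modern languages, so the claims above are easy to check outside C#. A quick illustration in Python, used here purely as a scratchpad (the struct call shows the IEEE 754 bytes that the 0.5f comment above alludes to):

```python
import struct

# Hexadecimal integral literals: 0x7F is just another way to write 127.
print(0x7F == 127)  # True

# 'A' really is 0x41, as the comment in the example above claims.
print(hex(ord('A')))  # 0x41

# Exponential notation yields a floating-point value.
print(type(1E06).__name__)  # float

# 0.5f uses IEEE floating-point encoding; packing it as a big-endian
# 32-bit float exposes the actual bits (sign 0, exponent 126, mantissa 0).
print(struct.pack('>f', 0.5).hex())  # 3f000000
```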
Numeric literal type inference

By default, the compiler infers a numeric literal to be either double or an integral type:

• If the literal contains a decimal point or the exponential symbol (E), it is a double.
• Otherwise, the literal's type is the first type in this list that can fit the literal's value: int, uint, long, and ulong.

For example:

Console.WriteLine ( 1.0.GetType());        // Double (double)
Console.WriteLine ( 1E06.GetType());       // Double (double)
Console.WriteLine ( 1.GetType());          // Int32 (int)
Console.WriteLine ( 0xF0000000.GetType()); // UInt32 (uint)

Numeric suffixes

Numeric suffixes explicitly define the type of a literal. Suffixes can be either lower- or uppercase, and are as follows:

Suffix | C# type       | Notes             | Example
F      | float         |                   | float f = 1.0F;
D      | double        |                   | double d = 1D;
M      | decimal       |                   | decimal d = 1.0M;
U      | uint or ulong | Combinable with L | uint i = 1U;
L      | long or ulong | Combinable with U | ulong i = 1UL;

The suffixes U and L are rarely necessary, because the uint, long, and ulong types can nearly always be either inferred or implicitly converted from int:

long i = 5;   // Implicit lossless conversion from int literal to long

The D suffix is technically redundant, in that all literals with a decimal point are inferred to be double. And you can always add a decimal point to a numeric literal:

double x = 4.0;

The F and M suffixes are the most useful and should always be applied when specifying float or decimal literals. Without the F suffix, the following line would not compile, because 4.5 would be inferred to be of type double, which has no implicit conversion to float:

float f = 4.5F;

The same principle is true for a decimal literal:

decimal d = −1.23M;   // Will not compile without the M suffix.

We describe the semantics of numeric conversions in detail in the following section.
Numeric Conversions

Integral to integral conversions

Integral conversions are implicit when the destination type can represent every possible value of the source type. Otherwise, an explicit conversion is required. For example:

int x = 12345;       // int is a 32-bit integral
long y = x;          // Implicit conversion to 64-bit integral
short z = (short)x;  // Explicit conversion to 16-bit integral

Floating-point to floating-point conversions

A float can be implicitly converted to a double, since a double can represent every possible value of a float. The reverse conversion must be explicit.

Floating-point to integral conversions

All integral types may be implicitly converted to all floating-point numbers:

int i = 1;
float f = i;

The reverse conversion must be explicit:

int i2 = (int)f;

When you cast from a floating-point number to an integral, any fractional portion is truncated; no rounding is performed. The static class System.Convert provides methods that round while converting between various numeric types (see Chapter 6).

Implicitly converting a large integral type to a floating-point type preserves magnitude but may occasionally lose precision. This is because floating-point types always have more magnitude than integral types, but may have less precision. Rewriting our example with a larger number demonstrates this:

int i1 = 100000001;
float f = i1;       // Magnitude preserved, precision lost
int i2 = (int)f;    // 100000000

Decimal conversions

All integral types can be implicitly converted to the decimal type, since a decimal can represent every possible C# integral value. All other numeric conversions to and from a decimal type must be explicit.
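The precision-loss example above can be reproduced in any language by forcing a value through 32-bit IEEE precision. A Python sketch, using struct to emulate C#'s float, since Python's own float is a 64-bit double (the helper name is ours, not a standard API):

```python
import struct

def to_float32(value):
    """Round-trip a number through IEEE 754 single precision."""
    return struct.unpack('>f', struct.pack('>f', value))[0]

i1 = 100000001
f = to_float32(i1)   # magnitude preserved, precision lost
i2 = int(f)          # cast back to an integer
print(i2)            # 100000000

# Converting float -> int truncates toward zero; no rounding occurs.
print(int(7.9))      # 7
print(int(-7.9))     # -7
```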
Arithmetic Operators

The arithmetic operators (+, −, *, /, %) are defined for all numeric types except the 8- and 16-bit integral types:

+   Addition
−   Subtraction
*   Multiplication
/   Division
%   Remainder after division

Increment and Decrement Operators

The increment and decrement operators (++, −−) increment and decrement numeric types by 1. The operator can either precede or follow the variable, depending on whether you want the variable to be updated before or after the expression is evaluated. For example:

int x = 0;
Console.WriteLine (x++);   // Outputs 0; x is now 1
Console.WriteLine (++x);   // Outputs 2; x is now 2
Console.WriteLine (--x);   // Outputs 1; x is now 1

Specialized Integral Operations

Integral division

Division operations on integral types always truncate remainders. Dividing by a variable whose value is zero generates a runtime error (a DivideByZeroException):

int a = 2 / 3;   // 0
int b = 0;
int c = 5 / b;   // throws DivideByZeroException

Dividing by the literal 0 generates a compile-time error.

Integral overflow

At runtime, arithmetic operations on integral types can overflow. By default, this happens silently; no exception is thrown. The checked operator tells the runtime to throw an OverflowException instead. checked can be used around either an expression or a statement block. For example:

int a = 1000000;
int b = 1000000;
int c = checked (a * b);   // Checks just the expression.

checked                    // Checks all expressions
{                          // in statement block.
  ...
  c = a * b;
  ...
}

You can make arithmetic overflow checking the default for all expressions in a program by compiling with the /checked+ compiler switch. If you then need to disable overflow checking for specific expressions or statements, you can do so with the unchecked operator:

int x = int.MaxValue;
int y = unchecked (x + 1);
unchecked { int z = x + 1; }

Overflow checking for constant expressions

Regardless of the /checked compiler switch, expressions evaluated at compile time are always overflow-checked, unless you apply the unchecked operator:

int x = int.MaxValue + 1;               // Compile-time error
int y = unchecked (int.MaxValue + 1);   // No errors

Bitwise operators

C# supports the following bitwise operators:

Operator  Meaning       Sample expression  Result
~         Complement    ~0xfU              0xfffffff0U
&         And           0xf0 & 0x33        0x30
|         Or            0xf0 | 0x33        0xf3
^         Exclusive Or  0xff00 ^ 0x0ff0    0xf0f0
<<        Shift left    0x20 << 2          0x80
>>        Shift right   0x20 >> 1          0x10

8- and 16-Bit Integrals

The 8- and 16-bit integral types are byte, sbyte, short, and ushort. These types lack their own arithmetic operators, so C# implicitly converts them to larger types as required. This can cause a compile-time error when trying to assign the result back to a small integral type:

short x = 1, y = 1;
short z = x + y;   // Compile-time error

In this case, x and y are implicitly converted to int so that the addition can be performed. This means the result is also an int, which cannot be implicitly cast back to a short (because it could cause loss of data). To make this compile, we must add an explicit cast:

short z = (short) (x + y);   // OK

Special Float and Double Values

Unlike integral types, floating-point types have values that certain operations treat specially. These special values are NaN (Not a Number), +∞, −∞, and −0. The float and double classes have constants for NaN, +∞, and −∞, as well as other values (MaxValue, MinValue, and Epsilon).
For example:

Console.WriteLine (double.NegativeInfinity);   // -Infinity

The constants that represent special values for double and float are as follows:

Special value | Double constant         | Float constant
NaN           | double.NaN              | float.NaN
+∞            | double.PositiveInfinity | float.PositiveInfinity
−∞            | double.NegativeInfinity | float.NegativeInfinity
−0            | −0.0                    | −0.0f

Dividing a nonzero number by zero results in an infinite value. For example:

Console.WriteLine ( 1.0 /  0.0);   //  Infinity
Console.WriteLine (−1.0 /  0.0);   // -Infinity
Console.WriteLine ( 1.0 / −0.0);   // -Infinity
Console.WriteLine (−1.0 / −0.0);   //  Infinity

Dividing zero by zero, or subtracting infinity from infinity, results in a NaN. For example:

Console.WriteLine ( 0.0 / 0.0);                  // NaN
Console.WriteLine ((1.0 / 0.0) − (1.0 / 0.0));   // NaN

When using ==, a NaN value is never equal to another value, even another NaN value:

Console.WriteLine (0.0 / 0.0 == double.NaN);   // False

To test whether a value is NaN, you must use the float.IsNaN or double.IsNaN method:

Console.WriteLine (double.IsNaN (0.0 / 0.0));   // True

When using object.Equals, however, two NaN values are equal:

Console.WriteLine (object.Equals (0.0 / 0.0, double.NaN));   // True

NaNs are sometimes useful in representing special values. In WPF, double.NaN represents a measurement whose value is "Automatic." Another way to represent such a value is with a nullable type (Chapter 4); another is with a custom struct that wraps a numeric type and adds an additional field (Chapter 3).

float and double follow the specification of the IEEE 754 format types, supported natively by almost all processors. You can find detailed information on the behavior of these types at .

double Versus decimal

double is useful for scientific computations (such as computing spatial coordinates). decimal is useful for financial computations and values that are "man-made" rather than the result of real-world measurements.
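Since float and double follow IEEE 754, the special-value rules and the base-2 rounding trade-off described here can be checked in any IEEE implementation. A Python sketch (Python's float is an IEEE double, and decimal.Decimal plays a role similar to C#'s decimal; the APIs differ, the arithmetic does not):

```python
import math
from decimal import Decimal

nan = float('nan')
print(nan == nan)       # False: == never matches NaN, even against itself
print(math.isnan(nan))  # True: use an IsNaN-style test instead

inf = float('inf')
print(1.0 / inf)        # 0.0
print(inf - inf)        # nan: infinity minus infinity is NaN

# Base-2 floats cannot represent most base-10 fractions exactly...
print(0.1 + 0.1 + 0.1 == 0.3)                # False

# ...while a base-10 decimal type can:
print(Decimal('0.1') * 3 == Decimal('0.3'))  # True

# Repeated addition of double(1/6) accumulates error, as in the
# C# example: the sum falls just short of 1.
d = 1.0 / 6.0
print(d + d + d + d + d + d == 1.0)          # False
```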
Here's a summary of the differences:

Category                | double                    | decimal
Internal representation | Base 2                    | Base 10
Precision               | 15−16 significant figures | 28−29 significant figures
Range                   | ±(~10^−324 to ~10^308)    | ±(~10^−28 to ~10^28)
Special values          | +0, −0, +∞, −∞, and NaN   | None
Speed                   | Native to processor       | Non-native to processor (about 10 times slower than double)

Real Number Rounding Errors

float and double internally represent numbers in base 2. For this reason, only numbers expressible in base 2 are represented precisely. Practically, this means most literals with a fractional component (which are in base 10) will not be represented precisely. For example:

float tenth = 0.1f;                      // Not quite 0.1
float one   = 1f;
Console.WriteLine (one - tenth * 10f);   // −1.490116E-08

This is why float and double are bad for financial calculations. In contrast, decimal works in base 10 and so can precisely represent numbers expressible in base 10 (as well as its factors, base 2 and base 5). Since real literals are in base 10, decimal can precisely represent numbers such as 0.1. However, neither double nor decimal can precisely represent a fractional number whose base 10 representation is recurring:

decimal m = 1M / 6M;     // 0.1666666666666666666666666667M
double  d = 1.0 / 6.0;   // 0.16666666666666666

This leads to accumulated rounding errors:

decimal notQuiteWholeM = m+m+m+m+m+m;   // 1.0000000000000000000000000002M
double  notQuiteWholeD = d+d+d+d+d+d;   // 0.99999999999999989

which breaks equality and comparison operations:

Console.WriteLine (notQuiteWholeM == 1M);   // False
Console.WriteLine (notQuiteWholeD < 1.0);   // True

Boolean Type and Operators

C#'s bool type (aliasing the System.Boolean type) is a logical value that can be assigned the literal true or false. Although a Boolean value requires only one bit of storage, the runtime will use one byte of memory, since this is the minimum chunk that the runtime and processor can efficiently work with.
To avoid space inefficiency in the case of arrays, the Framework provides a BitArray class in the System.Collections namespace that is designed to use just one bit per Boolean value.

Bool Conversions

No conversions can be made from the bool type to numeric types or vice versa.

Equality and Comparison Operators

== and != test for equality and inequality of any type, but always return a bool value.‡ Value types typically have a very simple notion of equality:

int x = 1;
int y = 2;
int z = 1;
Console.WriteLine (x == y);   // False
Console.WriteLine (x == z);   // True

For reference types, equality, by default, is based on reference, as opposed to the actual value of the underlying object (more on this in Chapter 6):

public class Dude
{
  public string Name;
  public Dude (string n) { Name = n; }
}
...
Dude d1 = new Dude ("John");
Dude d2 = new Dude ("John");
Console.WriteLine (d1 == d2);   // False
Dude d3 = d1;
Console.WriteLine (d1 == d3);   // True

The equality and comparison operators, ==, !=, <, >, >=, and <=, work for all numeric types, but should be used with caution with real numbers (as we saw in "Real Number Rounding Errors" on page 27). The comparison operators also work on enum type members, by comparing their underlying integral values. We describe this in "Enums" on page 97 in Chapter 3.

We explain the equality and comparison operators in greater detail in Chapter 4 in the sections "Operator Overloading" on page 153 and "Equality Comparison" on page 245, and in the section "Order Comparison" on page 255 in Chapter 6.

‡ It's possible to overload these operators (Chapter 4) such that they return a non-bool type, but this is almost never done in practice.

Conditional Operators

The && and || operators test for and and or conditions. They are frequently used in conjunction with the ! operator, which expresses not.
In this example, the UseUmbrella method returns true if it's rainy or sunny (to protect us from the rain or the sun), as long as it's not also windy (since umbrellas are useless in the wind):

static bool UseUmbrella (bool rainy, bool sunny, bool windy)
{
  return !windy && (rainy || sunny);
}

The && and || operators short-circuit evaluation when possible. In the preceding example, if it is windy, the expression (rainy || sunny) is not even evaluated. Short-circuiting is essential in allowing expressions such as the following to run without throwing a NullReferenceException:

if (sb != null && sb.Length > 0) ...

The & and | operators also test for and and or conditions:

return !windy & (rainy | sunny);

The difference is that they do not short-circuit. For this reason, they are rarely used in place of conditional operators.

Unlike in C and C++, the & and | operators perform (non-short-circuiting) boolean comparisons when applied to bool expressions. The & and | operators perform bitwise operations only when applied to numbers.

The ternary conditional operator (simply called the conditional operator) has the form q ? a : b, where if condition q is true, a is evaluated, else b is evaluated. For example:

static int Max (int a, int b)
{
  return (a > b) ? a : b;
}

The conditional operator is particularly useful in LINQ queries (Chapter 8).

Strings and Characters

C#'s char type (aliasing the System.Char type) represents a Unicode character and occupies two bytes. A char literal is specified inside single quotes:

char c = 'A';   // Simple character

Escape sequences express characters that cannot be expressed or interpreted literally. An escape sequence is a backslash followed by a character with a special meaning. For example:

char newLine   = '\n';
char backSlash = '\\';

The escape sequence characters are shown in Table 2-2.

Table 2-2.
Escape sequence characters

Char | Meaning         | Value
\'   | Single quote    | 0x0027
\"   | Double quote    | 0x0022
\\   | Backslash       | 0x005C
\0   | Null            | 0x0000
\a   | Alert           | 0x0007
\b   | Backspace       | 0x0008
\f   | Form feed       | 0x000C
\n   | New line        | 0x000A
\r   | Carriage return | 0x000D
\t   | Horizontal tab  | 0x0009
\v   | Vertical tab    | 0x000B

The \u (or \x) escape sequence lets you specify any Unicode character via its four-digit hexadecimal code:

char copyrightSymbol = '\u00A9';
char omegaSymbol     = '\u03A9';
char newLine         = '\u000A';

Char Conversions

An implicit conversion from a char to a numeric type works for the numeric types that can accommodate an unsigned short
Here's a close look at the technique and code for console handling in Standard (i.e., ISO/ANSI) C/C++. In the latter part of the article, readers are encouraged to write their own code for the clrscr, getch, gotoxy, text color, and text background functions. Before taking up the matter at hand, it would be useful to note the key differences between Turbo C/C++ and Standard C/C++:

1. Turbo C and Turbo C++ are IDEs developed by Borland, which use their own C/C++ implementations and compilers, while Standard C and C++ are supported by all well-known compilers such as GCC.
2. Turbo C/C++ are only for DOS (and Windows), while Standard C/C++ is portable.
3. Turbo C++ was discontinued in 2006, while Standard C/C++ will be sustained for ever (even though the standards might be updated).
4. Turbo C and C++ are proprietary software, while Standard C and C++ are just languages that are supported by free (libre) compilers.

Hence it is recommended that you use Standard C/C++ if you are doing anything serious. It may be noted here that:

1. Standard C/C++, ANSI C/C++ and ISO C/C++ are the same.
2. For the sake of convenience, I will use only stdio.h and printf from this point onwards. You may replace them with iostream and cout if you are using C++.
3. If you are new to Standard C++, use iostream instead of iostream.h and add a line "using namespace std;" after adding all header files. There will be no other notable differences; Turbo and Standard C/C++ are almost the same (at least at the basic levels).

Console formatting with Turbo C

Consider a simple program to make the background colour of the console blue, and to print the message 'Hello, World!' in white. You may use the following code in Turbo C:

#include <stdio.h>
#include <conio.h>

void main()
{
    clrscr();   // Clear the console
    textbackground(BLUE);
    textcolor(WHITE);
    printf("Hello, World!\n");
    return;
}

But this code cannot be compiled with a standard compiler. So let us see how to work with Standard C.
Console operations in Standard C

The problem with the above code is that it is not compatible with Standard C. No standard libraries provide conio.h. Using the void type for the main function is itself an error; the int type should be used and some value returned, normally zero (return 0;).

Console handling in Standard C is done using escape sequences, just as \n is used for a newline and \t for a horizontal tab. In Standard C, there are escape sequences that can perform many operations on the console. First, let's use these escape sequences directly. Since they look rough, we will then write our own code for the functions clrscr, getch, gotoxy, textcolor, textbackground, etc. Some important escape sequences are given in Table 1. You can use them inside printf just as you use \n and \t.

Table 1: Some important ANSI escape codes

As shown in Table 1, you may use printf("\x1b[2J"); to clear the console, instead of clrscr();. You may use printf("\x1b[10;20H"); instead of gotoxy(20,10); (note that the escape code takes the row first, then the column), or printf("\x1b[%d;%dH", y, x); instead of gotoxy(x, y);. While using colours with \x1b[km, you have to replace k with 30+colour_code in the case of foreground colouring and 40+colour_code in the case of background colouring. Refer to Table 2 for the colour codes.

Table 2: Colour codes

From Table 2, you can use printf("\x1b[31m"); instead of textcolor(RED); and printf("\x1b[41m"); instead of textbackground(RED);. You can cancel the colouring using "\x1b[0m". To learn more about ANSI escape codes, visit: These escape codes are used in other languages too, like in Python, when using the print() function; e.g.: print("\x1b[32m") for a green foreground.

Typical Turbo C programmers start their programs with clrscr() to clear the screen and end with getch() to wait for a keystroke before the program terminates. Both are meaningless in most cases. If you are familiar with the CLI, you might have noticed the irritation when the console clears with every command that you give.
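Since the escape codes are plain strings, a tiny helper can assemble them, which is handy for experimenting before we write the C version. This sketch is in Python purely for illustration (the function name and colour table are our own, not part of any standard library):

```python
# ANSI SGR codes: foreground = 30 + code, background = 40 + code
COLOUR_CODES = {'black': 0, 'red': 1, 'green': 2, 'yellow': 3,
                'blue': 4, 'magenta': 5, 'cyan': 6, 'white': 7}
RESET = '\x1b[0m'

def colourize(text, fg=None, bg=None):
    """Wrap text in ANSI colour escapes, resetting afterwards."""
    prefix = ''
    if fg is not None:
        prefix += '\x1b[%dm' % (30 + COLOUR_CODES[fg])
    if bg is not None:
        prefix += '\x1b[%dm' % (40 + COLOUR_CODES[bg])
    return prefix + text + RESET

# repr() shows the raw escapes instead of colouring the terminal:
print(repr(colourize('Hello, World!', fg='white', bg='blue')))
# '\x1b[37m\x1b[44mHello, World!\x1b[0m'
```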
A Turbo C programmer adds getch() at the end because, otherwise, the program window disappears suddenly and the result of the program is not visible. In a standard console, the result stays there for a long time without getch(), and the extra keystroke is an additional irritation. However, these two functions are useful in some cases.

We have replaced clrscr() with an escape code, \x1b[2J. But there is no such code for getch(), since it is a function with return type char. Actually, getch() can be replaced with getchar() from stdio.h. The problem, however, is that getchar() waits for the user to press the 'Enter' key, and it echoes what was typed, too. We'd like to write our own function that waits for only a single keystroke and does not echo it. Here we need the help of an external CLI program, stty, which is shipped with almost all GNU/Linux distros.

The code given below shows the combination of escape codes and stty to create a program that clears the screen, prints 'Hello, World!' in white on a blue background, waits for a keystroke and resets the video:

#include <stdio.h>
#include <stdlib.h>

void clrscr()
{
    printf("\x1b[2J");
}

char getch()
{
    char c;                  // This function should return the keystroke
    system("stty raw");      // Raw input - wait for only a single keystroke
    system("stty -echo");    // Echo off
    c = getchar();
    system("stty cooked");   // Cooked input - reset
    system("stty echo");     // Echo on - reset
    return c;
}

int main()
{
    printf("\x1b[44m");      // Blue background
    clrscr();                // Clear the screen with blue bg
    printf("\x1b[37m");      // White foreground
    printf("Hello, World!\n");
    getch();
    printf("\x1b[0m");       // Reset the console
    return 0;
}

Now we have replaced all the important functions of conio.h. But think about the complexity of the code above: this long snippet was the replacement of the short Code 1! Here arises the need to write our own conio.h.

Our own conio.h!

Let's combine all our techniques in our own conio library.
To keep things simple, let's not create a .so shared library, but write conio.c and conio.h separately. Here are the codes, with some additional functions from our earlier discussion (such as nocursor() and showcursor()) included:

conio.c:

// conio.c for ANSI C and C++
// Extra functions are also provided.
// (C) 2013 Nandakumar <[email protected]>

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "conio.h"

// General utility functions

void cagxy(unsigned int x, unsigned int y)   // Clear and go to x,y
{
    printf("%s\x1b[%d;%df", CLEAR, y, x);
}

void clrscr()   // Clear the screen
{
    printf("%s", CLEAR);
}

char getch()
{
    char c;
    system("stty raw -echo");
    c = getchar();
    system("stty cooked echo");
    return c;
}

void gotox(unsigned int x)
{
    printf("\x1b[%dG", x);
}

void gotoxy(unsigned int x, unsigned int y)
{
    printf("\x1b[%d;%df", y, x);
}

void nocursor()
{
    printf("\x1b[?25l");
}

void reset_video()
{
    printf("\x1b[0m");
}

void showcursor()
{
    printf("\x1b[?25h");
}

void textcolor(char *color)
{
    printf("%s", color);
}

void textbackground(char color[11])
{
    char col[11];
    strcpy(col, color);
    col[2] = '4';   // The given colour is an fg colour; replacing '3' with '4' makes it a bg colour.
    printf("%s", col);
}

conio.h:

// conio.h for ANSI C and C++
// Extra functions are also provided.
// (C) 2013 Nandakumar <[email protected]>

#ifndef CONIO_H
#define CONIO_H

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// General utility
#define CLEAR "\x1b[2J"
#define SET11 "\x1b[1;1f"   // Set the cursor at 1,1

// Text modification
#define BLINK_SLOW  "\x1b[5m"
#define BLINK_RAPID "\x1b[6m"

// Colours
#define CC_CLEAR "\x1b[0m"   // Console colour clear

// FG colours
#define BLACK   "\x1b[30m"
#define RED     "\x1b[31m"
#define GREEN   "\x1b[32m"
#define YELLOW  "\x1b[33m"
#define BLUE    "\x1b[34m"
#define MAGENTA "\x1b[35m"
#define CYAN    "\x1b[36m"
#define WHITE   "\x1b[37m"

// FG intense colours
#define IBLACK   "\x1b[30;1m"
#define IRED     "\x1b[31;1m"
#define IGREEN   "\x1b[32;1m"
#define IYELLOW  "\x1b[33;1m"
#define IBLUE    "\x1b[34;1m"
#define IMAGENTA "\x1b[35;1m"
#define ICYAN    "\x1b[36;1m"
#define IWHITE   "\x1b[37;1m"

// BG colours
#define BGC_BLACK   "\x1b[40m"
#define BGC_RED     "\x1b[41m"
#define BGC_GREEN   "\x1b[42m"
#define BGC_YELLOW  "\x1b[43m"
#define BGC_BLUE    "\x1b[44m"
#define BGC_MAGENTA "\x1b[45m"
#define BGC_CYAN    "\x1b[46m"
#define BGC_WHITE   "\x1b[47m"

// BG intense colours
#define BGC_IBLACK   "\x1b[40;1m"
#define BGC_IRED     "\x1b[41;1m"
#define BGC_IGREEN   "\x1b[42;1m"
#define BGC_IYELLOW  "\x1b[43;1m"
#define BGC_IBLUE    "\x1b[44;1m"
#define BGC_IMAGENTA "\x1b[45;1m"
#define BGC_ICYAN    "\x1b[46;1m"
#define BGC_IWHITE   "\x1b[47;1m"

// General utility functions
void cagxy(unsigned int x, unsigned int y);   // Clear and go to x,y
void clrscr();                                // Clear the screen
char getch();
void gotox(unsigned int x);
void gotoxy(unsigned int x, unsigned int y);
void nocursor();
void reset_video();
void showcursor();
void textcolor(char *color);
void textbackground(char color[11]);

#endif
Now, to compile this code, 1) Keep both conio.c and conio.h in the same directory where test.c exists. 2) Compile using the following command: gcc test.c conio.c -o test Now, you can use conio.c and conio.h for any program. Try it out Create a program especially using gotoxy(x,y) to input a table and print it with different colours. Copy conio.h into /usr/include to use with <conio.h> instead of ‘conio.h’. Create a shared library from conio.c (.so extension) and find its usage. You may also try to create a ready-to-install package. Disclaimer These escape codes and tricks are given to help readers and are tested with gnome-terminal, xterm and Linux tty text console. But incompatibilities may occur. So always use a graphical console like GNOME Terminal, which can be closed easily with the mouse and keyboard when you notice improper working. Amateur hacking can destroy a system, for which the author is not responsible. So always be careful.
statsd-ruby

Installing

With Bundler:

gem "statsd-ruby"

Basic Usage

# Set up a global Statsd client for a server on localhost:9125
$statsd = Statsd.new 'localhost', 9125

# Set up a global Statsd client for a server on IPv6 port 9125
$statsd = Statsd.new '::1', 9125

# Send some stats
$statsd.increment 'garets'
$statsd.timing 'glork', 320
$statsd.gauge 'bork', 100

# Use {#time} to time the execution of a block
$statsd.time('account.activate') { @account.activate! }

# Create a namespaced statsd client and increment 'account.activate'
statsd = Statsd.new('localhost').tap{|sd| sd.namespace = 'account'}
statsd.increment 'activate'

Testing

Run the specs with rake spec. Run the specs and include live integration specs with LIVE=true rake spec. Note: this will test over a real UDP socket.

Performance

A short note about DNS: if you use a DNS name for the host option, then you will want to use a local caching DNS service (e.g. nscd) for optimal performance.

Extensions / Libraries / Extra Docs

Contributing

Contributors

Rein Henrichs, Ray Krueger, Jeremy Kemper, Ryan Tomayko, Gabriel Burt, Rick Olson, Trae Robrock, Corey Donohoe, James Tucker, Dotan Nahum, Eric Chapweske, Hannes Georg, John Nunemaker, Mahesh Murthy, Manu J, Matt Sanford, Nate Bird, Noah Lorang, Oscar Del Ben, Peter Mounce, Reed Lipman, Thomas Whaples

Copyright © 2011, 2012, 2013 Rein Henrichs. See LICENSE.txt for further details.
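For the curious, the datagrams a client like this sends are plain text over UDP. The helpers below are an illustrative sketch (in Python, since the wire protocol is language-agnostic) of the common StatsD line format; the function names are ours, not part of statsd-ruby:

```python
def counter(name, value=1, sample_rate=None):
    """Render a StatsD counter datagram, e.g. 'garets:1|c'."""
    line = '%s:%d|c' % (name, value)
    if sample_rate is not None:
        line += '|@%s' % sample_rate   # optional sampling suffix
    return line

def timing(name, ms):
    """Render a timing datagram, e.g. 'glork:320|ms'."""
    return '%s:%d|ms' % (name, ms)

def gauge(name, value):
    """Render a gauge datagram, e.g. 'bork:100|g'."""
    return '%s:%d|g' % (name, value)

print(counter('garets'))     # garets:1|c
print(timing('glork', 320))  # glork:320|ms
print(gauge('bork', 100))    # bork:100|g
```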
Hi. I'm sure it is easy, but I just can't make it work... What I need is to rename fields while exporting a feature class to a shapefile. There are strings in the field names that define the new names; I want to use a dictionary for the lookup. I know I have to use field mapping, but when I try to overwrite the name of the output field, it just does not work (the names are not overwritten). How do I do it? Sample code:

import arcpy

# input geodatabase
gdb = r"C:\Temp\EXPORT\export.gdb"
# output folder
folder = r"C:\Temp\EXPORT"
# dictionary - string in old name defines new name
fldsNamesDict = {'String1': 'NewName1', 'String2': 'NewName2', 'String3': 'NewName3', 'String4': 'NewName4'}
# list of strings
fldsNames = list(fldsNamesDict.keys())

arcpy.env.workspace = gdb
fcs = arcpy.ListFeatureClasses()

# loop over feature classes
for fc in fcs:
    # new fieldmappings object
    fieldmappings = arcpy.FieldMappings()
    # load input fc to fieldmappings object
    fieldmappings.addTable(fc)
    # fields of input fc
    flds = fieldmappings.fieldMappings
    # loop over fields
    for fld in flds:
        # loop - which string from the dictionary is in the old field name? what will the new name be?
        for fldName in fldsNames:
            if fldName in fld.getInputFieldName(0):
                # SET NEW FIELD NAME
                fld.outputField.name = fldsNamesDict[fldName]
    # export fc to shp using field mapping with new field names
    arcpy.FeatureClassToFeatureClass_conversion(fc, folder, fc + ".shp", "", fieldmappings)

Thanks for any help!

[Answer] Stuck for a long time, but solved a few hours after posting to the forum. Two adjustments were needed:

- use the field map's index with replaceFieldMap, so the modified field map is written back into the FieldMappings object
- take a new "instance" of the output field object (get outputField, change its name, then assign it back), instead of mutating the property in place
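Why the first attempt silently failed can be shown without arcpy at all: arcpy's `fieldMappings` and `outputField` properties hand back copies, so mutating what they return never touches the stored state; you must modify the copy and assign it back (for field maps, via `replaceFieldMap`). The toy classes below are a hypothetical analogy of that behaviour, not arcpy's real implementation:

```python
import copy

class Field:
    def __init__(self, name):
        self.name = name

class FieldMap:
    """Mimics an object whose 'outputField' property returns a copy."""
    def __init__(self, field):
        self._field = field

    @property
    def outputField(self):
        return copy.copy(self._field)   # a fresh copy on every access

    @outputField.setter
    def outputField(self, field):
        self._field = field

fm = FieldMap(Field('OLD_NAME'))

# Broken: mutates a throwaway copy, like fld.outputField.name = ...
fm.outputField.name = 'NewName1'
print(fm.outputField.name)   # OLD_NAME  (the change was lost)

# Working: take the copy, modify it, assign it back
fld = fm.outputField
fld.name = 'NewName1'
fm.outputField = fld
print(fm.outputField.name)   # NewName1
```

In real arcpy code the assignment back is `fieldmappings.replaceFieldMap(index, fieldmap)` rather than a property setter, but the principle is the same.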
Ok, I feel we're almost over on this one. I'll try to summarize and reply to a lot of comments in a single message - it's the best I can do now.

This thread started with a proposal for an explicit naming scheme for builtins. It ended up being pretty much a discussion about sorted() and reversed(): why they are named like that, and what options are available to change it. As it evolved, it turned into a discussion about the merits of a sorting generator as a solution for some particular scenarios. What follows here is my analysis of each part of the problem. However, for the impatient and faint of heart I will offer my conclusions in advance. If you want to read my reasoning, please be welcome.

* I believe that the current naming (sorted() and reversed()) is not good enough, and can be improved;
* Naming the current builtins sorted() and ireversed() would make things more consistent, while reserving a few options for future implementation;
* A sorting generator can be useful in some scenarios. A cookbook solution (in pure Python, using the heapq library as a backend) is a good proof of concept and may be enough for most needs.

--------------------------------------------------------------

1. (Re)Naming sorted() and reversed()
-------------------------------------

I'm not totally satisfied with the current choice of names, consistency-wise. The differences between sorted() and reversed() are not immediately evident from their names alone. In my opinion:

-- sorted() and reversed() should work similarly, returning lists. **This is not to say that a builtin to return a reversed list would be useful**. Clearly, returning an iterator for the reversed sequence is the best and most useful choice -- it is only the name that is not good enough.

-- I've previously proposed naming the iterated versions xsorted() and xreversed(), but that was a bad idea -- the correct choice would be isorted() and ireversed(), to keep with the itertools naming scheme.
However, **it does not automatically mean that an isorted() builtin would be useful** (more on this later). In short, I believe we have a (small) problem, and naming the current builtins sorted() and ireversed() would solve it.

2. Deciding whether isorted() and reversed() would make sense
-------------------------------------------------------------

Implicit in the analysis of item (1) is the assumption that there are four possible variations regarding sort and reverse: iterated and list-returning versions. Of those, two are already accepted as builtins, based on the actual real-world need to solve common situations:

-- sorted() returns a sorted list
-- ireversed() returns an iterator (please note that I'm following my own proposed naming scheme)

Two variations are not currently implemented as builtins:

-- isorted() -- would return an iterator
-- reversed() -- would return a reversed list

Although all variations are possible, it does not mean that they are useful or desirable. The missing variations can be easily written in terms of the existing ones. Currently, sorted() returns a list that can be iterated over. However, if only a part of the sorted list is needed, then sorted() incurs a penalty -- it always sorts the entire list (more on this later). As for reversed(), a simple idiom can be used to return a new, reversed list:

reverse_list = [x for x in reversed(mylist)]

To put the question bluntly, **is there any reason to implement either isorted() or reversed()**?
The arguments (pro and against) are as follows:

PRO:
-- it's possible, and relatively easy to implement
-- completeness
-- consistency
-- a native implementation would perform better in some cases
-- the existing idioms may not be immediately clear to the novice (see example above)

AGAINST:
-- just because it's possible does not mean that it's a good idea
-- more bloat in the standard library
-- more builtins to pollute the namespace
-- more things for a novice to learn
-- quoting Alex Martelli: practicality beats purity

Given the arguments, I agree that it's better to leave it as it is (but changing the name of reversed() to ireversed(), as proposed in (1)).

3. The case for isorted()
-------------------------

Over the last few posts of the thread the topic degenerated into a discussion about the relative merits of isorted() -- the generator version of sorted(). To sum up what was said:

Q1: Is it possible to implement a sorting generator?

-- a sorting algorithm *can* be adapted to work as a generator, **BUT**
-- the current sorting algorithm used internally in Python is not adequate for this particular hack, **AND**
-- this particular problem can be easily solved using heaps (which are now part of the standard library).

Q2: Is it useful in the real world?

A sorting generator is useful IF:
-- you know you aren't going to need the entire sorted list;
-- just a few elements will do.

A particular situation is where you don't know in advance exactly how many elements you need; in this case, a sorting generator is probably the best approach. My gut feeling is that such a builtin would be useful, and in fact, could help to simplify existing code that uses a much more complicated and verbose approach. But then, I don't have Guido's track record when it comes to instinctive decisions on language design. My best bet, now, is writing a cookbook solution to implement a "pseudo-sort-generator" using the heapq library.
It's a good proof of concept, and may help to illuminate the question with a practical tool. p.s.: [Alex Martelli] > If you hadn't responded publically I'd never would have > got a chance to object, so of course I don't mind!-) It's always a pleasure to have you participating, even after you have effectively killed most of my arguments :-) I just feel honored ;-) -- Carlos Ribeiro blog: mail: carribeiro at gmail.com
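The pseudo-sort-generator built on heapq that the post describes can be sketched in a few lines (the name isorted follows the proposed naming scheme; it is not an actual builtin):

```python
import heapq
from itertools import islice

def isorted(iterable):
    """Lazily yield the items of iterable in ascending order.

    heapify() is O(n) and each pop is O(log n), so consuming only the
    first k items costs O(n + k log n) -- cheaper than a full sort
    when k is small relative to n.
    """
    heap = list(iterable)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

# Take just the three smallest elements without sorting everything:
print(list(islice(isorted([9, 1, 8, 2, 7, 3]), 3)))  # [1, 2, 3]
```

This is exactly the scenario from Q2 above: when you stop after a few elements, most of the heap is never fully ordered.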
https://mail.python.org/pipermail/python-list/2004-September/273378.html
The man page for signal says: "Unlike on BSD systems, signals under Linux are reset to their default behavior when raised. However, if you include <bsd/signal.h> instead of <signal.h> then signal is redefined as __bsd_signal and signal has the BSD semantics."

However, I tried the following simple program and found that if I install a signal handler once, it remains installed over multiple deliveries of the same signal, i.e. the signal does not get reset to its default behaviour when raised.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>

void handler(int x)
{
    printf("Got the signal\n");
}

int main(void)
{
    signal(28, handler);
    sleep(3600);
    printf("Woke up\n");
    sleep(3600);
    return 0;
}

Then I did 'kill -28 <pid>' twice. Both times I got the message from the signal handler. Can anyone help explain this contradiction, or am I messing up somewhere in the concepts?

Thanks,
-Kartik
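A likely resolution of the contradiction, for what it's worth: on modern glibc systems, signal() is implemented on top of sigaction() without SA_RESETHAND by default, which gives BSD (persistent-handler) semantics; the reset-on-delivery behavior the man page describes dates back to the old SysV-style libc. The persistence is easy to demonstrate from Python, whose signal module is likewise built on sigaction():

```python
import os
import signal
import time

count = 0

def handler(signum, frame):
    global count
    count += 1  # just count deliveries

# Install the handler once; it is not reset after the first delivery.
signal.signal(signal.SIGUSR1, handler)

for _ in range(2):
    os.kill(os.getpid(), signal.SIGUSR1)  # deliver the signal to ourselves
    time.sleep(0.1)  # let the interpreter run the Python-level handler

print(count)  # the handler fired on both deliveries
```

If the handler were reset to SIG_DFL after the first delivery, the second SIGUSR1 would terminate the process instead of incrementing the counter.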
http://www.verycomputer.com/169_0857528855fb1bce_1.htm
Why, clever readers, did you know that you can use Flash MX Professional 2004's Alert component to create non-modal pop-up windows instead of the default modal mode? If you did, well, you can stop reading. If you didn't, you're probably already typing up an angry email to me complaining how I haven't defined the term "modal" or "nonmodal". So, rest your ten angry digits and read on.

From our very own docs: A nonmodal window allows a user to interact with other windows in the application.

So, what this all means is that a modal Alert window prevents users from clicking on anything but the displayed Alert window itself. Therefore, a user can't go any further until they dismiss the pop-up, similar to how the OS displays some error messages. A nonmodal Alert window lets users merrily click on either the Alert window or any components or gizmos underneath the Alert. Enough of my jibber-jabber, on with some code…

Before you can test the code, add a copy of both the Button component and the Alert component to your library. Add the following code to Frame 1 of the main Timeline:

import mx.controls.Alert;

this.createClassObject(mx.controls.Button, "modal_button", 10, {_x:10, _y:10});
this.createClassObject(mx.controls.Button, "nonmodal_button", 20, {_x:120, _y:10});

modal_button.label = "modal";
modal_button.addEventListener("click", modalListener);
function modalListener(evt_obj:Object):Void {
    var a:MovieClip = Alert.show("This is a modal Alert window", "Alert Test", Alert.OK, this);
    a.move(100, 100);
}

nonmodal_button.label = "nonmodal";
nonmodal_button.addEventListener("click", nonmodalListener);
function nonmodalListener(evt_obj:Object):Void {
    var a:MovieClip = Alert.show("This is a nonmodal Alert window", "Alert Test", Alert.OK | Alert.NONMODAL, this);
    a.move(100, 100);
}

Test the movie, and click on the "modal" button to launch a modal window. This means you can no longer click on either of the Button instances on the Stage. Close the Alert and click the "nonmodal" button.
Even though the nonmodal Alert window is visible, you can still click on either of the button instances on the Stage. Thar ye be havin' 't. Dasn't say I neredo anythin' fer ye. YAR!!!

Hello, I am a Canadian born engineer who is a Java programmer. Recently I have been thinking about buying Flash. It looks impressive. At this stage I guess I am wondering if these programs could be done in Flash: of course you will need Java enabled to view them. Your answers, or lack thereof, should prove interesting. I guess I am particularly interested in Flash's capabilities in parsing a non UML web site, something as pathetic and old fashioned as a CSV data site. The Java programs kind of have to do that, due to usage considerations. It looks like ActionScript is much like Java, so that is good. It also looks like Flash would be fun to play with, and Flash has version control down to an art… Thanks. Larry Druhall

Does anybody know of a way to get the Alert component to react to the mouseDownOutside event, apart from having to use the PopUpManager class (shown below)?

import mx.managers.PopUpManager;
import mx.controls.Alert;

var options:Object = {title:"Hello World", text:"Danke", okButton:true, cancelButton:true, defButton:Alert.CANCEL};
var a:MovieClip = PopUpManager.createPopUp(this, Alert, true, options, true);
a.addEventListener("click", miscHandler);
a.addEventListener("mouseDownOutside", miscHandler);
function miscHandler(eventObj:Object):Void {
    trace("Event: " + eventObj.type);
}

hi larry, suffice to say that flash can certainly do those things. enjoy

I was just wondering if anybody knew how to add the NumericStepper Component into the alert dialog box, or any custom button for that matter?
http://blogs.adobe.com/jdehaan/2005/05/alert_component_nonmodal_popup.html
Fractal Theory: Sample Code

To demonstrate just how simple it is to generate pictures of the Mandelbrot set, here's a small program written in the C++ programming language. If you have a C++ compiler, try it out. It is a complete working program.

#include <stdio.h>

const int MaxIters = 200;
const int XSIZE = 80;
const int YSIZE = 60;
const int BLACK = -1;
const double LEFT = -2.0;
const double RIGHT = 1.0;
const double TOP = 1.0;
const double BOTTOM = -1.0;

int main(int argc, char *argv[])
{
    for (int y = 0; y < YSIZE; ++y) {
        for (int x = 0; x < XSIZE; ++x) {
            double zr = 0.0;
            double zi = 0.0;
            const double cr = LEFT + x * (RIGHT - LEFT) / XSIZE;
            const double ci = TOP + y * (BOTTOM - TOP) / YSIZE;
            double rsquared = zr * zr;
            double isquared = zi * zi;
            int counter = 0;
            for (/**/; rsquared + isquared <= 4.0 && counter < MaxIters; ++counter) {
                zi = zr * zi * 2;
                zi += ci;
                zr = rsquared - isquared;
                zr += cr;
                rsquared = zr * zr;
                isquared = zi * zi;
            }
            if (rsquared + isquared <= 4.0)
                printf("*");   /* In the set. */
            else
                printf(" ");   /* Not in the set. */
        }
        printf("\n");
    }
    return 0;
}

For those of you who aren't programmers, here's an excerpt of the code that actually does all of the calculations. Here it is, all eleven lines of it:

for (counter = 0;
     rsquared + isquared <= 4.0 && counter < MaxIters;
     ++counter)
{
    zi = zr * zi * 2;
    zi += ci;
    zr = rsquared - isquared;
    zr += cr;
    rsquared = zr * zr;
    isquared = zi * zi;
}

That's all it takes to do a rudimentary exploration of the Mandelbrot set. Slowly. But where did this magical sequence of instructions come from? It certainly looks very arbitrary, and very peculiar. It turns out that it is a computerized version of an even simpler formula.
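For reference, the loop is the real/imaginary expansion of the complex iteration behind the Mandelbrot set. Writing $z = z_r + z_i i$ and $c = c_r + c_i i$, one step of $z \mapsto z^2 + c$ expands to:

```latex
z^2 + c \;=\; \underbrace{(z_r^2 - z_i^2 + c_r)}_{\text{new } z_r}
        \;+\; \underbrace{(2\, z_r z_i + c_i)}_{\text{new } z_i}\, i
```

which is exactly what the six assignments compute (zi must be updated from the old zr, which is why it comes first). The escape test rsquared + isquared <= 4.0 checks $|z|^2 \le 4$, i.e. $|z| \le 2$, the standard bailout radius.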
http://www.cygnus-software.com/docs/html/FractalTheorySampleCode.htm
In this tutorial, we will learn how to find the saddle point of a matrix in Java. I will give you two examples. Try to run the program with different examples and let me know if you have any query.

What is a saddle point of a matrix?

A saddle point is an element of a matrix which is the smallest in its row and the largest in its column. Let me show you with an example:

4 5 2
5 1 1
0 1 0

In the above example, '2' is the smallest element of the first row. It is also the largest element of the third column. So, 2 is a saddle point. A matrix can have more than one saddle point.

How to find a saddle point in a matrix?

Aah.. it is a little bit complex. Before explaining the whole algorithm, let me tell you that the algorithm below was written by me, so if you can think of any way to improve it, please let me know. I have commented the whole code, so hopefully it will not be difficult to understand.

Steps:

- Pass the 'matrix', count of rows and count of columns to the method findSaddlePoint
- Scan through each row and find out the smallest element
- Save the first element of that row as the smallest
- Now iterate through the other elements and look for any element less than it, i.e. find the smallest element
- If any equal element is found, scan the other elements and if it is the smallest, check whether it is the largest for its column. If this element is both the largest in its column and the smallest in its row, print it as a saddle point.
- After the loop is completed, check whether the minimum element is the largest in its column.
- For checking whether an element is the largest in a column, we are using the checkColumnMax method.

/*
 * Copyright (C) 2017 codevscolor. Program to find the saddle point of a matrix.
 */
public class SaddlePoint {

    static void print(String value) {
        System.out.println(value);
    }

    /**
     * This method will find the maximum value in a specific column and compare it with a given value,
     * i.e. it will compare the minimum value of a row with all elements of that element's column and
     * check whether it is the maximum for that column. If yes, it is a saddle point.
     * Returns true if it is a saddle point, false otherwise.
     *
     * @param mat               : given matrix
     * @param minValColPosition : column position for which we need to check
     * @param minValueRow       : minimum value of that row we have found
     * @param rowSize           : total number of rows
     * @return true or false
     */
    static boolean checkColumnMax(int[][] mat, int minValColPosition, int minValueRow, int rowSize) {
        // first, set the value as the maximum
        int maxValCol = minValueRow;

        // iterate through each element of that column
        for (int i = 0; i < rowSize; i++) {
            if (mat[i][minValColPosition] > maxValCol) {
                // update the maximum value if any value is greater than the stored maximum
                maxValCol = mat[i][minValColPosition];
            }
        }
        if (maxValCol == minValueRow) {
            // if the maximum value is the same as the value given, return true, i.e. it is a saddle point
            return true;
        }
        return false;
    }

    /**
     * Main method to find saddle points.
     *
     * @param mat : given matrix
     */
    static void findSaddlePoint(int[][] mat, int rowSize, int colSize) {
        // scan through each row and find the smallest element of the row
        for (int row = 0; row < rowSize; row++) {
            int minValueRow = mat[row][0]; // storing the first element
            int minValColPosition = 0;
            for (int col = 1; col < colSize; col++) {
                // iterate through the other elements of the row and check for the min value
                if (mat[row][col] < minValueRow) {
                    minValueRow = mat[row][col];
                    minValColPosition = col;
                } else if (mat[row][col] == minValueRow) {
                    // if the stored minimum value is equal to another element, i.e. two values are
                    // present, check whether this element is a saddle point or not. But first
                    // confirm that it is the minimum value.
                    boolean isMin = true;
                    // compare with the remaining elements to see if it is actually a minimum value
                    for (int i = col + 1; i < colSize; i++) {
                        if (mat[row][i] < minValueRow) {
                            isMin = false;
                        }
                    }
                    if (isMin) {
                        // if it is a minimum, check whether it is the maximum for its column
                        if (checkColumnMax(mat, col, minValueRow, rowSize)) {
                            print("Saddle Point " + "[" + row + ":" + col + "]" + " = " + minValueRow);
                        }
                    }
                }
            }
            // check whether the minimum value is the maximum for its column
            if (checkColumnMax(mat, minValColPosition, minValueRow, rowSize)) {
                print("Saddle Point " + "[" + row + ":" + minValColPosition + "]" + " = " + minValueRow);
            }
        }
    }

    public static void main(String[] args) {
        print("For the first matrix :");
        int mat[][] = {{ 2,  2,  1, 1,  0},
                       { 1,  1,  1, 1, -1},
                       {-1, -1, -1, 0, -2},
                       { 0,  0,  0, 0, -4}};
        findSaddlePoint(mat, 4, 5);

        print("For the second matrix :");
        int mat1[][] = {{ 0,  1,  0},
                        {-1, -2, -3},
                        { 0,  1,  0}};
        findSaddlePoint(mat1, 3, 3);
    }
}

Output :

For the first matrix :
Saddle Point [0:4] = 0
For the second matrix :
Saddle Point [0:2] = 0
Saddle Point [0:0] = 0
Saddle Point [2:2] = 0
Saddle Point [2:0] = 0

For the second matrix, we have four saddle points.

Similar tutorials :

- Java program to check if a Matrix is Sparse Matrix or Dense Matrix
- Java Program to find Transpose of a matrix
- Java program to print an identity matrix
- Java program to print the boundary elements of a matrix
- Java program to check if a matrix is upper triangular matrix or not
- Java program to subtract one matrix from another
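For comparison, the same definition — smallest in its row and largest in its column — can be checked much more directly. Here is a compact sketch in Python (my own illustration, not part of the tutorial's Java code):

```python
def saddle_points(mat):
    """Return (row, col, value) for every saddle point of mat:
    an element that is the minimum of its row and the maximum
    of its column. Duplicates are handled naturally."""
    result = []
    for r, row in enumerate(mat):
        row_min = min(row)
        for c, value in enumerate(row):
            col_max = max(m[c] for m in mat)
            if value == row_min and value == col_max:
                result.append((r, c, value))
    return result

# The tutorial's second matrix has four saddle points, all 0:
print(saddle_points([[0, 1, 0], [-1, -2, -3], [0, 1, 0]]))
# [(0, 0, 0), (0, 2, 0), (2, 0, 0), (2, 2, 0)]
```

This version is O(rows * cols * rows) as written; hoisting the column maxima out of the loop would bring it down, but the point here is clarity of the definition rather than speed.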
https://www.codevscolor.com/java-program-to-find-saddle-point-of-a-matrix
how to use mbrola with java? 843802 Oct 19, 2006 12:56 PM

hi all, I've developed a simple text-to-phone engine. I didn't use any jsapi tho. can I still use mbrola for synthesizing the speech? may I know how? thank youuuuu....

This content has been marked as final.

Show 5 replies

1. Re: how to use mbrola with java? 843802 Oct 21, 2006 3:10 AM (in response to 843802)
Check out
regards

2. Re: how to use mbrola with java? 843802 Apr 16, 2007 10:00 PM (in response to 843802)
Add these to your classpaths, then compile the code below!!!! ........ I have not tested them one by one, but these jars are all required to compile the FreeTTS file. The jars are available on the FreeTTS download page in the FreeTTS bin zip file at

System Classpath + User Classpath
C:\Program Files\Java\jdk1.5.0_06\lib\jsapi.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\cmulex.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\mbrola.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\freetts.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\en_us.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\cmutimelex.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\cmudict04.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\cmu_time_awb.jar;
C:\Program Files\Java\jdk1.5.0_06\lib\cmu_us_kal.jar;

User Path
C:\Program Files\Java\jdk1.5.0_06\lib

/**
 * Copyright 2003 Sun Microsystems, Inc.
 *
 * See the file "license.terms" for information on usage and
 * redistribution of this file, and for a DISCLAIMER OF ALL
 * WARRANTIES.
 */
import com.sun.speech.freetts.Voice;
import com.sun.speech.freetts.VoiceManager;
import com.sun.speech.freetts.audio.JavaClipAudioPlayer;

/**
 * Simple program to demonstrate the use of the FreeTTS speech
 * synthesizer. This simple program shows how to use FreeTTS
 * without requiring the Java Speech API (JSAPI).
 */
public class FreeTTS {

    VoiceManager voiceManager;
    Voice voice;

    public FreeTTS() {
    }

    public void TTS(String words) {
        String voiceName = "mbrola_us3";
        // String voiceName = "kevin16";

        // The VoiceManager manages all the voices for FreeTTS.
        voiceManager = VoiceManager.getInstance();
        voice = voiceManager.getVoice(voiceName);

        /*
        Voice[] voices = voiceManager.getVoices();
        for (int i = 0; i < voices.length; i++) {
            System.out.println("    " + voices[i].getName() + " ("
                + voices[i].getDomain() + " domain)");
        }
        */

        if (voice == null) {
            System.err.println(
                "Cannot find a voice named " + voiceName
                + ". Please specify a different voice.");
            System.exit(1);
        }

        // Allocate the resources for the voice.
        voice.allocate();

        // Synthesize speech.
        voice.speak(words);

        // Clean up and leave.
        voice.deallocate();
        // System.exit(0);
    }
}

3. Re: how to use mbrola with java? 843802 Jun 12, 2007 8:25 AM (in response to 843802)
Hello, every time I try to use MBROLA voices in Java programs such as mbrola_us1, I get the following messages:

Processing Utterance: com.sun.speech.freetts.ProcessException: Cannot start mbrola program: [Ljava.lang.String;@f3d6a5
Processing Utterance: com.sun.speech.freetts.ProcessException: Cannot start mbrola program: [Ljava.lang.String;@f3d6a5

I could never figure out what I did wrong because I downloaded and installed the mbrola binary and voices data files according to the instructions. I would greatly appreciate it if someone can tell me what I might be missing. My email is ylees@gmu.edu. Thank you.

4. Re: how to use mbrola with java? 842463 Feb 24, 2011 10:53 AM (in response to 843802)
I am also getting the same problem. So any one please help me too. You will be doing a big help for me if u do this favor for me. My id is harika.garapati.cse@gmail.com

5. Re: how to use mbrola with java? PhHein Feb 24, 2011 10:59 AM (in response to 842463)
Welcome to the forum..
https://community.oracle.com/thread/1273908
core data update existing object swift How to Update existing objects in Core Data? - You simply update any property of Core data object and call save on NSManagedObjectContext. You can also check for any changes with hasChanges method. Updating an object in CoreData is quite similar to creating a new one. First you have to fetch your object from CoreData using a fetchRequest . Reading and Updating Managed Objects With Core Data - Reading and Updating Managed Objects With Core Data Last Updated on Jan 29, 2019. Swift 4 Xcode 9 iOS Open AppDelegate.swift and implement the Core Data (CRUD) with Swift 4.2 for Beginners – Ankur Vekariya - Core Data is an object graph and persistence framework provided by Apple in memory); Need to load entire data if we need to drop table or update. Although you can add that framework for your existing project, it's easier CoreData: CRUD With Concurrency In Swift - Part 3 - CoreData: CRUD With Concurrency In Swift – Part 3 Saves the entry updated . Sets the result type as array of object IDs updated. Editing and Deleting Data - Beginning Core Data - This post looks at the rest of CRUD functions: updating and deleting. related data transfer object and save its updated name using Core Data:. Updating and Deleting in Core Data - This is an abridged chapter from our book Core Data by Tutorials, which has been completely updated for Swift 4.2 and iOS 12. This tutorial is presented as part Updating and Deleting with Swift and Core Data - Core Data by Tutorials, which has been completely updated for Swift 4.2 and Inside this container is an object to manage the Core Data state as a whole, Multiple Managed Object Contexts with Core Data Tutorial - Swift 2 examples – #9 Core Data, create, get, update and remove entities. This is the object that we need to access the data model. We can Getting Started with Core Data Tutorial - In this video, you'll learn about editing and deleting data in Core Data. 
View the updated Swift 2 examples – #9 Core Data. Create, get, update and remove - Today go over how to update and delete objects in Core Data. Now that we have established how to avoid duplicate values in core data swift Swift: Best way to avoid creating duplicate entries in CoreData - Thanks for your help, I solved the issue by changing the position of the try context.save to elsewhere in the method. How to make a Core Data attribute unique using - Previous: Loading Core Data objects using NSFetchRequest and a unique identifier, and it will make sure objects with that same value don't get repeated. full save and load of our objects because it's an easy way to avoid problems later. Duplicate Value Save Check while saving in coredata? · Issue #965 - Your JSON Array of dictionary: Save Array Of Dictionary in core data mapSet(JSONArray: value) Your model: How can I avoid duplication entry, I need to check the value exists in coredata or not. How to implement Unique Constraints in Core Data with iOS 9 - With iOS 9, Apple introduces Unique Constraints for Core Data. to keep the old value or if you want it to be nil on purpose in order to remove Multiple Managed Object Contexts with Core Data Tutorial - This tutorial is presented as part of our iOS 12 Launch Party — enjoy! button on the top-left exports the data to a comma-separated values (CSV) file. the extension to avoid being destroyed when you make changes to the Core Data model. How to avoid duplicate data in core data : iOSProgramming - is core data always duplicating my entries each time I added the data. here Swift Array: removing duplicate elements. – If let swift - Recently I had to remove duplicate items from a Swift Array while maintaining the original order. Searching on the web you can find many More Fetching and Deleting Managed Objects With Core Data - Swift 4 Xcode 9 iOS 11.
The final piece of the Since we are focusing on deleting items, you can remove or comment out the code we added to Fetching records is a common task when working with Core Data. To show you Reading and Updating Managed Objects With Core Data - This tutorial focuses on reading and updating records. Reading and Updating Managed Objects With Core Data Swift 4 Xcode 9 iOS 11 To avoid this scenario, we need to fetch every list record from the persistent store performance - Remove Duplicates from Array in Swift 3.0 - Since the array elements are sorted, you can traverse through all elements and only keep those which are different to the previous one. when to use core data Core Data with Swift 4 for Beginners – XCBlog – Medium - Core Data will mainly help in the auxiliary facets of the application - things like data persistence, presentation, etc. Some bullet points for your Why should I use Core Data for my iPhone app? -. The Laws of Core Data - These are a set of rules I've developed over time on how to use Core Data in such a way that it is almost entirely painless. When I follow these When to use UserDefaults, Keychain, or Core Data - There are many ways to store data locally in iOS app. UserDefaults, Keychain and Core Data are some of the most popular ways to persist data Getting Started with Core Data Tutorial - Checking the Use Core Data box will cause Xcode to generate boilerplate code for what's known as an NSPersistentContainer in AppDelegate.swift. Core Data - Use Core Data to save your application's permanent data for offline use, to cache temporary data, and to add undo functionality to your app on a single device. Core Data Programming Guide: What Is Core Data? - Core Data is a framework that you use to manage the model layer objects in your application. 
It provides generalized and automated solutions What Is Core Data - Not knowing what Core Data is, makes it hard and frustrating to wrap wide range of applications, not every application should use Core Data. Don't Use a Core Data Library - Core Data is a fantastic framework and I love using it. I agree that Core Data has a learning curve, but isn't this true for many other frameworks? When to use Core Data vs Codable? : iOSProgramming - Hi, Fairly new to iOS development and I've been wondering when to use Core Data vs Codable for persistence. In the app I'm currently working
http://www.brokencontrollers.com/article/2164017.shtml
Writing to Logs

Write to the C1 CMS log from your code

You can use C1 CMS's logging functionality to write to the log from your code. This functionality is based on Microsoft Enterprise Library (the EntLib Logging Application Block), which you can configure to use the Event Log and other providers, and for which you can write custom providers if needed.

An example of code that writes data to the log:

using System;
using Composite.Core;

public class LoggingExample
{
    public static void DoABC()
    {
        Log.LogInformation("ABC", "Starting to do ABC");

        bool SomethingGoesWrong = true;
        try
        {
            if (SomethingGoesWrong == true)
            {
                Log.LogWarning("ABC", "Something is wrong with " + "...");
            }
        }
        catch (Exception e)
        {
            Log.LogError("ABC", "Failed to do ... ");
            Log.LogError("ABC", e);
        }
    }
}

Please also see the C1 CMS API logging examples. You can also consult the EntLib Logging Application Block documentation on how to customize logging. The configuration settings are located in ~/App_Data/Composite/Composite.config.

Stack Trace

A stack trace provides information on the execution history of the current thread when an exception occurred and displays the names of the classes and methods called at that very moment. Normally, it is logged as an Error entry with a few extra lines to fit all the information. In the example above, the call to Log.LogError("ABC", e); will log a stack trace if the exception occurs.
https://docs.c1.orckestra.com/Configuration/Logging/Writing-to-Logs
A SAP API for SIFAC

Project description

Django-SIFAC is an API to interact with the financial repository called SIFAC, deployed in many French universities. It is not really a Django-specific app for SIFAC, but it is easy to use with Django. For the moment, only data on cost centers, eotp, funds and functional domains are available for reading, but not for writing.

Installation

To install the saprfc library, please refer to this documentation. If you place the rfcsdk headers in the right place, you can run this command

pip install django-sifac

Integrate with your django app

You need to add these lines to the settings file of your django project

SIFAC = {
    'HOST': '',     # Hostname to connect (i.e sap.host.com)
    'SYSNR': '',    # System number to connect to (i.e '00')
    'CLIENT': '',   # Client number logged in (i.e '500')
    'USER': '',     # Username
    'PASSWORD': ''  # Password
}

If you want to use the SAP models filter application, you must activate the administration interface in the settings file of your project and add the sifac application to your INSTALLED_APPS setting

INSTALLED_APPS = (
    ...,
    'django.contrib.admin',
    ...,
    'sifac'
)

To create the tables needed by the sifac application, syncing your database is necessary

$> python manage.py syncdb

Basic usage

If you're using filters and patterns for your SAP models (or not), it is really easy to use the library to retrieve filtered data. Filters and patterns for each SAP model can be created or updated in the administration interface

from sifac import service

sifac_service = service.SifacService()
cost_centers = sifac_service.get_filtered_cost_center_list()

Launching tests

To launch tests, you should install django, saprfc and the packages in the file requirements-test.txt

$> python run_tests.py sifac
https://pypi.org/project/django-sifac/0.3.0/
Right now we have the ability to collect a single compartment at a time. This bug is about changing things so that we always collect entire compartment groups (see bug 751618). The benefit here is that we could put objects from different compartments into the same arena. This would allow us to reclaim some of the memory we lost with CPG. The challenge is how to handle obj->compartment(). For objects, we're thinking that shape arenas would still be compartment-specific. So we could replace this call with obj->shape->compartment(). For some specific callers, like checking cell->compartment()->needsBarrier(), we can move the data to the compartment group and use cell->compartmentGroup() instead, since arenas will be specific to a given compartment group. Hopefully that covers all uses. Minor thing, but can you call these "compartment group" zones? This concept and the zones in bug 718121 are essentially the same thing, and using the same name makes merges between branches easier. > This concept and the zones in bug 718121 are essentially the same thing Are they? A zone in bug 718121 is basically the closure of the "X can touch Y" relation -- that is, basically origin, right? We certainly could group compartments according to zones, and I doubt we'd want to be *less* granular than that, but might it be reasonable to group some or all the system compartments' memory together and leave content compartments as they are? > A zone in bug 718121 is basically the closure of the "X can touch Y" relation -- that is, basically > origin, right? That's the *transitive* closure, of course. > might it be reasonable to group some or all the system compartments' memory together and leave > content compartments as they are? Although I am of course not a fan of CPG's memory regression, by not shoving all memory from an origin into one compartment, it makes per-tab memory reporting possible. That's a big win for us. 
If we used closure-over-can-touch as the grouping for allocation "zones" but keep per-compartment memory reporting (in some sane but not necessarily precise way; for example, the blame for fragmentation within a zone is shared between all the compartments, but we don't have to be perfect in how we assign the blame), and if we had reason to believe that closure-over-can-touch allocation zones are the right thing in most cases (*), I'd be OK with not having the flexibility to draw the allocation zone border wherever we please. (*) I'm not so sure it is. For example, the Facebook like button shouldn't be in the same compartment as my facebook.com window, because the lifetime of the like button isn't at all related to the lifetime of my Facebook window. (In reply to Justin Lebar [:jlebar] from comment #2) > > This concept and the zones in bug 718121 are essentially the same thing > > Are they? A zone in bug 718121 is basically the closure of the "X can touch > Y" relation -- that is, basically origin, right? Not quite --- data in the same origin can be in different zones if there is no way they can observe each other (e.g. facebook widgets opened in independent tabs) and data in different origins can be in the same zone if they are associated with the same window or group of windows. A content zone in bug 718121 is all the data in a window and the other windows it has transitively opened and can potentially see effects in. There is a single additional zone for all chrome data. > We certainly could group compartments according to zones, and I doubt we'd > want to be *less* granular than that, but might it be reasonable to group > some or all the system compartments' memory together and leave content > compartments as they are? The zone concept in bug 718121 would effectively force our hand to group all system compartments together. Not sure if that is good or bad, but thinking some more I don't think I really understand the point of this bug. 
Fixing memory regression, sure, but won't all this be obviated by a compacting GC? (guess that's a ways off)

> The zone concept in bug 718121 would effectively force our hand to group all system compartments
> together.

That's probably fine, so long as we maintain the ability to report memory usage at the current level of granularity. Most system compartments have a similar lifetime. Indeed, it sounds like all the stuff in a zone likely has a similar lifetime, so it might make sense to group content that way too. I'm still not sure we should force ourselves into that position unless we have to...

> I really understand the point of this bug. Fixing memory regression, sure, but won't all this be
> obviated by a compacting GC? (guess that's a ways off)

The situation would be improved by a compacting GC. But at the moment, as I understand it, each compartment stores 21 different kinds of things, and each kind of thing has to go in a separate arena. So that's potentially 4KB * 21 = 84KB of wasted space per compartment, or 16MB if you have 200 compartments. Of course we're not usually going to create an arena for each type of thing in each compartment, and we're not usually going to waste the entire arena. But that's at least one benefit that we can't get from a compacting GC alone. And fixing things now is also nice. :)

As Justin said, this is a memory savings even with compacting GC. It's likely we're going to stick with the existing arena model for the tenured generation for a while, so we need something like this. This could be handy for the cycle collector, too, with something like bug 754495.

(In reply to Bill McCloskey (:billm) from comment #6)
>.

Yeah, this is the same as the description in comment 4, except that zones can include data from multiple tabs when they open each other with window.open etc. (in which case they will be able to point to and observe effects in those other tabs).

> Well, my intention here was that a group would consist of all the compartments from the same tab.
> [...] They shouldn't have pointers to any other groups besides a chrome group.

The second condition is what I understand bhackett's zones get us. But that's not "all compartments from the same tab"; like he says, you can have pointers to any window you transitively opened.

We removed the MemShrink tag because it seems more appropriate to track it with bug 764220.

Created attachment 713232 [details] [diff] [review]
rolled up patch on top of 8987eff12bd8

I've been having trouble measuring how this affects memory usage. Nick offered to do some measurements. In the meantime, I'll work on fixing a few remaining bugs. I think it should be stable to test with, though.

Comment on attachment 713232 [details] [diff] [review]
rolled up patch on top of 8987eff12bd8

Review of attachment 713232 [details] [diff] [review]:
-----------------------------------------------------------------

I've just looked at the memory reporting so far. In a (simplified) example like this:

- top(, id=14)
  - active/window()
    - layout
    - js/compartment()
    - dom
    - style-sheets
  - js/zone(0x7f1c6ff1f800)

I was expecting the 4th line to instead be this:

- js/zone(0x7f1c6ff1f800)/compartment()

but I guess you omitted that because it can be inferred from the 7th line?

::: js/xpconnect/src/XPCJSRuntime.cpp
@@ +2158,5 @@
> +    if (nsCOMPtr<nsPIDOMWindow> piwindow = do_QueryInterface(native)) {
> +        // The global is a |window| object. Use the path prefix that
> +        // we should have already created for it.
> +        if (mTopWindowPaths->Get(piwindow->WindowID(), &path))
> +            path.AppendLiteral("/js/");

Nit: this can be "/js-". That will give a path like "top(foo)/js-zone/..." instead of "top(foo)/js/zone/...". It might not seem much different, but it makes it clear that there will only ever be one zone per top window.
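As a side note, jlebar's arena-waste arithmetic from a few comments up (a 4 KiB arena per thing-kind per compartment) is easy to sanity-check. The constants below are the worst-case figures quoted in that comment, not measurements, and the helper names are invented for this sketch:

```cpp
#include <cassert>
#include <cstddef>

// Worst-case arena waste per compartment: one partially used 4 KiB arena for
// each of the 21 kinds of things a compartment stores.
const size_t arenaSize = 4 * 1024;  // 4 KiB arenas
const size_t thingKinds = 21;       // kinds of things stored per compartment

size_t worstCaseWastePerCompartment() {
    return arenaSize * thingKinds;  // 84 KiB
}

// With many compartments alive at once, the waste adds up: 200 compartments
// gives roughly the 16 MB figure quoted in the thread.
size_t worstCaseWaste(size_t compartments) {
    return worstCaseWastePerCompartment() * compartments;
}
```

Zone-shared arenas attack exactly this term: compartments in a zone can share the partially filled arenas instead of each paying for their own.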
So, the following things are now stored at a zone level rather than a compartment level: - strings (normal and short) and their chars - type objects and their data - IonCodes - XML objects :) I guess strings are done that way to allow cross-compartment (intra-zone) string sharing. What about type objects and IonCodes? Initial measurements are very promising! There's a lot of noise in the browser, but I was able to get reasonably steady results like this. 1. Create a session containing just about:memory?verbose. 2. Restart, hit "restore previous session". 3. Repeat step 2 a few times; the first few are noisy but then it settles down, with the main numbers (explicit, resident) varying by less than 1 MiB. Below are some sample numbers from starting up a 64-bit build. They all look really good, exactly like what I was hoping for! Basically, "unused-gc-things" dropped by ~4.8 MiB (to only 0.2 MiB), and "explicit" and "resident" both fell by ~4.6 MiB. EXPLICIT: shrunk by 7.1% (before) 65,572,029 B (100.0%) -- explicit ├──31,338,496 B (47.79%) -- js-non-window │ ├──26,866,824 B (40.97%) -- compartments (after) 60,935,117 B (100.0%) -- explicit ├──26,489,864 B (43.47%) -- js-non-window │ ├──22,120,592 B (36.30%) -- zones RESIDENT: shrunk by 4.6% (before) 101,756,928 B ── resident (after) 97,095,680 B ── resident TINY COMPARTMENTS: shrunk by a lot (before) │ │ │ ├──────42,840 B (00.07%) -- compartment([System Principal], chrome://global/content/bindings/toolbarbutton.xml) │ │ │ │ ├──32,768 B (00.05%) -- gc-heap │ │ │ │ │ ├──23,680 B (00.04%) ── unused-gc-things │ │ │ │ │ └───9,088 B (00.01%) ── sundries │ │ │ │ └──10,072 B (00.02%) ── other-sundries │ │ │ ├──────33,760 B (00.05%) -- compartment([System Principal], chrome://browser/content/places/placesOverlay.xul) │ │ │ │ ├──24,576 B (00.04%) -- gc-heap │ │ │ │ │ ├──23,200 B (00.04%) ── unused-gc-things │ │ │ │ │ └───1,376 B (00.00%) ── sundries │ │ │ │ └───9,184 B (00.01%) ── other-sundries │ │ │ ├──────29,376 B (00.04%) 
-- compartment([System Principal], chrome://global/content/bindings/scrollbar.xml) │ │ │ │ ├──20,480 B (00.03%) -- gc-heap │ │ │ │ │ ├──19,432 B (00.03%) ── unused-gc-things │ │ │ │ │ └───1,048 B (00.00%) ── sundries │ │ │ │ └───8,896 B (00.01%) ── other-sundries (after) │ │ │ ├──────17,328 B (00.03%) -- compartment([System Principal], chrome://global/content/bindings/toolbarbutton.xml) │ │ │ │ ├───9,048 B (00.01%) ── other-sundries │ │ │ │ └───8,280 B (00.01%) ── gc-heap/sundries │ │ │ ├───────9,072 B (00.01%) -- compartment([System Principal], chrome://browser/content/places/placesOverlay.xul) │ │ │ │ ├──8,160 B (00.01%) ── other-sundries │ │ │ │ └────912 B (00.00%) ── gc-heap/sundries │ │ │ ├───────8,496 B (00.01%) -- compartment([System Principal], chrome://global/content/bindings/scrollbar.xml) │ │ │ │ ├──7,872 B (00.01%) ── other-sundries │ │ │ │ └────624 B (00.00%) ── gc-heap/sundries OVERALL GC HEAP: shrunk by 22.2% ("unused/gc-things" shrunk by -95.5%!) (before) 18,874,368 B (100.0%) -- js-main-runtime-gc-heap-committed ├──11,700,840 B (61.99%) -- used │ ├──11,183,248 B (59.25%) ── gc-things │ ├─────278,528 B (01.48%) ── chunk-admin │ └─────239,064 B (01.27%) ── arena-admin └───7,173,528 B (38.01%) -- unused ├──5,027,224 B (26.64%) ── gc-things ├──1,097,728 B (05.82%) ── arenas └──1,048,576 B (05.56%) ── chunks (after) 14,680,064 B (100.0%) -- js-main-runtime-gc-heap-committed ├──12,407,320 B (84.52%) -- used │ ├──12,015,656 B (81.85%) ── gc-things │ ├─────212,992 B (01.45%) ── chunk-admin │ └─────178,672 B (01.22%) ── arena-admin └───2,272,744 B (15.48%) -- unused ├──1,048,576 B (07.14%) ── chunks ├────995,328 B (06.78%) ── arenas └────228,840 B (01.56%) ── gc-things More measurements coming soon. We can't run the full JS memory reporter for telemetry, but is there some data we /could/ collect for telemetry which would help us validate the changes here (and help us not regress them in the future)? 
> We can't run the full JS memory reporter for telemetry, but is there some
> data we /could/ collect for telemetry which would help us validate the
> changes here (and help us not regress them in the future)?

First, let's imagine we can run the full JS memory reporter. I guess "js-main-runtime-gc-heap-committed/unused/gc-things" is the most likely candidate. The ratio between "js-main-runtime-gc-heap-committed/used" and "js-main-runtime-gc-heap-committed/unused" is another good candidate. But both of those require touching all the live heap elements, and so are no good for telemetry. The only other thing I can think of is "js-gc-heap", which we already track for telemetry. But it's a rather oblique measure of the impact of zones.

> But both of those require touching all the live heap elements, and so are no good for telemetry.

We couldn't keep a counter somewhere which we decrement every time we create an object and increment every time we allocate a new chunk or gc an object? I'm not convinced that's worthwhile, though; maybe js-gc-heap is sufficient for our purposes.

Created attachment 713701 [details]
AWSY "explicit" results of rolled up patch

I did a before/after run on AWSY. Results are at. (And instructions on how to do your own custom AWSY run are at

Here are the explicit results. (I've attached a screenshot of the explicit graph.)

EXPLICIT
light blue:  Fresh start                                56.25 MiB   Δ  -6.90 MiB
light green: Fresh start [+30s]                         51.97 MiB   Δ  -7.25 MiB
pink:        After TP5                                 313.14 MiB   Δ  -4.87 MiB
orange:      After TP5 [+30s]                          304.33 MiB   Δ  -6.63 MiB
dark green:  After TP5 [+30s, forced GC]               296.90 MiB   Δ  -6.00 MiB
dark blue:   After TP5, tabs closed                    261.16 MiB   Δ  -8.46 MiB
red:         After TP5, tabs closed [+30s]             138.92 MiB   Δ +34.27 MiB
purple:      After TP5, tabs closed [+30s, forced GC]  124.92 MiB   Δ +23.03 MiB

Excluding the red and purple lines, they were all good: 4.8 to 8.5 MiB better. But the red and purple lines (which measure after all the tabs have been closed) were much worse! Why?
Consider the red line first. Here's a partial breakdown.

(before)
- 104.65 MiB explicit
  - 47.42 MiB (45.32%) js-non-window
    - 30.95 MiB (65.27%) compartments
    - 13.39 MiB (28.23%) gc-heap
      - 12.82 MiB (95.80%) decommitted-arenas
      - 576.00 KiB (4.20%) chunk-admin
      - 0 B (0%) unused-arenas
      - 0 B (0%) unused-chunks
    - 3.08 MiB (6.50%) runtime
  - 21.99 MiB (21.02%) heap-unclassified
  - 6.60 MiB (6.30%) window-objects

(after)
- 138.92 MiB explicit
  - 77.69 MiB (55.92%) js-non-window
    - 48.07 MiB (61.88%) gc-heap
      - 47.01 MiB (97.79%) decommitted-arenas
      - 1,088.00 KiB (2.21%) chunk-admin
      - 0 B (0%) unused-arenas
      - 0 B (0%) unused-chunks
    - 26.48 MiB (34.08%) zones
    - 3.14 MiB (4.04%) runtime
  - 23.46 MiB (16.89%) heap-unclassified
  - 8.45 MiB (6.09%) window-objects

In summary:
- window-objects: +1.8 MiB
- heap-unclassified: +1.5 MiB
- JS: +30 MiB

The "window-objects" increase is because there was an extra "1,367.42 KiB (15.80%) top none" window in the "after" case. I checked; that appears in some runs and not in others, so it's ignorable noise. No idea about the "heap-unclassified", but it could well be noise similar to window-objects.

The main difference is JS, and it's almost entirely due to "decommitted-arenas" being much higher -- 38.2 MiB higher, which is greater than the 34.3 MiB explicit increase. ("chunk-admin" was ~2x higher as well, which corroborates the "more chunks are present" story.) I looked at the purple numbers closely; the story is much the same there. Here are the resident numbers.
RESIDENT
light blue:  Fresh start                               103.68 MiB   Δ     -4.70 MiB
light green: Fresh start [+30s]                        100.05 MiB   Δ     -6.57 MiB
pink:        After TP5                                 369.96 MiB   Δ     +3.23 MiB
orange:      After TP5 [+30s]                          361.68 MiB   Δ +1,352.00 KiB
dark green:  After TP5 [+30s, forced GC]               356.42 MiB   Δ     +3.37 MiB
dark blue:   After TP5, tabs closed                    324.45 MiB   Δ -1,512.00 KiB
red:         After TP5, tabs closed [+30s]             208.71 MiB   Δ   +112.00 KiB
purple:      After TP5, tabs closed [+30s, forced GC]  198.95 MiB   Δ     -4.17 MiB

Resident is a noisier measurement in general, and these measurements reflect that. The important thing is that the red and purple lines didn't get the big regression.

In some sense, the increase in decommitted-arenas doesn't matter that much... except on 32-bit platforms (including Windows!) where virtual address space is moderately tight. But it would definitely be nice to (a) understand it, and (b) avoid it if possible. billm said on IRC that the details of the user/system chunk split have changed a bit with the patch, and that he'll poke around with that aspect of the patch to see if it helps. (We saw big wins when that split was first added in Firefox 7; we also didn't have decommitting at that time.)

> (And instructions on how to do your own custom AWSY run are at ...

(Sorry, that got chopped off somehow.)

Comment on attachment 713232 [details] [diff] [review]
rolled up patch on top of 8987eff12bd8

f=me. Looking good, just need to work out the "decommitted-arenas" usage.

The decommitted arenas bug is fixed, and everything looks green on try. I'm going to start putting patches up for review. For the most part, I've tried to ensure that intermediate patches compile and work on their own. Here's a basic overview of how the patches are structured:

1. Right now Zone is typedefed to JSCompartment. The first patch makes it a separate type. All the GC-related fields go in the zone, as well as a little bookkeeping. Zones and compartments are still 1:1; a Zone is just a field of JSCompartment.
Iterators like GCZoneGroupIter also get their own separate definitions.

2. Whereas before we could use CellIter to iterate over all the cells in a compartment, now we instead need to iterate over the zone. In some cases this is fine, but in other cases we're only interested in a particular compartment and we need to skip over cells from other compartments in the zone.

3. The above scheme is pretty inefficient if stuff like the following happens:

    ...during GC...
    iterate over all compartments C being collected:
        for each cell in C->zone():
            if cell->compartment() != C:
                continue
            // else do work

If there's a zone with many compartments (like the system compartment) then this will iterate over the same cell many times. It's much better to structure the code this way:

    iterate over all zones Z:
        for each cell in Z:
            // do work

However, this requires some restructuring in the methodjit, IonMonkey, and TI. The next few patches do that.

4. Then we need to fix up the memory reporters to handle zones.

5. There are patches to add a compartment() method to JSObject, JSScript, Shape, and BaseShape. I've added a compartment field to JSScript and BaseShape, so those types use that. JSObject and Shape get their compartment by going through their BaseShape.

6. The cycle collector needs to be modified to do zone merging rather than compartment merging.

7. With 5 and 6 done, the Cell::compartment() method can be eliminated.

8. Now we can permit multiple compartments in a zone. Each zone holds a vector of compartments within it. All the iterators need to be changed to permit all the different kinds of iteration (all compartments, all zones, all compartments in a zone, etc.). JS_NewGlobalObject now allows you to specify the zone to which the new compartment should belong.

9. Finally, nsGlobalWindow.cpp needs to be changed to make roughly one zone per tab.

Created attachment 714613 [details] [diff] [review]
1.
make a separate Zone type This makes a separate Zone type as well as separate iterators. I've added the function ZONE_TO_COMPARTMENT to convert a zone to a compartment (since they're still 1:1). This function will gradually be eliminated in later patches. JSCompartment has a zone as a field. Created attachment 714614 [details] [diff] [review] 2. move Zone to gc/Zone.{cpp,h} Straightforward. No code changes. Created attachment 714618 [details] [diff] [review] 3. CellIter changes This patch switches CellIter to iterate over a Zone instead of a JSCompartment. In a bunch of places I had to test whether the resulting cell is from the compartment we're interested in. I suspect this is pretty inefficient for discardJitCode and JSCompartment::sweep (as well as the TI stuff called by sweep). That stuff will be fixed in later patches. Created attachment 714619 [details] [diff] [review] 4. Methodjit changes This changes some methodjit code to operate on zones instead of compartments. The goal is to solve some of the perf problems with the CellIter patch. Created attachment 714624 [details] [diff] [review] 5. Ion changes This patch does a similar thing for IonMonkey. It also moves discardJitCode to be a method of Zone rather than compartment, which solves one of the iteration issues mentioned above. All the IonContext and AutoFlusher changes were necessary because the stack of AutoFlushers was stored on the compartment. However, I think it can be just as easily stored with the runtime. Created attachment 714627 [details] [diff] [review] 6. Memory reporting style changes I think this file used to be part of xpconnect, and the style was really bothering me. The only real change is that I removed a JS_THREADSAFE #ifdef around the entire file. It didn't seem to serve a purpose, and it covered up a lot of compile errors in my non-threadsafe build. Created attachment 714628 [details] [diff] [review] 7. Main JS-engine memory reporting changes This does kinda what you would expect. 
Objects, shapes, and scripts are reported per-compartment. Types, strings, and IonCode are per-zone because we don't have any way of finding their compartment. (And in fact strings can be shared between compartments in a zone.) The FIXME thing is fixed later on in the patch stack when we have an actual separation between compartments and zones. I added a void* pointer in JSCompartment that can point to the CompartmentStats structure during memory reporting. This is kinda hacky, but I couldn't think of any better way to efficiently get that data. Created attachment 714629 [details] [diff] [review] 8. Style fixes for XPC mem reporting XPConnect is now JS style (sortof) so I tried to bring this stuff in line. Mostly I just wanted to kill all the extra whitespace. Created attachment 714634 [details] [diff] [review] 9. XPC memory reporting I just kept iterating on this until the results seemed reasonable. I'd appreciate some feedback. Created attachment 714636 [details] [diff] [review] 10. TI changes This patch moves the typeLifoAlloc to the zone. As a consequence, I had to make nukeTypes apply to an entire zone. The reason for doing this was to fix the CellIter issue in JSCompartment::sweep. Also, typeLifoAlloc is probably something that makes sense to put in the zone to save memory for small compartments. Created attachment 714638 [details] [diff] [review] 11. Add script::compartment() Very simple. Created attachment 714639 [details] [diff] [review] 12. Add Shape::compartment() Created attachment 714640 [details] [diff] [review] 13. Add JSObject::compartment() This one also goes through the BaseShape. Created attachment 714642 [details] [diff] [review] 14. Add a cx->zone field I'm undecided whether to do it this way or to use cx->compartment->zone. At the time, this seemed better, but I'll try to benchmark this once I finish posting the patches. Created attachment 714643 [details] [diff] [review] 15. 
Update cycle collector merging This changes the cycle collector so that we merge zones rather than compartments. I had to do this because it's no longer possible to get the compartment of an arbitrary GC thing, which is needed for merging. Created attachment 714645 [details] [diff] [review] 16. Changes outside the JS engine With these patches, strings no longer belong to a compartment--they just belong to a zone. I had to update a few uses outside the engine. Created attachment 714648 [details] [diff] [review] 17. Remove JSAutoCompartment(JSString*) Again, strings no longer belong to a specific compartment, so we can't try to enter a string's compartment. I fixed this by no longer requiring JS_GetStringCharsZAndLength to be called from the string's compartment (or zone, which is what the assertion would really check). It looks like we don't allocate from the path, so I don't think it should matter. Created attachment 714649 [details] [diff] [review] 18. Remove Cell::compartment() This takes out the accessor and fixes a few miscellaneous places where it was used. Created attachment 714650 [details] [diff] [review] 19. Allow many compartments in one zone This adds a vector of compartments to the zone. A bunch of iterators have to be changed so that they iterate over the right entities now. I also changed JS_NewGlobalObject so you can decide what zone to put the global in. You can ask for a fresh zone, for an existing zone, or for the special "system" zone (which is used by the browser but not the JS engine). The atoms compartment gets its own special zone. Created attachment 714654 [details] [diff] [review] 20. Group compartments into zones by tab This uses the window's GetTop method to decide how to group globals into zones. It seems to roughly give us a per-tab grouping, although there are a few extra zones in there that don't correspond to normal windows. However, I think this works well enough. Created attachment 714656 [details] [diff] [review] 21. 
Compartment sweeping This patch allows us to destroy compartments from zones before the entire zone is destroyed. I noticed that some pages create a new compartment every few seconds, only for it to die a little later. A compartment will get destroyed whenever it doesn't have any more base shapes or scripts. Because of how we mark, this should mean that a compartment with live objects or shapes will never get destroyed, since these things cause a base shape to be marked. One subtle point is that a compartment that is added after an incremental GC starts is automatically considered marked. Comment on attachment 714618 [details] [diff] [review] 3. CellIter changes Review of attachment 714618 [details] [diff] [review]: ----------------------------------------------------------------- What is the future of Cell::compartment() vis a vis generational GC? It seems like many of the changes in this patch are correcting for the case where CellIter returns a cell not in the queried compartment. This seems a bit of a footgun and it'd be nice to avoid by having (optionally) CellIter or a similar class filter out things not from the original compartment. ::: js/src/jsinfer.cpp @@ +2949,1 @@ > unsigned count = object->getPropertyCount(); This should filter out objects not from the target compartment. @@ +2963,2 @@ > RootedScript script(cx, i.get<JSScript>()); > if (script->types) { Ditto. @@ +3103,2 @@ > RootedScript script(cx, i.get<JSScript>()); > if (script->hasAnalysis() && script->analysis()->ranInference()) Ditto. @@ +3110,1 @@ > TypeObject *object = i.get<TypeObject>(); Ditto. (In reply to Brian Hackett (:bhackett) from comment #43) > Comment on attachment 714618 [details] [diff] [review] > 3. CellIter changes > > Review of attachment 714618 [details] [diff] [review]: > ----------------------------------------------------------------- Hmm, I should have stopped to think a minute before reviewing this. Ignore these comments, the patch is fine. Fantastic stuff, billm! 
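The iteration cost behind the CellIter discussion above can be modeled in a toy way: with per-compartment iteration, every compartment walks all of its zone's cells and filters out the foreign ones, while per-zone iteration touches each cell exactly once. This is illustrative C++ only; ToyZone and the counting helpers are made up for the sketch:

```cpp
#include <cstddef>
#include <vector>

// Toy model of a zone: a compartment count plus the cells in its arenas.
struct ToyZone {
    size_t compartments;
    std::vector<int> cells;  // all cells in the zone's arenas
};

// Per-compartment scheme: each compartment iterates the whole zone and skips
// cells belonging to other compartments, so every cell is visited once per
// compartment in the zone.
size_t cellVisitsPerCompartment(const ToyZone& z) {
    return z.compartments * z.cells.size();
}

// Per-zone scheme: each cell is visited exactly once.
size_t cellVisitsPerZone(const ToyZone& z) {
    return z.cells.size();
}
```

For a system zone with dozens of compartments, the first scheme multiplies the sweep work by the compartment count, which is why discardJitCode and the TI sweeping were moved to operate on zones.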
> The decommitted arenas bug is fixed What was the problem? Can you attach a new rolled-up patch so I can do some more measurements on Monday? Thanks. Comment on attachment 714645 [details] [diff] [review] 16. Changes outside the JS engine r=me Comment on attachment 714636 [details] [diff] [review] 10. TI changes Review of attachment 714636 [details] [diff] [review]: ----------------------------------------------------------------- ::: js/src/gc/Zone.cpp @@ +142,5 @@ > +Zone::sweep(FreeOp *fop, bool releaseTypes) > +{ > + /* > + * Periodically release observed types for all scripts. This is safe to > + * do when there are no frames for the compartment on the stack. s/compartment/zone/ Comment on attachment 714643 [details] [diff] [review] 15. Update cycle collector merging Review of attachment 714643 [details] [diff] [review]: ----------------------------------------------------------------- Is compartment nuking probably going to wipe out the entire tab zone when we close the tab? This is suboptimal, in that it will add intra-zone cross-compartment edges as self-edges on the zone's node in the CC graph, but it should still generate a strictly smaller graph than we currently do, so no big deal. I filed bug 842137 for that, in case I feel like looking into it at some point in the future. I'm not sure there's any way other than adding an explicit check for if the child is in the same zone, so maybe it isn't worthwhile. ::: js/src/jsfriendapi.cpp @@ +325,5 @@ > return comp->zone(); > } > > JS_FRIEND_API(bool) > +js::IsSystemCompartment(JSCompartment *comp) OOC, why did you drop the const here? ::: js/xpconnect/src/nsXPConnect.cpp @@ +2449,4 @@ > * or is a cross-compartment wrapper. In the former case, we don't need to > * represent these edges in the CC graph because JS objects are not ref counted. > * In the latter case, the JS engine keeps a map of these wrappers, which we > * iterate over. 
Please add a sentence saying that we will end up adding intrazone cross-compartment edges to the CC graph, even though we don't really need them. Comment on attachment 714627 [details] [diff] [review] 6. Memory reporting style changes Review of attachment 714627 [details] [diff] [review]: ----------------------------------------------------------------- I have a vague memory of the JS_THREADSAFE define being there for a reason, but I can't recall it now. If it doesn't cause problems it seems like removing it is an improvement. Comment on attachment 714628 [details] [diff] [review] 7. Main JS-engine memory reporting changes Review of attachment 714628 [details] [diff] [review]: ----------------------------------------------------------------- Looks good. Nice work. ::: js/public/MemoryMetrics.h @@ +142,5 @@ > + gcHeapTypeObjects(0), > + gcHeapIonCodes(0), > +#if JS_HAS_XML_SUPPORT > + gcHeapXML(0), > +#endif Don't forget to remove all the XML stuff when you rebase. @@ +188,5 @@ > + hugeStrings.append(other.hugeStrings); > + } > + > + // These fields can be used by embedders. > + void *extra1; "This field". @@ +207,5 @@ > + size_t typeObjects; > + > + js::Vector<HugeStringInfo, 0, js::SystemAllocPolicy> hugeStrings; > + > + // The size of all the live things in the GC heap. This function just measures GC things that belong to the zone but not to a compartment within the zone, right? Please expand the comment to explain this. ::: js/src/jscompartment.h @@ +376,5 @@ > } > #endif > + > + /* Used by memory reporters. */ > + void *extraData; This is not a good name. Maybe |compartmentStats|, and expand the comment to explain that it's only valid during memory reporting. Come to think of it, this will be a dangling pointer once memory reporting ends. Can you NULL this field out in all compartments at the end of reporting? And please initialize it to NULL in JSCompartment's constructor. 
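One way to guarantee the NULLing-out requested above is a scope guard that owns the stats for the duration of the report and clears the back-pointers on exit. This is only a sketch of the pattern, with made-up type names, not the actual XPConnect code:

```cpp
#include <cstddef>
#include <vector>

// Per-compartment stats that exist only while a memory report runs.
struct CompartmentStatsSketch { size_t gcHeapObjects = 0; };

// Stand-in for JSCompartment: the stats pointer is null outside reporting.
struct CompartmentSketch {
    CompartmentStatsSketch* compartmentStats = nullptr;
};

// Scope guard: sets every compartment's stats pointer on entry and nulls it
// out on exit, so the pointer can never dangle after reporting ends.
class AutoReportingSession {
    std::vector<CompartmentSketch*>& comps_;
    std::vector<CompartmentStatsSketch> stats_;
  public:
    explicit AutoReportingSession(std::vector<CompartmentSketch*>& comps)
        : comps_(comps), stats_(comps.size()) {
        for (size_t i = 0; i < comps_.size(); i++)
            comps_[i]->compartmentStats = &stats_[i];
    }
    ~AutoReportingSession() {
        for (CompartmentSketch* c : comps_)
            c->compartmentStats = nullptr;  // no dangling pointers afterwards
    }
};
```

The same effect can be had with an explicit loop at the end of the reporter, as the review suggests; the guard just makes the clearing hard to forget on early returns.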
::: js/src/jsmemorymetrics.cpp @@ +323,1 @@ > } You had to remove the assertion in CompartmentStats::gcHeapThingsSize(). I think you can replicate it here, more or less. If you add rtStats->gcHeapGcThings to the sum of all zStats->gcHeapArenaAdmin values and the sum of all zStats->gcHeapUnusedGcThings values, it should be a multiple of the arena size. I hope you can. Getting all these numbers to add up correctly isn't easy, and we've had bugs along these lines before. So assertions help. Comment on attachment 714629 [details] [diff] [review] 8. Style fixes for XPC mem reporting Review of attachment 714629 [details] [diff] [review]: ----------------------------------------------------------------- Removing the whitespace is good. As for the other changes... I think the whole idea of taking a file written in one style and saying "we're using a different style now" (as was done a while back; I'm not blaming billm for this) is pretty bogus if you *don't actually change the style when you make the decision*. Because you end up with a complete mish-mash and some people know that the style actually used in the file shouldn't be copied... sigh. Anyway, I won't hold up progress in this bug arguing over such minutiae: r=me. Comment on attachment 714634 [details] [diff] [review] 9. XPC memory reporting Review of attachment 714634 [details] [diff] [review]: ----------------------------------------------------------------- ::: dom/workers/WorkerPrivate.cpp @@ +1418,5 @@ > + > + for (size_t i = 0; i != zoneStatsVector.length(); i++) { > + free(zoneStatsVector[i].extra1); > + // No need to free |extra2| because it's a static string. > + } Put zone stuff before compartment stuff. @@ +1435,5 @@ > + // This is the |cJSPathPrefix|. Each worker has exactly two compartments: > + // one for atoms, and one for everything else. 
> + nsAutoCString cJSPathPrefix(mRtPath); > + cJSPathPrefix += NS_LITERAL_CSTRING("zone(web-worker)/"); > + aZoneStats->extra1 = strdup(cJSPathPrefix.get()); Too much cargo-culting here :) - Rename cJSPathPrefix as pathPrefix. - The comment is wrong. Each worker has exactly one zone. - You're building a literal CString from a literal char* just so you can immediately pull out the literal char*. Just treat this one like aCompartmentStats->extra2, and then you won't need to free it in ~WorkerJSRuntimeStats. ::: js/public/MemoryMetrics.h @@ +183,5 @@ > ADD(gcHeapXML); > #endif > > ADD(stringCharsNonHuge); > + ADD(typeObjects); Oh, I guess this should have really been in patch 7? ::: js/xpconnect/src/XPCJSRuntime.cpp @@ +1556,5 @@ > namespace xpc { > > static nsresult > +ReportZoneStats(const JS::ZoneStats &zStats, > + const nsACString &path, Can you rename |path| as |pathPrefix| for consistency with ReportCompartmentStats? @@ +1563,4 @@ > { > size_t gcTotal = 0, gcHeapSundries = 0, otherSundries = 0; > > + CREPORT_GC_BYTES(path + NS_LITERAL_CSTRING("gc-heap/arena-admin"), Hmm, the CREPORT_* macros are named that because they are for compartments. (We also have RREPORT_* macros for the runtime.) But now they're used for zones too. Maybe rename as ZCREPORT_*? You'll need to update the comments above SUNDRIES_THRESHOLD, CREPORT_BYTES, and CREPORT_BYTES2. @@ +1606,5 @@ > + CREPORT_GC_BYTES(path + NS_LITERAL_CSTRING("gc-heap/xml"), > + zStats.gcHeapXML, > + "Memory on the garbage-collected JavaScript " > + "heap that holds E4X XML objects."); > +#endif Again: please remove all the XML stuff before landing. @@ +1609,5 @@ > + "heap that holds E4X XML objects."); > +#endif > + > + CREPORT_BYTES(path + NS_LITERAL_CSTRING("type-inference/type-objects"), > + zStats.typeObjects, This is the only measurement under "type-inference/" at the zone level, right? 
In which case, please rename it just "type-objects", which makes it clear that there aren't other zone-level measurements relating to type inference. @@ +1655,5 @@ > + REPORT_GC_BYTES(path + NS_LITERAL_CSTRING("gc-heap/sundries"), > + gcHeapSundries, > + "The sum of all the gc-heap " > + "measurements that are too small to be worth showing " > + "individually."); I realize it was pre-existing... this string could be made to fit on two lines. @@ +1664,5 @@ > + REPORT_BYTES(path + NS_LITERAL_CSTRING("other-sundries"), > + nsIMemoryReporter::KIND_HEAP, otherSundries, > + "The sum of all the non-gc-heap " > + "measurements that are too small to be worth showing " > + "individually."); Ditto. @@ +1909,5 @@ > + nsCString path(static_cast<char *>(zStats.extra1)); > + > + rv = ReportZoneStats(zStats, path, cb, closure, &gcTotal); > + NS_ENSURE_SUCCESS(rv, rv); > + } Put zone stuff before compartment stuff. @@ +2152,5 @@ > free(compartmentStatsVector[i].extra2); > } > + > + for (size_t i = 0; i != zoneStatsVector.length(); ++i) > + free(zoneStatsVector[i].extra1); Put zone stuff before compartment stuff. @@ +2160,5 @@ > + // Get the compartment's global. > + nsXPConnect *xpc = nsXPConnect::GetXPConnect(); > + JSContext *cx = xpc->GetSafeJSContext(); > + JSCompartment *comp = js::GetAnyCompartmentInZone(zone); > + nsCString path; Rename as |pathPrefix|, to match initExtraCompartmentStats(). @@ +2177,5 @@ > + } else { > + path.AssignLiteral("explicit/js-non-window/zones/"); > + } > + } else { > + path.AssignLiteral("explicit/js-non-window/zones/"); There are only really two cases here: (a) we have a top window (b) we don't have a top window. Currently (b) is covered by three |else| branches. I think you should be able to combine the conditions somehow so you only need to write |path.AssignLiteral("explicit/js-non-window/zones/");| once. 
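For reference, the path shapes under discussion can be sketched with a couple of helpers. The function names are invented for illustration; the real code builds these strings with nsPrintfCString and the macros above:

```cpp
#include <string>

// Zone-level reporter path: embeds the zone's address so distinct zones get
// distinct paths in about:memory.
std::string zonePath(const std::string& zoneAddr) {
    return "explicit/js-non-window/zones/zone(" + zoneAddr + ")/";
}

// Compartment paths nest under their zone, which is what makes the
// zone/compartment ownership visible in the report tree.
std::string compartmentPath(const std::string& zoneAddr,
                            const std::string& name) {
    return zonePath(zoneAddr) + "compartment(" + name + ")/";
}
```

Nesting the compartment path under the zone path is what lets the reporter show which compartments belong to which zone without any extra bookkeeping.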
@@ +2182,5 @@
> + }
> +
> + char zoneName[32];
> + sprintf(zoneName, "%p", (void *)zone);
> + nsCString zName(zoneName);

Use nsPrintfCString for this. There are examples in this file that you can crib from.

@@ +2229,5 @@
>
> + if (needZone) {
> + char zoneName[32];
> + sprintf(zoneName, "%p", (void *)js::GetCompartmentZone(c));
> + nsCString zName(zoneName);

Use nsPrintfCString for this.

Looking at the memory reporting more closely, there are three minor changes I'd like you to make to the paths.

NON-WINDOW OBJECTS
------------------

Currently it's like this.

-- js-non-window
   -- zones
      -- zone(0x7f4ee0b5a800)
         ++ compartment([System Principal], ...
         ++ compartment([System Principal], ...
         ++ compartment([System Principal], ...
         ++ compartment([System Principal], ...
         ++ gc-heap
         ── other-sundries
      ++ zone(...)
   ++ runtime
   ++ gc-heap

Here, a zone's size is measured as all the zone-specific data *plus* the sum of all the compartment sizes. Doing it this way makes it clear which compartments belong to which zone, which is good. This doesn't need changing.

WORKERS
-------

Currently it's like this.

-- workers/workers()/worker(resource://gre/modules/...
   ++ compartment(web-worker)
   ++ gc-heap
   ++ runtime
   ++ zone(web-worker)
   ── compartment(web-worker-atoms)/other-sundries:

I think it should be like this instead.

-- workers/workers()/worker(resource://gre/modules/...
   -- zone(web-worker)
      ++ compartment(web-worker)
      ── compartment(web-worker-atoms)/other-sundries
      ++ gc-heap
      ++ ...
   ++ gc-heap
   ++ runtime

I think that's easy to do and worth it for the consistency. Change #1: please do this... and use the "zone(<address>)" rather than "zone(web-worker)" for consistency with the "js-non-window" case.

WINDOWS
-------

Currently it's like this.

-- window-objects
   -- top(chrome://browser/content/browser.xul, id=3)
      -- active
         -- window(chrome://browser/content/browse...
            ++ js/compartment([System Principal], ...
            ++ layout
            ++ dom
            ── style-sheets
            ── property-tables
         -- window(about:blank)
            ++ js/compartment([System Principal], ...
            ++ dom
            ── style-sheets
   ++ js/zone(0x7f4ed5d80000)

There are two ways to go here. Either we can tie each compartment's memory usage to its window, and report the zone in pieces (like above). Or we can report each zone in a single place (consistent with the non-window cases above) but lose the clear connection between compartments and windows, like this:

-- window-objects
   -- top(chrome://browser/content/browser.xul, id=3)
      ++ js-zone(0x7f4ed5d80000)/
         ++ compartment([System Principal], ...
         ++ compartment([System Principal], ...
      -- active
         -- window(chrome://browser/content/browse...
            ++ layout
            ++ dom
            ── style-sheets
            ── property-tables
         -- window(about:blank)
            ++ dom
            ── style-sheets
   ++ js/zone(0x7f4ed5d80000)

The window-centric view easily answers the question "how much memory would we save if this window object disappeared". The zone-centric view easily answers the question "how much memory is used by JS for this top window?"

I think window-centric is probably better. Summing the JS values to get the zone's size is tedious but simple, whereas sometimes manually working out which compartment belongs to which window isn't easy. Also, sometimes windows with different names have compartments with the same name (this is common for "about:blank" compartments) and the zone-centric view would collapse those same-named compartments together.

So let's keep the window-centric view, but the "js/zone" can become "js-zone", because there's always only one zone per top window (as I mentioned in comment 12). And I just realized that "js/compartment" can become "js-compartment" because there's always one compartment per window.

Change #2 and #3: please make these two changes (in initExtra{Zone,Compartment}Stats). Thanks!

Finally, I'd love to do some more pre-landing measurements. Can you attach a rolled-up patch and tell me what version it applies against? Thanks.

Comment on attachment 714650 [details] [diff] [review]
19.
Allow many compartments in one zone

In js/src/jsgc.cpp, js::NewCompartment:

+ ScopedJSDeletePtr<Zone> zoneHolder;
+ if (!zone) {
+     zone = cx->new_<Zone>(rt);
+     if (!zone) return NULL;
+     if (!zone->init(cx))
+         return NULL;
+
+     zoneHolder.reset(zone);
+     zone->setGCLastBytes(8192, GC_NORMAL);
+ }

Even though Zone::init() is infallible at the moment, shouldn't the call to zoneHolder.reset() go before it?

Comment on attachment 714656 [details] [diff] [review]
21. Compartment sweeping

Review of attachment 714656 [details] [diff] [review]:
-----------------------------------------------------------------

In SweepCompartments, can you add a comment to explain what the connection is between keepAtleastOne, foundOne and the condition "!(read == end && !foundOne)"? I found this pretty confusing.

Comment on attachment 714654 [details] [diff] [review]
20. Group compartments into zones by tab

Review of attachment 714654 [details] [diff] [review]:
-----------------------------------------------------------------

r- while we get the API and the sandbox stuff sorted out.

::: dom/base/nsGlobalWindow.cpp
@@ +1960,5 @@
> + top = aNewInner->GetTop();
> + }
> + JSObject *sameZone = NULL;
> + if (top) {
> + sameZone = top->GetGlobalJSObject();

So, |top| here will be an outer window, which means that GetGlobalJSObject() will return the outer window proxy. This object isn't a global object in the JS sense, though it should be in the compartment of the current global. Is that enough?

@@ +1961,5 @@
> + }
> + JSObject *sameZone = NULL;
> + if (top) {
> + sameZone = top->GetGlobalJSObject();
> + }

I talked to smaug, and I think this should work well, because there aren't any docshells on the other side of the chrome->content boundary with content lifetime. The parent of the top-level content window is a chrome window with a much longer lifetime.

::: js/xpconnect/idl/nsIXPConnect.idl
@@ +319,1 @@
> */

Needs an IID rev.
@@ +324,5 @@
> in nsIPrincipal aPrincipal,
> in uint32_t aFlags);
>
> + nsIXPConnectJSObjectHolder
> + initClassesWithNewWrappedGlobalInZone(

I'm not very happy about this. Can we make initClassesWithNewWrappedGlobal just take a ZoneSpecifier?

::: js/xpconnect/src/XPCComponents.cpp
@@ +3278,5 @@
> nsIPrincipal *principal = sop->GetPrincipal();
>
> JSObject *sandbox;
>
> + sandbox = xpc::CreateGlobalObject(cx, &SandboxClass, principal, JS::SystemZone);

This is going to be pretty suboptimal in a lot of cases, and sandboxes account for an ever-growing portion of our compartments. Can you add a ZoneSpec to SandboxOptions, and parse it out of a "sameZoneAs" property in ParseOptionsObject? This should be very easy to do, as all of the existing machinery is there. It's fine to do as a separate patch, but I'd like the functionality to be a part of this landing so that Jetpack & co can start taking advantage of zones immediately.

What's the impact of something with page-lifetime ending up with the system zone? I'm assuming that it isn't an issue for correctness, but will it significantly increase fragmentation?

::: netwerk/base/src/ProxyAutoConfig.cpp
@@ +529,5 @@
> NS_ENSURE_TRUE(mContext, NS_ERROR_OUT_OF_MEMORY);
>
> JSAutoRequest ar(mContext);
>
> + mGlobal = JS_NewGlobalObject(mContext, &sGlobalClass, nullptr, JS::SystemZone);

SystemZone is per-runtime, right? Because this stuff is all happening in an off-main-thread runtime...

Created attachment 715835 [details] [diff] [review]
rolled up patch on top of 081cf5b0121e

I'll respond to the specific comments tomorrow. In the meantime, here's a rolled up patch for Nick to test.

Created attachment 716298 [details] [diff] [review]
20. Group compartments into zones by tab (v. 2)

This addresses the API issue.

Created attachment 716300 [details] [diff] [review]
22. add a parameter to specify which zone a sandbox is placed in

I think this is what you wanted.

Created attachment 716302 [details] [diff] [review]
23.
Add per-compartment isSystem flag I forgot to post this before. This is what fixes the problem where there were lots of decommitted arenas. Previously, the isSystem flag was being set inconsistently and causing us to use the wrong kind of chunk. When I started the zone patches, I put the isSystem flag on the zone. I thought it needed to be there because we use it to decide whether a new arena should be allocated in a system chunk or a non-system chunk, and arenas are now per-zone. However, isSystem doesn't really fit on the zone because a zone can contain a mixture of system compartments and non-system compartments. So it seemed like it would be simpler to put isSystem back on the compartment where it belongs. However, I added another isSystem flag to the zone. This is needed in the few cases (like the decision about which kind of chunk to use) where we only have a zone and not a compartment. mccr8 said: > Is compartment nuking probably going to wipe out the entire tab zone when we close the tab? Nuking is still compartment-based. I think in practice we'll nuke all the compartments in the tab's zone, but I haven't checked. njn said: >: I changed the worker naming as you recommended. However, keep in mind that workers have multiple zones. One for atoms and two others for globals. I don't know why there are two globals. bholley said: > So, |top| here will be an outer window, which means that GetGlobalJSObject() will return the > outer window proxy. This object isn't a global object in the JS sense, though it should be in > the compartment of the current global. Is that enough? Yeah, you can use any object as long as it's in the right zone. >"? And if so, I'm not sure how I would detect this. Is there a way to get the TabChildGlobal from nsGlobalWindow? > What's the impact of something with page-lifetime ending up with the system zone? I'm assuming > that it isn't an issue for correctness, but will it significantly increase fragmentation? It's not a correctness issue. 
It will increase fragmentation. The significance depends on when objects are allocated and how many of them there are. > SystemZone is per-runtime, right? Because this stuff is all happening in an off-main-thread > runtime... Each runtime has its own system zone, so this is fine. > However, I added another isSystem flag to the zone. This is needed in the > few cases (like the decision about which kind of chunk to use) where we only > have a zone and not a compartment.? Maybe it does thanks to the FreshZone stuff, because the first compartment in the zone is a system compartment so the zone is marked as a system zone? BTW, I'm bamboozled by SameZoneAs() -- you're casting from a JSObject* to a ZoneSpecifier? Comment on attachment 716302 [details] [diff] [review] 23. Add per-compartment isSystem flag Review of attachment 716302 [details] [diff] [review]: ----------------------------------------------------------------- This patch seems ok, but I'd like clarification on comment 63. (In reply to Bill McCloskey (:billm) from comment #62)"? Docshells form a tree, and nodes are labeled as either "content" or "chrome". In general, the root of the tree begins as chrome, and then at some point towards the leaves the nodes switch to being content. That switch is a chrome-content boundary, and is detected when content gets |top|. That is to say: in content, |top| will always return the deepest ancestor whose type is still "content".. > And if so, I'm not sure how I would detect this. Is there a way to get the > TabChildGlobal from nsGlobalWindow? It's tricky, because depending on the setup there may not be a TabChildGlobal at all. But smaug thinks we should add such an API, and use it. In particular, we should check if the nsGlobalWindow is descended from a TabChildGlobal, and if so, use the TabChildGlobal's zone. Otherwise, we do what the current patch does. Make sense? File a bug? > |top| will always return the deepest ancestor whose type is still "content". 
Except when browserframe is involved. Comment on attachment 716298 [details] [diff] [review] 20. Group compartments into zones by tab (v. 2) Review of attachment 716298 [details] [diff] [review]: ----------------------------------------------------------------- r=bholley Comment on attachment 716300 [details] [diff] [review] 22. add a parameter to specify which zone a sandbox is placed in Review of attachment 716300 [details] [diff] [review]: ----------------------------------------------------------------- I'm pretty sure you need the sameZoneAs object at some point, otherwise it'll always just be a proxy in the caller's compartment. Unless JS_NewGlobalObject handles that somehow? Created attachment 716814 [details] [diff] [review] 22. add a parameter to specify which zone a sandbox is placed in (v2) Fixed. Comment on attachment 716814 [details] [diff] [review] 22. add a parameter to specify which zone a sandbox is placed in (v2) Review of attachment 716814 [details] [diff] [review]: ----------------------------------------------------------------- In XPCWrappedNativeScope::EnsureXBLScope, please set: options.sameZoneAs = global; Do we have any way of testing this? Please let the jetpack folks know what this is about and how to use it. r=bholley with all that >? Yes, that happens. Right now browser.xul is in the system zone and the hidden window gets its own zone, which also has isSystem=true. This happens because the hidden window compartment is initially created with system principals and then its principals are somehow downgraded with JS_SetCompartmentPrincipals. However, the zone keeps the isSystem=true flag since JS_SetCompartmentPrincipals doesn't affect the zone. > BTW, I'm bamboozled by SameZoneAs() -- you're casting from a JSObject* to a ZoneSpecifier? The ZoneSpecifier is a tagged pointer. 0 means FreshZone, 1 means SystemZone, and anything else means it gets put in the zone of that JSObject*. So we're relying on 0 and 1 not being valid pointers. >. 
I was playing around with this today and I realized that browser.xul is actually created by InitTabChildGlobalInternal. So it does seem like I should figure out how to put that in the same zone as the rest of the page-level stuff. However, I still don't understand the relationship. How does one convert between a docshell, an nsGlobalWindow, and whatever creates the TabChildGlobal? I added a small test for the sandbox creation thing. Who should I talk to about Jetpack? (In reply to Bill McCloskey (:billm) from comment #72) > Who should I talk to about Jetpack? Dave Townsend should be able to point you in the right direction. Could this have caused a significant performance degradation, possibly debug-only? Windows debug reftests are red due to timeouts (unable to complete run in less than 2 hours) since this landed.. One thing to investigate is what CC times look like. When I was doing the initial compartment merging work, mochitest browser chrome would get ridiculous CC times due to (I assume) leakiness. (In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #73) > (In reply to Bill McCloskey (:billm) from comment #72) > > Who should I talk to about Jetpack? > > Dave Townsend should be able to point you in the right direction. Alex is your man and I filed bug 844180 to track adding support for this to the Jetpack SDK. (In reply to Jonathan Kew (:jfkthame) from comment #75) >. Though debug was 40%+ slower (went from 98-103 minutes up to 141-145 minutes). (I say "+" since there's overhead of downloading builds and tests in those times.) And the backout did indeed fix the slowdown; see I submitted a patch to the SDK in order to use this feature (bug 844180), it looks like the win is big. And I tested against inbound build, no particular crashes nor test fail in SDK test suite.
https://bugzilla.mozilla.org/show_bug.cgi?id=759585
Polymorphism is an important concept in programming, and novice programmers usually learn about it during the first months of studying. Polymorphism basically means that you can apply a similar operation to entities of different types. For instance, the count/1 function can be applied both to a range and to a list:

Enum.count(1..3)
Enum.count([1,2,3])

How is that possible? In Elixir, polymorphism is achieved by using an interesting feature called a protocol, which acts like a contract. For each data type you wish to support, this protocol must be implemented. All in all, this approach is not revolutionary, as it is found in other languages (like Ruby, for example). Still, protocols are really convenient, so in this article we will discuss how to define, implement and work with them while exploring some examples. Let's get started!

Brief Introduction to Protocols

So, as already mentioned above, a protocol has some generic code and relies on the specific data type to implement the logic. This is reasonable, because different data types may require different implementations. A data type can then dispatch on a protocol without worrying about its internals. Elixir has a bunch of built-in protocols, including Enumerable, Collectable, Inspect, List.Chars, and String.Chars. Some of them will be discussed later in this article. You may implement any of these protocols in your custom module and get a bunch of functions for free. For instance, having implemented Enumerable, you'll get access to all the functions defined in the Enum module, which is quite cool.

If you have come from the wondrous Ruby world full of objects, classes, fairies and dragons, you'll have met a very similar concept of mixins. For example, if you ever need to make your objects comparable, simply mix a module with the corresponding name into the class. Then just implement a spaceship <=> method and all instances of the class will get all methods like > and < for free.
This mechanism is somewhat similar to protocols in Elixir. Even if you have never met this concept before, believe me, it is not that complex. Okay, so first things first: the protocol must be defined, so let's see how it can be done in the next section.

Defining a Protocol

Defining a protocol does not involve any black magic—in fact, it is very similar to defining modules. Use defprotocol/2 to do it:

defprotocol MyProtocol do
end

Inside the protocol's definition you place functions, just like with modules. The only difference is that these functions have no body. It means that the protocol only defines an interface, a blueprint that should be implemented by all the data types that wish to dispatch on this protocol:

defprotocol MyProtocol do
  def my_func(arg)
end

In this example, a programmer needs to implement the my_func/1 function to successfully utilize MyProtocol. If the protocol is not implemented, an error will be raised.

Let's return to the example with the count/1 function defined inside the Enum module. Running the following code will end up with an error:

Enum.count 1
# ** (Protocol.UndefinedError) protocol Enumerable not implemented for 1
#    (elixir) lib/enum.ex:1: Enumerable.impl_for!/1
#    (elixir) lib/enum.ex:146: Enumerable.count/1
#    (elixir) lib/enum.ex:467: Enum.count/1

It means that the Integer does not implement the Enumerable protocol (what a surprise) and, therefore, we cannot count integers. But the protocol actually can be implemented, and this is easy to achieve.

Implementing a Protocol

Protocols are implemented using the defimpl/3 macro.
You specify which protocol to implement and for which type:

defimpl MyProtocol, for: Integer do
  def my_func(arg) do
    IO.puts(arg)
  end
end

Now you can make your integers countable by partly implementing the Enumerable protocol:

defimpl Enumerable, for: Integer do
  def count(_arg) do
    {:ok, 1} # integers always contain one element
  end
end

Enum.count(100) |> IO.puts # => 1

We will discuss the Enumerable protocol in more detail later in the article and implement its other functions as well. As for the type (passed to for:), you may specify any built-in type, your own alias, or a list of aliases:

defimpl MyProtocol, for: [Integer, List] do
end

On top of that, you may say Any:

defimpl MyProtocol, for: Any do
  def my_func(_) do
    IO.puts "Not implemented!"
  end
end

This will act like a fallback implementation, and an error will not be raised if the protocol is not implemented for some type. In order for this to work, set the @fallback_to_any attribute to true inside your protocol (otherwise the error will still be raised):

defprotocol MyProtocol do
  @fallback_to_any true
  def my_func(arg)
end

You can now utilize the protocol for any supported type:

MyProtocol.my_func(5)      # simply prints out 5
MyProtocol.my_func("test") # prints "Not implemented!"

A Note About Structs

The implementation for a protocol can be nested inside a module. If this module defines a struct, you don't even need to specify for: when calling defimpl:

defmodule Product do
  defstruct title: "", price: 0

  defimpl MyProtocol do
    def my_func(%Product{title: title, price: price}) do
      IO.puts "Title #{title}, price #{price}"
    end
  end
end

In this example, we define a new struct called Product and implement our demo protocol. Inside, simply pattern-match the title and price and then output a string. Remember, however, that an implementation does not have to be nested inside a module; this means you can easily extend any module without accessing its source code.
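With the nested implementation in place, calling the protocol function on a Product dispatches to it automatically. A quick sketch (the title and price values here are just made up for illustration; MyProtocol and Product are the definitions from above):

MyProtocol.my_func(%Product{title: "Guitar", price: 150})
# prints "Title Guitar, price 150"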
Example: String.Chars Protocol

Okay, enough with abstract theory: let's have a look at some examples. I am sure you have employed the IO.puts/2 function quite extensively to output debugging info to the console when playing around with Elixir. Surely, we can output various built-in types easily:

IO.puts 5
IO.puts "test"
IO.puts :my_atom

But what happens if we try to output our Product struct created in the previous section? I will place the corresponding code inside the Main module because otherwise you'll get an error saying that the struct is not defined or accessed in the same scope:

defmodule Product do
  defstruct title: "", price: 0
end

defmodule Main do
  def run do
    %Product{title: "Test", price: 5} |> IO.puts
  end
end

Main.run

Having run this code, you'll get an error:

(Protocol.UndefinedError) protocol String.Chars not implemented for %Product{price: 5, title: "Test"}

Aha! It means that the puts function relies on the built-in String.Chars protocol. As long as it is not implemented for our Product, the error is being raised. String.Chars is responsible for converting various structures to binaries, and the only function that you need to implement is to_string/1, as stated by the documentation. Why don't we implement it now?

defmodule Product do
  defstruct title: "", price: 0

  defimpl String.Chars do
    def to_string(%Product{title: title, price: price}) do
      "#{title}, $#{price}"
    end
  end
end

Having this code in place, the program will output the following string:

Test, $5

Which means that everything is working just fine!

Example: Inspect Protocol

Another very common function is IO.inspect/2 to get information about a construct. There is also an inspect/2 function defined inside the Kernel module—it performs inspection according to the Inspect built-in protocol.
Our Product struct can be inspected right away, and you'll get some brief information about it:

%Product{title: "Test", price: 5} |> IO.inspect
# or:
%Product{title: "Test", price: 5} |> inspect |> IO.puts

It will return %Product{price: 5, title: "Test"}. But, once again, we can easily implement the Inspect protocol that requires only the inspect/2 function to be coded:

defmodule Product do
  defstruct title: "", price: 0

  defimpl Inspect do
    def inspect(%Product{title: title, price: price}, _) do
      "That's a Product struct. It has a title of #{title} and a price of #{price}. Yay!"
    end
  end
end

The second argument passed to this function is the list of options, but we are not interested in them.

Example: Enumerable Protocol

Now let's see a slightly more complex example while talking about the Enumerable protocol. This protocol is employed by the Enum module, which presents us with such convenient functions as each/2 and count/1 (without it, you would have to stick with plain old recursion). Enumerable defines three functions that you have to flesh out in order to implement the protocol:

- count/1 returns the enumerable's size.
- member?/2 checks whether the enumerable contains an element.
- reduce/3 applies a function to each element of the enumerable.

Having all those functions in place, you'll get access to all the goodies provided by the Enum module, which is a really good deal. As an example, let's create a new struct called Zoo.
It will have a title and a list of animals:

defmodule Zoo do
  defstruct title: "", animals: []
end

Each animal will also be represented by a struct:

defmodule Animal do
  defstruct species: "", name: "", age: 0
end

Now let's instantiate a new zoo:

defmodule Main do
  def run do
    my_zoo = %Zoo{
      title: "Demo Zoo",
      animals: [
        %Animal{species: "tiger", name: "Tigga", age: 5},
        %Animal{species: "horse", name: "Amazing", age: 3},
        %Animal{species: "deer", name: "Bambi", age: 2}
      ]
    }
  end
end

Main.run

So we have a "Demo Zoo" with three animals: a tiger, a horse, and a deer. What I'd like to do now is add support for the count/1 function, which will be used like this:

Enum.count(my_zoo) |> IO.inspect

Let's implement this functionality now!

Implementing the Count Function

What do we mean when saying "count my zoo"? It sounds a bit strange, but probably it means counting all the animals that live there, so the implementation of the underlying function will be quite simple:

defmodule Zoo do
  defstruct title: "", animals: []

  defimpl Enumerable do
    def count(%Zoo{animals: animals}) do
      {:ok, Enum.count(animals)}
    end
  end
end

All we do here is rely on the count/1 function while passing a list of animals to it (because this function supports lists out of the box). A very important thing to mention is that the count/1 function must return its result in the form of a tuple {:ok, result} as dictated by the docs. If you return only a number, an error ** (CaseClauseError) no case clause matching will be raised. That's pretty much it. You can now say Enum.count(my_zoo) inside Main.run, and it should return 3 as a result. Good job!

Implementing the Member? Function

The next function the protocol defines is member?/2. It should return a tuple {:ok, boolean} as a result that says whether an enumerable (passed as the first argument) contains an element (the second argument). I want this new function to say whether a particular animal lives in the zoo or not.
Therefore, the implementation is pretty simple as well:

defmodule Zoo do
  defstruct title: "", animals: []

  defimpl Enumerable do
    # ...
    def member?(%Zoo{title: _, animals: animals}, animal) do
      {:ok, Enum.member?(animals, animal)}
    end
  end
end

Once again, note that the function accepts two arguments: an enumerable and an element. Inside we simply rely on the member?/2 function to search for an animal in the list of all animals. So now we run:

Enum.member?(my_zoo, %Animal{species: "tiger", name: "Tigga", age: 5}) |> IO.inspect

And this should return true as we indeed have such an animal in the list!

Implementing the Reduce Function

Things get a bit more complex with the reduce/3 function. It accepts the following arguments:

- an enumerable to apply the function to
- an accumulator to store the result
- the actual reducer function to apply

What's interesting is that the accumulator actually contains a tuple with two values: a verb and a value: {verb, value}. The verb is an atom and may have one of the following three values:

- :cont (continue)
- :halt (terminate)
- :suspend (temporarily suspend)

The resulting value returned by the reduce/3 function is also a tuple containing the state and a result. The state is also an atom and may have the following values:

- :done (processing is done, that's the final result)
- :halted (processing was stopped because the accumulator contained the :halt verb)
- :suspended (processing was suspended)

If the processing was suspended, we should return a function representing the current state of the processing.
All these requirements are nicely demonstrated by the implementation of the reduce/3 function for lists (taken from the docs):

def reduce(_, {:halt, acc}, _fun), do: {:halted, acc}
def reduce(list, {:suspend, acc}, fun), do: {:suspended, acc, &reduce(list, &1, fun)}
def reduce([], {:cont, acc}, _fun), do: {:done, acc}
def reduce([h | t], {:cont, acc}, fun), do: reduce(t, fun.(h, acc), fun)

We can use this code as an example and code our own implementation for the Zoo struct:

defmodule Zoo do
  defstruct title: "", animals: []

  defimpl Enumerable do
    # ...
    def reduce(_, {:halt, acc}, _fun), do: {:halted, acc}
    def reduce(zoo, {:suspend, acc}, fun), do: {:suspended, acc, &reduce(zoo, &1, fun)}
    def reduce(%Zoo{animals: []}, {:cont, acc}, _fun), do: {:done, acc}
    def reduce(%Zoo{animals: [h | t]}, {:cont, acc}, fun) do
      reduce(%Zoo{animals: t}, fun.(h, acc), fun)
    end
  end
end

In the last function clause, we take the head of the list containing all animals, apply the function to it, and then perform reduce against the tail. When there are no more animals left (the third clause), we return a tuple with the state of :done and the final result. The first clause returns a result if the processing was halted. The second clause returns a function if the :suspend verb was passed.

Now, for example, we can calculate the total age of all our animals easily:

Enum.reduce(my_zoo, 0, fn(animal, total_age) -> animal.age + total_age end) |> IO.puts

Basically, now we have access to all the functions provided by the Enum module. Let's try to utilize join/2:

Enum.join(my_zoo) |> IO.inspect

However, you'll get an error saying that the String.Chars protocol is not implemented for the Animal struct. This is happening because join tries to convert each element to a string, but cannot do it for the Animal. Therefore, let's also implement the String.Chars protocol now:

defmodule Animal do
  defstruct species: "", name: "", age: 0

  defimpl String.Chars do
    def to_string(%Animal{species: species, name: name, age: age}) do
      "#{name} (#{species}), aged #{age}"
    end
  end
end

Now everything should work just fine. Also, you may try to run each/2 and display individual animals:

Enum.each(my_zoo, &(IO.puts(&1)))

Once again, this works because we have implemented two protocols: Enumerable (for the Zoo) and String.Chars (for the Animal).
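Since Enumerable is now implemented, every other function from the Enum module works on our zoo as well. For instance, collecting just the names of the animals (a quick sketch reusing the my_zoo value built earlier):

Enum.map(my_zoo, fn(animal) -> animal.name end) |> IO.inspect
# => ["Tigga", "Amazing", "Bambi"]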
Conclusion

In this article, we have discussed how polymorphism is implemented in Elixir using protocols. You have learned how to define and implement protocols, as well as utilize built-in protocols: Enumerable, Inspect, and String.Chars.

As an exercise, you can try to empower our Zoo module with the Collectable protocol so that the Enum.into/2 function can be properly utilized. This protocol requires the implementation of only one function: into/2, which collects values and returns the result (note that it also has to support the :done, :halt and :cont verbs; the state should not be reported). Share your solution in the comments!

I hope you have enjoyed reading this article. If you have any questions left, don't hesitate to contact me. Thank you for your patience, and see you soon!
https://code.tutsplus.com/articles/polymorphism-with-protocols-in-elixir--cms-29081
BPEL M4 information What's in M4 ? This is an interim release for a bunch of bug fixes and some new features. It includes most (if not all) of the bug fixes contributed by the JBoss Riftsaw and BPEL team (thanks guys!) According to the milestone plan we now have the ability to deploy to ODE as well as Riftsaw runtimes, although the deployment classes were coded before the WTP deployment framework was completed. See here for more info. We are working on this and should have a working and tested patch soon. Development and design work on process debugging has still not begun and may not be available until the 1.0 release or later. Here is a complete list of bug fixes and features that have been added since M3. The BPEL "New File" Wizard has been enhanced to allow specification of a SOAP or HTTP service endpoint, which generates the <service> and <binding> elements in the WSDL artifact. See here for details. A new Property Sheet Tab has been added for the Process and Activities. This tab displays a list of all namespaces and namespace prefixes that are in-scope. Missing namespace prefixes can also be assigned here. See here for details.
http://www.eclipse.org/bpel/users/m4.php
Rahul wrote:

Hi,
Users who want to take a look at this new Echo theme would want a quick and easy way to install and check it out. Currently all I see is a list of SVG files in a wiki page. Can we put up a tarball appropriately packaged or a RPM package that is updated frequently perhaps?

J5 wrote:

Attached is the python script used to pull them all from the wiki. I whipped it up pretty quickly and there are several areas where it can be improved if someone wants to work on it:

* It uses a base directory of echo_art/; this should be able to be overridden by a command line switch
* If the directory exists it should confirm with the user and then delete or move the old directory
* Statistics could be added, as well as a way to diff previous pulls to see what has changed
* There is already a filter function that returns a new name and directory based on the icon's name. Right now the script just checks for 'image-missing' and returns None indicating that icon should not be downloaded. This filter function can be expanded to filter icons into specific directories for easier packaging provided you have a consistent naming scheme on the wiki.
* Inputs need to be filtered to make sure someone doesn't add input on the wiki that would cause the script to do bad things on your machine. Right now it is not an issue but if the filter function gets more complex (such as using a part of a file name as the directory) you might want to scan those directory strings to make sure characters like ~ or .. don't get in.
* Someone could add a specfile generator and auto packager so that new sets of icons can be tested easily.

Hope this helps you guys out and I can't wait to see the new icons in action.
-- John (J5) Palmieri <johnp redhat com>

#!/bin/env python
## echo_pull - pulls echo icons off the fedora wiki
## Copyright (C) 2006 Red Hat, Inc.
## Copyright (C) 2006 John (J5) Palmieri <john.

base_url=''
echo_dir='wiki/Artwork/EchoDevelopment'
base_dir='echo_art'

import sys
import os
import urllib2
import HTMLParser
import re

href_re=re.compile('\/(.*)\?.*target=(.*[(\.svg)(\.png)])')

def _mkdir(newdir):
    """works the way a good mkdir should :)
        - already exists, silently complete
        - regular file in the way, raise an exception
        - parent directory(ies) does not exist, make them as well
    """
    if os.path.isdir(newdir):
        pass
    elif os.path.isfile(newdir):
        raise OSError("a file with the same name as the desired " \
                      "dir, '%s', already exists." % newdir)
    else:
        head, tail = os.path.split(newdir)
        if head and not os.path.isdir(head):
            _mkdir(head)
        #print "_mkdir %s" % repr(newdir)
        if tail:
            os.mkdir(newdir)

class ArtParser(HTMLParser.HTMLParser):
    def handle_starttag(self, tag, attr):
        if tag == 'a':
            for a in attr:
                if a[0] == 'href':
                    match = href_re.match(a[1])
                    if match:
                        self.filter_and_download(base_url, a[1], match.group(2))

    def download(self, url, directory, filename):
        if not os.path.isdir(directory):
            _mkdir(directory)
        file_path = os.path.join(directory, filename)
        file = os.popen('wget "%s" -O %s' % (url, file_path))
        error = file.close()
        if error:
            sys.stderr.write('Error downloading %s to %s\n' % (url, file_path))

    def filter(self, file):
        if file.startswith('image-missing'):
            return
        else:
            return (base_dir, file)

    def filter_and_download(self, base_url, resource, file):
        art_url = "%s%s" % (base_url, resource)
        filter = self.filter(file)
        if filter:
            (filtered_dir, filtered_file) = filter
            self.download(art_url, filtered_dir, filtered_file)

def main():
    #get the main page
    url = '%s%s' % (base_url, echo_dir)
    try:
        data = urllib2.urlopen(url).read()
    except urllib2.HTTPError, e:
        print "HTTP error: %d" % e.code
        exit(1)
    except urllib2.URLError, e:
        print "Network error: %s" % e
        exit(2)

    #pull out <a> tags with a target graphics in the href
    p = ArtParser()
    p.feed(data)
    p.close()

main()
https://www.redhat.com/archives/fedora-art-list/2006-August/msg00134.html
I want add Yes.keepTerminator flag on lineSplitter.

On 6/9/21 2:48 PM, Marcone wrote:
> I want add Yes.keepTerminator flag on lineSplitter.

It's a template parameter, which is specified after an exclamation mark. The trouble is, Flag is defined in std.typecons. I would expect std.typecons (or at least Flag from it) to be publicly imported, but it is not. I think that is an interface bug: if it appears in the interface (of lineSplitter), it should be available without needing to import additional modules.

import std.string;
import std.stdio;
import std.typecons;

void main() {
    auto input = "line1\nline2\nline3";
    auto range = input.lineSplitter!(Yes.keepTerminator);
    writeln(range);
}

Ali
https://forum.dlang.org/thread/embtthqmtqzqufkfcujn@forum.dlang.org
This is the mail archive of the newlib@sourceware.org mailing list for the newlib project.

On 2013-11-20 15:35, Joel Sherrill wrote:
On 11/20/2013 8:26 AM, Corinna Vinschen wrote:
On Nov 20 07:38, Joel Sherrill wrote:
On 11/20/2013 3:46 AM, Corinna Vinschen wrote..

Thanks. The guards are tricky. glibc has

#if defined __USE_BSD || defined __USE_XOPEN_EXTENDED

it as XSI with no explicit macros that I am spotting. Is that close enough?

I don't understand the question. BSD and XOPEN don't correspond to strict ANSI.

I was just pointing out that glibc has BSD and XOPEN as guards. Is that equivalent?

In FreeBSD we have this (<stdlib.h>):

/*
 * Extensions made by POSIX relative to C.  We don't know yet which edition
 * of POSIX made these extensions, so assume they've always been there until
 * research can be done.
 */
#if __POSIX_VISIBLE /* >= ??? */
int	 posix_memalign(void **, size_t, size_t);	/* (ADV) */
int	 rand_r(unsigned *);				/* (TSF) */
char	*realpath(const char * __restrict, char * __restrict);
int	 setenv(const char *, const char *, int);
int	 unsetenv(const char *);
#endif

--
https://sourceware.org/ml/newlib/2013/msg00926.html
FileSecurity Class

Represents the access control and audit security for a file. This class cannot be inherited.

Assembly: mscorlib (in mscorlib.dll)

The FileSecurity class specifies the access rights for a system file and how access attempts are audited. This class represents access and audit rights as a set of rules. Each access rule is represented by a FileSystemAccessRule object, while each audit rule is represented by a FileSystemAuditRule object.

The FileSecurity class hides many of the details of DACLs and SACLs; you do not have to worry about ACE ordering or null DACLs. Use the FileSecurity class to retrieve, add, or change the access rules that represent the DACL and SACL of a file. To persist new or changed access or audit rules to a file, use the SetAccessControl method. To retrieve access or audit rules from an existing file, use the GetAccessControl method.

The following code example uses the FileSecurity class to add and then remove an access control list (ACL) entry from a file. You must supply a valid user or group account to run this example.

using System;
using System.IO;
using System.Security.AccessControl;

namespace FileSystemExample
{
    class FileExample
    {
        public static void Main()
        {
            try
            {
                string fileName = "test.xml";

                Console.WriteLine("Adding access control entry for " + fileName);
                // Add the access control entry to the file.
                AddFileSecurity(fileName, @"DomainName\AccountName",
                    FileSystemRights.ReadData, AccessControlType.Allow);

                Console.WriteLine("Removing access control entry from " + fileName);
                // Remove the access control entry from the file.
                RemoveFileSecurity(fileName, @"DomainName\AccountName",
                    FileSystemRights.ReadData, AccessControlType.Allow);

                Console.WriteLine("Done.");
            }
            catch (Exception e)
            {
                Console.WriteLine(e);
            }
        }

        // Adds an ACL entry on the specified file for the specified account.
        public static void AddFileSecurity(string fileName, string account,
            FileSystemRights rights, AccessControlType controlType)
        {
            // Get a FileSecurity object that represents the
            // current security settings.
            FileSecurity fSecurity = File.GetAccessControl(fileName);

            // Add the FileSystemAccessRule to the security settings.
            fSecurity.AddAccessRule(new FileSystemAccessRule(account,
                rights, controlType));

            // Set the new access settings.
            File.SetAccessControl(fileName, fSecurity);
        }

        // Removes an ACL entry on the specified file for the specified account.
        public static void RemoveFileSecurity(string fileName, string account,
            FileSystemRights rights, AccessControlType controlType)
        {
            // Get a FileSecurity object that represents the
            // current security settings.
            FileSecurity fSecurity = File.GetAccessControl(fileName);

            // Remove the FileSystemAccessRule from the security settings.
            fSecurity.RemoveAccessRule(new FileSystemAccessRule(account,
                rights, controlType));

            // Set the new access settings.
            File.SetAccessControl(fileName, fSecurity);
        }
    }
}

Available since 2.0

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
https://msdn.microsoft.com/en-us/library/system.security.accesscontrol.filesecurity.aspx
Definition

The type CoderPipe2<Coder1, Coder2> can be used to combine two coders of type Coder1 and Coder2 into one single coder. This works in a way analogous to a pipe of the operating system: in encoding mode the original input is processed by Coder1. Its output is fed into Coder2. The output of Coder2 becomes the output of the pipe. In decoding mode the situation is reversed: the data is sent through Coder2 first and then through Coder1.

We also provide pipes for combining more than two coders (up to six): CoderPipe3, ..., CoderPipe6. (Since these classes have a similar interface to CoderPipe2, we do not include manual pages for them.)

#include <LEDA/coding/coder_util.h>
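The pipe semantics described above are easy to prototype. The following sketch is plain Python, not LEDA's C++ API; CoderPipe2, ShiftCoder, and ReverseCoder here are illustrative stand-ins for LEDA coder classes:

```python
class CoderPipe2:
    """Compose two coders: encode runs coder1 then coder2;
    decode reverses the order (coder2 first, then coder1)."""

    def __init__(self, coder1, coder2):
        self.coder1 = coder1
        self.coder2 = coder2

    def encode(self, data):
        return self.coder2.encode(self.coder1.encode(data))

    def decode(self, data):
        return self.coder1.decode(self.coder2.decode(data))


class ShiftCoder:
    """Toy coder: shift every byte by a fixed amount (mod 256)."""

    def __init__(self, shift):
        self.shift = shift

    def encode(self, data):
        return bytes((b + self.shift) % 256 for b in data)

    def decode(self, data):
        return bytes((b - self.shift) % 256 for b in data)


class ReverseCoder:
    """Toy coder: reverse the byte sequence."""

    def encode(self, data):
        return data[::-1]

    def decode(self, data):
        return data[::-1]


pipe = CoderPipe2(ShiftCoder(3), ReverseCoder())
msg = b"hello"
assert pipe.decode(pipe.encode(msg)) == msg  # round trip through both coders
```

Because decode undoes the coders in reverse order, the round trip restores the original input, which is exactly the property the operating-system-pipe analogy relies on.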
http://www.algorithmic-solutions.info/leda_manual/CoderPipe2.html
In the last article in the image processing series we discussed convolutional neural networks. We even created one, but it must be recognized that it was very simple. The problem when we want good results is that we need a lot of labelled images, but also a lot of resources, because very quickly we have to stack a good number of layers in our deep neural network in order to get great accuracy. In short, the question of time and resources, but also the choice of the hyperparameters of the neural network, can become key for many projects. What if we reused a piece of an existing, pre-trained neural network? This is exactly the goal we are trying to achieve with Transfer Learning! And to do this there are a lot of neural networks that are already available. We will start in this article with one of the most famous: VGG.

What is VGG?

VGG16 is a convolutional neural network model designed by K. Simonyan and A. Zisserman. The implementation details can be found in the paper "Very Deep Convolutional Networks for Large-Scale Image Recognition". This model achieves 92.7% test accuracy on ImageNet, which aggregates more than 14 million images belonging to 1000 classes.

Why VGG-16? Quite simply because this neural network includes 16 deep layers.

So of course, you could create this neural network by yourself, from scratch, then find the best hyperparameters, and finally train it. But that would take a lot of your time and resources... so why not use all the settings of this model and see how to complete this network by adding custom layers to it? This is exactly what we are going to do in this article, and we will be doing it with some labels/classes that VGG has never been trained on. Crazy, isn't it? That's kind of the magic of convolutional neural networks. We will in fact be able to reuse the feature map mechanisms that were produced by VGG to detect new shapes.
VGG & TensorFlow

Good news: TensorFlow provides the VGG-16 model as standard and therefore makes Transfer Learning very easy. In fact it provides other pre-trained models as standard, such as:

- VGG16
- VGG19
- ResNet50
- Inception V3
- Xception

You will see that reusing these models is child's play :)

Let's build a fruit detector!

To test our custom VGG-16 Transfer Learning model we will use a dataset made up of fruit images (131 types of fruit to be exact). You can find this dataset on Kaggle at the following address:

Be careful if you want to follow me step by step and create your own neural network: know that you will need power (GPU/TPU). So I suggest you do like me and create a notebook on Kaggle. You can watch mine here:

Dataset presentation & discovery

The dataset is structured by directory (and you will see that this structure has not been chosen randomly):

We have two datasets: Training and Test. In each of these directories there are sub-directories (labels) in which we have photos of the different fruits. Here is some additional information that may be useful later:

- Images: 90483
- One fruit per image
- Training: 67703 images
- Test: 19543 images
- Labels/fruits: 131
- Image size: 100x100 pixels

First steps ...

Let's start by importing the necessary libraries (including the Keras classes used further down):

import numpy as np
import pandas as pd
from glob import glob
import matplotlib.pyplot as plt
from skimage.io import imread, imshow
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model

Let's look at an image:

image = imread("/content/drive/MyDrive/Colab Notebooks/fruits-360/Training/Apple Braeburn/0_100.jpg")
plt.imshow(image)

Perfect, we have a beautiful apple in color (RGB).

image.shape
(100, 100, 3)

The size of the images is confirmed: 100 by 100 pixels.

Data Augmentation with TensorFlow

90,483 images is fine, but much more would be even better. I will use this article to introduce what is called "data augmentation".
The principle is very simple: the idea is to derive variants of an image by shifting, rotating and zooming it, in order to duplicate it in several copies. From one image we can therefore get several new images and thus improve the learning of our model.

With TensorFlow we will use what are called Generators. There are several, but here we are going to use the ImageDataGenerator() image generator, which will do all this duplication work for us automatically.

To start, we configure the way the image combinations will be created with the ImageDataGenerator() function. Then we apply this generator to our two datasets (Training and Test):

src_path_train = "/content/drive/MyDrive/Colab Notebooks/fruits-360/Training"
src_path_test = "/content/drive/MyDrive/Colab Notebooks/fruits-360/Test"

IMSIZE = [100, 100]   # image size: the dataset images are 100x100 pixels
batch_size = 32

image_gen = ImageDataGenerator(
    rescale=1 / 255.0,
    rotation_range=20,
    zoom_range=0.05,
    width_shift_range=0.05,
    height_shift_range=0.05,
    shear_range=0.05,
    horizontal_flip=True,
    fill_mode="nearest",
    validation_split=0.20)

# create generators
train_generator = image_gen.flow_from_directory(
    src_path_train,
    target_size=IMSIZE,
    shuffle=True,
    batch_size=batch_size,
)

test_generator = image_gen.flow_from_directory(
    src_path_test,
    target_size=IMSIZE,
    shuffle=True,
    batch_size=batch_size,
)

Found 67703 images belonging to 131 classes.
Found 19543 images belonging to 131 classes.

Here it is! Simple, isn't it?

Modeling

We will now create the model from VGG-16. Three things to remember here:

- We use the VGG16 class provided by TensorFlow (here we use the imagenet weights); include_top=False specifies that we take the whole model except the last layer.
- We freeze the layers of the neural network so as not to overwrite the learning already retrieved (layer.trainable = False).
- We add a new Dense layer at the end (we could add others, by the way); it is this layer that will make the choice of this or that fruit.
NBCLASSES = 131

train_image_files = glob(src_path_train + '/*/*.jp*g')
test_image_files = glob(src_path_test + '/*/*.jp*g')

def create_model():
    vgg = VGG16(input_shape=IMSIZE + [3], weights='imagenet', include_top=False)

    # Freeze existing VGG already trained weights
    for layer in vgg.layers:
        layer.trainable = False

    # get the VGG output
    out = vgg.output

    # Add new dense layer at the end
    x = Flatten()(out)
    x = Dense(NBCLASSES, activation='softmax')(x)

    model = Model(inputs=vgg.input, outputs=x)

    model.compile(loss="binary_crossentropy",
                  optimizer="adam",
                  metrics=['accuracy'])

    model.summary()

    return model

mymodel = create_model()

Model: "model_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         [(None, 100, 100, 3)]     0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 100, 100, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 100, 100, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 50, 50, 64)        0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 50, 50, 128)       73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 50, 50, 128)       147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 25, 25, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 25, 25, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 25, 25, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 25, 25, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 12, 12, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 12, 12, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 12, 12, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 6, 6, 512)         0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 6, 6, 512)         2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 3, 3, 512)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 4608)              0
_________________________________________________________________
dense_1 (Dense)              (None, 131)               603779
=================================================================
Total params: 15,318,467
Trainable params: 603,779
Non-trainable params: 14,714,688
_________________________________________________________________

We find all the layers of VGG-16 upstream and the added layer (Dense / 131) at the end. Also note the number of pre-trained parameters (14,714,688) that will be reused.

Model Training

The training will take a long time, a really long time, if you don't have a GPU locally; use Colab or Kaggle if you don't have one.
epochs = 30
early_stop = EarlyStopping(monitor='val_loss', patience=2)

mymodel.fit(
    train_generator,
    validation_data=test_generator,
    epochs=epochs,
    steps_per_epoch=len(train_image_files) // batch_size,
    validation_steps=len(test_image_files) // batch_size,
    callbacks=[early_stop]
)

Epoch 1/10
2115/2115 [==============================] - 647s 304ms/step - loss: 0.0269 - accuracy: 0.6334 - val_loss: 0.0085 - val_accuracy: 0.8802
Epoch 2/10
2115/2115 [==============================] - 289s 136ms/step - loss: 0.0037 - accuracy: 0.9787 - val_loss: 0.0055 - val_accuracy: 0.9295
Epoch 3/10
2115/2115 [==============================] - 290s 137ms/step - loss: 0.0018 - accuracy: 0.9923 - val_loss: 0.0047 - val_accuracy: 0.9391
Epoch 4/10
2115/2115 [==============================] - 296s 140ms/step - loss: 0.0012 - accuracy: 0.9959 - val_loss: 0.0043 - val_accuracy: 0.9522
Epoch 5/10
2115/2115 [==============================] - 298s 141ms/step - loss: 8.8540e-04 - accuracy: 0.9967 - val_loss: 0.0040 - val_accuracy: 0.9524
Epoch 6/10
2115/2115 [==============================] - 298s 141ms/step - loss: 6.6982e-04 - accuracy: 0.9985 - val_loss: 0.0037 - val_accuracy: 0.9600
Epoch 7/10
2115/2115 [==============================] - 299s 142ms/step - loss: 5.5506e-04 - accuracy: 0.9984 - val_loss: 0.0035 - val_accuracy: 0.9613
Epoch 8/10
2115/2115 [==============================] - 353s 167ms/step - loss: 4.5906e-04 - accuracy: 0.9988 - val_loss: 0.0037 - val_accuracy: 0.9599
Epoch 9/10
2115/2115 [==============================] - 295s 139ms/step - loss: 3.9744e-04 - accuracy: 0.9987 - val_loss: 0.0033 - val_accuracy: 0.9680
Epoch 10/10
2115/2115 [==============================] - 296s 140ms/step - loss: 3.4436e-04 - accuracy: 0.9993 - val_loss: 0.0035 - val_accuracy: 0.9671

Quick evaluation

score = mymodel.evaluate_generator(test_generator)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Test loss: 0.003505550790578127
Test accuracy: 0.9680007100105286

With an accuracy of 97% we can say that the model did its job very well... and did you see? In so few lines of Python/TensorFlow. Once again I invite you to look at my notebook on Kaggle:

Try adding layers, changing settings, etc.
http://aishelf.org/vgg-transfer-learning/
Bledsoe Mutual Funds: 401k, Money Market Funds

A job at S&S Air pays $50,000 per year. The company will match your 401(k) contribution up to 5 percent of your salary, and it has chosen Bledsoe Financial Services as its 401(k) plan administrator. Here are the investment options offered for employees:

Company Stock
One option in the 401(k) plan is stock in S&S Air. The company is currently privately held. However, when you interviewed the owners, Mark Sexton and Todd Story, they informed you the company stock was expected to go public in the next three years. Until then, the company stock price is simply set each year by the board of directors.

Bledsoe S&P 500 Index Fund
This mutual fund tracks the S&P 500. Stocks in the fund are weighted exactly the same as the S&P 500. This means the fund return is approximately the return on the S&P 500, minus expenses. Because an index fund purchases assets based on the composition of the index it is following, the fund manager is not required to research stocks and make investment decisions. The result is that the fund expenses are usually low. The Bledsoe S&P 500 Index Fund charges expenses of .15 percent of assets per year.

Bledsoe Small-Cap Fund
This fund primarily invests in small capitalization stocks. As such, the returns of the fund are more volatile. The fund can also invest 10 percent of its assets in companies based outside of the United States. This fund charges 1.70 percent in expenses.

Bledsoe Large-Company Stock Fund
This fund invests primarily in large capitalization stocks of companies based in the United States. The fund is managed by Evan Bledsoe and has outperformed the market in six of the last eight years. The fund charges 1.50 percent in expenses.

Bledsoe Bond Fund
This fund invests in long-term corporate bonds issued by U.S.-domiciled companies. The fund is restricted to investments in bonds with an investment-grade credit rating. This fund charges 1.4 percent in expenses.
Bledsoe Money Market Fund
This fund invests in short-term, high credit-quality debt instruments, which include Treasury bills. As such, the return on the money market fund is only slightly higher than the return on Treasury bills. Because of the credit quality and short-term nature of the investments, there is only a very slight risk of negative return. The fund charges .60 percent in expenses.

1. What advantages do the mutual funds offer compared to the company stock?

2. Assume that you invest 5 percent of your salary and receive the full 5 percent match from S&S Air. What EAR do you earn from the match? What conclusions do you draw about matching plans?

3. Assume you decide you should invest at least part of your money in large-capitalization stocks of companies based in the United States. What are the advantages and disadvantages of choosing the Bledsoe Large-Company Stock Fund compared to the Bledsoe S&P 500 Index Fund?

4. The returns on the Bledsoe Small-Cap Fund are the most volatile of all the mutual funds offered in the 401(k) plan. Why would you ever want to invest in this fund? When you examine the expenses of mutual funds, you will notice that this fund also has the highest expenses. Does this affect your decision to invest in this fund?

5. A measure of risk-adjusted performance that is often used is the Sharpe ratio. The Sharpe ratio is calculated as the risk premium of an asset divided by its standard deviation. The standard deviation and return of the funds over the past 10 years are listed in the following table. Calculate the Sharpe ratio for each of these funds. Assume that the expected return and standard deviation of the company stock will be 18 percent and 70 percent, respectively. Calculate the Sharpe ratio for the company stock. How appropriate is the Sharpe ratio for these assets? When would you use the Sharpe ratio?
                                   10-year Annual Return   Standard Deviation
Bledsoe S&P 500 Index Fund                 11.48%                15.82%
Bledsoe Small-Cap Fund                     16.68%                19.64%
Bledsoe Large-Company Stock Fund           11.85%                15.41%
Bledsoe Bond Fund                           9.67%                10.83%

6. What portfolio allocation would you choose? Why? Explain your thinking carefully.

Solution Preview

1. The main advantage of mutual funds over company stock is diversification. As a company employee, the 401(k) holder is already risking his or her entire compensation stream (frequently one's largest source of income) on the fortunes of the company. By investing in the company's stock, the employee is risking two losses, his or her wages and retirement income, by investing in a single company. This company stock presents a further hazard; because the company is privately held, its stock is not readily marketable. Therefore, if the company runs into trouble, the employee could not easily liquidate the stock.

2. You earned a 100% effective annual return. Matching plans are always good deals; even the slightest match increases the employee's return on his or her investment in the plan. It also buffers the employee from losses. For example, if you invest $100 in a mutual fund at $10 per share (i.e., the employee buys 10 shares), which the company matches 100 percent (another 10 shares at $10 each), and the fund price increases to $15 per share, then you have earned (($15*20)-$100)/$100, or a 200 percent return. Had you bought the fund without a match, your return would be (($15*10)-$100)/$100, or 50 percent. The match protects you if the shares drop, as well. For example, if you invest $100 in a mutual fund at $10 per share (i.e., the employee buys 10 shares), which the company matches 100 percent (another 10 shares at $10 each), and the fund price decreases to $5 per share, then you have earned (($5*20)-$100)/$100, or a 0 percent return (i.e., you broke even). Had you bought the fund without a match, your loss would be (($5*10)-$100)/$100, or 50 ...
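The match arithmetic in point 2 of the solution preview can be checked with a few lines of plain Python; the prices, share counts, and $100 contribution are the ones used in the example, and the helper total_return is introduced here just for illustration:

```python
def total_return(price_now, shares, invested):
    """Simple holding-period return on an initial cash investment."""
    return (price_now * shares - invested) / invested

invested = 100.0       # employee contribution: 10 shares at $10
matched_shares = 20    # 10 bought + 10 from the 100% company match
unmatched_shares = 10  # what the $100 buys with no match

# Fund rises to $15: 200% with the match vs. 50% without it.
print(total_return(15, matched_shares, invested))    # 2.0
print(total_return(15, unmatched_shares, invested))  # 0.5

# Fund falls to $5: the match turns a 50% loss into break-even.
print(total_return(5, matched_shares, invested))     # 0.0
print(total_return(5, unmatched_shares, invested))   # -0.5
```

The figures match the solution text: 200 percent and break-even with the match, versus 50 percent gain or 50 percent loss without it.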
Solution Summary Using the widely-used Bledsoe Mutual Funds case, this solution explains the benefits of investing in mutual funds over company stock, the value of a company match, the advantages of an actively-managed fund over an index fund, the relationship of management expenses to possible return, and the computation of the Sharpe ratio. This solution is 1078 words.
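Question 5 can also be worked numerically. The sketch below is plain Python; the returns and standard deviations come from the table above, while the 4% risk-free rate is an assumed placeholder (the excerpt does not state one), so substitute the rate given in the full case:

```python
RISK_FREE = 0.04  # assumed placeholder; use the rate given in the case

# (10-year annual return, standard deviation) from the table,
# plus the company stock's expected figures from Question 5.
funds = {
    "Bledsoe S&P 500 Index Fund":       (0.1148, 0.1582),
    "Bledsoe Small-Cap Fund":           (0.1668, 0.1964),
    "Bledsoe Large-Company Stock Fund": (0.1185, 0.1541),
    "Bledsoe Bond Fund":                (0.0967, 0.1083),
    "Company stock (expected)":         (0.18,   0.70),
}

def sharpe(ret, std, rf=RISK_FREE):
    """Sharpe ratio: risk premium divided by standard deviation."""
    return (ret - rf) / std

for name, (ret, std) in funds.items():
    print(f"{name}: {sharpe(ret, std):.3f}")
```

Whatever risk-free rate is used, the company stock's Sharpe ratio is dragged down by its 70% standard deviation, which supports the diversification argument in point 1.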
https://brainmass.com/business/finance/bledsoe-multual-funds-401k-money-market-funds-257957
void allocate(int*& p) // & - by reference

The pointer int* p is passed by reference to the allocate function. The line p = new int[10]; changes the original value of the pointer passed from the main function, so delete[] p works as expected. The & means that the function works with the same variable which is passed to it, and not with a copy.

If you have a type T (any type) then:

T x;

allocates an object of type T named x.

T & y = x;

allocates an object of type reference-to-T named y, and it is set to reference x.

y = foo;

will now call the assignment operator of type T on whatever object y references, i.e. x. So the above is exactly the same as if you had written:

x = foo;

In your case:

int * p;
allocate(p);

p is passed by reference, so whenever the p in allocate is modified it actually modifies the variable p defined in the function that calls allocate. So after that call the variable p will point to an array, and so delete [] p will work as expected.

Alf

What if void allocate(int*&); is in a DLL? Then the DLL has its own heap. How to free the memory? What about global and static vars?

If the DLL uses its own heap it is probably best to let the DLL delete the object also; call a DLL function that will destroy the object. Instead of simply:

delete [] p;

do:

destroy(p);

and let destroy be a function in the DLL that does the delete []:

extern "C" void __stdcall destroy(int * p)
{
    delete [] p;
}

would be a typical PC Windows DLL function to destroy the array.

Alf

> void allocate(int*&); is in DLL? then DLL has its own heap. how to free mem?

That's not quite true (a _CrtIsValidHeapPointer assertion fails, though). Global variables (outside of functions and outside of classes) are allocated on the .data segment, and possibly an .rodata segment for read-only data if the system supports that (typically const objects).
The name of this segment doesn't have to be .data but on Unix that is the name that is used and it is also the name used in Windows. Other systems may use other names. For static variables anywhere (in classes, in functions or in global places) they are also allocated space in .data or .rodata. Function code is placed in .text segment which contain all the machine instructions for the function bodies etc. In addition there's a segment .bss which is used for large data objects that are initialized with all 0. The executable can be smaller if such objects are placed in .bss since .bss is all zero the segment isn't actually stored in the executable file, only the size of the segment is stored in it and the system loader allocate space for it when it loads the executable into your process virtual memory space. In addition there are portions of memory used by OS and other stuff for other purpose. Whatever virtual memory is not used by OS, nor by .data, .rodata, .text and .bss is available as heap and stack. Typically a large portion of this is allocated as combined heap stack and the stack start at the top and grows down while the heap starts at the bottom and grows up. There are no fixed size of these two so if a program uses lots of stack but little heap or if the program uses lots of heap but little stack it will work OK in either way, only when the heap limit and the stack limit collide will the program run out of heap memory. Well, one of the heaps. The program can also use other pockets of free memory space to allocate a heap or for other uses if it wants to. For example a program can load DLLs or it can map a file into memory so that reading that memory is a fast and easy way to read the file. Many systems such as windows and unix (including Linux) support this 'mapping file to memory' feature. All of these can allocate blocks of virtual memory and reserve it for its use. 
A heap allocator can also do the same and allocate a block of memory so that next time you do a new it might allocate a block from that memory instead. So on most systems a DLL will not have its own heap space but use the same heap as the main executable program, and so which one deletes the block may appear the same. However, there may be reasons why you would want the DLL to delete the block anyway. If for example the main .exe has overloaded new and delete, the DLL will not make use of those overloaded versions but use the standard new and delete from the C++ run time library. If so it is a bad idea to have your overloaded delete delete an object that was allocated by a standard new and not your overloaded new. Similarly it is a bad idea to let the DLL delete an object that was allocated by your overloaded new. In some cases that might work but that depends on how the overload was done and how it works. The general rule is that whoever allocates an object should be the one that deletes it.

When your program runs it also has a stack; in fact it may have more than one stack if you run several threads. If you declare local variables in a function they will be allocated on the stack along with arguments to the function and temporary variables. Typically a stack frame will also have some system pointers pointing back to the previous stack frame, return address etc., so a stack frame will typically look like this:

    arg[n]
    arg[n-1]
    ....
    arg[2]
    arg[1]
    return addr
f---> previous stack frame
    local variables
    temporary variables
sp --> unused space

The f--> points to an approximate location where the stack frame pointer typically points. The return address is then at offset +4, arg1 at offset +8, arg2 at offset 8 + sizeof(arg1) etc. (these offsets may differ slightly from one machine to another, but typically arg1 has a higher offset than return addr and arg2 comes at a higher address than arg1, unless the order of the arguments is reversed).
This is one example of a C and C++ style stack frame. Often a platform provides more than one "calling convention", such as __cdecl, __stdcall, __fastcall etc., and these might all differ from the above in one or more respects. The above stack frame layout supports the concept of a variable number of arguments and is therefore the form used for __cdecl functions on Windows.

When calling a subroutine or function you simply use the unused space to store the parameter values to the function and then issue a CALL or similar machine instruction so that the function is called. The CALL instruction will typically push the return address on the stack just after the parameters, and then the first thing the function does after the call is push the stack frame pointer onto the stack after the return address, copy the stack free pointer into the stack frame pointer, and set the stack free pointer to point past the local variables and temporary variables that the function itself needs space for. Well, this is one way to do it; again, __cdecl, __stdcall etc. will do things slightly differently. For example, in __cdecl I believe it is the caller that sets up the stack frame instead of the called function; similarly, when the function returns it is the caller that fixes the stack back up again.

To return from a frame as above, the stack frame pointer is copied into the stack free register (the register that points to the unused part of the stack), the stack frame register retrieves its stored value from the stack, and then the function returns using the return addr stored on the stack. Again, the differences in calling convention will do this slightly differently. Note that this will automatically free the stack frame that the function used, and any local variables allocated in it will be automatically deallocated by the above procedure.
Due to these differences it is very important that you never attempt to call a function that was defined as __stdcall by declaring it as __cdecl or vice versa. To ensure that this is very difficult to do, most compilers that support more than one calling convention will give the functions slightly different names for the linker, so that it is impossible to call the function using the wrong calling convention: the linker will simply not see the names as matching and won't link the call to the function.

So to allocate an array on the stack you do:

void do_something()
{
   int arr[10];
}

Such an array does not need to be deallocated; it is automatically reclaimed when the function returns (as per the explanation above).

Alf

DLL = Dynamically Linked Library; it's the same as .so files under Unixes.

The issue of whether or not a DLL shares a heap with the main executable depends partly on what you mean by "heap". If a heap is a set of functions that manipulate a block of memory, give out slices of it as the program needs them, and reclaim them when the program no longer needs them, then there are good reasons why a DLL and .exe should not share a heap:

- The .exe's heap may be overloaded to some user-defined heap. This heap is completely unknown to the DLL, and the DLL can also be used by other .exe files which do not use this overloaded heap, so it is best to let the DLL use its own heap.
- The DLL may use an overloaded heap, but the .exe has no access to that and doesn't want it either. Again, the DLL and the .EXE need their own heaps.
- Both the DLL and the .EXE may use overloaded heaps, but different overloads. Again, the same story.

The heap is managed by linking in some library code that defines functions for this. The standard C++ library exports operator new and operator delete that can be used by users, but users can also define their own, and if so their program will be linked without the standard C++ library version.
Typically the standard C++ library version calls malloc() to actually allocate the space. malloc() is a low-level C library set of functions to manipulate a heap. Typically malloc() operates on a set of data structures used for its own purposes and has a set of functions to export its functionality. The DLL and the .EXE may both, independently of each other, load separate versions of this malloc library. Hope this explains why, even though they may share the same memory area, they can have separate heaps.

Each malloc library will then use the system allocator to allocate a portion of virtual memory for its own use. The system allocator is a platform-dependent set of functions to manipulate the virtual memory and allocate chunks of it for use by a heap. Win32 has one function and Unix has another to facilitate this. However, a portable malloc version uses the functionality for mapping a file to memory to allocate space for a heap. This function can also be used without a file to just allocate a portion of virtual memory for whatever use you might want, and malloc() uses it to create its data structures for the heap.

Writing a good memory allocator is hard. Not because it is very difficult to do, but because there are many conflicting concerns: it must be safe, it must be fast, it must be efficient in memory usage, it must not fragment memory, etc. A simple memory allocator is one that has a heap from base to end and an unused pointer in between the two, pointing to the unused part. A simple malloc may then simply do:

void * malloc(size_t sz)
{
   unsigned char * p = unused;
   unused = p + sz;
   return (void *)p;
}

A free function cannot give the space back to unused, though, because you may have allocated space after it.
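For reference, here is the same bump allocator made self-contained and runnable. The fixed static heap, the function name, and the end-of-range check are additions for this sketch; without the check, the allocator would happily hand out memory past the end of its heap.

```c
#include <assert.h>
#include <stddef.h>

/* Self-contained sketch of the simple allocator above.  The heap is a
 * fixed static array here instead of memory from the system allocator,
 * and a bounds check makes it fail cleanly when the heap runs out. */

static unsigned char heap_area[4096];
static unsigned char *unused_ptr = heap_area;

static void *bump_malloc(size_t sz)
{
    /* space left between the unused pointer and the end of the heap */
    size_t left = (size_t)(heap_area + sizeof heap_area - unused_ptr);
    if (sz > left)
        return NULL;              /* out of heap space */
    unsigned char *p = unused_ptr;
    unused_ptr = p + sz;
    return (void *)p;
}
```

Successive allocations come back as adjacent slices of the array, which is exactly why a free() for this scheme cannot simply move the unused pointer back.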
If you allocate A and then B and then free A, you can't set unused to point to A, since that would make block B unused while it is still in use. One way to solve that is to have a freelist:

struct entry
{
   entry * next;
};

and insert the freed block into the free list:

void free(void * p)
{
   struct entry * e = (entry *)p;
   e -> next = freelist;
   freelist = e;
}

However, this raises the issue that when you allocate you should probably first check the freelist, so you want to amend the malloc function. One problem is that you need to ensure that a freelist entry is big enough for the current request, and thus every block must have a size stored somewhere. So you amend the malloc function like this:

// replace entry with this:
struct block
{
   size_t sz;
   block * next;
};

void * malloc(size_t sz)
{
   struct block * prev = 0;
   struct block * p = freelist;
   while (p != 0 && p -> sz < sz)
   {
      prev = p;
      p = p -> next;
   }
   if (p != 0)
   {
      // p -> sz >= sz
      // unlink p from freelist.
      if (prev == 0)
         freelist = p -> next;
      else
         prev -> next = p -> next;
      // return allocated block.
      return (void *) & p -> next;
   }
   // no entry in freelist.
   p = (block *)unused; // assuming that unused is declared as char *
   p -> sz = sz;        // record the size so free() and later searches can see it
   unused += sizeof(block *) + sz;
   return (void *) & p -> next;
}

Now this may work for a while, but it has numerous problems. For one thing, you do not check whether unused reaches the end of the heap, or decide what to do if it does. Another problem is that if you want to allocate a block of 16 bytes and you find a block of 2000 bytes available, you return that 2000-byte block for the 16 bytes of data, thus wasting 1984 bytes. There are several ways to solve that problem. Another problem is that when you free a block you just insert it into the freelist; if the block immediately before it in memory and/or immediately after it is also free, you should perhaps combine them into one bigger block. There are several ways to solve that problem too.
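Putting the pieces above together, a runnable sketch of the first-fit freelist allocator might look like this. The fl_ names (so they don't collide with the real malloc/free), the fixed static heap, the end-of-heap check, and the line that records the block size when carving a new block are additions for this illustration:

```c
#include <assert.h>
#include <stddef.h>

/* First-fit freelist allocator, in the spirit of the sketch above. */

struct fl_block {
    size_t sz;              /* usable size of the block    */
    struct fl_block *next;  /* next block in the free list */
};

static unsigned char fl_heap[4096];
static unsigned char *fl_unused = fl_heap;
static struct fl_block *fl_freelist = NULL;

static void *fl_malloc(size_t sz)
{
    struct fl_block *prev = NULL, *p = fl_freelist;
    while (p != NULL && p->sz < sz) {   /* first fit */
        prev = p;
        p = p->next;
    }
    if (p != NULL) {                    /* reuse a freed block */
        if (prev == NULL)
            fl_freelist = p->next;
        else
            prev->next = p->next;
        return (void *)&p->next;
    }
    /* nothing suitable: carve a new block out of the unused area */
    if (sizeof(size_t) + sz > (size_t)(fl_heap + sizeof fl_heap - fl_unused))
        return NULL;                    /* out of heap space */
    p = (struct fl_block *)fl_unused;
    p->sz = sz;                         /* remember the size for later */
    fl_unused += sizeof(size_t) + sz;
    return (void *)&p->next;
}

static void fl_free(void *ptr)
{
    /* step back from the user pointer to the size header */
    struct fl_block *p = (struct fl_block *)
        ((unsigned char *)ptr - offsetof(struct fl_block, next));
    p->next = fl_freelist;
    fl_freelist = p;
}
```

Freeing a block and then allocating something no larger gets the same address back, while an oversized request fails cleanly; the wasted-space and coalescing problems the text mentions are left unsolved here on purpose.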
About the end: you simply should test whether the new value for unused is > end, and if it is you shouldn't just return the block; you should return 0 (out of memory), or perhaps allocate a bigger heap and try to allocate the object from the new bigger heap.

For the problem of wasting space there are several ways. One way is to split a block if it is too large. Specifically, if it is so large that it can successfully be split into two blocks, you can split it, return the unused portion to the freelist, and return the part that fits the request exactly. Another way is to keep an array of freelists, one freelist for each size (well, almost).

First off, on a typical machine you want the returned block to be such that a double value will be properly aligned at the start of the block. This is the way compilers with alignment like it, since they assume that every object starts at an address with double alignment unless the object requires less. Since you don't know what the object is to be used for, you should assume the worst case (which is double) and always return a pointer value that is divisible by sizeof(double). To ensure this, each block must be a multiple of sizeof(double). To be more specific, I will use the PC architecture with sizeof(int) == 4 and sizeof(double) == 8 here. So if each address is a multiple of 8 then each block must also be a multiple of 8, so step 1 is that we do not allocate exactly the number of bytes the user requested but a value that is >= that size and a multiple of 8:

sz = (sz + 7) & ~7;

ensures that the size is always a multiple of 8. Since we have multiples of 8 we can have an array of freelists, so that freelist[0] is a freelist of blocks that have 8 bytes available, freelist[1] is a freelist of blocks that have 16 bytes available, freelist[2] 24 bytes available, etc.
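A quick sanity check of that rounding expression; the helper name is made up for this illustration:

```c
#include <assert.h>
#include <stddef.h>

/* Round a requested size up to the next multiple of 8, exactly as the
 * expression in the text does. */
static size_t round_up8(size_t sz)
{
    return (sz + 7) & ~(size_t)7;
}
```

So a 1-byte request becomes an 8-byte block, a 9-byte request a 16-byte block, and sizes that are already multiples of 8 are unchanged.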
Of course we also need one entry of this array to be a freelist of "all others", for the blocks that are much larger. This makes it easy to pick an entry of a suitable size: if we need 8 bytes we simply check whether freelist[0] has an entry, and if it does we use it. If it doesn't, we check whether freelist[1] has an entry; if it does, we divide it in two, put one half into freelist[0] and give the other to the caller. If freelist[1] is also empty we check freelist[2]; if found, we divide it in two, put one entry (16 bytes) into freelist[1] and give the other (8 bytes) to the user, etc. Similarly, if we want 16 bytes we first check freelist[1], and if none is there we check the next list, freelist[2]; if found, we split it, put one entry (8 bytes) into freelist[0] and give the other to the user, etc. If all freelists are empty we allocate from unused, and then we allocate exactly enough to make one 8-byte entry, also making sure that the allocated object is divisible by 8, and return that block to the user.

When freeing an object we can see whether the object just before and just after the block is also free; if so, we remove them from their freelists, combine them into one bigger block, and place it in a freelist based on the new size of the combined block. To combine objects based on address you could keep the freelist sorted by the address of the block; however, that would require you to walk through the list. You could also lay the blocks out like this:

// use this when the block is in use:
int sz;
char userdata[sz];
int sz;        // the size is stored here also, at the end of the block.

// use this when the block is free:
int sz;
int freex;     // index in the freelist array for this block.
block * next;  // next in freelist.
char not_used[sz - 8];
int sz;        // the size is here also, at the end of the block.
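The size-to-freelist mapping described above can be sketched as a small helper. The number of lists and the function name are assumptions for illustration, with the last slot acting as the "all others" list:

```c
#include <assert.h>
#include <stddef.h>

enum { NUM_LISTS = 8 };  /* 7 exact size classes plus one "all others" slot */

/* Map an already-rounded size (a multiple of 8) to its freelist slot:
 * 8 -> freelist[0], 16 -> freelist[1], 24 -> freelist[2], and anything
 * past the last small class goes to the final slot. */
static size_t freelist_index(size_t sz)
{
    size_t idx = sz / 8 - 1;
    return idx < NUM_LISTS - 1 ? idx : NUM_LISTS - 1;
}
```

An allocation would first try its own slot, then walk upward through the larger classes splitting as described in the text.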
Since the size is stored both at the beginning of the block and at the end, it is easy for the allocator, given a pointer to a block, to check the size stored just before it and then subtract that many bytes from the pointer to get to the beginning of the previous block. It must also be able to detect whether a block is free or not. Since sz is always a multiple of 8, this is easiest handled by using one of the 3 available bits at the bottom of sz: for example, sz | 1 is the stored size when the block is free, and sz | 0 == sz is the stored size when the block is in use.

There, we're well on our way to implementing an allocator. Essentially, there are many choices here, and one allocator is likely to make different choices from another. It should be clear that these choices are to a large extent incompatible, and if you allocate an object using one allocator it is devastating to deallocate it using a different allocator. So, as a general rule, always make sure that whatever module allocates an object is also the module that deallocates it.

There are exceptions to this rule. If you want to return a VB string to VB, you might have to allocate the string. It is then important that you use VB's allocator to allocate the string, so that when VB deallocates the string it will use the corresponding deallocator. A similar situation exists with functions like strdup() and asprintf() of glibc, which allocate strings. It is then important that they document whatever method they use to allocate those strings and that you use the same method to deallocate them. For example, strdup() uses malloc(), so free() is correct, and not delete []. It is possible that on some implementations delete [] would work like free(), and delete and delete [] might be compatible on those implementations, but in general they won't be.
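The low-bit trick described above can be written out as a few helpers (the names are made up for illustration): since sizes are multiples of 8, bit 0 can carry the free flag, and masking off the low bits recovers the true size.

```c
#include <assert.h>
#include <stddef.h>

static size_t tag_free(size_t sz)  { return sz | 1; }   /* mark block free   */
static size_t tag_used(size_t sz)  { return sz; }       /* mark block in use */
static int    is_free(size_t tag)  { return (int)(tag & 1); }
static size_t tag_size(size_t tag) { return tag & ~(size_t)7; }
```

Reading a neighbor's tag then tells the allocator both how far away the neighboring block header is and whether that block can be coalesced.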
I often reimplement strdup for that reason, so that I have a strdup that uses new:

namespace my
{
   char * strdup(const char * str)
   {
      if (str == 0)
         return 0;
      size_t len = strlen(str) + 1;
      char * s = new char[len];
      memcpy(s, str, len);
      return s;
   }
}

Using my::strdup() instead of the standard strdup allows me to delete [] the strings instead of free()'ing them.

Alf

It looks correct, we all know that, but what can you tell about this:

int* do_something()
{
   int arr[] = {0,1,2,3,4,5,6};
   return arr;
}

void main()
{
   int i = do_something()[4]; // i = 4!!!
}

How does it work then?

This appears to be a non-standard extension. Since arr is not a static array it is allocated on the stack, so initializing it with an aggregate appears to me to be non-standard. It is possible that the implementation copies the array to the stack before the user code starts. That isn't standard C++ though, since standard C++ would expect a static array if you initialize it with an aggregate. If you initialize it with a constructor it would be standard C++ code and would be compiled into a call to the constructor when executing the definition of arr.

This doesn't really work. The stack area is now freed; however, freeing the stack area doesn't erase it. It still has the values 0,1,2,3,4,5,6 or so, until you start calling other functions that modify it. In your simple example you do not call any such functions, so it is quite possible that the returned array still exists, at least portions of it. It is possible that you would get some "random" result if you tried to index element 0 of that array, but it is also possible that it would work. However, if you call a function and then try to access the array, I am pretty sure that you would get a wrong result if the array was copied to the stack. So, the answer to your "how does it work then?"
is that "it doesn't work; it only appears to work in your specific example, on your specific compiler."

Change the code to:

void call_some_other_func()
{
   int arr[50];
   for (int i = 0; i < 50; ++i)
      arr[i] = 0x53abcf73;
}

void main()
{
   int * a = do_something();
   call_some_other_func();
   int i = a[4]; // i == 4???? I think not.
}

My guess is that i would equal 0x53abcf73 and not 4, proof that it really didn't work after all.

Alf

On my machine (NT 5.2) a = 0x53abcf73 (ok, I got it: not necessarily 0x53abcf73, and not 4 either :)) so...

void allocate(int*& p)
{
   p = new int[10];
   p[3] = 3;
}

void main()
{
   int* p;
   allocate(p);
   int t = p[3];
   ASSERT(3 == t); //...is ok.
}

Alf
https://www.experts-exchange.com/questions/20570109/another-explanation-needed.html
Adobe has notified me that it has addressed the Flash Player unloading vulnerability I reported to them earlier. The patch is now released to end users, and the security bulletin describing the nature of the vulnerability has been published. In this post, I describe the technical details of the vulnerability in the hope that software developers can learn from it.

In the Windows version of Safari, when a Flash file is opened, the browser initiates loading the Flash Player into the process address space. When the Flash file is no longer open, the browser initiates unloading the Flash Player. The Flash Player can display a dialog box using the MessageBox API. While the dialog box is active, another thread can unload the Flash Player module from the address space. When execution returns from the MessageBox call to the Flash Player module, which is already unloaded, the instruction dereferences freed memory, causing a data execution prevention exception. This is a use-after-free vulnerability, the same in nature as the one I reported to Adobe earlier; this time, however, the vulnerability is triggered via a different code path.

The crash state looks like below, and it suggests the issue is most likely exploitable for code execution.

(3860.37ec): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=523bb781 ebx=00000001 ecx=006f0d68 edx=00000000 esi=523bb781 edi=00000000
eip=523bb781 esp=001deb90 ebp=001debb8 iopl=0  nv up ei pl nz na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b  efl=00210206
<Unloaded_NPSWF32_11_7_700_224.dll>+0x20b781:
523bb781 ?? ???
0:000> |.
.  0  id: 3860  attach  name: C:\Program Files (x86)\Safari\Apple Application Support\WebKit2WebProcess.exe

One possible fix would be to synchronize the DLL unloading thread with the code calling MessageBox.
The vulnerability I reported to Adobe earlier is the same in nature as this one. Also, if a developer simply adds a new MessageBox call, there is a high probability of inherently creating a new vulnerability that can be reached in the way described in this post. Therefore, for Adobe, it would be beneficial to review the entire code base for unsafe dialog box calls. Also, for Microsoft, hardening at the OS level is worth considering, which may involve crashing in a non-exploitable manner when a bug is triggered.

The problem looks to be more ubiquitous than the public vulnerability reports suggest. The approach to discovering these issues is to identify dialog box and DLL unloading code fragments that can be manually attacked. Since there may be many bugs that could end up dereferencing the unloaded Flash Player, Mozilla long ago fixed the problem by permanently keeping the plugin in memory in the Firefox process.

I'm currently working on tools to identify attack surfaces. This involves identifying DLLs calling MessageBox, and identifying ways to unload those DLLs from the process address space.

I created a sample implementation to represent the vulnerability, including how the freed memory gets dereferenced. It is a good exercise for developers to think about how to make the code secure without removing the thread creation. The complete source code for Visual C++ 2010 can be downloaded from here. When executing the program it looks like below.

// This is the EXE file.
// Main thread loads the DLL file to call the ShowMessageBox() export that calls MessageBox().
// Secondary thread frees the DLL but the dialog box is still visible.
// When OK is clicked the freed memory is dereferenced.
#include <Windows.h>
#include <stdio.h>   // for printf; not in the original listing
#include <tchar.h>   // for _tmain/_TCHAR; not in the original listing

static HMODULE handle;

DWORD WINAPI Thread(LPVOID lpParam)
{
    printf("[Thread] Waiting 5 seconds.\n");
    Sleep(5000);
    FreeLibrary(handle);
    printf("[Thread] FreeLibrary() called. Click OK to dereference freed memory.\n");
    return 0;
}

int _tmain(int argc, _TCHAR* argv[])
{
    typedef void (WINAPI *FUNC)(void);
    FUNC ShowMessageBox;

    handle = LoadLibrary(L"DialogDLL.dll");
    printf("LoadLibrary() called.\n");
    ShowMessageBox = (FUNC)GetProcAddress(handle, "ShowMessageBox");
    CreateThread(NULL, 0, Thread, NULL, NULL, NULL);
    printf("Thread created for FreeLibrary().\n");
    printf("MessageBox pops up.\n");
    ShowMessageBox();
    return 0;
}

// This is the DLL file.
// It has an export that calls MessageBox()
extern "C" __declspec( dllexport ) void ShowMessageBox();

void ShowMessageBox()
{
    MessageBox(NULL,
               L"...shown by the DLL that can be unloaded while this dialog box is still visible.\n\nThe unloaded DLL can be dereferenced when OK is clicked.",
               L"This is a dialog box...",
               MB_OK);
}

When OK is clicked, the instruction that dereferences freed memory is not an indirect call, as can be seen in v-table bugs, but a ret.
https://reversingonwindows.blogspot.de/2013/12/flash-player-unloading-vulnerability-ii.html
$ cnpm install cheap-watch

Cheap Watch is a small, simple, dependency-free, cross-platform file system watcher for Node.js 8+.

new CheapWatch({ dir, filter, watch = true, debounce = 10 })

dir - The directory whose contents to watch. It's recommended, though not required, for this to be an absolute path, say one returned by path.resolve.

filter({ path, stats }) - (optional) A function to decide whether a given file or directory should be watched. It's passed an object containing the file or directory's relative path and its stats. It should return true or false (or a Promise resolving to one of those). Returning false for a directory means that none of its contents will be watched.

watch - (optional) Whether to actually watch the directory for changes. Defaults to true. If false, you can retrieve all of the files and directories within a given directory along with their initial Stats, but changes will not be monitored.

debounce - (optional) Length of timeout in milliseconds to use to debounce incoming events from fs.watch. Defaults to 10. Multiple events are often emitted for a single change, and events can also be emitted before fs.stat reports the changes. So we will wait until debounce milliseconds have passed since the last fs.watch event for a file or directory before handling it. The default of 10ms Works On My Machine.

init() - Initialize the watcher, traverse the directory to find the initial files and directories, and set up watchers to look for changes. This returns a Promise that resolves once the initial contents of the directory have been traversed and all of the watchers have been set up.

close() - Close all FSWatcher instances, and stop watching for file changes.

paths - A Map of the watched files and directories. Each key is a relative path from the CheapWatch's dir, and each value is a Stats object for the file or directory. Paths are always separated by forward slashes, regardless of platform. This Map is kept up to date as files are changed on disk.
You can use stats.isFile() and stats.isDirectory() to determine whether something is a file or a directory.

A CheapWatch is an EventEmitter, and emits two events to report a new, updated, or deleted file or directory.

+ { path, stats, isNew } - A + event is emitted whenever a watched file or directory is created or updated. It's emitted with an object containing a path string, a stats object, and an isNew boolean which will be true for newly created files and directories and false for updated ones.

- { path, stats } - A - event is emitted whenever a watched file or directory is deleted. It's emitted with an object containing a path string and a stats object. stats will be the most recent Stats collected for the file or directory before it was deleted.

import CheapWatch from 'cheap-watch';

const watch = new CheapWatch({ dir, /* ... */ });

await watch.init();

for (const [path, stats] of watch.paths) {
    /* ... */
}

watch.on('+', ({ path, stats, isNew }) => {
    /* ... */
});

watch.on('-', ({ path, stats }) => {
    /* ... */
});
https://developer.aliyun.com/mirror/npm/package/cheap-watch
Hi Lee,

I began to work on a C++ project where I need XML parsing, so I looked around to find the best open-source library. Your TinyXML library is one of the best I found: simple, concise, small learning curve, good documentation, clean work. But there is one caveat: I need the ability to manage XML namespaces for my parser, and AFAIK, this is not yet possible with TinyXML. Do you plan to implement this functionality soon?

(Please excuse my poor English)

Etienne

Lee Thomason - 2010-03-15

Thanks for the kind words! Regrettably there is no plan for namespace support.
https://sourceforge.net/p/tinyxml/feature-requests/63/
Interacting with the Windows 2000 Event Viewer Using C#

Consider the need for a software program developed in the C# language to interact with Microsoft Windows 2000's built-in utility, Event Viewer. Imagine developing a critical client-server application: you may need to provide the capability to access Windows 2000's built-in utilities, like Event Viewer, from the software you are creating. Interestingly, the .NET Framework provides a wide range of namespaces and class libraries for developing programs with such advanced capabilities. Even though many languages provide facilities for performing these types of advanced operations, the .NET Framework provides a much simpler approach for building robust applications.

Consider the need from two angles: from a user's perspective and from an administrator's.

Users: It's possible to keep track of all kinds of errors and events occurring on the system from the C# application itself. They will be available in the form of human-readable logs. Users can read these log entries with the click of a button, without knowing that the Event Viewer utility has been accessed. A user can read the exact time a problem or an event occurred.

Admin: An administrator can write messages to these logs through a GUI front-end application developed using C# or any .NET language. It's also possible for an administrator to read all of the messages on a client's system. Remedial measures can be taken based on these log entries. An admin can also pass messages to clients, and the same can be displayed in message boxes on the relevant client machine. The entries can also be displayed in text boxes. Moreover, these features can be changed as per the requirements. In this way, users and administrators can directly interact with Windows 2000 operating system utilities through user-friendly applications.
As a primary requirement, you should have a system with Windows 2000 and the .NET Framework SDK installed on it. A text editor like Notepad is sufficient, but you can also use other editors like the Antechinus C# Editor. These types of editors can be downloaded from the Internet easily and support features like color coding, syntax highlighting, etc.

Windows 2000 ships with a useful tool called Event Viewer, better known as a Microsoft Management Console (MMC) snap-in. It can be located under Start | Programs | Administrative Tools | Event Viewer. With the help of this viewer, you can monitor information about software and hardware installed on your computer and also identify system problems and errors.

Figure 1.1. Event Viewer.

For example, if your C# program generated an exception (meaning a runtime error), it will be recorded, or logged, as an entry under the Application Log of the viewer. The corresponding event type will be Information. If an error occurs, the event type will be Error. In this article, you will learn how to read from and write to the Event Viewer logs programmatically using the C# language.

Writing to the Event Viewer Log

You can write to the viewer using the EventLog class under the System.Diagnostics namespace of the .NET Framework. This class provides a number of properties and methods, like Source, Log, Close, and WriteEntry, with which you can manipulate the Event Viewer.
Listing 1.1, given below, illustrates how to create and write an event log entry upon clicking a button:

Listing 1.1:

//Source File : Eventwrite.cs
//Compilation : csc Eventwrite.cs
//Execution   : Eventwrite

using System;
using System.Diagnostics;
using System.Windows.Forms;
using System.Drawing;

public class Eventwrite : Form
{
    Button b1 = new Button();

    public Eventwrite()
    {
        this.Text = "An Article for Developer.com by Anand";
        b1.Text = "Click here";
        b1.Click += new EventHandler(b1_click);
        b1.Location = new Point(100, 50);
        this.Controls.Add(b1);
    }

    public void b1_click(object sender, EventArgs e)
    {
        // An object of the EventLog class is created
        EventLog elog = new EventLog();
        elog.Log = "Application";
        elog.Source = "From Developer.com article";
        elog.WriteEntry("Hello, I'm from C#");
        elog.Close();
        MessageBox.Show("One message successfully written to EventViewer",
                        "Anand.N");
    }

    public static void Main()
    {
        Application.Run(new Eventwrite());
    }
}

Upon execution of the above program, the output looks as in Figure 1.2.

Figure 1.2 - Output of Eventwrite.cs.

After clicking the button, launch Event Viewer and you will be able to view a log entry "From Developer.com article" below the Source heading. Right-click the entry and select Properties from the pop-up menu. Now you will be able to view the message "Hello, I'm from C#" in the Description box of the Information Properties dialog (Figure 1.3).

Figure 1.3 - Event Viewer Properties.

Reading from the Event Viewer Log

You can read information from the viewer using the same class mentioned above; however, you have to iterate a variable in a for loop. The variable specifies how many entries you want to read from the viewer. Listing 1.2 examines reading five Event Viewer logs. All entries are displayed in message boxes, one after another.
Listing 1.2:

//Source File : Eventread.cs
//Compilation : csc Eventread.cs
//Execution   : Eventread

using System;
using System.Diagnostics;
using System.Windows.Forms;
using System.Drawing;

public class Eventread : Form
{
    Button b1 = new Button();

    public Eventread()
    {
        this.Text = "An Article for Developer.com by Anand";
        b1.Text = "Click here";
        b1.Click += new EventHandler(b1_click);
        b1.Location = new Point(100, 50);
        this.Controls.Add(b1);
    }

    public void b1_click(object sender, EventArgs e)
    {
        EventLog elog = new EventLog();
        elog.Log = "Application";
        elog.Source = "From Developer.com article";
        for (int i = 0; i < 5; i++)
        {
            try
            {
                MessageBox.Show("Message: " + elog.Entries[i].Message + "\n" +
                                "App: " + elog.Entries[i].Source + "\n" +
                                "Entry type: " + elog.Entries[i].EntryType);
            }
            catch {}
        }
    }

    public static void Main()
    {
        Application.Run(new Eventread());
    }
}

Entries is one of the properties of the EventLog class. This property returns an instance of EventLog.EventLogEntryCollection, which contains a number of EventLogEntry objects, each corresponding to an entry in an event log. In the above example, I have applied three of those members, with which you can read log entries. Other members include UserName, TimeGenerated, TimeWritten, etc. Just substitute these in the above listing and observe the output.

Try-It-Yourself Questions

- MMC stands for __________
- The __________ namespace is used for manipulating the Event Viewer.
- The __________ class is used for reading from the Event Viewer.
- How will you locate the Event Viewer in Windows 2000?
- Entries is one of the properties of the __________ class.

Download

You can download the source code used in this article by clicking here.

About the Author

Anand Narayanaswamy works as a freelance Web developer and freelance writer. He lives in Thiruvananthapuram, Kerala State, India.
He runs learnxpress.com and provides free technical support to users worldwide through the site, besides featuring tutorials and articles related to Java, C#, Visual Basic, and other web technologies.
https://www.developer.com/net/net/article.php/1370771/Interacting-with-the-Windows-2000-Event-Viewer-Using-C.htm
I am trying to write a list of numbers to a Test.txt file which is located in the same directory as the Java class file. The program outputs the expected results to the IDE output window, but when the file is checked after the program has terminated there is no data in the file. Why is this?

Code:

import java.io.*;
import java.util.Random;

public class IoTest
{
    final int MAX = 10;
    int intValue;
    String fileName = "Test.txt";

    public IoTest() throws IOException
    {
        Random rand = new Random();
        File fileObj = new File(fileName);
        FileWriter fw = new FileWriter(fileObj, true);
        BufferedWriter bw = new BufferedWriter(fw);
        PrintWriter pw = new PrintWriter(bw);

        for (int line = 0; line <= MAX; line++)
        {
            for (int num = 0; num <= MAX; num++)
            {
                intValue = rand.nextInt(90) + 10;
                pw.print(intValue + " ");
                pw.write(intValue + " ");
                System.out.println(intValue + " ");
            }
            pw.println();
        }

        pw.flush();
        pw.close();
        bw.close();
        fw.close();
        System.out.println("debugging, file created : " + fileName);
    }
}
http://forums.devx.com/printthread.php?t=147644&pp=15&page=1
This is an extension module to access a PostgreSQL database from Ruby. This library works with PostgreSQL 6.4/6.5 and 7.0/7.1.

Authors:
Yukihiro Matsumoto <matz@ruby-lang.org>
Eiji Matsumoto <usagi@ruby.club.or.jp>
Noboru Saitou <noborus@zetabits.com> (current maintainer)

WWW:

To install the port:
cd /usr/ports/databases/ruby-postgres/ && make install clean

To add the package:
pkg_add -r ruby18-postgres

No options to configure

Number of commits found:

- a fix for ruby-dbd_pg is now possible PR: 114048
- ressurect - new MASTER_SITE
- update to 0.7.9.2008.01.28 PR: 118290
- databases/ruby-postgres -> databases/rubygem-postgres PR: ports/114048 Submitted by: Roderick van Domburg <r dot s dot a dot vandomburg_AT_nedforce dot nl>
- take maintainership
- Fix mastersite.
- Add SHA256 SIZE data. Submitted by: trevor
- Bump PORTREVISION on all ports that depend on gettext to aid with upgrading. (Part 2)
- De-pkg-comment.
- Update to 0.7.1.
- Chase libpq version bump.
- Update to 0.7.0. Use RUBY_MOD*. Support the new layout of PostgreSQL 7.2 and drop the WITH_OLD_LAYOUT support.
- Update MASTER_SITES and WWW.
- Update to 0.6.5.
- Bump the PORTREVISION's of the ports which install architecture dependent ruby modules, due to the RUBY_ARCH change I've just committed.
- Add a WITH_OLD_LAYOUT hook for those who still use PostgreSQL with the old layout.
- Update to 0.6.4.
- Update to 0.6.3. PostgreSQL 7.1 is officially supported.
- Fix include directory for postgresql7.1's new layout.
- Add %%PORTDOCS%%.
- Update to 0.6.2.
- Convert category databases to new layout.
- Now bsd.ruby.mk is automatically included by bsd.port.mk when USE_RUBY or USE_LIBRUBY is defined, individual ruby ports no longer need to include it explicitly.
- Update with bsd.ruby.mk.
- Make all these Ruby related ports belong also in the newly-added "ruby" virtual category.
- Do The Right Thing. (R)
- Update to 0.6.1.
- Set DIST_SUBDIR=ruby for all these Ruby ports to stop distfile namespace pollution.
http://www.freshports.org/databases/ruby-postgres
Best practice when using SegmentedControl to switch visible subviews

Hi, I want to use a SegmentedControl to switch between two overlapped custom-view subviews. What is the best practice for this? I tried to set the .hidden attribute in the SegmentedControl .action callback, but this seems to work only when set before presenting the view. Any advice would be welcome. Cheers, Eric T.

If the views are completely overlapped, you could use bring_to_top or send_to_back. I am a little surprised hidden didn't work... are you sure you are targeting the right subviews? Another approach would be to add_subview/remove_subview, but you would need to have the two subview variables as globals or instance vars of a custom class.

The hidden attribute should work for this. Here's a minimal example that uses a segmented control to switch between a blue and a green view:

import ui

def segment_action(sender):
    view2 = sender.superview['view2']
    view2.hidden = (sender.selected_index == 0)

main_view = ui.View(frame=(0, 0, 400, 400), bg_color='white')
view1 = ui.View(frame=(0, 40, 400, 360), bg_color='blue', name='view1', flex='WH')
view2 = ui.View(frame=(0, 40, 400, 360), bg_color='green', name='view2', flex='WH')
main_view.add_subview(view1)
main_view.add_subview(view2)
control = ui.SegmentedControl(frame=(10, 4, 380, 32), flex='W')
control.segments = ['View 1', 'View 2']
control.selected_index = 1
control.action = segment_action
main_view.add_subview(control)
main_view.present('sheet')

Thanks for your reply, it does actually work. I don't know what I did first, I must have had a typo somewhere! Using (add/remove)_subview does the work too.
https://forum.omz-software.com/topic/2149/best-practice-when-using-segmentedcontrol-to-switch-visible-subviews
Details

- Type: Bug
- Status: Reported
- Priority: P3: Somewhat important
- Resolution: Unresolved
- Affects Version/s: 5.15.0
- Fix Version/s: None
- Component/s: Quick: Core Declarative QML
- Labels: None

Description

I have observed the following, somewhat strange behaviour: I am using a ListView to render some items, i.e. I have a model and a delegate defined for this. In addition, I am using a header and footer in the view, to render additional meta information. In the concrete app where I initially observed this behaviour, I have some interactive components in the header and footer, which can change their height while the user interacts with them (most notably, by entering multi-line text). As soon as such a change of the header height happens and the list view is otherwise "empty" (i.e. the model currently holds zero items), the view is positioned back at the beginning of the list. I managed to break this down into a small example:

import QtQuick 2.12
import QtQuick.Controls 2.5

ApplicationWindow {
    width: 640
    height: 480
    visible: true
    title: qsTr("Scroll")

    ScrollView {
        anchors.fill: parent

        ListView {
            id: listView
            width: parent.width
            model: 100
            delegate: ItemDelegate {
                text: "Item " + (index + 1)
                width: listView.width
            }
            header: Column {
                Button {
                    text: qsTr("Toggle List View Items")
                    onClicked: if (listView.model === 0) {
                        listView.model = 100;
                    } else {
                        listView.model = 0;
                    }
                }
                Item {
                    id: spacer
                    property int foo: 1
                    width: 1
                    height: (foo % 2 == 0) ? 100 : 150
                    Timer {
                        interval: 2000
                        repeat: true
                        running: true
                        onTriggered: spacer.foo += 1
                    }
                }
            }
            footer: Item {
                height: 3000
                width: parent.width
                TextField {
                    width: parent.width
                    placeholderText: qsTr("Write something here")
                }
            }
        }
    }
}

Here, I have a header which changes its height every two seconds. By default, the list view contains some items, hence everything behaves fine. However, by clicking the button, the list is emptied.
Afterwards, when scrolling down, the view will reset the position as soon as the header height changes again. I know that this might rather be a corner case, but it would be cool if this could still be fixed.
https://bugreports.qt.io/browse/QTBUG-89957?gerritReviewStatus=Open
I suffered from problems when doing text topic classification. I got the data from the NLTK "reuters" corpus. However, when I try reuters.categories() the result is

['acq', 'alum', 'barley', 'bop', 'carcass', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copper', 'copra-cake', 'corn', 'cotton', 'cotton-oil', 'cpi', 'cpu', 'crude', 'dfl', 'dlr', 'dmk', 'earn', 'fuel', 'gas', 'gnp', 'gold', 'grain', 'groundnut', 'groundnut-oil', 'heat', 'hog', 'housing', 'income', 'instal-debt', 'interest', 'ipi', 'iron-steel', 'jet', 'jobs', 'l-cattle', 'lead', 'lei', 'lin-oil', 'livestock', 'lumber', 'meal-feed', 'money-fx', 'money-supply', 'naphtha', 'nat-gas', 'nickel', 'nkr', 'nzdlr', 'oat', 'oilseed', 'orange', 'palladium', 'palm-oil', 'palmkernel', 'pet-chem', 'platinum', 'potato', 'propane', 'rand', 'rape-oil', 'rapeseed', 'reserves', 'retail', 'rice', 'rubber', 'rye', 'ship', 'silver', 'sorghum', 'soy-meal', 'soy-oil', 'soybean', 'strategic-metal', 'sugar', 'sun-meal', 'sun-oil', 'sunseed', 'tea', 'tin', 'trade', 'veg-oil', 'wheat', 'wpi', 'yen', 'zinc']

I hardly know what each one means. Can I find some explanations?

Information about the Reuters corpus in the NLTK corpus API: The Reuters-21578 "ApteMod" corpus is built for text classification. ApteMod is a collection of 10,788 documents from the Reuters financial newswire service. In the ApteMod corpus, each document belongs to one or more categories. There are 90 categories in the corpus.
The mapping of the fileids to the categories can be found in ~/nltk_data/corpora/reuters/cats.txt:

from os.path import expanduser
from collections import defaultdict
from nltk.corpus import reuters

home = expanduser("~")
id2cat = defaultdict(list)
for line in open(home + '/nltk_data/corpora/reuters/cats.txt', 'r'):
    fid, _, cats = line.partition(' ')
    id2cat[fid] = cats.split()

for fileid in reuters.fileids():
    for sent in reuters.sents(fileid):
        print id2cat[fileid], sent

[out]:

['trade'] ['ASIAN', 'EXPORTERS', 'FEAR', 'DAMAGE', 'FROM', 'U', '.', 'S', '.-', 'JAPAN', 'RIFT', 'Mounting', 'trade', 'friction', 'between', 'the', 'U', '.', 'S', '.', 'And', 'Japan', 'has', 'raised', 'fears', 'among', 'many', 'of', 'Asia', "'", 's', 'exporting', 'nations', 'that', 'the', 'row', 'could', 'inflict', 'far', '-', 'reaching', 'economic', 'damage', ',', 'businessmen', 'and', 'officials', 'said', '.']
...

You can find the information about the categories in this file: ~/nltk_data/corpora/reuters/README:

The Reuters-21578 benchmark corpus, ApteMod version

This is a publically available version of the well-known Reuters-21578 "ApteMod" corpus for text categorization. It has been used in publications like these:

- Yiming Yang and X. Liu. "A re-examination of text categorization methods". 1999. Proceedings of 22nd Annual International SIGIR.
- Thorsten Joachims. "Text categorization with support vector machines: learning with many relevant features". 1998. Proceedings of ECML-98, 10th European Conference on Machine Learning.

ApteMod is a collection of 10,788 documents from the Reuters financial newswire service, partitioned into a training set with 7769 documents and a test set with 3019 documents. The total size of the corpus is about 43 MB. It is also available for download from , which includes a more extensive history of the data revisions.
The distribution of categories in the ApteMod corpus is highly skewed, with 36.7% of the documents in the most common category, and only 0.0185% (2 documents) in each of the five least common categories. In fact, the original data source is even more skewed---in creating the corpus, any categories that did not contain at least one document in the training set and one document in the test set were removed from the corpus by its original creator.

In the ApteMod corpus, each document belongs to one or more categories. There are 90 categories in the corpus. The average number of categories per document is 1.235, and the average number of documents per category is about 148, or 1.37% of the corpus.

-Ken Williams [email protected]

(extracted from the README at the UCI address above)

Thanks alvas for summing it up so nicely; this would help other people too. Moreover, I also could not find any version of the Reuters dataset which has a relatively smaller number of categories. This article here explains it better.
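For reference, the category codes are Reuters topic tags, mostly commodities and economic indicators; for example, 'acq' is acquisitions, 'earn' is earnings, and 'money-fx' is foreign exchange. The cats.txt parsing shown earlier can also be exercised without NLTK installed; here is a self-contained sketch using a few made-up lines in the same "fileid then categories" format:

```python
from collections import defaultdict

# A few lines in the same format as cats.txt: "<fileid> <cat> [<cat> ...]".
# The fileids and category assignments here are made up for illustration.
sample = """\
training/1 trade
training/2 grain wheat
test/3 earn
"""

id2cat = defaultdict(list)
for line in sample.splitlines():
    fid, _, cats = line.partition(' ')
    id2cat[fid] = cats.split()

# Invert the mapping to count how many documents each category has,
# which makes the skew discussed above easy to measure on real data.
cat2count = defaultdict(int)
for cats in id2cat.values():
    for cat in cats:
        cat2count[cat] += 1

print(id2cat['training/2'])  # ['grain', 'wheat']
print(cat2count['grain'])    # 1
```

Run against the real cats.txt, the cat2count totals reproduce the "average of about 148 documents per category" figure quoted from the README.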
http://m.dlxedu.com/m/askdetail/3/2c15109a4e6f3ad6db1826e21e276652.html
April 16, 2012

Introduction

Note: Use of this feature is limited to backups started from the application's cron or task queue.

You can run scheduled backups for your application using the App Engine Cron service. To do this for Python or Go apps, specify backup cron jobs in cron.yaml. For Java apps, specify the backup cron job in cron.xml. Currently there is no way to specify a scheduled backup programmatically.

Setting Up a Scheduled Backup

To set a scheduled backup for your app:

- If you haven't already done so, enable Datastore Admin for your app.
- If you haven't already, create and configure the Cloud Storage bucket you wish to use for backups.
- In your application directory, if you don't already have one, create a cron.yaml file for a Python or Go app or a cron.xml file for a Java app.
- Add the backup cron entries. These specify the backup schedule, the set of entities to back up, and the storage to be used for the backups, as described in Specifying Backups in a Cron File. Here are some examples:

Python

Sample Python cron.yaml:

cron:
- description: My Daily Backup
  url: /_ah/datastore_admin/backup.create?name=BackupToCloud&kind=LogTitle&kind=EventLog&filesystem=gs&gs_bucket_name=whitsend
  schedule: every 12 hours
  target: ah-builtin-python-bundle

Java

Sample Java cron.xml (note the use of "&amp;", as "&" is interpreted by XML):

<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <description>My Daily Backup</description>
    <url>/_ah/datastore_admin/backup.create?name=BackupToCloud&amp;kind=LogTitle&amp;kind=EventLog&amp;filesystem=gs&amp;gs_bucket_name=whitsend</url>
    <schedule>every 12 hours</schedule>
    <target>ah-builtin-python-bundle</target>
  </cron>
</cronentries>

- Deploy this file with your app. (You can verify the cron job you just deployed by clicking Cron Jobs in the left nav pane.)

The backups will occur on the schedule you specified. While a backup runs, it will show up in the Pending Backups list.
After the backup is complete, you can view it and use it in the list of available backups within the Datastore Admin tab.

Specifying Backups in a Cron File

These are the fields to include in your cron file to perform scheduled backups:

description - This is the title that appears in the Cron Job list. It can be anything you wish.

url - The url is required and must be in this format:

/_ah/datastore_admin/backup.create?name=<backup-name-prefix>&kind=<kind-1>&kind=<kind-N>&queue=<task-queue>&filesystem=<filesystem-type>&gs_bucket_name=<bucket-name>&namespace=<namespace>

These fields can appear in the url query string:

- name is an optional prefix that is prepended to the backup name. It helps you identify your backups. If not supplied, the default "cron-" will be used.
- The kind field can appear one or more times. Each value specifies an entity kind that you wish to back up. You must specify at least one entity kind. In the Google Cloud Platform Console, the default is that all entity kinds are backed up. With a cron backup, there is no such default: if you don't specify a kind, it doesn't get backed up.
- queue is optional. It specifies the task queue to be used. If not supplied, the default task queue is used.
- filesystem specifies the storage to be used for backups. Specify the value "gs", which means that Google Cloud Storage will be used.
- gs_bucket_name is required. It specifies the Cloud Storage bucket name used for backup storage.
- namespace is optional. When provided, only entities from the selected namespace are included in the backup.

Note: The url cannot be longer than 2000 characters. As shown in the cron.xml Java example above, you must use the HTML entity "&amp;" to separate fields, rather than the ampersand character ("&"), since that would be interpreted by XML.

schedule - This field is required: it defines the recurring schedule at which the backup runs. For complete details, see the Schedule Format documentation (for Python or Java).

target - This is required.
It identifies the app version the cron backup job is to be run on. You must use the value ah-builtin-python-bundle because that is the version of your app that contains the Datastore Admin features that the cron job needs to execute. Keep in mind that the cron backup job is running against this version of your app, so you incur costs when the cron backup job is running. (The ah-builtin-python-bundle version of your app is enabled when you enable Datastore Admin for your app.)

Warning! Backup, restore, copy, and delete operations are executed within your application, and thus count against your quota. Very frequent backups often lead to higher costs. When you run a Datastore Admin job, you are actually running underlying MapReduce jobs.

Troubleshooting

When the scheduled backup runs, App Engine performs a GET using the backup url. If the GET succeeds, it results in HTTP status 200. When it fails, it results in HTTP status code 400. You can look at the logs to determine whether a backup succeeded or failed by doing the following:

- In the GCP Console, visit the logs for your project.
- In the pulldown menu of resources, select App Engine, and then select the appropriate module. For the version, select ah-builtin-python-bundle to display the logs.
- Locate your backup job in the log to determine whether it succeeded or failed. If there was a failure, in addition to the status code 400, there will be an error message to help you determine the cause of the error.
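The field rules above (at least one kind, a required gs_bucket_name, optional name/queue/namespace, and the 2000-character URL limit) are easy to get wrong when assembling the url by hand. Here is a small helper that builds and sanity-checks such a URL; the function name and structure are my own illustration, not part of any App Engine API:

```python
from urllib.parse import urlencode

def build_backup_url(kinds, gs_bucket_name, name=None, queue=None, namespace=None):
    """Assemble a Datastore Admin backup URL per the field rules above.

    Hypothetical helper for illustration; not part of the App Engine API.
    """
    if not kinds:
        # Unlike the Console, a cron backup has no "all kinds" default.
        raise ValueError("at least one kind is required")
    params = []
    if name:
        params.append(("name", name))
    # kind may appear one or more times in the query string.
    params.extend(("kind", k) for k in kinds)
    if queue:
        params.append(("queue", queue))
    params.append(("filesystem", "gs"))
    params.append(("gs_bucket_name", gs_bucket_name))
    if namespace:
        params.append(("namespace", namespace))
    url = "/_ah/datastore_admin/backup.create?" + urlencode(params)
    if len(url) > 2000:
        raise ValueError("url exceeds the 2000-character limit")
    return url

print(build_backup_url(["LogTitle", "EventLog"], "whitsend", name="BackupToCloud"))
```

With the arguments shown, the helper reproduces the url used in the cron.yaml sample above.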
https://cloud.google.com/appengine/articles/scheduled_backups?hl=ja
@Kevin, tried mate, didn't work :( Any more classes which can provide file system info?

@deependeroracle - Thanks mate :) It's really useful. I don't think there should be a reason to know something; anyway, the reason is that I have some raw data, and before I proceed to work on it I have to check the file system information where it is present and all...

Hello, I've a query regarding FileSystem information:

FileSystemView fileSystemView = FileSystemView.getFileSystemView();
System.out.println(" File system list of roots");
...

Yep, I found it by using the org.apache.commons.io.FileUtils class:

import java.io.File;
import java.io.IOException;
import org.apache.commons.io.FileUtils;

public class FileMover extends...

Hello, I may be asking a very small question, but I couldn't find how to do it. Anyway, let me give my code for copying all the files from one directory to another:

public class CopyingFile { static...

Hello, I have a .properties file in my Spring project outside the src folder; I want to keep it away from the src folder. I gave the complete classpath as <property name="location"...

How to split a file based on size by number of chunks? I have a text file with 5 MB of data; now I want to split this into 5 parts of 1 MB each, into some other directory. Is it possible to split a...

I am sorry, I found the logic; that is:

public class App1 {
    File source = new File("/home/dev06/sourcefolder");
    File destination = new File("/home/dev06/temporaryfolder");

Hello, I am a newbie in this forum; forgive me in case I give my query in the wrong place. My query is: I have a directory consisting of a few files with different memory sizes, and now I want to move the...
http://www.javaprogrammingforums.com/search.php?s=049e2ffafd218217ad3f66f198dbcd34&searchid=204200
(as always, the sources and binaries are available at the end of the article)

As one may know, there is no native OpenGL support in .BAT scripts. But, as one may also know, almost no language had native OpenGL support in the beginning - the OpenGL support was added the moment somebody created an OpenGL interface for the given language. So, the first thing to do is obvious - we must create an OpenGL interface.

Let's talk a little about the architecture of such an interface. Since .BAT itself has no mechanism for creating a window (btw, if it did, then cmd.exe would own the window technically), we must create "something" that will create us a window. Hmm, but there is a problem - when we run that "something", and it creates a window (an OpenGL window that is, using CreateWindow and setting up OpenGL, or using libSDL/GLUT/whatever to do the same), and it exits (so that the batch script regains control), then the system garbage collector will destroy the window. Uh. We need both the window to exist, and the batch script to regain control. So let's make a daemon-style application that will run in the background, and keep the OpenGL window alive. Let's call it... GLDaemon - teh daemon of OpenGL ;>

Now, how do we get .BAT to "draw" something 3D in that window? Well, we need to call some OpenGL functions (like glTranslatef, glBegin, glVertex3f, etc) in the context of the GLDaemon. But how do we tell the daemon that some function should be executed? Well, let's create another program, called GLOpcode, that will be used to send a single command to the daemon. The transport of messages between GLOpcode and GLDaemon can be done in several ways:

- we can use sockets and TCP/UDP
- named pipes are also cool
- messages (SendMessage, etc)
- and all other IPC mechanisms (DDE, shared memory, or even some driver)

Which one should we choose... I've chosen TCP - because I have a handy TCP library I can use ;>, but I admit that using pipes, messages, or UDP would be better.
How should the GLDaemon work? Well, first of all, it should create an OpenGL window, and then wait for commands. In my case, I wanted the daemon not to execute the commands instantly, but instead, I wanted it to create a command list, and execute the list on demand. So, the first 3 GL opcodes will be used to work on the command list (a quick note: from the left, I will write the function name in the .BAT interface, then the opcode sent by the GLOpcode, and then a short description):

- gl.LockRender, L - Locking the list execution and "opening" the list for appending new commands.
- gl.ClearRender, C - Clearing the list.
- gl.UnlockRender, U - Unlocking list execution (the list is executed continuously in a loop).

The following commands are well known from OpenGL:

- gl.Translatef, AglTranslatef %1 %2 %3 - Calling the glTranslatef function (translation/displacement) with parameters X,Y,Z
- gl.PushMatrix, AglPushMatrix - Pushing the matrix on a stack
- gl.PopMatrix, AglPopMatrix - Popping the matrix from the stack
- gl.Begin, AglBegin %1 - Start taking coordinates/other stuff for some 3D figure (give the figure name in the parameter, i.e. GL_TRIANGLES)
- gl.End, AglEnd - End of coordinates
- gl.Color3f, AglColor3f %1 %2 %3 - Setting up a color
- gl.Vertex3f, AglVertex3f %1 %2 %3 - Sending a coordinate
- gl.Rotatef, AglRotatef %1 %2 %3 %4 - Rotating according to the provided vector

Now, let's create a "static class" in .BAT (see my previous post about classes in .BAT), that will "contain" the OpenGL interface.
First, the constructor (inter alia, it will fire up the GLDaemon):

:gl.init
start GLDaemon
rem Let's give the daemon 2 seconds to start
sleep 2
set gl.Translatef=call :gl.Translatef
set gl.PushMatrix=call :gl.PushMatrix
set gl.PopMatrix=call :gl.PopMatrix
set gl.Begin=call :gl.Begin
set gl.End=call :gl.End
set gl.Color3f=call :gl.Color3f
set gl.Vertex3f=call :gl.Vertex3f
set gl.Rotatef=call :gl.Rotatef
set gl.LockRender=call :gl.LockRender
set gl.UnlockRender=call :gl.UnlockRender
set gl.ClearRender=call :gl.ClearRender
goto :EOF

Now to implement the static methods. I'll show gl.LockRender and gl.Rotatef as examples - they all look the same:

:gl.LockRender
GLOpcode "L"
goto :EOF

:gl.Rotatef
GLOpcode "AglRotatef %1 %2 %3 %4"
goto :EOF

So, we have the OpenGL interface for .BAT ready. How do we use this interface? Let's look upon an example that will draw a colorful triangle:

@echo off
call :gl.init

set r=0
:loop
!gl.LockRender!
!gl.ClearRender!
!gl.Translatef! 0 0 -10
!gl.PushMatrix!
!gl.Rotatef! !r! 1 0.3 0.2
!gl.Begin! GL_TRIANGLES
!gl.Color3f! 1 0 0
!gl.Vertex3f! 0 1 0
!gl.Color3f! 0 1 0
!gl.Vertex3f! -1 -1 0
!gl.Color3f! 0 0 1
!gl.Vertex3f! 1 -1 0
!gl.End!
!gl.PopMatrix!
!gl.UnlockRender!
set /a r=!r!+20
goto loop

goto :EOF

As one can see, it's rather simple. We call the interface constructor, then we have a loop that clears the command list, sends some GL commands that draw a triangle rotated by r, orders the daemon to execute the list, and increases r by 20 degrees. Here is a screen shot (one can see the GLDaemon window at the top):

Now for some C++.
I'll start with the GLOpcode implementation (may I remind you that everything is in the pack at the bottom):

#include <cstdio>
#include <cstring>
#include <cstdlib>
#include "NetSock.h"

int main(int argc, char **argv)
{
  if(argc == 1)
    return 1;

  NetSock a;
  a.Connect(0x7f000001, 31337);
  a.Write((unsigned char*)argv[1], strlen(argv[1]));
  a.Disconnect();

  return 0;
}

It just checks if there is a parameter, then it connects to 127.0.0.1 port 31337 (what other port could I possibly use? ;D), it sends the opcode, and disconnects (that's why using UDP would be better, but nvm).

The daemon implementation is a bit longer, so I'll just show here a part of it. Below there is the back-end of the gl.LockRender (L) and gl.Rotatef (AglRotatef) functions. First the Lock:

[...]
else if(Buffer[0] == 'L') // Lock
{
  puts("Lock");
  UserLock = true;
}
[...]

void static scene()
{
  if(Connection || UserLock)
    return;
[...]

Long story short: it sets the UserLock flag. If the flag is set, then the 'scene' function (that does the render stuff) is not being executed. Now for glRotatef:

else if(strcmp(Cmd, "glRotatef") == 0)
{
  float a, b, c, d;
  sscanf(*i, "%*s %f %f %f %f", &a, &b, &c, &d);
  glRotatef(a,b,c,d);
}

Nothing to comment, right? It takes the i list element, parses it (well, it should be pre-parsed already at command addition, but I wanted the code to be more simple than optimized), and executes glRotatef with the given parameters ("%*s" is "well, there is a string there, but I don't care... ignore it" - that's why sscanf has 5 "formats", but only 4 target variables).

How much FPS do we have? Well, the question should be: how much SPF (Seconds Per Frame) do we have. However, we can do some optimizing. There are a few things we can do. As I have written before, TCP could be changed to something else - pipes, UDP, etc. GLOpcode could execute faster. Another thing is decreasing the number of times GLOpcode is called.
We need to do some remakes in GLOpcode, so we can give it more than one command at once to transfer:

int i;
NetSock a;
a.Connect(0x7f000001, 31337);

for(i = 1; i < argc; i++)
{
  a.Write((unsigned char*)argv[i], strlen(argv[i]));
  Sleep(0); // Let it send ;D
}

a.Disconnect();

Now, let's remake the interface to create a command list in the .BAT script, and send it when gl.UnlockRender is called. The main changes are shown below:

:gl.init
[...]
set gl.CommandList=
goto :EOF

:gl.LockRender
set gl.CommandList=%gl.CommandList% "L"
goto :EOF

:gl.UnlockRender
set gl.CommandList=%gl.CommandList% "U"
GLOpcode %gl.CommandList%
goto :EOF

:gl.ClearRender
GLOpcode "L" "C" "U"
set gl.CommandList=
goto :EOF

:gl.Translatef
set gl.CommandList=%gl.CommandList% "AglTranslatef %1 %2 %3"
goto :EOF

This boosts things quite well (creating a process is slower than changing some environment variable). I've put this version in the "opt" directory. Another thing would be to create a few separate command lists in the daemon, and tell him which one in what order should he execute (hmm, sounds like CallLists in OpenGL... or we could even make something like VBO!). There are many other possible ways to optimize it (another proof it's not optimal in any way ;>), but I leave them to be figured out by the readers, as homework ;>

Pack with source and binaries: batgl.zip (305 KB)

And that's all! You are welcome to leave your optimization ideas in the comments ;>

Add a comment:
http://gynvael.coldwind.pl/?id=129
Pythonista on iPad

Hello, I am fairly new to Python and Pythonista. I have a small problem: when writing an fhand = open() call for a .txt file, my app gives a traceback error of "no such file exists". How do I read a txt file in Python that's in the Files app locally on my iPad? Thanks in advance.

@Ayzar247 you have to, once, give Pythonista access to your "On My iPad" folder in Files. This is done via "open external", then "folder", then selecting "On My iPad" and choosing the location. Then, in your script, you will have access to a file by a path like:

path = '/private/var/mobile/Containers/Shared/AppGroup/EF3F9065-AD98-4DE3-B5DB-21170E88B77F/File Provider Storage/Myfile.txt'

where the EF3F9065-AD98-4DE3-B5DB-21170E88B77F part will be different for your device. I still have to remember how to find this value...

Edit: create this script in this folder and run it:

import sys
print(sys.argv[0])
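The sys.argv[0] trick above just prints the running script's full path; the directory part of that path is the File Provider folder, and filenames can then be joined onto it instead of hard-coding the AppGroup id. A small sketch, with a made-up AppGroup id:

```python
import os

# Path as printed by the sys.argv[0] trick above (AppGroup id is made up).
script_path = ('/private/var/mobile/Containers/Shared/AppGroup/'
               'ABC0-EXAMPLE/File Provider Storage/show_path.py')

# The folder that the "On My iPad" location maps to:
folder = os.path.dirname(script_path)

# Any file placed next to the script can now be addressed reliably:
myfile = os.path.join(folder, 'Myfile.txt')
print(myfile)
```

On the device you would use sys.argv[0] directly instead of the hard-coded string, so the path keeps working even though the AppGroup id differs per device.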
https://forum.omz-software.com/topic/7101/pythonista-on-ipad
Three

An updated Open311 API Python wrapper that was built to be as absolutely user-friendly and easy-to-use as possible. Many of the design decisions made will reflect these qualities.

Installation

The best way to install is through pip.

pip install three

At A Glance

Three was made to make the differences in Open311 GeoReport implementations completely unknown to the end user. Interacting with service requests should be easy.

>>> import three
>>> three.cities()
['boston', 'macon', 'sf', ...]

# Let's start off with Macon.
>>> three.city('macon')
>>> three.key('my_macon_api_key')
>>> three.discovery()
{'service': {'discovery': 'data'}}
>>> three.services()
{'macon': {'service': 'data'}}
>>> three.services('033')
{'033': {'service_code': 'info'}}
>>> three.requests()
{'macon': {'request': 'data'}}
>>> three.requests('123')
{'123': {'request': 'data'}}

# Now, let's switch it up to San Francisco.
>>> three.city('sf')
>>> three.key('my_sf_api_key')
>>> three.services()
{'SF': {'service': 'data'}}
>>> three.requests()
{'SF': {'request': 'data'}}

# And, finally, Baltimore.
>>> three.city('baltimore')
>>> three.key('baltimore_api_key')
>>> three.services()
{'baltimore': {'service': 'data'}}
>>> three.requests()
{'baltimore': {'request': 'data'}}

Three also aims to make working with dates and result counts easier, even though not all Open311 implementations support these features.

>>> import three
>>> three.city('macon')
>>> # Let's grab requests between certain dates.
... three.requests(start='03-10-2012', end='03-17-2012')
>>> # But let's use the between parameter.
... three.requests(between=['03-10-2012', '03-17-2012'])
>>> # And, let's get all the requests! (Or, as many as possible...)
... three.requests(between=['03-10-2012', '03-17-2012'], count=100)
>>> # We could even get requests of different types between those days.
>>> requests = []
>>> dates = ['03-10-2012', '03-17-2012']
>>> requests.extend(three.requests(between=dates, count=100))
>>> requests.extend(three.requests(between=dates, count=100, status="closed"))

Subclassing

A Three class can also be imported and customized, but, for casual users, working with the three module should feel effortless. Any pain points (such as dealing with XML, required parameters, etc.) should be abstracted away.

from three import Three

class SF(Three):
    def __init__(self):
        super(SF, self).__init__()
        self.endpoint = ""
        self.format = "xml"
        self.jurisdiction = "sfgov.org"

You could then use the SF class just as you would an instance of Three.

>>> SF().services()
>>> SF().requests()

Settings

These settings apply to the core Three class. A casual user of the Open311 API, by default, should not have to work with the Three class.

API Key

If you have an Open311 API key that you always intend to use, rather than initializing the Three class with it each time, you can set an OPEN311_API_KEY environment variable on the command line.

export OPEN311_API_KEY="MY_API_KEY"

Otherwise, you can initialize the class with your API key and endpoint.

>>> from three import Three
>>> t = Three('api.city.gov', api_key='my_api_key')

HTTPS

By default, Three will configure a URL without a specified schema to use HTTPS.

>>> t = Three('api.city.gov')
>>> t.endpoint == ''
True

Format

The default format for the Three wrapper is JSON -- although not all Open311 implementations support it. This is done mainly for ease-of-use (remember, that's the over-arching goal of the Three wrapper). You can, however, specifically request to use XML as your format of choice.

>>> t = Three('api.city.gov', format='xml')
>>> t.format == 'xml'
True

SSL/TLS version

With certain combinations of the client operating system and the server application, the SSL/TLS negotiation may fail. Forcing Three to use TLS version 1.0 may help in these cases.
>>> import ssl
>>> t = Three('', ssl_version=ssl.PROTOCOL_TLSv1)

Usage

Configure

After you've initialized your Three class, you can readjust its settings with the configure method. You can also switch back to the original settings with the reset method.

>>> from three import Three
>>> import ssl
>>> t = Three('api.city.gov', api_key='SECRET_KEY')
>>> t.services()
{'service': 'data'}
>>> t.configure('open311.sfgov.org/dev/V2/', format='xml',
...             api_key='SF_OPEN311_API_KEY',
...             ssl_version=ssl.PROTOCOL_TLSv1)
>>> t.services()
{'SF': {'service': 'data'}}
>>> t.configure(api_key='ANOTHER_API_KEY')
>>> # Switch back to original settings.
... t.reset()

Discovery

In order to use the Open311 service discovery, simply invoke the discovery method.

>>> t = Three('api.city.gov')
>>> t.discovery()
{'service': {'discovery': 'data'}}

Sometimes, however, service discovery paths differ from service and request URL paths -- in which case you can pass the specified URL to the discovery method as an argument.

>>> t.discovery('')

Services

To see the available services provided by an Open311 implementation, use the services method.

>>> t = Three('api.city.gov')
>>> t.services()
{'all': {'service_code': 'info'}}

You can also specify a specific service code to get information about.

>>> t.services('033')
{'033': {'service_code': 'info'}}

Requests

To see available request data, use the requests method.

>>> t = Three('api.city.gov')
>>> t.requests()
{'all': {'requests': 'data'}}

Most Open311 implementations support page and page_size parameters.

>>> t.requests(page_size=50)
{'total': {'of': {'50': 'requests'}}}
>>> t.requests(page=2, page_size=50)
{'next': {'50': 'results'}}

You can also specify a specific service code.

>>> t.requests('123')
{'123': {'requests': 'data'}}

Other parameters can also be passed as keyword arguments.
>>> t.requests('456', status='open')
{'456': {'open': {'requests': 'data'}}}

Request

If you're looking for information on a specific Open311 request (and you have its service code ID), you can use the request method.

>>> t = Three('api.city.gov')
>>> t.request('12345')
{'request': {'service_code_id': {'12345': 'data'}}}

Post

Sometimes you might need to programmatically create a new request, which is what the post method can be used for. NOTE: the Open311 spec states that all POST service requests require a valid API key.

>>> t = Three('api.city.gov', api_key='SECRET_KEY')
>>> t.post('123', name='Zach Williams', address='85 2nd St',
...        description='New service code 123 request.',
...        email='zach@codeforamerica.org')
{'new': {'request': 'created'}}

Token

Each service request ID can be tracked with a temporary token. If you need to find the service request ID and have the request's token, you can use the token method.

>>> t = Three('api.city.gov')
>>> t.token('12345')
{'service_request_id': {'for': {'token': '12345'}}}
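The page and page_size parameters shown above can be combined to walk through every available request. Below is a minimal sketch of that pattern; the helper name fetch_all_requests, the 1-based page numbering, and the empty-page stop condition are my assumptions, not part of the documented Three API:

```python
def fetch_all_requests(client, page_size=50, **filters):
    """Collect service requests from every page of results.

    Assumes client.requests(page=..., page_size=..., **filters)
    returns a list per page and an empty list past the last page.
    """
    results = []
    page = 1
    while True:
        batch = client.requests(page=page, page_size=page_size, **filters)
        if not batch:
            break  # an empty page means we have paged past the end
        results.extend(batch)
        page += 1
    return results
```

With a real Three instance you would call something like fetch_all_requests(t, status='open'); whether a given Open311 endpoint signals the last page with an empty list is an assumption worth verifying against its implementation.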
https://libraries.io/pypi/three/0.6.6
1046: type not a compile-time constant

My fla is as follows:

Code:
import net.quip.sound.CuePointEvent;

function onCuePoint(event:CuePointEvent):void {
    play();
}

...and the ActionScript file like this:

}
}
}

I get "1046: type not a compile-time constant". I have searched high and low for this solution and can't figure it out.

Jeremy: This is from that long tutorial about cuepoints that you suggested... they provided download files but I have no idea why I am getting errors even from the download files.

Last edited by gani; 12-04-2008 at 07:14 PM.

Reply: Never mind... solved. I had not saved the fla to its proper location. If you just try to save the swf to the right location you will get this error. The fla file must be saved there too. Doh!
http://www.codingforums.com/flash-and-actionscript/153578-1046-type-not-compile-time-constant.html
I have a static CSV document with the following headers:

City State Zip Latitude Longitude Subnet Wired1 Wired2 Wireless1 Wireless2 Domain Name NAT_DHCP NAT_Wireless

I want to run this check and print all header fields for matching rows:

if (addr >= wired1 and addr <= wired2) or (addr >= wireless1 and addr <= wireless2):
    # print all header fields

But when a wireless cell is empty, the script fails with:

netaddr.core.AddrFormatError: failed to detect a valid IP address from ''
  File "script.py", line 175, in myfile
    wireless1 = (int(IPAddress(row['Wireless1'])))

My code:

with open('csvfiles/myfile.csv', 'rb') as incsv:
    reader = csv.DictReader(incsv, delimiter=',')
    addr = (int(IPAddress(ip)))
    wired1 = (int(IPAddress(row['Wired1'])))
    wired2 = (int(IPAddress(row['Wired2'])))
    wireless1 = (int(IPAddress(row['Wireless1'])))
    wireless2 = (int(IPAddress(row['Wireless2'])))

Sample data:

Wired1       Wired2          Wireless1     Wireless2
10.65.0.0    10.65.239.255   10.65.240.1   10.65.255.254
10.38.0.0    10.38.239.255
10.34.0.0    10.34.239.255   10.34.240.1   10.34.255.254
10.83.0.0    10.83.239.255

Answer:

Consider using pandas; you can manipulate your csv file a lot better.

import pandas as pd
df = pd.read_csv('your_file.csv')

# See what your data looks like in pandas
print df
      Wired1         Wired2    Wireless1      Wireless2
0  10.65.0.0  10.65.239.255  10.65.240.1  10.65.255.254
1  10.38.0.0  10.38.239.255          NaN            NaN
2  10.34.0.0  10.34.239.255  10.34.240.1  10.34.255.254
3  10.83.0.0  10.83.239.255          NaN            NaN

# Select only the rows where 'Wireless1' is not null
df[pd.notnull(df['Wireless1'])]
      Wired1         Wired2    Wireless1      Wireless2
0  10.65.0.0  10.65.239.255  10.65.240.1  10.65.255.254
2  10.34.0.0  10.34.239.255  10.34.240.1  10.34.255.254

# Select only the 'Wireless1' column where it is not null
df[pd.notnull(df['Wireless1'])]['Wireless1']
0    10.65.240.1
2    10.34.240.1
Name: Wireless1, dtype: object

Likewise you can do many more manipulations using pandas. Thus you can avoid null values being passed to netaddr.
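If you would rather stay with the csv module than switch to pandas, the AddrFormatError can also be avoided by testing that the wireless cells are non-empty before handing them to the address parser. Below is a sketch of that approach; it uses the stdlib ipaddress module in place of netaddr so it is self-contained, and the CSV_DATA sample and rows_containing helper are mine, not from the question:

```python
import csv
import io
from ipaddress import IPv4Address

# Inline sample mirroring the question's data; row 2 has empty wireless cells.
CSV_DATA = """Wired1,Wired2,Wireless1,Wireless2
10.65.0.0,10.65.239.255,10.65.240.1,10.65.255.254
10.38.0.0,10.38.239.255,,
10.34.0.0,10.34.239.255,10.34.240.1,10.34.255.254
"""

def rows_containing(addr_text, csv_text=CSV_DATA):
    """Yield rows whose wired or wireless range contains addr_text,
    skipping empty wireless cells instead of crashing on them."""
    addr = int(IPv4Address(addr_text))
    for row in csv.DictReader(io.StringIO(csv_text)):
        wired1 = int(IPv4Address(row['Wired1']))
        wired2 = int(IPv4Address(row['Wired2']))
        if wired1 <= addr <= wired2:
            yield row
            continue
        # Only parse the wireless columns when both cells are non-empty,
        # which is exactly the case that raised AddrFormatError.
        if row['Wireless1'] and row['Wireless2']:
            w1 = int(IPv4Address(row['Wireless1']))
            w2 = int(IPv4Address(row['Wireless2']))
            if w1 <= addr <= w2:
                yield row
```

The same guard (`if row['Wireless1'] and row['Wireless2']:`) works unchanged with netaddr's IPAddress if you prefer to keep that dependency.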
https://codedump.io/share/I4N7JhoaPEzj/1/python-netaddr-throwing-error-when-hitting-empty-csv-cell
Vim: Set wrapping and indentation according to file type

Although I use emacs for most of my coding, I use vim quite a lot too, for quick edits, mail messages, and anything I need to edit when logged onto a remote server. In particular, that means editing my procmail spam filter files on the mail server. The spam rules are mostly lists of regular expression patterns, and they can include long lines, such as:

gift ?card .*(Visa|Walgreen|Applebee|Costco|Starbucks|Whitestrips|free|Wal.?mart|Arby)

My default vim settings for editing text, including line wrap, don't work if I get a flood of messages offering McDonald's gift cards and decide I need to add a "|McDonald" on the end of that long line. Of course, I can type ":set tw=0" to turn off wrapping, but who wants to have to do that every time? Surely vim has a way to adjust settings based on file type or location, like emacs has.

It didn't take long to find an example of Project specific settings on the vim wiki. Thank goodness for the example -- I definitely wouldn't have figured that syntax out just from reading manuals. From there, it was easy to make a few modifications and set textwidth=0 if I'm opening a file in my procmail directory:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  endif
endfunction

autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

Nice! But then I remembered other cases where I want to turn off wrapping. For instance, editing source code in cases where emacs doesn't work so well -- like remote logins over slow connections, or machines where emacs isn't even installed, or when I need to do a lot of global substitutes or repetitive operations. So I'd like to be able to turn off wrapping for source code. I couldn't find any way to just say "all source code file types" in vim. But I can list the ones I use most often.
While I was at it, I threw in a special wrap setting for mail files:

" Set wrapping/textwidth according to file location and type
function! SetupEnvironment()
  let l:path = expand('%:p')
  if l:path =~ '/home/akkana/Procmail'
    " When editing spam filters, disable wrapping:
    setlocal textwidth=0
  elseif (&ft == 'python' || &ft == 'c' || &ft == 'html' || &ft == 'php')
    setlocal textwidth=0
  elseif (&ft == 'mail')
    " Slightly narrower width for mail (and override mutt's override):
    setlocal textwidth=68
  else
    " default textwidth slightly narrower than the default
    setlocal textwidth=70
  endif
endfunction

autocmd! BufReadPost,BufNewFile * call SetupEnvironment()

As long as we're looking at language-specific settings, what about doing language-specific indentation like emacs does? I've always suspected vim must have a way to do that, but it doesn't enable it automatically like emacs does. You need to set three variables, assuming you prefer to use spaces rather than tabs:

" Indent specifically for the current filetype
filetype indent on

" Set indent level to 4, using spaces, not tabs
set expandtab shiftwidth=4

Then you can also use useful commands like << and >> for in- and out-denting blocks of code, or ==, for indenting to the right level.

It turns out vim's language indenting isn't all that smart, at least for Python, and gets the wrong answer a lot of the time. You can't rely on it as a syntax checker the way you can with emacs. But it's a lot better than no language-specific indentation. I will be a much happier vimmer now!

[ 11:29 Jun 15, 2014 More linux/editors ]
http://shallowsky.com/blog/linux/editors/vim-settings-by-file.html
How to process GRIB2 weather data for maritime applications

This tutorial uses the EEZ of Australia as our complex polygon, but this could just as easily be any other geographical area. When opening a Shapefile with GDAL, we only need to point to the file with the .shp extension. However, it is required that the other component files exist in the same directory. If we are opening a file called eez.shp, there should at least be files named eez.shx and eez.dbf in the same directory as well.

import os
filename = os.path.join("shpfile/eez.shp")

The script takes the following arguments:

- The Maritime Weather GRIB2 variable to process
- The local directory where the Maritime Weather GRIB2 data is stored
- The local directory where the Shapefile components are stored

For example, the script can be run like this:

python script.py --variable HTSGW_P0_L101_GLL0 --source-data grib_directory/ --shapefile shpfile_directory/eez.shp

The Shapefile argument is defined as:

parser.add_argument(
    "--shapefile",
    type=str,
    help="The name of the directory containing Esri Shapefile component files",
)
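The command-line interface above can be wired together with the stdlib argparse module. A minimal sketch follows; only --shapefile's help text appears in the excerpt, so the definitions and help strings for --variable and --source-data are my guesses from the example invocation:

```python
import argparse

def build_parser():
    """Build the argument parser for the GRIB2 processing script."""
    parser = argparse.ArgumentParser(
        description="Process GRIB2 weather data within a Shapefile polygon")
    parser.add_argument("--variable", type=str,
                        help="The GRIB2 variable to process, e.g. HTSGW_P0_L101_GLL0")
    parser.add_argument("--source-data", type=str,
                        help="The local directory where the GRIB2 data is stored")
    parser.add_argument("--shapefile", type=str,
                        help="The name of the directory containing Esri Shapefile component files")
    return parser

# Parse the example invocation from the tutorial; argparse converts
# the long option --source-data into the attribute args.source_data.
args = build_parser().parse_args([
    "--variable", "HTSGW_P0_L101_GLL0",
    "--source-data", "grib_directory/",
    "--shapefile", "shpfile_directory/eez.shp",
])
```

In the real script these values would then be handed to the GRIB2 reader and the Shapefile opener; that wiring is outside this sketch.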
https://spire.com/tutorial/how-to-process-grib2-data-for-maritime-applications-shapefile/